Uncommon Descent Serving The Intelligent Design Community

Is the term “biological information” meaningful?


Some say that it’s not clear that the term is useful. A friend writes, “Information is always ‘about’ something. How does one quantify ‘aboutness’?” He considers the term too vague to be helpful.

He suggests the work of Christoph Adami, for example, this piece at Quanta:

The polymath Christoph Adami is investigating life’s origins by reimagining living things as self-perpetuating information strings.

Once you start thinking about life as information, how does it change the way you think about the conditions under which life might have arisen?

Life is information stored in a symbolic language. It’s self-referential, which is necessary because any piece of information is rare, and the only way you make it stop being rare is by copying the sequence with instructions given within the sequence. The secret of all life is that through the copying process, we take something that is extraordinarily rare and make it extraordinarily abundant.

But where did that first bit of self-referential information come from?

We of course know that all life on Earth has enormous amounts of information that comes from evolution, which allows information to grow slowly. Before evolution, you couldn’t have this process. As a consequence, the first piece of information has to have arisen by chance.

A lot of your work has been in figuring out just that probability, that life would have arisen by chance.

On the one hand, the problem is easy; on the other, it’s difficult. We don’t know what that symbolic language was at the origins of life. It could have been RNA or any other set of molecules. But it has to have been an alphabet. The easy part is asking simply what the likelihood of life is, given absolutely no knowledge of the distribution of the letters of the alphabet. In other words, each letter of the alphabet is at your disposal with equal frequency. More.
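Adami's "easy part" can be made concrete: with no knowledge of letter frequencies, every letter is equiprobable, so one specific sequence of length n over a k-letter alphabet has probability (1/k)^n. A minimal sketch (the alphabet size and length below are purely illustrative, not figures from the interview):

```python
def sequence_probability(alphabet_size, length):
    # With equal letter frequencies, one specific sequence of the given
    # length has probability (1/k)^n
    return (1.0 / alphabet_size) ** length

# e.g. one specific 100-letter sequence over a 4-letter, RNA-like alphabet
print(sequence_probability(4, 100))  # a vanishingly small number, ~6.2e-61
```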

So Chance wrote an alphabet? And created “aboutness”? It’s not the same type of chance we live with today.

See also: Does the universe have a “most basic ingredient” that isn’t information?

New Scientist astounds: Information is physical

and

Data basic: An introduction to information theory

Follow UD News at Twitter!

Comments
Mung: The other thing most people don't understand (and have probably never thought about) is that any time we implement an additional code or protocol to convey a message (such as a single bit to tell us who won the coin toss in the Super Bowl), it ultimately requires more overall channel capacity to convey the message, not less. This is doubly true when our code implements a string that lacks specified complexity, such as in the case of a single bit. I don't have a lot of time tonight to expand on this and I hope that makes some sense, but let me know if it would be helpful for me to elaborate.
Eric Anderson
February 19, 2017 at 10:38 PM PDT
Mung @46:
Eric, I know you and I have had our differences in the past over this whole “information” topic, but now I am beginning to wonder why, lol.
Thank you for your kind words. :) I think we have to be careful about coin toss examples. There have been, IMHO, a number of missteps in the analysis over the years on these pages as people have discussed coin tosses. The value of the coin toss examples is that they are simple and avoid some of the complexities. The risk of the coin toss examples is that they tend to smuggle in through the back door some additional information that appears to have been generated through the coin toss – rather like Dawkins’ weasel program smuggled in the information needed to generate the desired result. That said, let's consider a simple coin toss, as proposed: We have to be careful when we talk about the “quantity” of information. If we don’t know anything else about the coin toss, it doesn’t convey anything. As you well state, it might represent “which team got to choose whether to kick or receive.” How do we quantify the concept of which team got to choose, what that means in a particular instance, how it might impact the larger game? For example, in the recent Super Bowl, the overtime coin toss was of substantial significance and, potentially, had a large impact on the outcome of the game in that particular circumstance and moment of the game. That coin toss was much more significant, due to the rules and momentum of the game, than the initial coin toss. Yet in each case we are presented with a single bit. I agree with you that “looking only at the physics of the coin toss might utterly overlook the message that is conveyed by the result.” Thus, there might be many different messages conveyed by a coin toss. The importance, or relevance, or significance could be vastly different in each case. In other cases, it might signify nothing at all. Does it make sense in every case to say that we received the same quantity of information? Logically, it seems there are two possible approaches here: 1. 
We say that we have received the same exact amount of information in each case, namely one bit. However, we acknowledge that this one bit of information ties back to a much larger informational landscape: in some cases leading to significant, important, vast quantities of information; in other cases leading to nothing of import. This is not without rationale and, at first blush, this is a tempting approach to take, particularly when we acknowledge that information is generally conveyed, received, and understood within some broader context. 2. We say that the bit we measured is largely independent from the information. There are a couple of reasons this is the case. First, the information conveyed could have been conveyed with a larger number of bits and in various ways. In other words, there is nothing inherently critical about the information itself that requires a certain number of bits. Second, we can get any quantity of bits we want, without any useful information attached (see your Liddle comment below). Of these two approaches, the second is the more consistent and rational. Allow me to expand on that for a moment. Part of the problem with the simple binary coin toss example, such as the Super Bowl coin toss, is that we have agreed, behind the scenes, on a significant meaning that can be expressed in a single bit. A simple yes/no or up/down decision point. Then, when we measure the bit – more strictly, when we measure the amount of channel capacity required to transmit our simple message – we are tempted to think we have measured the quantity of information conveyed by the pre-agreed-upon bit. We have smuggled in through the back door, as it were, a huge amount of information, set up and prepared, and arranged, and pre-agreed-upon, in such a way that a simple signal – a switch – can convey a great deal of information. We probably don’t ever get away from this completely, as information doesn’t necessarily exist in a vacuum. 
There is always some background knowledge required. But the point is clear. However, we are not quite there yet. Let's take the analysis one step further. Consider the fact that there is a huge difference between the following two scenarios: 1. Here are the rules of communication (e.g., the grammatical rules of the English language) we have agreed upon. Now I may convey some information to you later using these rules of communication. versus 2. Here is a large amount of information. I am conveying it to you now (or have already conveyed it to you) using our agreed-upon rules of communication. You have it already in your possession and have it handy. Later, I will send you an additional piece of information – a simple binary yes/no piece of information. When you receive that additional piece of information, then you will know whether to take into account or to disregard that much larger message I previously conveyed. The above should make it clear what is really happening in a coin toss situation, such as the Super Bowl coin toss. In addition, as soon as we step away from these basic yes/no type of inquiries and start looking at more detailed communication, the distinction between the information and the channel capacity required to convey the information becomes even more clear. -----
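Scenario 2 above — conveying a large message in advance and later sending only a one-bit selector — can be sketched in a few lines. The message text here is purely illustrative:

```python
# Pre-agreed protocol: the full message is conveyed and stored in advance...
pre_shared_message = "Detailed instructions agreed upon before the game."  # illustrative

def receive(selector_bit):
    # ...so the later transmission needs only 1 bit of channel capacity,
    # even though it "unlocks" a much larger body of pre-shared information.
    return pre_shared_message if selector_bit == 1 else None

print(receive(1))  # the whole pre-shared message
print(receive(0))  # None: disregard the earlier message
```

The single bit measured on the channel is not the information itself; it merely selects among information arranged beforehand.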
Now say we toss the coin 500 times. Someone might argue that in doing so we have generated 500 bits of information. Information about what? [A question Elizabeth Liddle never seemed to be able to answer.]
Yes, I agree with you that this is problematic. There is no evident meaning, or purpose, or function in the sequence. Yet, per the Shannon measure, we have generated just as great a quantity of so-called “information”, as if we had a meaningful string. This is the confusion that Liddle and company have fallen into. Again, much of the confusion results from the idea that the Shannon measure is measuring information. It isn’t. Shannon himself was quite clear on that point. What we are measuring is channel capacity. In intelligent design terms, we are measuring complexity. Whether or not there is real information in the string is entirely a separate inquiry. And one that has to be made separate from and independent of the particular channel, or size of channel, used to convey that information.
Frankly, I think what many people don’t understand is that shannon information isn’t even defined without a given probability distribution. But the shannon measure can be defined for any probability distribution.
Yes, quite true. Again though, even once we have defined a probability distribution and run our Shannon calculation, we need to realize we are not measuring information, just the channel capacity that would be relevant to conveying a particular piece of information, if it were there.
Eric Anderson
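The distinction Eric Anderson draws here can be shown directly: under a uniform symbol model, the Shannon measure of a meaningful string and of random gibberish of the same length over the same alphabet is identical. A minimal sketch (the 27-symbol alphabet of lowercase letters plus space is an illustrative assumption):

```python
import math
import random
import string

def uniform_shannon_bits(s, alphabet_size=27):
    # Channel capacity needed for s, assuming each symbol is equiprobable:
    # len(s) * log2(alphabet_size). It says nothing about meaning.
    return len(s) * math.log2(alphabet_size)

meaningful = "me thinks it is like a weasel"
gibberish = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(meaningful)))

# Identical measure, regardless of whether the string means anything
print(uniform_shannon_bits(meaningful) == uniform_shannon_bits(gibberish))  # True
```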
February 19, 2017 at 06:57 PM PDT
Critics will never be forced to accept the design inference on the basis of anything, because the only reason why they don’t accept it is ideological and dogmatic. I am firmly convinced that ID theory and the design inference for biological functional information is so self-evident, according to the facts we know, that only dogma and cognitive obstinacy can induce any reasonable intelligent person to deny those things.
:D
Mung
February 19, 2017 at 08:27 AM PDT
Mung: Some simple answers to old questions:
a) UNIPROT is a database of proteins. This is the link: http://www.uniprot.org/ Let's say you do a search for some protein, for example myoglobin. You get 903 results. You can filter the search for organism, or in other ways. However, the first result (in this case) is the human protein. You get the Uniprot accession number: P02144 and the entry name: MYG_HUMAN. By clicking on the first value, you get a very detailed page about the protein, including function, domains, and the sequence.
b) BLAST is an online local alignment tool at NCBI. For proteins, you use BLASTP. Here is the link: https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome Here, you can paste the accession number from uniprot (in this case, P02144) in the first field of the online form, and blast the sequence against all known sequences in the non-redundant protein sequences database (the default). You can also filter the alignment by organism or group of organisms. So, blasting human myoglobin against cartilaginous fish, you get 12 hits. The best is listed first. You can see:
the bitscore of homology: 127 bits
the Expect value: 4e-38
the number and percent of identities: 65/148(44%) (referred to the query coverage, that is 96%)
the number and percent of positives: 87/148(58%) (referred to the query coverage)
the number and percent of gaps: 1/148(0%)
and the alignment itself. This is just a start.
gpuccio
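For readers wondering how the bit score and Expect value reported by BLAST relate, here is a rough sketch of the standard Karlin–Altschul conversion. The λ, K, raw score, and search-space sizes below are illustrative assumptions (typical gapped BLOSUM62 parameters), not values taken from the myoglobin search above:

```python
import math

def bit_score(raw_score, lam=0.267, K=0.041):
    # Karlin-Altschul normalized (bit) score: S' = (lambda*S - ln K) / ln 2
    return (lam * raw_score - math.log(K)) / math.log(2)

def expect_value(bits, m, n):
    # Expected number of chance hits at this score, for a search space
    # of m x n residues: E = m * n * 2^(-S')
    return m * n * 2.0 ** (-bits)

# Illustrative raw alignment score (an assumption, not from the comment):
bits = bit_score(300)
print(round(bits, 1))
print(expect_value(bits, 150, 50_000_000))  # tiny E-value: a confident hit
```

The key point for the discussion here: the bit score is a normalized measure of alignment similarity on a log2 scale, which is why conserved sequence can be quoted in "bits".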
February 18, 2017 at 03:21 PM PDT
Eric Anderson: Thank you for your thoughtful comments. I fully agree with you. I agree with you more than you may think. You see, I have never said that functional complexity is simply the result of a calculation. As you very correctly say, it is the result of a procedure which includes:
a) The explicit definition of a function
b) An explicit and objective procedure to assess if the object we observe can implement the function or not
c) The computation of the bits of information that are necessary to implement that function
The whole process is one procedure. All parts are necessary. I have never given any emphasis to the computation itself. Of course, the whole procedure is a cognitive process that includes some computation. I have also clearly stated that all computations are essentially cognitive processes, and that therefore there is no essential difference between defining and assessing a function and measuring or computing a numerical variable: all are cognitive processes that arise in the mind and only in the mind. I firmly believe in the neo-platonic interpretation of mathematics and logic: they are cognitive fields that are not empirical, and arise in the mind. I fully agree with Penrose about that. Numbers and categories are both essential components of mathematical thought. Mathematics is not computation: it is much more. When you define a function, you generate a binary partition in a set (the search space), and you get two subsets (function yes, function no). That is mathematical reasoning, even if it is not "computation". Categorical and numeric variables are the foundation of all statistical thought. I do agree that the concept of function is deeper than other forms of categorization: its very peculiar content is the idea of purpose, of "being able to do something desired", and purpose is a very peculiar experience of consciousness. But meaning too cannot be defined without referring to consciousness.
All mathematics, indeed all science and philosophy and everything else in human experience arises only, only in conscious experiences. I fully agree that no absolute value of functional information exists for an object. In all my OPs, starting with this one: https://uncommondescent.com/intelligent-design/functional-information-defined/ I have always said that any value of functional information is related to one explicitly defined function. So, it should be clear that:
a) Functional information is a property of an explicitly defined function, not of the object.
b) An object can implement a function only if it exhibits the functional information linked to the function.
c) The same object can implement different functions, and therefore we can compute different values of functional information for each function that the object can implement.
d) If an object can implement even one single function that is complex according to some appropriate threshold, we can infer design for that object.
A final note: Critics will never be forced to accept the design inference on the basis of anything, because the only reason why they don't accept it is ideological and dogmatic. I am firmly convinced that ID theory and the design inference for biological functional information is so self-evident, according to the facts we know, that only dogma and cognitive obstinacy can induce any reasonable intelligent person to deny those things. And the value of ID is not "mathematical". It is empirical. ID is empirical science. It is not a theorem. Like all empirical theories, it uses mathematics to build the best explanation of known facts. But that's all.
gpuccio
February 18, 2017 at 03:00 PM PDT
gpuccio: Thank you for your thoughtful comments, as always. I apologize for the length, but bear with me for a moment, and I hope I can better explain the issue I am focusing on. Let me reiterate that there is much we agree on, and I appreciate your willingness to delve into the nuances – I feel that is often when we build our collective understanding the most. Let me also reiterate that I fully agree we can calculate complexity. That is not the issue. The question is whether function alone, independent of complexity (not functional complexity), is amenable to precise mathematical calculation. If we take the view that function alone can be calculated with mathematical precision, it seems that we end up with some very strange results.
First, it stretches the use of language to call our assessment of function a “calculation.” This would be a strange calculation indeed – one that uses no formula, performs no mathematical operations, has no units of measure. I have never seen such a calculation. Furthermore, if we call every assessment or recognition we make as intelligent beings a “calculation”, then every time I walk into a room and recognize a table or a chair, I am performing a mathematical calculation? Every time I walk down the street and recognize a car or a tree or a lamp post, I am performing a mathematical calculation? When I see the words “I love you” scrawled on a paper, my recognition was based on a mathematical calculation? It strains the concept of a mathematical calculation to the breaking point. It also unfortunately minimizes the other important aspects of life and of intelligence that allow me to see and understand and recognize and feel and appreciate the world – reducing them to a “calculation.” The fact is, our recognition of function or meaning or purpose – the kinds of things we are looking for when we infer design – is not a mathematical calculation.
Yes, it is a logical and cognitive awareness and allows us to make a decision, but it is not a mathematical calculation. And it doesn’t become a mathematical operation just because we arbitrarily assign a mathematical-sounding word (“binary”) to the decision. Second, there are often various ways to implement a given function, various ways to instantiate that function in the real world. As a result, the calculation of complexity will be different in each case. If we then mistakenly think our calculation of complexity constitutes a calculation of function, we end up with a strange result. It would indeed be a strange kind of math that resulted in different results for the same function. Third, it is possible to get the same exact number of bits of complexity from non-function or from pure nonsense strings. This is precisely why some people get confused (like Elizabeth Liddle and company previously on these pages) and think they have discovered how complex information can arise through the generation of random strings. After all, if we claim that our complex functional string yields a result of X bits, and the same exact result of X bits can be produced from pure random nonsense, then we haven’t adequately distinguished between the two cases – we have done ourselves a disservice and have confused the issue. It is critical that we distinguish between function and complexity and not lump both together into the same bucket. And when we use a term like “functional complexity”, we need to remember we are speaking about two distinct aspects, not a single aspect. ----- Well, that is plenty for now. Let me again say that I agree with your approach to intelligent design generally and practically. We both understand that to determine design we have to determine function and we have to determine complexity. Only when both are present, can we infer design. 
My point is that this analysis consists of two parts: one a mathematical calculation of complexity; the other a logical or cognitive analysis of whether we have function, meaning, purpose, information. I fear that some design proponents would like to make the concept of complex specified information into a watertight, unassailable, purely mathematical analysis. They seem to believe that if we can somehow make the design inference a purely mathematical construct, then opposition to intelligent design will fade away, critics will be forced to accept the design inference on the basis of pure, calculable math, and intelligent design will win the day. I don’t think the design inference is amenable to such an approach. Yes, complexity is essentially a mathematical calculation and it forms an important part of the design inference. But the recognition of function is not a sanitized, mathematical calculation. And I don’t think that is a problem. It doesn’t need to be. Most of what we do in life as we recognize design on a day-to-day basis is not based on mathematical formulas and calculations. That is OK. The recognition of function comes from our intelligence and our general ability to reason and our awareness and our experience and our understanding of how the world works. That is a perfectly reasonable and supportable approach. Then the complexity calculation can be added to help avoid false positives and to make the inference to design a rigorous determination.Eric Anderson
February 18, 2017 at 12:34 PM PDT
Mung, You see? As I told you @51, your questions @48 were very technical, so let's better ask gpuccio. He can handle those technical issues very nicely and explains them very clearly. Although some politely dissenting interlocutors pretend not to understand what he explains. :) You may see his comments @52, 53, 55. gpuccio handles the BLAST tools so well, that I don't hesitate to ask him questions associated with that established technology. Now I'm glad I brought up the morphogen gradients here in this discussion thread. For doing that I got in return a bag of goodies! Can't complain. :) BTW, please note that the whole issue of morphogen gradients formation and interpretation is not settled yet. There remain unanswered questions. Anyone interested in the subject may find many references to recent research papers on that topic in the discussion thread "Mystery at the heart of life" Y'all have a good weekend!Dionisio
February 18, 2017 at 09:59 AM PDT
gpuccio,
The whole acquisition of functional information in vertebrates for this process, as computed from only 5 of the 82 components, is at least of 3403 bits. I could easily compute it for the whole system of 82 proteins, but I think this should already give some idea of what we are dealing with.
Wow! That's a very good illustration! I see your point clearly. I think I learned another interesting lesson. Thank you!
Dionisio
February 18, 2017 at 09:42 AM PDT
Mung: I hope my post #55 is an answer to some of your questions.
gpuccio
February 18, 2017 at 05:32 AM PDT
Dionisio: Just an example. The human proteins returned by a UNIPROT search for the GO function "regulation of BMP signaling pathway", which is a morphogen-based function, are 82. I give you here, as an example, my computation of the vertebrate-related functional bits (defined as bits conserved in vertebrates, and which are not present in pre-vertebrates) of the first 5 of them, in the order they are given from UNIPROT:
1) Fibrillin-1 (P35555): 1299 bits
2) Bone morphogenetic protein 4 (P12644): 200 bits
3) Tyrosine-protein kinase ABL1 (P00519): 681 bits
4) Caveolin-1 (Q03135): 128 bits
5) Neurogenic locus notch homolog 1 (P46531): 1095 bits
These are just 5 out of 82, for the regulation of a morphogen-based process. The result is:
a) 3 proteins out of 5 have complex functionally specified information beyond the 500 bits threshold (two of them much beyond!).
b) 4 proteins out of 5 have complex functionally specified information beyond the threshold of 150 bits.
c) The whole acquisition of functional information in vertebrates for this process, as computed from only 5 of the 82 components, is at least of 3403 bits.
I could easily compute it for the whole system of 82 proteins, but I think this should already give some idea of what we are dealing with.
gpuccio
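The arithmetic above checks out; this small sketch just re-adds the five reported bit values and compares them against the two thresholds used in the comment:

```python
# Vertebrate-related functional bits reported for the first 5 of the 82 proteins
proteins = {
    "Fibrillin-1 (P35555)": 1299,
    "BMP4 (P12644)": 200,
    "ABL1 (P00519)": 681,
    "Caveolin-1 (Q03135)": 128,
    "NOTCH1 (P46531)": 1095,
}

total = sum(proteins.values())
over_500 = [name for name, bits in proteins.items() if bits > 500]
over_150 = [name for name, bits in proteins.items() if bits > 150]

print(total)          # 3403, matching the figure in the comment
print(len(over_500))  # 3 proteins beyond the 500-bit threshold
print(len(over_150))  # 4 proteins beyond the 150-bit threshold
```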
February 18, 2017 at 05:31 AM PDT
GPuccio: ... if we try to compute how many of those bits are really necessary for the protein function, then we have a measure of its functional information.
Just as a word cannot be understood without context (sentence, paragraph, ... , the language as a whole), one cannot understand 'function' without context. The number of really necessary bits can tell us that the implementation of a certain function in a certain (biological) context requires a lot of information.
Mung: Can it be quantified? Measured? What are it’s units of measurement?
We can measure the amount of information required to implement a function in some given context.
Origenes
February 18, 2017 at 04:57 AM PDT
Dionisio: Of course, for connected systems, we should sum the bits of functional information of each protein in the system to get the functional information for the whole system.
gpuccio
February 18, 2017 at 04:57 AM PDT
Dionisio: It is certainly functionally specified biological information: it is information which is necessary for some defined function, and it is present in biological objects. It is almost certainly complex at the threshold of 150 bits, and probably in many cases also at the threshold of 500 bits: but to be sure of that, we should quantify the functionally specified information for each protein. If you give me some names of proteins, I can tell you very quickly the bitscore of conserved information in vertebrates for those proteins, which is a way of measuring a well defined level of functional information.
gpuccio
February 18, 2017 at 04:53 AM PDT
Mung @48: I don't know how to answer those questions correctly. We'd better ask gpuccio to comment on what is written @47.
Dionisio
February 18, 2017 at 04:06 AM PDT
Mung: "Perhaps one day you can share the bioinformatics tools you use and show the rest of us how to do the same sort of research you are engaged in." Well, I have already explained in some detail what I do. The main tool I use is the online BLAST software at NCBI. Now I am using it locally, which is a little more complex. The information about proteins is derived mainly from UNIPROT.
gpuccio
February 18, 2017 at 02:37 AM PDT
Mung: Thank you for the kind words. You say: "So perhaps the answer is that biological information is actually a subset of functional information. But that would mean that all biological information had to be functional." No. The fact is that "biological information" is not a well-defined term. I would say:
a) "Information", in a general sense, is a very wide concept. According to Dembski, if I am not wrong, any result conveys information, because it eliminates other possible results. So, if we have a random string of, say, 100 values, obtained by tossing a coin 100 times, that string has high information in it: 100 bits. It "informs" us that, of all the possible sequences (2^100), that specific one occurred. So, if someone asks: "What sequence occurred when you tossed that coin 100 times?" he can expect any of 2^100 answers. When we give him the exact sequence, we reduce the possibilities to 1, the correct result. So, even in a Shannon sense, that reduction of uncertainty is due to the fact that we are conveying 100 bits of information. So, in this general sense, "information" is only a measure of the reduction of uncertainty that can be conveyed, in bits.
b) The term "meaningful information", or "functional information", instead, refers to that part of information that is linked to a specification, in the form of meaning or function. That is a subset of the total configurations that a string of bits can have. As we have seen, a string of 100 bits has the potential of conveying 100 bits of information, even if it is a random string. But, in terms of meaning or function, a random string has almost no information at all. Just try to define a function for a random string that is complex (of course, without using a posteriori the specific sequence of bits that you observe in the string). You just cannot find any complex function implemented by a random string. So, functional information is a subset of the generic concept of "information" (as described in a) ).
c) Now, I don't know what you mean by the term "biological information". Information is biological if we find it in biological objects. Biological information is not different from any other form of information. So, we can have biological information in the generic sense defined in a), or biological functional information as described in b). For example, if we have a protein that is 100 amino acids long, the potential information in that protein is about 430 bits. However, those 430 bits of "biological information" (in the general sense) have no necessary connection to a function. But, if we try to compute how many of those bits are really necessary for the protein function, then we have a measure of its functional information. Let's say that, using a metric based on conservation for long evolutionary times, as I have proposed, we conclude that the functional information in that protein is about 150 bits. That is a measure of the "biological functional information" in that protein. I believe that, when we speak of "biological information" in our discussions, we almost always mean "biological functional information". I usually specify that I am talking of functional information, because that is the only concept that is relevant for our discussion. So, it is obvious that "biological functional information" is functional: that's what it is by definition. "Biological information" in a general sense is very easy to compute: for a protein, it's enough to multiply the length in AAs by 4.3. But that is simply the search space for that protein, and in itself it has no special interest. It is simply a very trivial concept. Functional information, instead, be it biological or not, is the basic concept for ID theory, because it is the foundation for design inference.
gpuccio
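The "multiply the length in AAs by 4.3" rule of thumb is just log2 of the 20-letter amino-acid alphabet; a minimal sketch:

```python
import math

AA_ALPHABET = 20
bits_per_residue = math.log2(AA_ALPHABET)  # about 4.32 bits per amino acid

def search_space_bits(length_aa):
    # Generic information capacity of a protein sequence: the size of its
    # search space in bits, with no reference to function whatsoever.
    return length_aa * bits_per_residue

print(round(search_space_bits(100)))  # about 432 bits for a 100-AA protein
```

This reproduces the "about 430 bits" figure for a 100-amino-acid protein; measuring the *functional* subset of those bits is the separate, harder problem the comment describes.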
February 18, 2017 at 02:34 AM PDT
Dionisio, Can it be quantified? Measured? What are its units of measurement?
Mung
February 17, 2017 at 07:32 PM PDT
Morphogen gradients provide positional information that helps determine the fate of the receptor cells that interpret the concentrations. Is this another case of biological information? Is it complex? Is it functional? Is it specified?
Dionisio
February 17, 2017 at 06:29 PM PDT
Eric, I know you and I have had our differences in the past over this whole "information" topic, but now I am beginning to wonder why, lol. Exploring further the concept of information and aboutness.
Say someone tosses a coin, and they tell us that the coin landed heads up. Then we might say we have received one bit of information. This assumes the shannon measure in which each outcome is equally likely. We'll set aside how we know that or why we should assume it. :) So we might say that we received some quantity of information about the outcome of a coin toss, and that the quantity of information received was one bit. But this tells us nothing of why the coin was tossed. Which team got to choose whether to kick or receive, for example. There can be many reasons to toss a coin. Yet looking only at the physics of the coin toss might utterly overlook the message that is conveyed by the result. Any problems with that so far?
Now say we toss the coin 500 times. Someone might argue that in doing so we have generated 500 bits of information. Information about what? [A question Elizabeth Liddle never seemed to be able to answer.] Information about whether the coin is fair? Information about why the coin was being tossed in the first place? Information about the distribution of H and T? Information about how to build a bridge?
Frankly, I think what many people don't understand is that shannon information isn't even defined without a given probability distribution. But the shannon measure can be defined for any probability distribution. well ... enough rambling for one post ...
Mung
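Mung's point that the Shannon measure isn't even defined without a probability distribution can be made concrete. A minimal sketch, assuming independent tosses (the biased 90/10 coin is an illustrative assumption):

```python
import math

def entropy_bits(probs):
    # Shannon entropy H = -sum(p * log2(p)); it is defined only once a
    # probability distribution over the outcomes has been given.
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = entropy_bits([0.5, 0.5])    # 1.0 bit per fair toss
biased = entropy_bits([0.9, 0.1])  # under 0.5 bits per toss of a 90/10 coin

print(fair, 500 * fair)  # 500 independent fair tosses -> 500 bits of capacity
print(biased)            # same physical toss, different distribution, fewer bits
```

Note that this number says nothing about what the tosses are "about" — it only quantifies the channel capacity needed to report them.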
February 17, 2017 at 06:08 PM PDT
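Mung's point above — that one fair toss carries one bit, that independent tosses add, and that the Shannon measure is undefined without a probability distribution — can be made concrete with a short sketch. This is an editorial illustration, not part of the thread; the `surprisal` helper name is mine.

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content, in bits, of an outcome
    that occurs with probability p."""
    return -math.log2(p)

# One toss of a fair coin: each outcome has p = 0.5, so exactly 1 bit.
fair_toss_bits = surprisal(0.5)

# 500 independent fair tosses: information adds, giving 500 bits --
# but only under the assumed uniform distribution.
sequence_bits = 500 * surprisal(0.5)

# With a biased coin (p(heads) = 0.9), observing "heads" conveys
# far less than one bit, because it was expected.
biased_heads_bits = surprisal(0.9)  # roughly 0.152 bits

print(fair_toss_bits, sequence_bits, biased_heads_bits)
```

The same observed sequence of H and T thus "contains" a different number of bits depending on the distribution one assumes, which is the sense in which the measure says nothing about *why* the coin was tossed.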
gpuccio, my friend! I have the utmost respect for you. I hope you know that. If I had one complaint, it would be that you do not do enough OPs! Perhaps one day you can share the bioinformatics tools you use and show the rest of us how to do the same sort of research you are engaged in. Baby steps. :)

The original question had to do with the concept of biological information, and whether and in what meaningful sense biological information could be measured. It seems to me that your response is that we should consider some subset of biological information, which you refer to as functional information.

It also seems to me that in doing so we abandon the concept of biological information altogether, as evidenced by the ease with which you move outside biology in order to explain the concept (in programming terms). So perhaps the answer is that biological information is actually a subset of functional information. But that would mean that all biological information had to be functional. Thoughts?

Mung
February 17, 2017 at 05:49 PM PDT
Eric:
So, no. I don’t think we can throw out the entire concept of common descent.
I know, right? What concept would we replace it with? Even young earth creationism, with its disembarked animals, accepts the reality of common descent from an original pair or perhaps an initial population. I just don't know why this topic keeps coming up! :)

Mung
February 17, 2017 at 05:36 PM PDT
Eric @ 41
Why not? A designer isn’t necessarily constrained to a single approach. Why not create something that is able to develop and change over time? As I noted earlier, whether organisms really have that capability is a separate question, and front loading is at this stage highly speculative, but it is perfectly consistent with design.
All good points, but I think that if we make the case for staggered insertion of complex specified information, the last vehicle I would expect is directed point mutations. :-)

bill cole
February 17, 2017 at 04:55 PM PDT
bill cole @38:
This is a good point. What I think is the poster child for this theory is the claim that we share a common ancestor with apes and all mammals share a common ancestor. This does not appear to be how life is designed. It appears designed for animals to stay in their current form. The exceptions, like finch modifications, I agree with.
You are absolutely right that it appears organisms are designed to stay in their current form. Even the alleged exceptions confirm the rule. The real take-home lesson from the peppered moths, the finch beaks, the insects and insecticide, and all similar claims of evolution is this -- and you can take it to the bank: Organisms have the ability to temporarily oscillate around a norm, while ultimately avoiding fundamental change. This is the key observation from all such examples.

Eric Anderson
February 17, 2017 at 01:28 PM PDT
bill cole @39: Why not? A designer isn't necessarily constrained to a single approach. Why not create something that is able to develop and change over time? As I noted earlier, whether organisms really have that capability is a separate question, and front loading is at this stage highly speculative, but it is perfectly consistent with design.

Eric Anderson
February 17, 2017 at 01:13 PM PDT
gpuccio: I know you've talked in the past about the determination of function being a binary assessment. In general, I agree with you. But we shouldn't mistakenly think that means we are performing a calculation. All it means is that we are making a decision, yes or no. It is a decision point, but not a calculation.

Think of it this way: What formula are you using to calculate the function? I'm pretty sure I know the formula you are using to calculate the complexity. But what formula are you using to calculate the function? (And to Mung's point, what is the unit of measure?)

Eric Anderson
February 17, 2017 at 01:08 PM PDT
gpuccio
But, of course, it is possible to “evolve by design interventions”. Which is compatible with common descent. Guided mutations and intelligent selection of random mutations are possible procedures that can implement design by non random variation.
The issue here is that, IMO, we are thinking about this only because of the Darwinian paradigm. Why would a designer mutate a genome to new function? If you can design a human, why not just download the software mods (DNA sequences) completely designed?

bill cole
February 17, 2017 at 11:13 AM PDT
Eric
I think you have to be more specific. Not all ideas of common descent are broken. After all, presumably you accept the fact that you descended from your parents, grand parents, great-grand parents and so on. And presumably you accept the fact that you are slightly different from your great-great grandfather.
This is a good point. What I think is the poster child for this theory is the claim that we share a common ancestor with apes and all mammals share a common ancestor. This does not appear to be how life is designed. It appears designed for animals to stay in their current form. The exceptions, like finch modifications, I agree with.

bill cole
February 17, 2017 at 11:06 AM PDT
gpuccio @34:
Designed descent by designed modifications is a perfectly reasonable way to implement a new design. That’s what many programmers do when they implement new features in existing software.
That seems like a valid scientific inference from the available evidence and the known precedents.

Dionisio
February 17, 2017 at 10:49 AM PDT
Mung: I see that I have not fully answered your last question: "In what units is function measured?"

Function is not "measured". It is assessed as present or absent, according to some explicit definition and procedure. It is a binary variable, therefore it has no units. Your question is like asking in what units sex is measured.

Functional information, instead, is a numerical value: the number of bits that are necessary to implement the defined function. Therefore, the natural unit of measure for functional information is the bit.

gpuccio
February 17, 2017 at 09:57 AM PDT
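gpuccio's distinction above — function as a binary yes/no assessment, functional information as a number of bits — matches the Hazen/Szostak formulation, where functional information is minus the log of the fraction of sequences that pass the binary test. A minimal editorial sketch follows; the function name and the counts in the example are hypothetical, chosen only for illustration.

```python
import math

def functional_information_bits(functional_count: int, total_count: int) -> float:
    """Functional information, in bits: -log2 of the fraction of
    sequences in the search space that pass the binary test for
    the defined function (after Hazen/Szostak)."""
    return -math.log2(functional_count / total_count)

# Toy example: a 10-residue peptide over the 20-letter amino acid
# alphabet gives 20**10 possible sequences. Suppose (hypothetically)
# one million of them perform the defined function.
total = 20 ** 10
bits = functional_information_bits(10 ** 6, total)

print(round(bits, 2))  # roughly 23.29 bits
```

Note that the binary assessment of function supplies only the *count* of functional sequences; the bit value then falls out of the ratio, which is why function itself needs no unit while functional information is naturally measured in bits.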
bill cole: "This design flies in the face of any evolutionary concept of animal a and b evolving from animal c."

Yes, if you mean "evolving without any design intervention". But, of course, it is possible to "evolve by design interventions", which is compatible with common descent. Guided mutations and intelligent selection of random mutations are possible procedures that can implement design by non-random variation.

The problem of common descent should be analyzed and, I hope, solved in the end according to empirical observations and impartial reasoning, but it remains a problem not strictly connected to the problem of design.

gpuccio
February 17, 2017 at 09:54 AM PDT