Uncommon Descent | Serving The Intelligent Design Community

Intelligent Design Basics – Information – Part IV – Shannon II


The concept of information is central to intelligent design.  In previous discussions, we have examined the basic concept of information, we have considered the question of when information arises, and we have briefly dipped our toes into the waters of Shannon information.  In the present post, I put forward an additional discussion regarding the latter, both so that the resource is out there front and center and also to counter some of the ambiguity and potential confusion surrounding the Shannon metric.

As I have previously suggested, much of the confusion regarding “Shannon information” arises from the unfortunate twin facts that (i) the Shannon measurement has come to be referred to by the word “information,” and (ii) many people fail to distinguish between information about an object and information contained in or represented by an object.  Unfortunately, due to linguistic inertia, there is little to be done about the former.  Let us consider today an example that I hope will help address the latter.

At least one poster on these pages takes regular opportunity to remind us that information must, by definition, be meaningful – it must inform.  This is reasonable, particularly as we consider the etymology of the word and its standard dictionary definitions, one of which is: “the act or fact of informing.”

Why, then, the occasional disagreement about whether Shannon information is “meaningful”?

The key is to distinguish between information about an object and information contained in or represented by an object.

A Little String Theory

Consider the following two strings:

String 1:

kqsifpsbeiiserglabetpoebarrspmsnagraytetfs

String 2:

[Image: a length of white cotton string]

The first string is an essentially-random string of English letters.  (Let’s leave aside for a moment the irrelevant question of whether anything is truly “random.”  In addition, let’s assume that the string does not contain any kind of hidden code or message.  Treat it for what it is intended to be: a random string of English letters.)

The second string is, well, a string.  (Assume it is a real string, dear reader, not just a photograph – we’re dealing with an Internet blog; if we were in a classroom setting I would present students with a real physical string.)

There are a number of instructive similarities between these two strings.  Let’s examine them in detail.

Information about a String

It is possible for us to examine a string and learn something about the string.

String of Letters

Regarding String 1, we can quickly determine some of its characteristics and make the following affirmative statements:

1. This string consists of forty-two English letters.

2. This string has only lower-case characters.

3. This string has no spaces, numerals, or special characters.

It is possible for us to determine the foregoing based on our understanding of English characters, and given certain parameters (for example, I have stipulated in this case that we are dealing with English characters, rather than random squiggles on the page, etc.).  It is also possible to generate these affirmative statements about the string because the creator of the statements has a framework (namely, the English language) within which to create such statements and convey those three pieces of information.

In addition to the above quickly-ascertainable characteristics of the string, we could think of additional characteristics if we were to try.

For example, let’s assume that some enterprising fellow (we’ll call him Shannon) were to come up with an algorithm that allowed us to determine how much information could – in theory – be contained in a string having those three characteristics: forty-two English letters, only lower-case characters, and no spaces, numerals, or special characters.  Let’s even assume that Shannon’s algorithm required some additional given parameters in this particular case, such as the assumptions that all possible letters occur at least once, that letters occur with the relative frequencies at which they show up in the string, and so forth.  Shannon has also, purely as a convenience for discussing the results of his algorithm, given us a name for the unit of measurement it produces: the “bit.”

In sum, what Shannon has come up with is a series of parameters, a system for identifying and analyzing a particular characteristic of the string of letters.  And within the confines of that system – given the parameters of that system and the particular algorithm put forward by Shannon – we can now plug in our string and create another affirmative statement about that characteristic of the string.  In this case, we plug in the string, and Shannon’s algorithm spits out “one hundred sixty-eight bits.”  As a result, based on Shannon’s system and based on our ability in the English language to describe characteristics of things, we can now write a fourth affirmative statement about how many bits are required to convey the string:

4. This string requires one hundred sixty-eight bits.
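
For readers who want to see how such a number might be computed, here is a minimal sketch (in Python, purely for illustration; the specific parameters are my assumptions, not a claim about Shannon's own derivation).  The figure of one hundred sixty-eight bits corresponds to a convention of four bits per character; a calculation based on the letter frequencies observed in the string comes out a little lower, which only underscores the point that the number depends on the parameters we agree to use.

import math
from collections import Counter

def shannon_bits(s):
    # Length of s times its empirical per-symbol entropy: an estimate of
    # how many bits are needed to encode the string, given the letter
    # frequencies observed in the string itself.
    counts = Counter(s)
    n = len(s)
    entropy_per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy_per_char

string1 = "kqsifpsbeiiserglabetpoebarrspmsnagraytetfs"
print(round(shannon_bits(string1)))  # frequency-based estimate for String 1
print(len(string1) * 4)              # 168: forty-two characters at four bits each

Either output is simply another affirmative statement about the string, generated by applying an agreed-upon procedure to it.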

Please note that the above 4 affirmative pieces of information about the string are by no means comprehensive.  We could think of another half dozen characteristics of the string without trying too hard.  For example, we could measure the string by looking at the number of characters of a certain height, or those that use only straight lines, or those that have an enclosed circle, or those that use a certain amount of ink, and on and on.  This is not an idle example.  Font makers right up to the present day still take into consideration these kinds of characteristics when designing fonts, and, indeed, publishers can be notoriously picky about which font they publish in.  As long as we lay out with reasonable detail the particular parameters of our analysis and agree upon how we are going to measure them, then we can plug in our string, generate a numerical answer and generate additional affirmative statements about the string in question.  And – note this well – it is every bit as meaningful to say “the string requires X amount of ink” as to say “the string requires X bits.”
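
In the same sketch style, here is one of those other arbitrary measures applied to String 1: counting the characters that contain an enclosed loop.  Which letters count as loop letters is simply a convention assumed here for illustration; any agreed-upon list would serve.

# Another arbitrary, agreed-upon measure of String 1: how many of its
# characters contain an enclosed loop. The membership list below is an
# assumed convention, chosen only for illustration.
LOOP_LETTERS = set("abdegopq")
string1 = "kqsifpsbeiiserglabetpoebarrspmsnagraytetfs"
loop_count = sum(1 for ch in string1 if ch in LOOP_LETTERS)
print(loop_count)  # yields another affirmative statement about the string

The resulting number is information about the string in exactly the same sense that the bit count is.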

Now, let us take a deep breath and press on by looking back to our statement #4 about the number of bits.  Where did that statement #4 come from?  Was it contained in or represented by the string?  No.  It is a statement that was (i) generated by an intelligent agent, (ii) using rules of the English language, and (iii) based on an agreed-upon measurement system created and adopted by intelligent agents.  Statement #4, “The string requires one hundred sixty-eight bits,” is information – information in the full, complete, meaningful, true sense of the word.  But, and this is key, it was not contained in the artifact itself; rather, it was created by an intelligent agent, using the tools of analysis and discovery, and articulated using a system of encoded communication.

Much of the confusion arises in discussions of “Shannon information” because people reflexively assume that by running a string through the Shannon algorithm and then creating (by use of that algorithm and agreed-upon communication conventions) an affirmative, meaningful, information-bearing statement about the string, we have somehow measured meaningful information contained in the string.  We haven’t.

Some might argue that while this is all well and good, we should still say that the string contains “Shannon information” because, after all, that is the wording of convention.  Fair enough.  As I said at the outset, we can hardly hope to correct an unfortunate use of terminology and decades of linguistic inertia.  But we need to be very clear that the so-called Shannon “information” is in fact not contained in the string.  The only meaning we have anywhere here is the meaning Shannon has attached to the description of one particular characteristic of the string.  It is meaning, in other words, created by an intelligent agent upon observation of the string and using the conventions of communication, not in the string itself.

Lest anyone is still unconvinced, let us hear from Shannon himself:

“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.  Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities.  These semantic aspects of communication are irrelevant to the engineering problem.” (emphasis added)*

Furthermore, in contrast to the string we have been reviewing, let us look at the following string:

“thisstringrequiresonehundredsixtyeightbits”

What makes this string different from our first string?  If we plug this new string into the Shannon algorithm, we come back with a similar result: 168 bits.  The difference is that in the first case we were simply ascertaining a characteristic about the string.  In this new case the string itself contains or represents meaningful information.
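
Running the same sketch from above on this new string gives the same kind of answer (again assuming the four-bits-per-character convention used earlier):

string2 = "thisstringrequiresonehundredsixtyeightbits"
print(len(string2))      # also forty-two characters
print(len(string2) * 4)  # 168 bits under the same convention

The measurement cannot tell the two strings apart; only an intelligent agent reading the second string recognizes that it also carries a meaningful message.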

String of Cotton

Now let us consider String 2.  Again, we can create affirmative statements about this string, such as:

1. This string consists of multiple smaller threads.

2. This string is white.

3. This string is made of cotton.

Now, critically, we can – just as we did with our string of letters – come up with other characteristics.  Let’s suppose, for example, that some enterprising individual decides that it might be useful to know how long the string is.  So we come up with a system that uses some agreed-upon parameters and a named unit of measurement.  Hypothetically, let’s call it, say, a “millimeter.”  So now, based on that agreed-upon system we can plug in our string and come up with another affirmative statement:

4. This string is one hundred sixty-eight millimeters long.

This is a genuine piece of information – useful, informative, meaningful.  And it was not contained in the string itself.  Rather, it was information about the string, created by an intelligent agent, using tools of analysis and discovery, and articulated in an agreed-upon communications convention.

It would not make sense to say that String 2 contains “Length information.”  Rather, I assign a length value to String 2 after I measure it with agreed-upon tools and an agreed-upon system of measurement.  That length number is now a piece of information which I, as an intelligent being, have created and which can be encoded and transmitted just like any other piece of information and communicated to describe String 2.

After all, where does the concept of “millimeter” come from?  How is it defined?  How is it spelled?  What meaning does it convey?  The concept of “millimeter” was not learned by examining String 2; it was not inherent in String 2.  Indeed, everything about this “millimeter” concept was created by intelligent beings, by agreement and convention, and by using rules of encoding and transmission.  Again, nothing about “millimeter” was derived by or has anything inherent to do with String 2.  Even the very number assigned to the “millimeter” measurement has meaning only because we have imposed it from the outside.

One might be tempted to protest: “But String 2 still has a length, we just need to measure it!”

Of course it does, if by having a “length” we simply mean that it occupies a region of space.  Yes, it has a physical property that we define as length, which, understood at its most basic, simply means that we are dealing with a three-dimensional object existing in real space.  That is, after all, what a physical object is.  That is to say: the string exists.  And that is about all we can say about the string unless and until we start to impose – from the outside – some system of measurement or comparison or evaluation.  In other words, we can use information that we create to describe the object that exists before us.

Systems of Measurement

There is no special magic or meaning or anything inherently more substantive in the Shannon measurement than in any other system of measurement.  It is no more substantive to say that String 1 contains “Shannon information” than to say String 2 contains “Length information.”  This is true notwithstanding the unfortunate popularity of the former term and the blessed absence in our language of the latter term.

This may seem rather esoteric, but it is a critical point and one that, once grasped, will help us avoid no small number of rhetorical traps, semantic games, and logical missteps:

Information can be created by an intelligent being about an object or to describe an object; but information is not inherently contained in an object by its mere existence.

We need to avoid the intellectual trap of thinking that just because a particular measurement system calls its units “bits” and has unfortunately come to be known in common parlance as Shannon “information,” such a system is any more substantive or meaningful or informative, or inherently contains more “information,” than a measurement system that uses units like “points” or “gallons” or “kilograms” or “millimeters.”

To be sure, if a particular measurement system gains traction amongst practitioners as an agreed-upon system, it can then prove useful to help us describe and compare and contrast objects.  Indeed, the Shannon metric has proven very useful in the communications industry; so too, the particular size and shape and style of the characters in String 1 (i.e., the “font”) is very useful in the publishing industry.

The Bottom String Line

Intelligent beings have the known ability to generate new information by using tools of discovery and analysis, with the results being contained in or represented by a code, language, or other form of communication system.  That information arises as a result of, and upon application of, those tools of discovery and can then be encoded.  And that information is information in the straightforward, ordinary understanding of the word: that which informs and is meaningful.  In contrast, an object by its mere existence, whether a string of letters or a string of cotton, does not contain information in and of itself.

So if we say that Shannon information is “meaningful,” what we are really saying is that the statement we made – as intelligent agents, based on Shannon’s system and using our English-language conventions – to describe a particular characteristic of the string is meaningful.  That is of course true, but not because the string somehow contains that information; rather, because the statement we created is itself information – information created by us as intelligent agents and encoded and conveyed in the English language.

This is just as Shannon himself affirmed.  Namely, the stuff in String 1 has, in and of itself, no inherent meaning.  And the stuff that has the meaning (the statement we created about the number of bits) is meaningful precisely because it informs, because it contains information, encoded in the conventions of the English language, and precisely because it is not just “Shannon information.”

—–

* Remember that Shannon’s primary concern was communication; more narrowly, the technology of communication systems.  The act of communication and its practical requirements are usually related to, but are not the same thing as, information.  Remembering this can help keep things straight.

Comments
Dionisio: For some reason, the link does not work for me. And I have not received your email! Some adverse destiny must be at work :)gpuccio
October 20, 2014, 09:17 AM PDT
"Your string 1 may be considered complex but it doesn’t have any specification other than the string itself. OTOH Hamlet’s soliloquy is both complex and specified." Precisely.Eric Anderson
October 20, 2014, 09:04 AM PDT
gpuccio @33: I think you're right that, although transmission aspects do occur in biology, for the most part ID is interested in the generation of the information. The Shannon metric is useful as a way to calculate complexity for certain things (strings of characters being our typical example), but in some ways it was really intended to drive toward a different question. As a result, the focus on "Shannon information" by some ID proponents has been something of a distraction, in my opinion, although it is hard to come up with another simple, easy-to-use avenue to calculate complexity, so I'm not sure what else to use.Eric Anderson
October 20, 2014, 09:04 AM PDT
Eric:
CSI incorporates the concept of complexity as the “C”, which can be measured in a variety of ways, including by Shannon’s metric (for strings of characters/symbols). But CSI is broader than that, in that CSI ties that complexity calculation to real-world function/meaning.
Oops, I was mainly referring to the "specification" part. Your string 1 may be considered complex but it doesn't have any specification other than the string itself. OTOH Hamlet's soliloquy is both complex and specified.Joe
October 20, 2014, 03:41 AM PDT
Eric: I agree with what you say. ID is obviously more than Shannon's theory: its purpose is to infer design. The specification, and the computation of the complexity linked to it, are certainly peculiar aspects of ID theory. Shannon's metric, in the right context, is a very good way to measure that complexity (as shown by Durston's application). The specification, in the traditional ID inference, can be considered a binary measure: either it is there, or not. As you know, there are different kinds of specification. I generally use functional specification, but indeed any objectively and explicitly defined rule which generates a binary partition in the search space is an example of specification. Once a specification is given, we can compute the complexity linked to it (the target space / search space ratio, given the partition generated by that specification).

Another example of specification is pre-specification. That is more similar to the Shannon scenario. Let's say that I have a random string, and I give my specification as "this string". That is valid, but only if used as a pre-specification. IOWs, I can measure the probability of getting exactly that string in a new random search, and the probability will be 1:search space (there is only one string identical to the one which specifies). So, we could say that Shannon pre-specifies his string: this is the string that we want to communicate. He is using a pre-specification, and not a specification based on meaning and function. That's why he can avoid dealing with meaning and function.

Another consequence of the binary form of specification in the ID inference is that the inference is the same for any kind of specification, once we get some specified complexity. IOWs, we infer design with the same degree of "strength", according to the value of the specified complexity, for any kind of specification (meaning, function, pre-specification, special forms of order). In that sense, the inference is similar for a Shakespeare sonnet and for one random pre-specified string which is successfully "generated" again after having been specified.

The type of specification, however, can help us very much in the other part of the ID inference: excluding explanations based on necessity. Indeed, while both pre-specification and specification given by some special order can often be explained by some necessity mechanism already existing in the system, and in that case do not allow a design inference, the specification based on meaning or function is the best of all: necessity algorithms cannot explain meaning or function or, to explain it, they must usually be much more complex than the observed object itself.

Finally, you say: "I like the idea of it being primarily a generation problem; that is probably true. I'm wondering, however, if ID is limited to generation? There are aspects of biology that deal with transmission of information, so it seems transmission is relevant as well." Well, certainly there are many aspects of biology that deal with transmission of information, and they grow every day. But the point is: the ID interest in that case would be: how did the system which transmits information in this biological being arise? How was it generated? Because you need an informationally complex functional system to transmit information. So, in the end, ID is always interested in the design inference, IOWs, in identifying the origin of information from a conscious agent.
gpuccio
October 20, 2014, 01:33 AM PDT
Roy:
So DNA does not contain information.
DNA most certainly does contain information. That is why I said information is not inherently contained in an object by its mere existence. The fact that information can be encoded in a physical medium (like DNA) is quite clear. gpuccio has given a good answer, but you might check out this OP, as I address this issue directly, including the DNA situation: https://uncommondescent.com/informatics/intelligent-design-basics-information/Eric Anderson
October 19, 2014, 05:39 PM PDT
Glenn @14: Good thoughts. BA77: Thanks for the references to some preliminary calculations that give us a hint as to the amount of information in a cell.Eric Anderson
October 19, 2014, 05:32 PM PDT
Joe @13:
That said, this is why, IMO, Dembski came up with “complex specified information”- to differentiate between the meaningless and the meaningful. Where Shannon is all about information carrying capacity, Dembski is all about what that information is/ does/ means.
Exactly. CSI incorporates the concept of complexity as the "C", which can be measured in a variety of ways, including by Shannon's metric (for strings of characters/symbols). But CSI is broader than that, in that CSI ties that complexity calculation to real-world function/meaning.Eric Anderson
October 19, 2014, 05:29 PM PDT
Glenn @11: Thank you for your thoughtful comment, which highlights the fact that Shannon was interested in communication and the communication process, not on the substantive content of the string. I think your example of the WWII operator’s role is spot on. The reason this discussion comes up in the intelligent design debate is not because we are talking so much about faithful communication of a pre-existing information-rich string, but because we are interested in how information-rich strings get produced in the first place. (gpuccio raised this point and I responded above @28, with one caveat about transmission in biological systems.) What typically happens in the ID debate is an exchange something like the following:
ID proponent: This string is highly improbable and is therefore unlikely to have been stumbled upon by chance. Indeed, here is my calculation of how much “Shannon information” there is, and the odds of that string arising by chance exceed the probabilistic resources of the known universe.

ID opponent: That is silly. I can generate strings that have that much Shannon information all day. Look, this is my wonderful evolutionary algorithm I wrote on my computer. It easily generates strings with as much or more information than your string.

ID proponent: But, but . . . your string doesn’t do anything, it doesn’t have any meaning or function.

ID opponent: Who cares? My string has just as much or more Shannon information than your string. So obviously information – lots of it – can be generated through natural evolutionary processes and, therefore, everything you say is wrong and ID is bunk.
The issue I am addressing is not whether for communication purposes the operator of the communications equipment is obligated to treat the string as chock-full of information (the operator is, whether due to military orders in your WWII example or due to contractual obligations like our modern telecommunications operators). The issue I am addressing is that the number of bits required to transmit a string (the Shannon metric) is independent of whether the underlying string constitutes meaningful information.

Yes, as I said in the OP, by definitional fiat we could call everything that is transmitted through a communications network “information.” But doing so robs the word of content and is utterly useless in addressing the all-too-familiar disconnect between the ID proponent and the ID opponent. Furthermore, such a generalized, “everything-is-information” approach is anathema to the “information” we are interested in for purposes of ID.

I would note that you said everything transmitted by the operator was “pure information.” But that statement is possible only because you assumed at the outset that every message was “packed completely full of information.” That is an assumption that simply does not hold in the situation of trying to generate meaningful biological information from a string of nucleotides or amino acids. Indeed, that assumption is known not to be the case.

So, contrary to the low-level WWII operator who is obligated by virtue of his job to assume that every string handed to him is meaningful information, the status of a string in the real world – particularly one that we come across and have to determine whether it is designed or not – is very much an open question. As a result, there is very much a legitimate question as to whether or not the string contains real information. We can’t get around this real-world distinction by simply defining every string as “information.”
Eric Anderson
October 19, 2014, 05:26 PM PDT
gpuccio @7: Some good comments. A couple of quick reflections:
Shannon’s theory is obviously about communication of a string which is a message. It is true that he does not enter into the debate of why that string is a message. But he assumes that it is the message that we want to communicate. So, the string always has the meaning of being the message we want to communicate. IOWs, it is functional in conveying a desired message.
I believe I understand what you are saying, with one important caveat. Shannon calls the string to be communicated a “message” because that is the term used in discussing communication. But he does not assume that it is functional or that it conveys anything in particular. Indeed, he assumes that those aspects are irrelevant. It doesn’t matter whether the “message” to be communicated is what we would normally understand as a message: namely, an encoded piece of information intended for receipt and decoding by a recipient. It could just as well be a jumble of random nonsense.
Shannon’s theory is about the answer to a simple question: how many bits are needed to communicate that specific message?
Agreed.
So, he is measuring functional information (in respect to a communication problem).
Again, he isn’t interested in whether the message has any function or any meaning. He is only interested in how many bits are required (as you noted); in other words, he is interested in how big the pipeline has to be to carry the string, whether or not the string is meaningful or nonsensical.
ID is measuring functional information in respect to a generation problem.
I like the idea of it being primarily a generation problem; that is probably true. I’m wondering, however, if ID is limited to generation? There are aspects of biology that deal with transmission of information, so it seems transmission is relevant as well.
Now, the important point that I want to suggest is that neither Shannon’s theory of communicating the message nor ID theory of generating the message are really “qualitative”. Both are “quantitative” theories. In a sense, neither deals with the problem of “what is meaning” and “what is function”.
I agree that Shannon’s theory is explicitly non-qualitative. However, ID is very much interested in the qualitative aspect. Behe’s whole notion of irreducible complexity (which is, after all, a subset of CSI) is built upon identifying and appreciating real-world functionality. It is only the “C” part of CSI that can be related to Shannon’s theory. And I agree that Shannon’s metric can be a useful metric to use when analyzing the complexity of a particular string. It is less useful in direct application to physical machines in real three-dimensional space, but even there, as kairosfocus has pointed out, we can perhaps use some kind of complexity calculation if we consider the amount of information required to specify such a machine (such as in an AutoCAD file). But the “S” part of CSI is not equivalent to Shannon’s metric. Indeed, it is precisely the thing that Shannon said he was not addressing, namely, meaning, function, purpose, etc.
Shannon generates a partition in the space of all possible communicated strings: this string is the message, all other strings are not. ID generates a partition in the space of all possible generated strings: this set of strings is functional (the target space); all the others are not. At that point, both reasonings are interested only in quantitative measurements: how many bits are necessary to convey the message, how many bits are necessary to implement the defined function. Neither theory really measures the meaning or the function. They only measure the quantitative bits necessary to deal with that meaning/function in a specific context (communicate it/generate it). OK?
I see where you are headed, and I think we are largely on the same page. Let me try to state it this way: We cannot measure something like meaning or function in the same way that we can measure complexity. So we are reduced to just measuring complexity. In Shannon’s case, that is all he was interested in. In ID’s case we typically only measure complexity once we have already identified that there is a meaning or function. (In other words, once we see a “specification”.)
Does that quantitative consideration tell us anything specific about the three different “messages”? No. They are very different, not only because they convey different meanings, but also because those meanings are of very different type and quality.
Agreed.
But our “quantitative” theories (both Shannon’s theory and ID) are not really dealing with that aspect.
Agreed, as to the complexity calculation. But ID is broader than Shannon’s theory. It necessarily encompasses not only a complexity calculation but a (in my view non-calculable) recognition of the specification (i.e., a meaning, function, purpose, etc.). If ID is only about calculating complexity then we don’t need ID. Dembski’s whole insight, if you will, about how we detect design was the recognition that we have to tie the complexity calculation to a specification. So, yes, ID includes a component of complexity that is similar to Shannon’s metric (and indeed, we can use Shannon’s direct metric to calculate it in many cases). But ID is broader than Shannon’s concept and additionally includes the tying of that calculation to a specification.Eric Anderson
October 19, 2014, 04:50 PM PDT
All, thanks for the good thoughts. I wish I had time for a detailed discussion of all the excellent comments on this thread, but perhaps I can at least offer a couple of quick reactions on a few of them. For convenience of discussion, I will do so in separate individual comments later today.Eric Anderson
October 19, 2014, 04:09 PM PDT
gpuccio: Apparently groovamos fixed the link in his post #15. It gave me a 6.98 MB file named blog_attachment.pdf, which I just emailed to you. Check your email; the document title is "information theory and communication systems".
Dionisio
October 19, 2014, 03:33 PM PDT
http://posterwall.com/blog_attachment.php?attachmentid=4573&d=1318295130 groovamos' link takes me to a PDF document on information theory and communicationDionisio
October 19, 2014, 03:22 PM PDT
groovamos: the link does not work, apparently.gpuccio
October 19, 2014, 02:17 PM PDT
Roy: It should be simple; why don't you understand? We can derive information from any object about the object itself. So, any object is a source of information about itself. But that does not mean that the object conveys meaningful information about something else. A DNA protein coding gene certainly can give us information about itself, like any other object: it is a molecule, made of atoms, and so on. It has molecular weight, and so on. But the sequence of nucleotides in it is another matter altogether: it describes something else, a functional protein. With the correct procedures, it can convey that meaningful information to a protein synthesis system, and indeed it does exactly that in the cell. So, a water molecule is a molecule, but it has no meaningful information about anything else. A protein coding gene is a molecule, but it conveys in its symbolic sequence very meaningful information about something else. Should be simple.
gpuccio
October 19, 2014, 02:11 PM PDT
Semi related: Programming of Life - video: https://www.youtube.com/watch?v=mIwXH7W_FOk Seminar by Dr. Don Johnson, Apologetics Symposium, Sept. 2014, Cedar Park Church, Bothell WA
bornagain77
October 19, 2014, 12:37 PM PDT
Information can be created by an intelligent being about an object or to describe an object; but information is not inherently contained in an object by its mere existence.
So DNA does not contain information.
Roy
October 19, 2014, 12:34 PM PDT
This 'information' is one slippery beast. On par with entropy, and most interestingly, possibly intimately connected with it. But it is certainly real, whether or not we can pin it down yet. Although with darwinist harping about the lack of formality, one could think they don't believe in it as a real thing. Perhaps it is ALL meaning, 'information' being 100% immaterial. Created solely by non-physical minds, any physical arrangement related to information is just that - a relation.butifnot
October 19, 2014, 10:34 AM PDT
GBDixon: Thank you again for your very useful comments in #14. :) As BA has already mentioned, your approach is completely compatible with the concept of functional information in biological molecules, and in particular in functional proteins, and with the ways used to compute it and to infer design according to the complexity observed.gpuccio
October 19, 2014, 09:38 AM PDT
Moreover, Dr Andy C. McIntosh, Professor of Thermodynamics at the University of Leeds, holds that it is non-material information which is constraining "the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy'
Information and Thermodynamics in Living Systems - Andy C. McIntosh - May 2013 Excerpt: The third view then that we have proposed in this paper is the top down approach. In this paradigm, the information is non-material and constrains the local thermodynamics to be in a non-equilibrium state of raised free energy. It is the information which is the active ingredient, and the matter and energy are passive to the laws of thermodynamics within the system. As a consequence of this approach, we have developed in this paper some suggested principles of information exchange which have some parallels with the laws of thermodynamics which undergird this approach.,,, http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0008 - Dr Andy C. McIntosh is the Professor of Thermodynamics (the highest teaching/research rank in the U.K.) at the University of Leeds.
Dr. McIntosh's contention that 'non-material' information is constraining "the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy' has now been verified. In showing you this verification, first it is important to learn that 'non-local', beyond space and time, quantum entanglement, can be used as a 'quantum information channel',,,
Quantum Entanglement and Information Quantum entanglement is a physical resource, like energy, associated with the peculiar nonclassical correlations that are possible between separated quantum systems. Entanglement can be measured, transformed, and purified. A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems. The general study of the information-processing capabilities of quantum systems is the subject of quantum information theory. http://plato.stanford.edu/entries/qt-entangle/
And this 'non-material', non-local, beyond space and time quantum entanglement/information, is now found, as Dr. McIntosh had theorized, to be 'holding life together', i.e. 'constraining "the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy''
Quantum entanglement holds together life’s blueprint – 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours (arxiv.org/abs/1006.4053v1). “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
classical 'digital' information is found to be a subset of ‘non-local' (i.e. beyond space and time) quantum entanglement/information by the following method:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm
,,,And here is the evidence that quantum information is in fact ‘conserved’;,,,
Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html Quantum no-deleting theorem Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. http://en.wikipedia.org/wiki/Quantum_no-deleting_theorem#Consequence
Besides providing direct empirical falsification of neo-Darwinian claims as to the generation of information, the implication of finding 'non-local', beyond space and time, and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068
Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men. ROYAL TAILOR - HOLD ME TOGETHER https://www.youtube.com/watch?v=vbpJ2FeeJgw
bornagain77
October 19, 2014, 07:56 AM PDT
GBDixon as to:
"But this is how I would go about approaching the problem of how much information a cell contains."
Your approach is called 'functional' information (or more precisely dCSI by gpuccio), and functional information has been fleshed out for proteins to a certain extent by Dr. Szostak, Dr. Johnson, Dr. Abel, Dr. Durston and others,,,
Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak: Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions. http://www.pnas.org/content/104/suppl.1/8574.full Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - video https://vimeo.com/1775160 Programming of Life - Information - Shannon, Functional & Prescriptive – video https://www.youtube.com/watch?v=h3s1BXfZ-3w Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47
At the 17 minute mark of the following video, Winston Ewert uses a more precise metric to derive a higher value for the functional information inherent in proteins:
Proposed Information Metric: Conditional Kolmogorov Complexity (Ewert) - July 2012 - video http://www.youtube.com/watch?v=fm3mm3ofAYU
Here Kalinsky gives a 'ballpark figure' for the functional infomation required for the simplest life:
Intelligent Design: Required by Biological Life? K.D. Kalinsky - Pg. 11 Excerpt: It is estimated that the simplest life form would require at least 382 protein-coding genes. Using our estimate in Case Four of 700 bits of functional information required for the average protein, we obtain an estimate of about 267,000 bits for the simplest life form. Again, this is well above Inat and it is about 10^80,000 times more likely that ID (Intelligent Design) could produce the minimal genome than mindless natural processes. http://www.newscholars.com/papers/ID%20Web%20Article.pdf
Yet GBDixon there is another way that, IMHO, tells us more precisely how much information a cell contains,,, First, in this endeavor, it is important to learn that information resides throughout the cell, not just in DNA sequences,,
“Live memory” of the cell, the other hereditary memory of living systems - 2005 Excerpt: To understand this notion of “live memory”, its role and interactions with DNA must be resituated; indeed, operational information belongs as much to the cell body and to its cytoplasmic regulatory protein components and other endogenous or exogenous ligands as it does to the DNA database. We will see in Section 2, using examples from recent experiments in biology, the principal roles of “live memory” in relation to the four aspects of cellular identity, memory of form, hereditary transmission and also working memory. http://www.ncbi.nlm.nih.gov/pubmed/15888340
As well, in our endeavor to understand precisely how much information a cell contains, it is important to learn that 'The equations of information theory and the second law are the same',,,
"Is there a real connection between entropy in physics and the entropy of information? ....The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental..." Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]
And by using this relationship between entropy and information, researchers have calculated, from thermodynamic considerations, that 'a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica',,,
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong 'The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica." Carl Sagan, "Life" in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
For calculations, from the thermodynamic perspective, please see the following site:
Molecular Biophysics – Information theory. Relation between information and entropy: - Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k In 2), we find that the information content is 4 x 10^12 bits. Morowitz' deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. https://docs.google.com/document/d/18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE/edit
Moreover, through many years of extreme effort, this connection between thermodynamics and information was finally experimentally verified to be true,,,
Demonic device converts information to energy - 2010 Excerpt: "This is a beautiful experimental demonstration that information has a thermodynamic content," says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. "This tells us something new about how the laws of thermodynamics work on the microscopic scale," says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform
bornagain77
October 19, 2014, 07:55 AM PDT
Glenn @14 - that is Crick's version of biological information: "Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein." Sir Francis Crick, in "Central Dogma"
Joe
October 19, 2014, 07:14 AM PDT
I have to admit that I do not understand the OP at all, and the thread as a whole. For example, I can't tease out any understanding from "4. This string requires ..." and the two paragraphs preceding. I have to say that I hope not too many other EE's from the opposing camp will be sizing us up here. But as a good sport I'm going to link to a chapter by Carlson from his textbook which I have. You guys should read it if you want a concise summary of what Shannon and his colleagues were after. This chapter is similar to, but more extensive than, the 1949 Shannon paper (not the more famous 1948 paper). If you guys as non-specialists are looking for understanding, I recommend going to the horse's mouth - download this, and I hope you have some better understanding than I'm having right now, then maybe we can get onto the same page: http://posterwall.com/blog_attachment.php?attachmentid=4573&d=1318295130
groovamos
October 19, 2014, 07:13 AM PDT
People seem to struggle with the idea of how to apply information theory to biology, and there are lots of confusing or conflicting ideas. But here is a simple exercise that can place a lower bound on the information content of a cell.

1. We take a DNA sequence that encodes to a protein. We call this a valid message because we know there is a receiver (the cell) that does something with it: make a protein.

2. For this section of DNA there are a multitude of possible DNA combinations that do not do anything useful, or are harmful. We may be able to find a few more combinations that encode to other useful proteins (experts could comment on that) or that encode to the same protein (some sort of redundancy). These would be added to the set of valid messages.

3. The number of valid messages and the number of all possible DNA sequences (total messages) are then used to determine the information content of that DNA sequence.

4. This can be done for every DNA sequence that has a known function, and the results combined to give a lower bound on the cell's DNA information content.

I am out of my comfort zone here...my expertise is in digital communication theory. But this is how I would go about approaching the problem of how much information a cell contains.

Glenn
GBDixon
October 19, 2014, 06:33 AM PDT
GBDixon hit it- good job. The transmitting and receiving equipment doesn't give a darn about meaning. However we care that the number of bits transmitted = the number of bits received. And that is where Shannon comes in. That said, this is why, IMO, Dembski came up with "complex specified information"- to differentiate between the meaningless and the meaningful. Where Shannon is all about information carrying capacity, Dembski is all about what that information is/ does/ means.Joe
October 19, 2014, 06:25 AM PDT
GBDixon: Thank you for the very clear contribution. :)gpuccio
October 19, 2014, 06:15 AM PDT
Hi Eric, all, I'll pop in again as this is the only topic I feel qualified to comment on. I think you are making this too complicated, Eric, and you are violating an assumption Shannon made. Perhaps this illustration can help:

You are the radio operator on a WWII ship. Every message given to you to send is in code. You have no idea what the message means: that is not your job. Your job is to make sure the message is sent and that it is received absolutely correctly on the other end. In our modern world we do the same thing but our computers do it for us (TCP/IP is such an example). The computer makes no judgement on the meaning of the message, it just sends it where it has been commanded to go.

The point is, at the level where Shannon information is calculated, the assumption is that EVERY message given to us is packed completely full of information. It is pointless to speculate on whether a random ASCII string has information or not. In the low-level communication domain Shannon and we are working in, we assume the string has the maximum content of information.

As the WWII radio operator, you attach extra information to your message that will help the receiver to calculate if he or she (lots of women radio operators in that day) got it right. Shannon does this as well, but in a formal manner. The extra information makes the set of valid messages that can be received much, much smaller than all possible messages, and when an invalid message is received the error correction codes map the received message to the closest valid message.

Shannon de-emphasizes meaning because he is talking about the things the radio operator must worry about. The assumption is that higher-ups encoded meaning, and will decode it on the other end. The message may contain gibberish as an enemy decoy, it may have extra characters appended (common practice), but the radioman is not the judge of that: it is all information to him. Further, he has no way to tell what or how much meaning each message has. The receiver of the message attaches meaning to it according to pre-arranged rules. So Shannon's assumption is that all stuff going through a Shannon channel is pure information.

Here is an example message:

ogjkesijdydnzobiweucgoiqme

We are not allowed to comment on whether or not this is information at the Shannon level: it is. In this case, it would be a very common type of message: it happens to be a portion of the first paragraph of your article, zipped, then represented by the lower case ascii characters. A computer may send exactly this message. I realize you stated your string does not represent anything, but that is against the rules. The assumption, again, is that all stuff sent is pure information.

As an interesting sidelight, speaking of random strings, information sent at near-channel-capacity is indistinguishable (to the uninformed observer) from random noise.

Glenn
GBDixon
October 19, 2014, 05:56 AM PDT
Mung: Thanks for your thoughts. Just a couple of quick comments before the sandman arrives: The information (as you have regularly and correctly pointed out, the "meaningful" aspect) is (i) created by an intelligent being, (ii) using an arbitrary, agreed-upon, measurement system, and (iii) expressed in an arbitrary agreed-upon unit, which (iv) can then be encoded and communicated to other intelligent beings, who (v) can then decode and understand the information because they are also familiar with the measurement system and units in question. In other words, the entire exercise of creating what we refer to as "Shannon information" is a mental activity (or investigative activity) if you will. The statement: "This string requires one hundred sixty-eight bits" is information, yes. But not because that was somehow contained in String 1, but because it was created by an intelligent being to describe String 1. Also, the reason I highlighted these two strings: kqsifpsbeiiserglabetpoebarrspmsnagraytetfs and thisstringrequiresonehundredsixtyeightbits is because the Shannon measurement (using the particular parameters I have outlined) spits out the same result. In other words, one particular way of measuring a string of characters happens to give the same answer for both strings. However, the strings are obviously quite different in their actual content. With each string of characters we can create a new piece of information, namely, a description in the English language that tells us something about the string. But only the second string contains information in its own right. I would also add, that I believe the only logical alternative to what I have outlined is to argue that everything, everywhere contains information. Such a view is not only contrary to our experience and understanding of what information is, but is utterly useless because we then have no ability to determine when we are dealing with information. I discussed this in detail in an earlier thread, so I won't get into the details here unless people are interested. But the upshot is that in order to have any rational conversation about information at all, we must distinguish between information about an object and information contained in an object.Eric Anderson
October 18, 2014, 10:34 PM PDT
Warren Weaver:
In fact, two messages, one of which is heavily loaded with meaning and the other of which is pure nonsense, can be exactly equivalent, from the present viewpoint, as regards information.
Mung
October 18, 2014, 08:12 PM PDT
One if by land, two if by sea. In binary: 0 if by land, 1 if by sea. Reduced further: 0 or 1. Assuming a finite alphabet where each symbol is equally likely, and given the selection (actualization) of one or the other, we can say we have an amount of information of one bit. But this measure of information tells us nothing about the meaning of the "0" or the meaning of the "1."
“One, if by land, and two, if by sea” phrase was coined by the American poet, Henry W. Longfellow in his poem, Paul Revere’s Ride. It was a reference to the secret signal orchestrated by Revere during his historic ride from Boston to Concord on the verge of American Revolutionary War. The signal was meant to alert patriots about the route the British troops chose to advance to Concord. - http://www.paul-revere-heritage.com/one-if-by-land-two-if-by-sea.html
It ought to go without saying, but alas, it needs to be said: from the fact that this measure of information tells us nothing about the meaning of the "0" or the meaning of the "1", it does not follow that there is, or even can be, such a thing as "meaningless information."
Mung
October 18, 2014, 07:51 PM PDT