Uncommon Descent Serving The Intelligent Design Community

At Some Point, the Obvious Becomes Transparently Obvious (or, Recognizing the Forrest, With all its Barbs, Through the Trees)


At UD we have many brilliant ID apologists, and they continue to mount what I perceive as increasingly unanswerable assaults on the creative powers of the Darwinian mechanism of random errors filtered by natural selection. In addition, they present overwhelming positive evidence that the only known source of functionally specified, highly integrated information-processing systems, with such sophisticated technology as error detection and repair, is intelligent design.

[Part 2 is here. ]

This should be obvious to any unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline.

Here is my analysis: The Forrests of the world don’t want to admit that there is design in the universe and living systems — even when the evidence bludgeons them over the head from every corner of contemporary science, and when the trajectory of the evidence makes their thesis less and less believable every day.

Why would such a person hold on to a transparently obvious 19th-century pseudo-scientific fantasy, when all the evidence of modern science points in the opposite direction?

I can see the Forrest through the trees. Can you?

Comments
Upright BiPed: Thank you for your long and thoughtful post. No problem about the delay - a slow pace suits me right now, as I have a rather long to-do list! But this is interesting.
Lizzie, “the confusion has arisen because I was trying to establish what criterion UB wanted to use for information.” We talked about it, and many things were mentioned. Do we want to have a conversation, and then turn around only to remember what you can fit into a convenient definition, pretending for a moment that we can fit the entirety of our knowledge on a postage stamp and then argue over what gets left off? What would Popper say? Operational definitions are not limitless constructs; they are as fallible as any other good idea (and in a variety of contexts). If in this instance they can be used to skirt the strength of an opposing argument, they will be. And we wouldn’t want that to happen.
If "they can be used to skirt the strength of an opposing argument" they aren't what they say on the tin :) That's why I want to get this right. However, I'm not quite sure what you mean when you say "operational definitions are not limitless constructs". To be useful, they need to be as limiting as possible (in one sense anyway, possibly not the sense you intended, which is why I am asking for clarification), i.e. leave as little as possible open for subjective nuance or alternative interpretation. I am not looking for a "convenient" definition. I am looking for a rigorously specified definition that can be applied to any candidate output, so that the presence or absence and/or the quantity of the thing defined can be ascertained objectively. To quote Wikipedia: "An operational definition defines something (e.g. a variable, term, or object) in terms of the specific process or set of validation tests used to determine its presence and quantity. That is, one defines something in terms of the operations that count as measuring it." http://en.wikipedia.org/wiki/Operational_definition
So relax…and spare me the pedantics. ;) If I say something illogical and unsupported, you won’t need your rule book to point it out to me. You say that you want a solid definition and you don’t want any shifting of goalposts. Well, exactly which goalpost would you like then? If it’s not too much to ask; is it the one that actually reflects reality? You say that you never promised abiogenesis, and that is technically correct, yet at least in large measure, that is exactly what you propose. Living things are animated by the organization that comes from the rise of information, specifically information that is recorded by means of a sequence of repeating chemical symbols mapped to specific actions taken by the cellular machinery. If you can explain the rise of this symbolically recorded information, then you can most probably explain Life. As for myself, this is the only goalpost that ultimately matters.
What I am looking for, as I am proposing to demonstrate that it can be generated by no more than Chance and Necessity, is an operational definition of the kind of Information that you (or IDists) claim requires Intelligent Design to generate. So if that definition includes a threshold of some kind, obviously I don't want that threshold to move! And if I succeed, according to the agreed operational definition, I don't want people to say: ah, but this has nothing to do with real chemistry (unless of course chemistry is included in the operational definition). And, conversely, I don't want any wiggle room for myself either. That's really the whole point of operationalizing a hypothesis - to make sure that the playing field is level, and both sides are clear about what both success and failure would look like.
Also, you are approaching this with a specific end in mind, and you have already stated what that end is. Your intent in this is to be able to say that ID must “think again” because it’s “flawed”. You’ve illuminated this intent several times already.
Sure. But that's the nature of scientific inquiry - I am setting up a test of the hypothesis that, contrary to the claims of ID, Information (of a specific type, which we are currently trying to operationalise) can be generated without Intelligent Design. Obviously I will do my best to find a context that supports my hypothesis. But I may fail. That's the downside (but also the glory) of science. On the other hand, if I succeed, then the ID argument fails.
And you proposed to empower this ignominious conclusion by designing a fully non-empirical simulation, separated by orders of magnitude from what actually happens in reality. Hello?
No, I am not proposing to "empower a conclusion". I am proposing to test a hypothesis. The conclusion will depend on the results of that test. If I fail, I will not be able to conclude that I have succeeded, obviously :) In other words I plan to conclude something - the conclusion is not foregone. That wouldn't be science, and isn't what I propose. As for your second point: the study involves empirical hypothesis testing. It could probably be tested non-empirically, i.e. purely mathematically, from first principles, I don't know. But increasingly, hypotheses that depend on non-linear interactions between multiple variables actually have to be tested empirically by running iterative computations (as in finding out the intricacies of the Mandelbrot set, for instance), and, when it comes to hypotheses that include stochastic processes, by running models. That these empirical studies are run on computers doesn't make them not empirical. And I am not actually proposing a "simulation" at all - although my model is inspired by theories about abiogenesis. It is not intended to demonstrate that life formed spontaneously from chemical reactions in the early earth. It is intended to test the hypothesis that Information of the kind considered to be the signature of Life can be generated without Intelligent Design (i.e. from Chance and Necessity alone).
You see Lizzie, at this point it no longer matters what I want you to show, it’s what you want to show. If I were you, I would choose the size of my bite wisely. And given that you will not be going for the only goalpost that actually reflects reality, I would suggest more than a teaspoon of humility in announcing the stunning breadth of your conclusions.
Well, humility is always good advice :) But it certainly matters what you want me to show, because my claim was that I believed that I could demonstrate to be possible something that you believe to be impossible. I originally understood that your claim was that Information of the kind that is seen in living things could not be generated by Darwinian processes. I think it can, and I offered to demonstrate that it could. Sure it was a bit lacking in humility, I guess, but it's not as though I was unprepared to put my efforts where my mouth is and risk hubris :) I am.
Now before I move on to other matters, I would like to clear up how we got here. To save space I will only post the relevant text. You were talking to BA77 about genetic information and said: I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of. To which I butted in and replied: Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room. And then you stated: Well, tell me what definition of information you are using, and I’ll see if I can demonstrate that it can. And in my return: You are going to demonstrate how neo-darwinism brought information into existence in the first place??? Please feel free to use whatever definition of information you like. If that definition is meaningless, then we’ll surely both know it.
Thank you very much for this - I was unable to find the original conversation unfortunately. This lets us back up: My response to ba77 was "I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome." The reason I said that it "demonstrably" could, is that on any definition of information that I am/was aware of, a new variant of an existing allele "tells" the cell to do something slightly different to what the old allele did. So we have a "message" (the new DNA sequence) and a "receiver" (the cell processes) and a "meaning" (a different instruction, which could be to make a slightly different protein, or to make that protein under a slightly different set of contingencies, or to change the ratios of two different protein variants), i.e. it has a phenotypic effect. And we know that new alleles happen from time to time, and we know quite a lot about the various mechanisms by which those variants are generated. Moreover, if that allele turns out to improve the organism's chances of breeding successfully, however slightly, that information is not only meaningful (makes a difference to the phenotype) but useful from the point of view of the population through which that allele starts to propagate, as it increases the probability that the population will continue to thrive in that environment. However, you then raised a different (and highly important) claim, that: "Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room."
Now, I may have misunderstood what you meant by "in the first place" - you may simply have meant: "the source of the new allele" which, indeed, is not explained by "Darwinism" (Darwin didn't even know about genetics), nor by "neo-Darwinism" as I understand (or misunderstand) the term as a modification on Darwin's original concept of natural selection, but by what we now know about the mechanisms of DNA replication processes and the generation of variance. If so, it is true that neo-Darwinism doesn't account for it, but not true that we can't. However, at the time I assumed that you did not mean this, but meant: but how could the first Information-bearing self-replicator come about, if Darwinian processes only account (as they do) for the selection of useful Information once self-replicators-with-variance have appeared? And I assumed you meant this on a theoretical level, as posed by Dembski: how can mere Chance and Necessity generate Information in the first place? And that is what I offered (in good faith) to try and demonstrate - by setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge.
So now moving on… There is an underlying issue within this conversation that I have tried and failed to get you to realize. In explaining it again, I must note that I somewhat separate myself from several proponents on this forum, so any embarrassment here is my very own. I think that there are many here who disagree with me at some point or another, and that is perfectly fine. I make absolutely no comment about the validity of their perceptions of the evidence; it’s just that I have my own.
Cool. Groupthink is boring :)
I’d first like to remind you that I am not making an argument about CSI, or Shannon Information, or Kolmogorov complexity, or any of it. Nor am I suggesting that these things are not interesting, important, and play a role in the issues at hand. But, I am making a purely semiotic case for the existence of information.
OK. In that case I do seem to have misunderstood you. I apologise.
In order to try and focus the discussion on the point I am trying to convey to you, I would like to ask you for a moment of your imagination. (I have done this before on UD, so readers in the second matinee can fall asleep at will). Lizzie, imagine for a moment you are the sole person on a lifeless planet in a distant galaxy. You stand there in your spacesuit gazing out across the inanimate nothingness. Then as you go about your mission, your experience and training brings something of a striking thought to mind. It occurs to you that outside your spacesuit, there is absolutely nothing that means anything at all to anything else. Your spacesuit represents a monumental divide in observed semiotic reality. Outside your suit there is no information, there are no symbols and no meaning of any kind. The rocks stacked upon themselves in the outcroppings mean absolutely nothing to the other rocks, nor to the molecules in the atmosphere or anything else. Yet, inside your suit it is a completely different matter; signals and symbols, and information, and meaning abound in all directions. My own suggestion is that there are three domains in which these things exist. First there is your demonstrated ability as a sentient being to create symbols and assign meaning at will. Then there are also the systems within your body that are constantly creating and utilizing transient information by means of intercellular signals and second messengers, etc. These systems are created by the configuration of the specialized constituent parts, discretely created, each one brought into existence by the third domain of semiotic reality. That third domain being the recorded information in your genome which is replete with semiotic content – sequenced patterns of discrete chemical symbols.
I'm with you up to this point, I think. Beautifully put.
Now, I notice that you choke on the word “symbol”. My message to you is that it doesn’t matter what we call it; it is what it is, a relational mapping of two discrete objects/things. One thing represents another thing, but is separate from it. And if that symbol should reach a receiver, then the mapping between the symbol and the object being symbolized becomes realized by that receiver.
So far, so good-ish.
You seem to prefer calling a symbol a “representation” instead, which is fine by me, except that it doesn’t capture the reality. The shadow of a tree could be construed as a representation of a tree, but the word “tree” is a symbolic representation. They are distinctly different. The shadow contains no information and it doesn’t exist in order to do so. The word “tree” is a symbol (matter/energy arranged to contain information) which exist specifically to do so.
Yes, I understand that. I don't have a problem with the word "symbol" per se, precisely because of the distinction you make. My problem is in applying the word "symbol" to something that is not (IMO) self-evidently a symbol-user. I don't think that a ribosome is self-evidently a symbol-user!
The point I would like you to understand, is that recorded information cannot exist without symbols (symbolic representations).
hmmm. Well, I would be happy to accept this as definitional, but then I'd probably want to argue a bit more about what a symbol is. However, let's put that to one side for now.
So revisiting your lifeless planet, there are no symbols and therefore no information outside your suit, but inside the suit it is the core reality that must be addressed.
I am more than happy to agree that there are symbols within the suit but not outside, and if symbols are the prerequisite for information, then the only information is inside the suit. Cool.
I know that you are stalwart against anthro-humanizing the observations, and inputting into them something that is not there. Yet what is there has been repeatedly validated.
In what sense and where? (Not disputing it, but just wanting to get clear what you are saying.)
And it must be understood, the human capacities which you wish to not conflate with the observations – those that we are told did not arise for billions of years after the origin of Life – show every sign of having been in existence from the very start.
There's a sense in which I agree with you, but probably not a sense you would approve :)
As I said upthread, humans did not invent symbolic representations or recorded information; we found that it already existed.
An important point, and one that needs to be unpacked before we can proceed. Good.
Given the length of this post already, I am going to cut to the chase. You want goalposts that don’t move? You want to design a non-empirical simulation to send ID packing? My only hope is to try and bring you back to reality. Here is my list (probably non-comprehensive). We can argue over these points if you wish, but I am confident that each can be fully supported. And as I said from the very start, you can develop your own operational definition. You asking me to do it for you only illuminates your desire to compete; it has nothing to do with the search for truth.
Oh, there you are quite wrong, although I fully accept that the communication fault may be on my side. Firstly, the reason I want an operational definition has nothing to do with "competition" and everything to do with making sure we are talking about the same thing (not apples on one side and oranges on the other) when you say you think X is the case and I think it is not. That's not competitive, though it may be dialectical; that's no problem though, science is intrinsically dialectical (which is why Popper proposed the criterion of falsification). Secondly, the reason I want you to propose, or at least approve, the operational definition, is not either laziness or competitiveness on my part, but merely an essential part of ensuring that I am actually addressing the postulate you are putting forward. Thirdly, and this is simply personal: I am a notoriously uncompetitive person, to a degree that can easily be personally problematic! I am simply not interested in "winning" for the sake of winning - anything. I'd far rather lose an argument and be enlightened than win it and remain in error. I can't prove this to you of course, but it is true.
1. The origin of recorded information has never been associated with anything but the living kingdom; never from the remaining inanimate world.
Yes, that is probably true, although I am still stuck on this "symbol" thing. On my own understanding of the word, I'd say that all symbol-users are alive. I would not, however, willingly say that all living things are symbol users. This is the part we need to hammer out.
2. The state of an object does not contain information; it is no more than the state of an object. To become recorded information, it requires a mechanism in order to bring that recording into existence outside of the object itself. As I said earlier, a carbon atom has a state which a physicist can demonstrate, but a librarian can demonstrate the information exists as well. They both must be accounted for.
OK. This is important, so I'm going to try to be as articulate as I can: I am certainly happy to stipulate that information only exists when it is "recorded". And I'd like to suggest that "recording" must involve the storage of the information in some form that can be "read" by another object in such a way that that object can change its own state according to the "information" read. If you are happy with this (I don't think it's perfect, but it's not bad) then I'm with you. And in that context, I would accept that DNA, for example, contains recorded information, as it can be "read" by another object (which, depending on the level of analysis, we can regard as the cell itself, or a specific ribosome) which then changes its own state (kinetically or morphologically) as a result. And if you want to call this "symbolic" then that is fine. And I would still probably agree that this is largely found in living things, possibly exclusively, but not necessarily necessarily so (the double use of necessarily is not a typo!)
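The operational definition proposed here (recorded information exists when a stored form can be "read" by another object that changes its own state accordingly) can be sketched as a toy model. Everything below is a hypothetical illustration, not anything from the discussion: the names, the two-codon mapping, and the `Reader` class are invented for the sketch.

```python
# Toy illustration of the operational definition discussed above:
# a "record" is a stored sequence; a "reader" is an object whose
# future state is contingent on what it reads.

RECORD = "ATGGCC"  # stored sequence (the "message")

# A tiny, illustrative stand-in for a codon table; only two triplets
# are mapped here, purely for demonstration.
CODON_ACTIONS = {"ATG": "start", "GCC": "add-alanine"}

class Reader:
    """Changes its own state according to what it reads."""
    def __init__(self):
        self.state = []

    def read(self, record):
        # Step through the record three symbols at a time and let each
        # triplet alter the reader's state.
        for i in range(0, len(record), 3):
            triplet = record[i:i+3]
            self.state.append(CODON_ACTIONS.get(triplet, "no-op"))

r = Reader()
r.read(RECORD)
print(r.state)  # -> ['start', 'add-alanine']
```

On this sketch, "information is present" just when the reader's final state depends on the stored sequence, which is the testable criterion the comment is proposing.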
3. A rational distinction is made between a) matter, b) information, and c) matter which has been arranged in order to record information.
Indeed.
4. Matter that has been arranged in order to contain information doesn’t exist without symbolic representations. Prove it otherwise.
Well, if we define information as recorded information, and if we define recorded information as symbolic, then this is necessarily true, indeed, circular. So, obviously, not falsifiable. However, if there is wiggle room between recorded information and symbolic representation, then it is not circular, but then I need to know in what way you are distinguishing recorded information from symbolic representation.
5. From all known sources, symbols and symbolic representations are freely chosen (they have to be in order to operate as symbols). And as a matter of observable fact, when we look into the genome, we find physico-dynamically inert patterns of symbols. That is, the chemical bonds that cause them to exist as they do, do not determine the order in which they exist – and the order in which they exist is where the information is.
OK, so you do seem to agree with me that a key property of a symbol (as opposed to a sign or a template) is that it is arbitrarily assigned to a signifier. And your claim is that the relationship between the chemical bonds that "cause the [patterns of symbols] to exist as they do do not determine the order in which they exist". hmmm. I would certainly agree that given one nucleotide, there is no chemical grounds for predicting the next. However, I would not agree (if it were what you were saying) that a given sequence (a codon, for instance) is chemically unrelated to the amino acid that it "codes" for. Is that what you are saying? Although I might agree that a different kind of cell (perhaps on another planet) might have a different kind of ribosome that resulted in a different amino acid from the one that would result from a given codon in an earthly cell. So if that is the sense in which the codon is arbitrarily assigned, then I guess I could get behind that, and concede that "symbol" is appropriate. OK, I'll buy it :) (If it's what you mean).
6. Recorded information requires a (discrete) suitable medium in order to exist – a medium that allows the required freedom of arrangement.
Agreed.
7. A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode).
What distinction? Or what distinction that matters? (Also I'm uneasy about "digital" here, but maybe it's OK.)
8. The origin of information requires a mechanism to establish the relationship (mapping) between the object and the symbolic representation which is to symbolize it.
OK, well, assuming we are now on the same page regarding the use of "symbol" to describe such things as transcription, then yes. Although of course, that mapping is the kind of thing that evolutionary processes (I would argue) can account for. For example, if, in early life forms, there were several kinds of ribosomes, some resulting in one set of mappings, some in another, and if one kind tended to be more efficient at promoting successful replication than the others, it would tend to become more prevalent, go to "fixation" and be inherited by all its descendants. Or, alternatively, go to fixation by simple drift, and ditto.
9. Recorded information exists for a purpose, that purpose being manifest as a receiver of the information – that which is to be informed.
Now we are getting philosophical! I'm happy to go there, but will leave it hanging for now :)
You indicate that you can provide evidence that neo-Darwinian processes can assimilate all these points as well as those we’ve already discussed. My hat’s off to you. Your simulation will have nothing to do with chemical reality, and it will end with an unsupported Darwinian assumption (as they always do) but it should be interesting nonetheless. Cheers…
No, I'm not going to attempt to demonstrate that the specific Instantiation of Information in cell biochemistry was brought about by Darwinian processes, because, indeed, it may not have been. All I am proposing to demonstrate is that Information (recorded Information, even symbolic information, as I think we now mutually understand it) can arise from a non-intelligent source. Not that it did in the case of life. And because of that limited objective, chemical reality is irrelevant. However, what is not irrelevant is your very helpful unpacking of the essentials and principles at stake. So I can now reframe my project as: To test the hypothesis that symbolic information can arise from non-intelligent sources, where "symbolic representation" is the recording of information about the state of an object that can be read by another object whose future state[s] are contingent on that information, and "non-intelligent sources" are sources that consist only of Chance and Necessity. If I succeed, I will not have demonstrated that life evolved without input from an Intelligent Designer, but I will, I submit, have demonstrated that we cannot conclude that it must have had input from an ID on the grounds that non-intelligent sources cannot create symbolic representations. Does that make sense? I've responded in some detail, because I think you hit a lot of nails on the head, and I wanted to make sure that I figured out which nails I'm happy with, and which nails are genuine differences between us. I hope this brings us closer to the nub of the issue at issue :)
Elizabeth Liddle
June 12, 2011 07:05 AM PDT
F/N 2: Observe a noise/error handling procedure for the case of misloaded tRNAs:
The two major groups of tRNA synthetases, class I and II, seem to minimize impacts of misinserted amino acids in protein sequences by tRNAs that were misloaded by these tRNA synthetases [1,2]. Accurate loading of tRNA acceptor stems with cognate amino acids by tRNA synthetases is a crucial step in protein synthesis, and indeed misacylated (misloaded) tRNAs are frequently edited by tRNA synthetases [3], which sometimes even edit tRNAs at advanced stages in the translational pathway [4]. Both pre- or post-transfer editing occur. These mechanisms are not exclusive and depend on catalytic sites other than the aminoacylation site [5,6]. The complex editing functions of some tRNA synthetases probably originated from multifunctionality of ancient tRNA synthetases, at the origins of the genetic code and the translational machinery [7]. Note that some mutations affecting editing associate with mitochondrial diseases [8].
Error handling methods and editing, even rooted in multifunctionality [!!!!!], are of course yet another level of sophistication in an information system. This thing is getting plainer and plainer. GEM of TKI
kairosfocus
June 12, 2011 04:41 AM PDT
F/N: From the April 14 Newsflash thread OP, again: ____________ >> what about the more complex definition in the 2005 Specification paper by Dembski? Namely: define φS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120]
χ = – log2[10^120 · φS(T) · P(T|H)] . . . eqn n1
How about this (we are now embarking on an exercise in “open notebook” science):
1 –> 10^120 ~ 2^398
2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2
3 –> So, we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3, i.e. Chi = Ip – (398 + K2) . . . eqn n4
4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. 
>> _______________ Now, the issue for today is that there is a challenge to p, to get to I. The answer to this is direct and simple: while it is theoretically grounded on the above considerations, I is a very familiar entity, one normally estimated more directly from symbol frequency patterns, or from directly observed storage capacity. As we just saw, the estimates on such bases will be CONSERVATIVE. The “how do you get to p” objection is misdirected. Yes there are limitations, so we make a reasonable estimate, and where possible a conservative one. DNA has four states per symbol, and proteins generally -- this has to be noted because of a certain class of objector who would pounce on the rare exceptions -- have 20 per symbol. There may be some adjustment relative to symbol frequencies. But that is not going to overwhelm a situation where you have hundreds of proteins averaging 300 AA's coded for by D/RNA with three letters per codon, and with supportive regulatory elements. Just as a sampler, let us think of 200 proteins, at 300 AA avg, or information to account for 60,000 AAs, at 3 bases each, with say 10% more for regulation, making for a minimal genome of 200,000 or so 4-state elements. That is definitely in the order observed for simplest life and it is three orders of magnitude of bits beyond the cosmos level informational threshold, where each bit doubles the config space. However you may want to adjust and cite limitations, it does not take away the central implication of the functionally specific complex organisation and information in the living cell: it is best explained on design. GEM of TKI
kairosfocus
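The sampler arithmetic above can be checked with a short script. The figures (200 proteins, 300 AA average, 3 bases per codon, "say 10% more for regulation") are the ones quoted in the comment; the 398-bit threshold comes from 10^120 ≈ 2^398, and K2 is ignored here for simplicity.

```python
import math

# Figures quoted in the comment (illustrative, not measured values)
proteins = 200
avg_aa = 300               # average amino acids per protein
bases_per_aa = 3           # codon length
regulatory_factor = 1.10   # "say 10% more for regulation"

# Minimal genome size in 4-state elements
bases = proteins * avg_aa * bases_per_aa * regulatory_factor
print(f"genome size: {bases:,.0f} bases")        # 198,000

# Each 4-state element stores log2(4) = 2 bits of raw capacity
info_bits = bases * math.log2(4)
print(f"raw capacity: {info_bits:,.0f} bits")    # 396,000

# Threshold from 10^120 ~ 2^398 (Seth Lloyd's bound), per eqn n4
threshold = math.log2(10**120)                   # ~398.6
chi = info_bits - threshold
print(f"Chi (bits beyond threshold): {chi:,.0f}")
```

On these quoted numbers the raw capacity (~4 × 10^5 bits) sits roughly three orders of magnitude above the 398-bit threshold, which is the comparison the comment is making.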
June 12, 2011 04:26 AM PDT
Dr Liddle: Following up:
if I don’t know how many symbols you could have used, then you have sent me not much more than 1 bit, because while the first bit may surprise me, by the end of the message each subsequent one is reducing my uncertainty that the next will not be a 1 by only a tiny amount. And this goes back to the point I was trying to make to kairosfocus; to know how much information there is in a signal, we have to know something about what other signals are possible.
In short, we need to know about the communication system and its protocols. Immediately, this highlights that a symbolic or modulated communication system is an irreducibly complex, sophisticated entity, which in turn points straight to design as its best explanation. But that is a bit of an aside. More direct to our considerations is that such a communication system has a range of possible legitimate signals, and a protocol of rules that controls how such signals are encoded, modulated, detected, demodulated, decoded, and used. Again, pointing to sophisticated, integrated design. Going further, we are dealing with a coded information system in the heart of cell based life, using a 4-state digital code based on highly endothermic, complex -- thus, inherently thermodynamically unstable -- chemicals known to be assembled into polymers based on an algorithmic process. All of which points us back to the questions I asked previously on the known sources of algorithms, codes, and assembly lines. Transparently obvious: intelligence, with intent and knowledge and skill. Now, too, to configure such messages, we need things that are inherently highly flexible, i.e. the elements in the strings etc. must be highly contingent. It actually turns out that a lot of alternative chemistry could happen to both D/RNA and proteins [or, more properly, their monomers . . . start with just the implications of possible opposite chirality, and how that would destroy folding and/or key-lock fitting, where the other chirality has the same heats of formation as a rule], but the controlled environment of the cell is set up to block that from happening. That is the context in which we see a 4-state digital symbol system, with an assembly line system that uses the mRNA as a tape to guide step by step assembly of proteins, which are based on essentially a 20-state system, with some relatively rare mods. Proteins function in the cell, based on AA sequence, folding, agglomeration and activation. 
All of which are quite remote from the specifying of a particular sequence of AA's. Even the loading of a particular AA to a particular tRNA taxi molecule is a matter of a universal connector, with the actual AA attached being informationally controlled by the setting up of a loading enzyme. Which is in turn the product of the same system. All of this is extremely highly contingent, and would point to the information content estimates we have been using being CONSERVATIVE. In other words, the field of chemical possibilities is much larger than we have been considering. But being conservative is good. Within the ambit of the set-up system, we have a 4-state digital coded info storage subsystem. That gives us a carrying capacity of 2 bits per symbol, some of which may not be used in any given case, as we may have redundancies and symbol frequencies that are not flat-distributed. Not that this makes a material difference. The same extends to proteins, where there are maybe 80 or by now more possible AAs that could be used, and all but a few of these will be chiral. But, conservative is good. Protein chains are assembled step by step and may be chaperoned to fold right -- the prions [mad cow disease] issue -- so they will function. Conservative, again. That all feeds back into the expression: Chi_500 = I*S - 500 The 500 bits takes in the thresholds set by considerations of sufficiently isolating the narrow islands of interest and/or function, so that the search space challenge will swamp out any random walk plus trial and error rooted approach, including impossibly fast ones. The only way out of this is to bias the search, so that the walk has an oracle to pull it in, allowing hill-climbing on a trend. But, that is precisely to jigger the case. The evidence of protein folds is that they are deeply isolated islands in sequence space. Codes are likewise, once we get to any complexity worth discussing. 
And, functional organisation of complex entities on a wiring diagram can be reduced to the same pattern, through devising a structured set of yes/no questions to construct the wiring diagram: the teeth in the saying "a picture is worth a thousand words." But, all of this has been said in various ways, over and over and over. And the conclusion is increasingly transparently obvious. But then, in an era where to say the obvious is to bare one's throat to those all too eager to slice with the knife, the objectively obvious is often the least subjectively obvious. But, we can all see for ourselves the balance of the case -- and it is noteworthy that just for saying the objectively obvious I am now the subject of a slander blog that is produced by one who has no hesitation to indulge in outing intimidatory behaviour, and in outright false accusations of UD being a nest of perversion, as well as a mouth in need of Sister V's soap cleansing. Worse, in the name of freedom of expression, such misbehaviour is tolerated or even enabled by those who should know better. Can you imagine, I have seen the turnabout accusation that to point out that I have every right to shun such cesspits is to offend those who are there delicately reasoning quite decently amidst the stench and the angry mosquitoes tanking up on rage and fallacious or slanderous talking points? Patent absurdity. Mi ca'an believe it!!!! Anyway, I think we can await the promised simulation. GEM of TKIkairosfocus
June 12, 2011 04:08 AM PDT
Mung:
Elizabeth Liddle @293: Well, let me give a more nuanced answer: I’m trying to get beyond nuanced. :) If we don’t have clear and unambiguous answers we cannot hope to agree.
Absolutely. But clear and unambiguous answers depend on clear and unambiguous questions. We need to formulate the question in such a way that the answer is not, and cannot be, ambiguous. For instance, you say that my statement that "...on that definition, any stochastic process creates information" is false. But you just gave an example of a stochastic process that created information, not a stochastic process that didn't. But let's not get bogged down here: I am simply after a clear definition that captures what we want to capture, and as I hope I have made clear above (and it seems consistent with your own posts): information quantity, in the sense that we want it to mean anything, is a function not only of the signal, but of the expectations of the receiver. In the absence of knowledge about the expectations of the receiver, we could compute the probability distribution of the characters of the message from the message itself, which is the sense in which a stochastic process can create a measurable amount of information: we can compute -log2(P) where P is the probability of each item as given by the frequency of its occurrence within the message. But that isn't terribly useful, as your example of a series of ones elegantly shows. To get a sensible measure of information we have to compute P from an independent source of information regarding the probability distribution of each item, and as far as the receiver is concerned, that source of information has also to be available. Otherwise the message won't be "about" anything :) So perhaps we are nearing an operational definition of "aboutness", which must reference an additional independent source of information regarding the probability distribution of the characters in the message, under the null hypothesis of a random draw. 
That enables me to differentiate between a string of Ones that are drawn from a probability distribution in which Ones have a probability of 1 and any other character has a probability of zero, in which case the information content is zero (-log2(1)=0), and a string of Ones that are drawn from a probability distribution in which Ones and Twos have equal probability, and all other characters have zero, in which case each item in the message will convey 1 bit of information (-log2(.5)=1). This is why I keep banging on about the importance of establishing the probability distribution of the components of the message under the null hypothesis of "no signal".
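The two cases just described can be sketched in a few lines of Python (illustrative only; the helper name is mine, not anything from the thread):

```python
import math

# Surprisal -log2(p) of a symbol whose prior probability under the null is p.
def bits_per_symbol(p):
    return 0.0 if p == 1.0 else -math.log2(p)

# The same string of 100 Ones, under two different priors:
print(100 * bits_per_symbol(1.0))  # 0.0   -- Ones certain: no information
print(100 * bits_per_symbol(0.5))  # 100.0 -- Ones/Twos equiprobable: 1 bit each
```

Identical message, different prior distribution, different information content: the quantity lives partly in the prior, not in the string alone.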
If I know that the system you are using to send me a message uses two possible symbols, 1 and 0, then you have sent me 100 bits of information (or however many ones there are, I didn’t count).
Well, it wasn’t a sequence of 100 1's, thank God. (In my browser the 1's extend across the page with no line break. So I apologize for that.) But let's say, for the sake of argument, that the pattern I sent did consist of a series of 100 1's. You are now saying that my pattern of strictly a sequence of 1's contains the same amount of Shannon Information as your pattern of 0's and 1's which were completely random. How can that be?
See above (sorry I was a bit inarticulate last night, not booze, just exhaustion). I hope this morning's attempt is clearer. The short version is: How much Shannon information is in your string is not simply a function of the string, but of independent information regarding the probability distribution under the null from which the items in that string were drawn.
At some point in the series, shouldn’t your surprisal have actually been reduced?
Depends on my priors regarding the probability distribution under the null :) If I knew nothing about it, and got a string of ones, in many senses of the word "surprise", my surprise would have been gradually reduced, until I concluded that all this blooming signal was ever going to produce was Ones. On the other hand if I knew that the source of the message normally produced messages that overall contained equal numbers of ones and zeros (ones and zeros equiprobable) then I'd be just as surprised by each successive one as I was by the last, and I'd become increasingly certain that the message was not a random draw from Ones and Zeros. I could even bring out my trusty binomial theorem to compute just how certain I was!
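The "trusty binomial theorem" computation mentioned here can be sketched directly (the helper name is mine; a run of identical symbols from a supposedly fair source is astronomically improbable under the null):

```python
from math import comb

# Probability of exactly k Ones in n symbols under the fair,
# equiprobable Ones/Zeros null hypothesis.
def p_k_ones(n, k):
    return comb(n, k) * 0.5 ** n

# Seeing 100 Ones in a row from a source assumed fair:
p = p_k_ones(100, 100)
print(p)          # about 7.9e-31
print(p < 1e-30)  # True: safe to conclude this was not a random draw
```

So with an equiprobable prior, each successive One is just as surprising as the last, yet the cumulative evidence against "random draw" grows exponentially.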
You also said that your example had 100 bits of Shannon Information but seemed to intuitively recognize that my series of 1's had “not very much” information. So I hope you’ll understand my confusion at the apparent lack of consistency.
Not lack of consistency but drawing attention to a crucial extra source of information we need if we are going to distinguish signal from noise. This is why I disagree with kairosfocus that we can distinguish signal from noise reliably by looking at the message itself. We can't - we rely on independent information about the probability distributions under the null of noise.
In the same post @93 You wrote: If instead of coin tosses, I sent 1010101010101010101….. You’d start to make some pretty good guesses at the rest of the series, so the amount of new information I’d created would be very small. You are not being consistent.
I'm trying to point out the fact that we need more information than simply the message itself in order to figure out how much information it contains. I'm sorry this was unclear - I do not have an axe to grind about what Information is. I want to make sure we have an operational definition that captures what an IDist would regard as legitimate Information (the kind that is claimed not to be generatable by Chance and Necessity). We are making some progress I think :)
If you had repeated your example of the repeating pattern “10101010…” for a total of 100 characters, would you say that it contained 100 bits of Shannon Information? IOW, you need to explain how a fixed sequence contains the same amount of Shannon Information as a randomly generated sequence.
I hope I have clarified what I think we all agree, that the Information content of a message cannot be computed sensibly from the message itself alone, but that we need to also factor in (and find a way of quantifying) the additional information that is required in order to compute it. We can still compute a value without that additional information, by looking at the frequency distributions in the message itself, but it won't mean much. For if we compute it for the 1010101010 example, we can quickly see that the probability of a Zero given a previous One is 1, and the probability of a One, given a previous Zero is also 1. So the information can be computed as zero (or approaching zero). But given a prior as to the probability distribution under the null it could be an extremely informative message - it could, for instance represent in binary form a large and important integer. In which case the number of bits transmitted would be 100.Elizabeth Liddle
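The conditional-probability point about the 1010101010... example can be checked empirically with a short sketch (the function name is mine, and this is only the "compute the distribution from the message itself" half of the argument):

```python
from collections import Counter

# Estimate P(next symbol | previous symbol) from the message itself.
def conditional_probs(msg):
    pair_counts = Counter(zip(msg, msg[1:]))
    prev_counts = Counter(msg[:-1])
    return {(a, b): n / prev_counts[a] for (a, b), n in pair_counts.items()}

# Every transition is fully determined, so per-symbol surprisal is ~0 bits --
# unless an independent prior over messages says otherwise.
print(conditional_probs("1010101010"))  # {('1', '0'): 1.0, ('0', '1'): 1.0}
```

Measured against itself, the alternating string carries almost nothing; measured against a prior under which every 10-bit string was equally likely, the same string would carry 10 bits.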
June 12, 2011 03:49 AM PDT
Dr Liddle: The P(T|H) term etc. get subsumed in the limit; in effect a threshold is set beyond which these will not reasonably go for the solar system or the observed cosmos. In effect you have set every atom to work looking for the edge of a zone of interest, but with a big enough field, the isolation of the zones tells. With Chi_1,000, the whole observed cosmos is unable to scan enough of the space of possibilities to make a difference from no scan. I have already shown how that happens, so I will not repeat myself. That's why there is a threshold imposed. The estimates for actual parameters will REDUCE the scope of search below that. Think about converting the observable cosmos into banana plantations, trains to move the bananas, forests of paper and monkeys typing impossibly fast at keyboards; from the big bang to the heat death, they will not exceed the limit we have set. Nor will any other scenario. As VJT showed, months ago, now. We have an upper limit, and we have reason to see that we are going to be within that limit; then we see also how the resources of the solar system or cosmos will be vastly inadequate. GEM of TKIkairosfocus
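The scale claim here is easy to verify with exact integer arithmetic. A sketch, using the 500-bit Chi_500 threshold from upthread and Lloyd's 10^120 bound on bit operations:

```python
# Fraction of a 500-bit configuration space that 10^120 search events
# could sample, using exact integers to avoid float overflow.
space = 2 ** 500     # number of 500-bit configurations
events = 10 ** 120   # Lloyd's upper bound on bit operations in the cosmos
fraction = events / space
print(fraction)      # roughly 3e-31 of the space -- effectively no search
```

Whether one accepts the inference drawn from it or not, the ratio itself is not in dispute: the bound covers only about one part in 10^30 of a 500-bit space.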
June 11, 2011 06:31 PM PDT
...to know how much information there is in a signal, we have to know something about what other signals are possible.
There's that word again. ABOUT. Did you read my post @283?Mung
June 11, 2011 05:58 PM PDT
Lizzie, “the confusion has arisen because I was trying to establish what criterion UB wanted to use for information.” We talked about it, and many things were mentioned. Do we want to have a conversation, and then turn around only to remember what you can fit into a convenient definition, pretending for a moment that we can fit the entirety of our knowledge on a postage stamp and then argue over what gets left off? What would Popper say? Operational definitions are not limitless constructs; they are as fallible as any other good idea (and in a variety of contexts). If in this instance they can be used to skirt the strength of an opposing argument, they will be. And we wouldn't want that to happen. So relax…and spare me the pedantics. ;) If I say something illogical and unsupported, you won’t need your rule book to point it out to me. You say that you want a solid definition and you don’t want any shifting of goalposts. Well, exactly which goalpost would you like then? If it’s not too much to ask; is it the one that actually reflects reality? You say that you never promised abiogenesis, and that is technically correct, yet at least in large measure, that is exactly what you propose. Living things are animated by the organization that comes from the rise of information, specifically information that is recorded by means of a sequence of repeating chemical symbols mapped to specific actions taken by the cellular machinery. If you can explain the rise of this symbolically recorded information, then you can most probably explain Life. As for myself, this is the only goalpost that ultimately matters. Also, you are approaching this with a specific end in mind, and you have already stated what that end is. Your intent in this is to be able to say that ID must “think again” because it’s “flawed”. You’ve illuminated this intent several times already. 
And you proposed to empower this ignominious conclusion by designing a fully non-empirical simulation, separated by orders of magnitude from what actually happens in reality. Hello? You see Lizzie, at this point it no longer matters what I want you to show, it’s what you want to show. If I were you, I would choose the size of my bite wisely. And given that you will not be going for the only goalpost that actually reflects reality, I would suggest more than a teaspoon of humility in announcing the stunning breadth of your conclusions. Now before I move on to other matters, I would like to clear up how we got here. To save space I will only post the relevant text. You were talking to BA77 about genetic information and said:
I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of.
To which I (butted-in) and replied:
Neo-Darwinism doesn’t have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists is to ignore the 600 lb assumption in the room.
And then you stated:
Well, tell me what definition of information you are using, and I’ll see if I can demonstrate that it can
And in my return:
You are going to demonstrate how neo-darwinism brought information into existence in the first place??? Please feel free to use whatever definition of information you like. If that definition is meaningless, then we’ll surely both know it.
- - - - - - - - - - - So now moving on… There is an underlying issue within this conversation that I have tried and failed to get you to realize. In explaining it again, I must note that I somewhat separate myself from several proponents on this forum, so any embarrassment here is my very own. I think that there are many here who disagree with me at some point or another, and that is perfectly fine. I make absolutely no comment about the validity of their perceptions of the evidence; it’s just that I have my own. I’d first like to remind you that I am not making an argument about CSI, or Shannon Information, or Kolmogorov complexity, or any of it. Nor am I suggesting that these things are not interesting, important, and play a role in the issues at hand. But I am making a purely semiotic case for the existence of information. In order to try and focus the discussion on the point I am trying to convey to you, I would like to ask you for a moment of your imagination. (I have done this before on UD, so readers in the second matinee can fall asleep at will). Lizzie, imagine for a moment you are the sole person on a lifeless planet in a distant galaxy. You stand there in your spacesuit gazing out across the inanimate nothingness. Then as you go about your mission, your experience and training brings something of a striking thought to mind. It occurs to you that outside your spacesuit, there is absolutely nothing that means anything at all to anything else. Your spacesuit represents a monumental divide in observed semiotic reality. Outside your suit there is no information, there are no symbols and no meaning of any kind. The rocks stacked upon themselves in the outcroppings mean absolutely nothing to the other rocks, nor to the molecules in the atmosphere or anything else. Yet, inside your suit it is a completely different matter; signals and symbols, and information, and meaning abound in all directions. 
My own suggestion is that there are three domains in which these things exist. First there is your demonstrated ability as a sentient being to create symbols and assign meaning at will. Then there are also the systems within your body that are constantly creating and utilizing transient information by means of intercellular signals and second messengers, etc. These systems are created by the configuration of the specialized constituent parts, discretely created, each one brought into existence by the third domain of semiotic reality. That third domain being the recorded information in your genome, which is replete with semiotic content - sequenced patterns of discrete chemical symbols. Now, I notice that you choke on the word “symbol”. My message to you is that it doesn’t matter what we call it; it is what it is, a relational mapping of two discrete objects/things. One thing represents another thing, but is separate from it. And if that symbol should reach a receiver, then the mapping between the symbol and the object being symbolized becomes realized by that receiver. You seem to prefer calling a symbol a “representation” instead, which is fine by me, except that it doesn’t capture the reality. The shadow of a tree could be construed as a representation of a tree, but the word “tree” is a symbolic representation. They are distinctly different. The shadow contains no information and it doesn’t exist in order to do so. The word “tree” is a symbol (matter/energy arranged to contain information) which exists specifically to do so. The point I would like you to understand is that recorded information cannot exist without symbols (symbolic representations). So revisiting your lifeless planet, there are no symbols and therefore no information outside your suit, but inside your suit it is the core reality that must be addressed. I know that you are stalwart against anthro-humanizing the observations, and inputting into them something that is not there. 
Yet what is there has been repeatedly validated. And it must be understood, the human capacities which you wish to not conflate with the observations - those that we are told did not arise for billions of years after the origin of Life – show every sign of having been in existence from the very start. As I said upthread, humans did not invent symbolic representations or recorded information; we found it already existed. Given the length of this post already, I am going to cut to the chase. You want goalposts that don’t move? You want to design a non-empirical simulation to send ID packing? My only hope is to try and bring you back to reality. Here is my list (probably non-comprehensive). We can argue over these points if you wish, but I am confident that each can be fully supported. And as I said from the very start, you can develop your own operational definition. You asking me to do it for you only illuminates your desire to compete; it has nothing to do with the search for truth. 1. The origin of recorded information has never been associated with anything but the living kingdom; never from the remaining inanimate world. 2. The state of an object does not contain information; it is no more than the state of an object. To become recorded information, it requires a mechanism in order to bring that recording into existence outside of the object itself. As I said earlier, a carbon atom has a state which a physicist can demonstrate, but a librarian can demonstrate the information exists as well. They both must be accounted for. 3. A rational distinction is made between a) matter, b) information, and c) matter which has been arranged in order to record information. 4. Matter that has been arranged in order to contain information doesn’t exist without symbolic representations. Prove it otherwise. 5. From all known sources, symbols and symbolic representations are freely chosen (they have to be in order to operate as symbols). 
And as a matter of observable fact, when we look into the genome, we find physico-dynamically inert patterns of symbols. That is, the chemical bonds that cause them to exist as they do, do not determine the order in which they exist – and the order in which they exist is where the information is. 6. Recorded information requires a (discrete) suitable medium in order to exist – a medium that allows the required freedom of arrangement. 7. A distinction is made between information presented in analog form, versus that in the genome which is a sequence of repeating digital symbols being decoded in a linear fashion following rules established by the configuration of the system (that configuration itself being determined by the information it is created to decode). 8. The origin of information requires a mechanism to establish the relationship (mapping) between the object and the symbolic representation which is to symbolize it. 9. Recorded information exists for a purpose, that purpose being manifest as a receiver of the information – that which is to be informed. - - - - - - - - - - - You indicate that you can provide evidence that neo-Darwinian processes can assimilate all these points as well as those we’ve already discussed. My hat’s off to you. Your simulation will have nothing to do with chemical reality, and it will end with an unsupported Darwinian assumption (as they always do) but it should be interesting nonetheless. Cheers…Upright BiPed
June 11, 2011 05:54 PM PDT
Elizabeth Liddle @293:
Well, let me give a more nuanced answer:
I'm trying to get beyond nuanced. :) If we don't have clear and unambiguous answers we cannot hope to agree.
If I know that the system you are using to send me a message uses two possible symbols, 1 and 0, then you have sent me 100 bits of information (or however many ones there are, I didn’t count).
Well, it wasn't a sequence of 100 1's, thank God. (In my browser the 1's extend across the page with no line break. So I apologize for that.) But let's say, for the sake of argument, that the pattern I sent did consist of a series of 100 1's. You are now saying that my pattern of strictly a sequence of 1's contains the same amount of Shannon Information as your pattern of 0's and 1's which were completely random. How can that be? At some point in the series, shouldn't your surprisal have actually been reduced? You also said that your example had 100 bits of Shannon Information but seemed to intuitively recognize that my series of 1's had "not very much" information. So I hope you'll understand my confusion at the apparent lack of consistency. In the same post @93 you wrote:
If instead of coin tosses, I sent 1010101010101010101….. You’d start to make some pretty good guesses at the rest of the series, so the amount of new information I’d created would be very small.
You are not being consistent. If you had repeated your example of the repeating pattern "10101010..." for a total of 100 characters, would you say that it contained 100 bits of Shannon Information? IOW, you need to explain how a fixed sequence contains the same amount of Shannon Information as a randomly generated sequence.Mung
June 11, 2011 05:46 PM PDT
Please excuse the length of the post I am about to make. I have been away from the computer while the conversation raged on (and will likely be away for the remainder of the weekend). So I am just catching up to everyone else's word count. :)Upright BiPed
June 11, 2011 05:36 PM PDT
HINT: Q1A: Was the result of the first coin tossed a heads? Regardless of whether the answer is a 0 or a 1, if Lizzie has responded truthfully, the configuration of the first element in the sequence is known. Q2B: Was the result of the second coin toss a tails? Regardless of whether the answer is a 0 or a 1, if Lizzie has responded truthfully, the configuration of the second element in the sequence is known. So then the question becomes, why must Upright BiPed ask 100 questions? p.s. Each answer provides 1 BIT of information. p.p.s. Note that each answer produces a reduction in the uncertainty about something.Mung
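Mung's question game can be sketched directly; each truthful yes/no answer resolves one binary alternative, which is exactly 1 bit, so a 100-toss sequence needs 100 questions. (Illustrative code; the names are mine.)

```python
import random

# Recover a recorded heads/tails sequence by asking one yes/no
# question per toss: "Was toss i a heads?"
def question_game(sequence):
    recovered = []
    questions = 0
    for toss in sequence:
        answer = (toss == 'H')   # truthful YES (1) / NO (0) reply
        recovered.append('H' if answer else 'T')
        questions += 1
    return ''.join(recovered), questions

tosses = ''.join(random.choice('HT') for _ in range(100))
recovered, questions = question_game(tosses)
print(recovered == tosses, questions)  # True 100 -- 100 answers, 100 bits
```

This also answers the "why 100 questions" prompt: with 2^100 equiprobable sequences, no questioning scheme with binary answers can guarantee recovery in fewer than log2(2^100) = 100 questions.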
June 11, 2011 05:01 PM PDT
Well, let me give a more nuanced answer: If I know that the system you are using to send me a message uses two possible symbols, 1 and 0, then you have sent me 100 bits of information (or however many ones there are, I didn't count). However, if I don't know how many symbols you could have used, then you have sent me not much more than 1 bit, because while the first bit may surprise me, by the end of the message each subsequent one is reducing my uncertainty that the next will not be a 1 by only a tiny amount. And this goes back to the point I was trying to make to kairosfocus; to know how much information there is in a signal, we have to know something about what other signals are possible. If your message was the result of a series of coin-tosses, and I knew that, your message would contain a lot of information (100 bits). If I didn't know that ones and zeros on each go were equiprobable, though, I'd quickly infer that your cat had gone to sleep on your keyboard. So no, I don't think it reduces my claim to absurdity. As long as I know that pattern X is not the only pattern possible, then a replication of pattern X tells me that information has been transferred, whether that pattern is a pattern of all ones or the pattern you gave later. So both qualify as information. How much information depends on whether I have prior knowledge of the probability distribution from which the symbols are drawn, or whether I have to deduce it from the probability distribution observed in the signal.Elizabeth Liddle
June 11, 2011 04:56 PM PDT
On the Information Content of a Randomly Generated Sequence (cont.) : Elizabeth Liddle @93:
So if I send you a series of 100 ones and zeros, and I arrange it so that at each position, ones and zeros are equiprobable, then I have sent you 100 bits of information, right? Well, I don’t even need natural selection to do that, I can just toss a coin 100 times! And, by an entirely stochastic process, I have sent you 100 bits of information. So on that definition, any stochastic process creates information. Indeed, the more “intelligent” the process, the less information I actually create.
: Upright BiPed @202:
The reason I asked you what it was about is because if information is not about anything then it’s not information – at best, in the Shannon sense, it’s noise. - - - - - This is why I said I don’t care what you want to say the information is about, but it must be about something. Your choice.
So in the case that Lizzie tosses a fair coin which has a "heads" on one side and a "tails" on the other side 100 times and records the sequence where H stands for heads and T stands for tails. Lizzie then encodes each H as a 1 and each T as a zero. She then transmits the sequence of 1's and 0's to Upright BiPed. IF Upright BiPed understands that a 0 "means" a Tails and a 1 "means" a Heads. THEN, Lizzie has indeed transmitted 100 bits of information ABOUT the sequence of coin tosses which she recorded. So it was not the case that the information was not about anything. Why 100 bits? Say that it is the case that Upright BiPed is asked to discover (become informed about) the recorded sequence of coin tosses by asking a series of questions to which the response would consist of only YES/NO or TRUE/FALSE (binary = base 2) answers. Say that Upright BiPed and Lizzie had agreed that by convention, in response to the question posed by Upright BiPed, Lizzie would be truthful and send a "0" to represent a NO or FALSE and that she would send a "1" to represent a YES or TRUE. How many questions, minimum would Upright BiPed need to ask in order to become fully informed about the sequence of heads and tails recorded by Lizzie? Lizzie:
So on that definition, any stochastic process creates information.
FALSE.

Mung
June 11, 2011 at 4:50 PM PDT
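Mung’s 100-bit figure above can be checked directly. For a fair coin, each toss carries −log2(1/2) = 1 bit, which is also the minimum of one yes/no question per toss in the guessing game he describes. A quick Python sketch (an editorial illustration, not part of the original exchange):

```python
import math
import random

def shannon_bits(p):
    """Surprisal of an outcome with probability p, in bits."""
    return -math.log2(p)

# 100 tosses of a fair coin, encoded H -> '1', T -> '0'
random.seed(0)
sequence = ''.join(random.choice('01') for _ in range(100))

# Each equiprobable binary symbol carries exactly 1 bit...
total_bits = sum(shannon_bits(0.5) for _ in sequence)
print(total_bits)  # 100.0

# ...which equals the minimum number of YES/NO questions needed
# to pin down one sequence out of 2^100 equally likely ones:
questions = math.log2(2 ** len(sequence))
print(questions)  # 100.0
```

Because every sequence of 100 tosses is equally likely, no questioning strategy can do better than 100 binary questions – which is exactly what “the sequence carries 100 bits” means.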
Elizabeth Liddle @208:

My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things – it couldn’t possibly have happened by Accident)...

And on the other hand we have: Creatures of Accident: The Rise of the Animal Kingdom

Mung
June 11, 2011 at 4:26 PM PDT
Yes, and in that instance, Mung, not very much information.
Specifically, how much? Isn't the information content of that pattern measurable? You were able to come up with a value of 100 bits of Shannon Information for your example, so I assume you know how to measure the information content of my example.
Yes, and in that instance, Mung, not very much information.
How do you know it's not very much, if you can't measure it? I congratulate you for understanding the argument, but do you not see it as a reductio ad absurdum of your claim?
And if a pattern is transmitted, I suggest that information has been transferred.
And I suggest that it depends upon the pattern along with some other factor or factors. So at the other end of the scale I offer you the following: d;slit 8upoq4ewyt sjhfgoij54ir e;laieu kjfnfdl skjt ljts s/a/.khjtpwoo96p[3q9u6;l2 That's a pattern, right? Can you explain why it qualifies as information?

Mung
June 11, 2011 at 4:18 PM PDT
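Mung’s “how much?” question has a standard, if limited, answer: a zeroth-order empirical entropy estimate puts numbers on both kinds of example in this exchange. Note the hedge the thread itself insists on – this measure sees only symbol statistics, never what (if anything) a string is about. An editorial sketch (the 249-character string of 1’s stands in for Mung’s example):

```python
import math
from collections import Counter

def empirical_entropy_bits(s):
    """Zeroth-order Shannon entropy estimate in bits per symbol,
    based only on observed symbol frequencies (blind to meaning)."""
    counts = Counter(s)
    n = len(s)
    # "+ 0.0" normalises the -0.0 that arises for a one-symbol alphabet
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

ones = '1' * 249                          # a monotonous string like Mung's
junk = 'd;slit 8upoq4ewyt sjhfgoij54ir'   # the start of his "junk" pattern

print(empirical_entropy_bits(ones))       # 0.0 -- "not very much information"
print(empirical_entropy_bits(junk) > 3)   # True -- yet gibberish scores high
```

So the measure agrees with Elizabeth that the all-1’s string carries almost nothing, but it also awards a high score to meaningless junk – which is precisely the “aboutness” gap Mung is pressing on.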
kairosfocus:
The log reduced form of the Chi metric is not about the formulation of chance hyps, it is about the issue of finding isolated islands of interest in large enough config spaces.
The log reduction (which of course is standard with Shannon information) isn't the point, kairosfocus – the point is that in all versions of Chi that I have seen, e.g. in the UD glossary:

χ = – log2[10^120 · φS(T) · P(T|H)]

you need a value for P(T|H), where P(T|H) "is the probability of being in a given target zone in a search space, on a relevant chance hypothesis". Taking the log doesn't appear to me to obviate that requirement :) What matters, surely, is whether what happened is vanishingly unlikely under the null hypothesis of No Intelligent Designer (or, if you like, by Chance and Necessity alone). So unless we actually calculate the probability under that null, we cannot determine whether it could be expected to happen at least once in the number of events that are possible in the known universe, or whatever alpha you want to use. I'm not querying the alpha value; I'm asking how you calculate the probability of the observed pattern under the null hypothesis.

Elizabeth Liddle
June 11, 2011 at 4:00 PM PDT
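Elizabeth’s point can be made concrete: the χ metric is computable only after a value for P(T|H) is supplied, and the design verdict swings entirely on that assumed value. A hedged editorial sketch (φS(T) is set to 1 purely for illustration; the probabilities below are arbitrary assumptions, not measurements):

```python
import math

def chi(p_t_given_h, phi_s=1.0, resources=10 ** 120):
    """chi = -log2(10^120 * phiS(T) * P(T|H)), per the formula quoted
    from the UD glossary. Everything hinges on the P(T|H) supplied."""
    return -math.log2(resources * phi_s * p_t_given_h)

# The verdict flips with the assumed chance hypothesis:
print(chi(2 ** -500) > 0)  # True  -- chi positive: "design" inferred
print(chi(2 ** -300) > 0)  # False -- chi negative: no design inference
```

Nothing in the formula itself tells you which P(T|H) is the right one – which is exactly the question Elizabeth is putting to kairosfocus.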
Yes, and in that instance, Mung, not very much information.

Elizabeth Liddle
June 11, 2011 at 3:47 PM PDT
F/N: Dr Liddle: The log reduced form of the Chi metric is not about the formulation of chance hyps, it is about the issue of finding isolated islands of interest in large enough config spaces. For our solar system the number of configs for 500 bits is 48 orders of magnitude more than the number of quantum states for the 10^57 or so atoms involved, and for 1,000 bits we are 10^150 beyond the number of Planck-time q-states for the 10^80 or so atoms in the observed cosmos. [There are about 10^30 Planck times in the fastest – ionic – chemical reaction times.]

The point being, if all the atoms of the observed cosmos – or, for a more realistic limit, our solar system – working flat out under the most favourable possible conditions could not sample an appreciable fraction of the states, the scope of your search just rounded down to an effective zero. UNLESS YOU BELIEVE IN INCREDIBLE LUCK NOT DISTINGUISHABLE FROM MIRACLES.

So, if your informational measure is specific and comes in a scope over the thresholds given, the chance hyp is irrelevant; it is not going to exceed a Planck-time quantum-state search of 10^102 or 10^150 states. So, note, with warranted specificity explicitly invoked:

Chi_500 = I*S – 500, bits beyond the solar system threshold

GEM of TKI

kairosfocus
June 11, 2011 at 3:31 PM PDT
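kairosfocus’s arithmetic can be sanity-checked. Taking his figures at face value (~10^102 Planck-time quantum states for the solar system, ~10^150 for the observed cosmos), the claimed gaps do follow (editorial sketch; the state-count figures are his assumptions, reproduced here, not independently established):

```python
import math

# Assumed figures from the comment above: ~10^102 Planck-time quantum
# states for the solar system's 10^57 atoms, ~10^150 for the cosmos.
solar_states = 10 ** 102
cosmos_states = 10 ** 150

# 500 bits -> 2^500 configurations; 1,000 bits -> 2^1000.
solar_margin = math.log10(2 ** 500) - math.log10(solar_states)
cosmos_margin = math.log10(2 ** 1000) - math.log10(cosmos_states)

print(int(solar_margin))   # 48 -- the "48 orders of magnitude more"
print(int(cosmos_margin))  # 151 -- roughly the "10^150 beyond" claimed
```

The arithmetic checks out given the premises; Elizabeth’s objection above is aimed at the premises (the choice of chance hypothesis), not at this bookkeeping.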
Hi Mung: What happens is that if you do a maximisation on the H-formula, you get peak value when p_i is flat across i, as a result of the math; I think Shannon even plotted a maximum diagram in his original paper IIRC. That is just an oddity of the mathematics, and it is irrelevant to real-world signals, as real-world codes do not go to zero redundancy, and will not push to have all symbols appearing with the same relative frequency in typical messages. As to noise vs signal characteristics, one key one is the classic eye diagram, where the degree of opening of the eye will mark a clean/dirty signal.

GEM of TKI

kairosfocus
June 11, 2011 at 3:13 PM PDT
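The maximisation kairosfocus mentions is easy to verify numerically: for a fixed alphabet, H = −Σ p_i log2 p_i peaks exactly when the p_i are flat. An editorial sketch:

```python
import math

def H(probs):
    """Shannon average information H = -sum(p_i * log2 p_i), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]

# For a 4-symbol alphabet, H peaks at log2(4) = 2 bits, reached
# only when all symbols are equiprobable:
print(H(uniform))               # 2.0
print(H(skewed) < H(uniform))   # True -- any skew lowers H
```

Real codes are skewed (redundant), so – as the comment says – actual messages sit below this mathematical maximum.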
And if a pattern is transmitted, I suggest that information has been transferred.
111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

Mung
June 11, 2011 at 3:09 PM PDT
Mung, re your last point: I think it is a very important point, and, indeed, embodied in the Shannon notion of "reduction of uncertainty". This whole aspect of information is of professional interest to me because I'm interested in the neural mechanisms that encode "surprise". As for the question of whether the sender can be different from the source: what I was distinguishing was the content of the signal at source from the content of the signal at reception. Degradation can occur between those two points (and does). But, for example, with my Duplo blocks there was a "signal" (the order of the blocks) that had a "source" (the original string) and was duplicated with less than perfect fidelity (signal at source differed from signal at reception), ergo there was loss of signal/noise contamination. However, the sender was coterminous with the signal-at-source and the receiver was coterminous with the signal-at-reception. So I can see why kairosfocus rejected it, even though, in some senses of the word "information" (even in Shannon terms), "information" had been transmitted from source to receiver, albeit imperfectly. When it comes to living things, there are a number of analogs to signal theory that can be applied at different levels. For example, we can regard DNA as the signal-at-source, and the RNA as the receiver. Or we can regard the parent cell as the sender, the DNA as the transmission medium, and the daughter cell as the receiver. Or we can regard an unknown Intelligent Designer as the sender, DNA as the signal, transmitted from cell to cell, and the cell mechanisms of reproduction as the receiver. Or we can regard the environment as the sender, differential reproduction as the message, transmitted from one generation to the next, and the next generation as the receiver. In that last instance, the message can even be expressed in words: "the alleles that work best in this environment are the ones you have now".
In this sense, the "information" comes from exactly the same place as the "information" that is supposed to be "smuggled into" the genomes via the fitness function in a GA! So yes, let's go back to Shannon and his concept of "reduction of uncertainty". In a cell, it seems to me, we have a "signal" encoded in the cell's DNA that is transmitted to the daughter cells (I say cells, plural, because cells replicate by division, unlike most multicellular animals). However, more than merely the cell's DNA is transmitted; what is also transmitted (at least in a multicellular organism) is the state of the parent cell. And the state of the daughter cells may well change from the state they inherit, in which case they pass on that additional information to their daughter cells, and so on. This is why I am very wary of focussing on DNA as the coded "message". The really important bit of coding is the updates. Not only that, but the cell also needs to respond to signals from other cells, as well as from the external world, in order to fulfill its functions. So it is far from straightforward to map signal theory on to the activities of living cells, and therefore to account for all the information that is being transmitted at any given time. However, what I do think is that any definition of Information, to be useful, has to involve the concept of transmission. Transmission is what enables us to consider "specification", and is why, above, I pointed out that we cannot simply separate signal from noise without knowing something about the signal. And so I would argue that any system in which a pattern is consistently duplicated involves the transmission of information. We can have transmission without duplication (or at least the duplication can be in a very different modality), but I don't see that we can have duplication without transmission. And if a pattern is transmitted, I suggest that information has been transferred.

Elizabeth Liddle
June 11, 2011 at 2:12 PM PDT
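Elizabeth’s source/reception distinction can be illustrated with a toy replicator that copies a string with a small per-symbol error rate – a hypothetical editorial sketch loosely modelled on her Duplo example (the function name and parameters are invented for illustration):

```python
import random

def replicate(signal, error_rate, rng):
    """Copy a binary string, flipping each symbol with probability
    error_rate -- so signal-at-reception may differ from
    signal-at-source, mimicking imperfect-fidelity duplication."""
    flip = {'0': '1', '1': '0'}
    return ''.join(flip[s] if rng.random() < error_rate else s
                   for s in signal)

rng = random.Random(42)
source = '1011001110001011'
copy = replicate(source, error_rate=0.1, rng=rng)

mismatches = sum(a != b for a, b in zip(source, copy))
print(len(source), mismatches)  # same length; typically a few flips
```

The pattern is still largely transmitted – the sense in which information has been transferred – even though degradation occurred between the two points.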
On the Meaning of Shannon Information

Hopefully I'm not beating a dead horse here, but I'm not sure this question was ever resolved. In order for something to qualify as information in the Shannon sense, it must have some surprisal value. If there is no surprisal value then it is not information in the Shannon sense of the term. But for there to be a surprisal value there must be some expectation; the receiver would have to be surprised about something. We can also phrase this in terms of uncertainty and the reduction in uncertainty upon receipt of some amount of information.

Does it follow from the above observations that information, to qualify as Shannon Information, must be about something and must reduce the uncertainty about something? Can the above thoughts be expanded upon and/or made more clear? Am I conflating the measurement with what is being measured?
A fundamental, but a somehow forgotten fact, is that information is always information about something. - The Mathematical Theory of Information
Mung
June 11, 2011 at 12:35 PM PDT
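Mung’s surprisal point is easy to state numerically: the surprisal of an outcome with probability p is −log2 p, so a certain outcome carries zero Shannon information while rare outcomes carry a lot. An editorial sketch:

```python
import math

def surprisal_bits(p):
    """Shannon surprisal -log2(p): how 'surprised' a receiver is
    by an outcome of probability p. (0.0 - ... avoids printing -0.0.)"""
    return 0.0 - math.log2(p)

print(surprisal_bits(1.0))     # 0.0 -- a certain outcome tells you nothing
print(surprisal_bits(0.5))     # 1.0 -- a fair coin flip: one bit
print(surprisal_bits(1 / 256)) # 8.0 -- a rare outcome is highly informative
```

This matches the “reduction of uncertainty” framing in the thread: if the receiver already expected the outcome with certainty, receipt of the message reduces no uncertainty and carries no Shannon information.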
(sorry to be answering in teaspoonfuls)
I consider myself to be a micro-blogger. :) One point per post is about all I can handle. Just ask BA77, lol.

Mung
June 11, 2011 at 12:28 PM PDT
But it probably makes sense even if we postulate non-intelligent senders and receivers (e.g. a cell and its progeny).
At first I thought I agreed with that statement, but on second thought, lol. Can a sender be different from the source? Let's remove the possible equivocation. By sender do you mean information source or transmitter? By receiver do you mean receiver or destination? Please see Fig. 1: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
I do think the fidelity of the transmission is an important aspect of the concept.
Absolutely. If it cannot be corrupted, can it be information? [An interesting theological question!]

Mung
June 11, 2011 at 12:24 PM PDT
Mung: (sorry to be answering in teaspoonfuls) Re Ruby: you are probably right, but I am doing a Java course right now! Re 275:
iirc, the original “challenge” said nothing about CSI.
tbh, I can't actually remember the original challenge (it was in a different thread) but what I intended when I made the claim was to demonstrate that whatever kind of information UDists say can't be generated by "Darwinian processes" can be :) However, we then got on to issues of how Darwinian process get started in the first place, hence my current formulation. As for CSI, I am assuming that something that counts as CSI is the relevant kind of information.
1. Clarify whether Information will meet the challenge, or whether it needs to be Complex Specified Information.
That would be a good start.
2. Don’t we need to get Information first, before we can get to Complex Specified Information? If you can’t generate Information you sure as heck can’t generate CSI, so why not start with Information.
I would claim that I can already do that (indeed my Duplo Chemistry demonstrated that, I would contend – faithful transmission from one generation to the next).
3. It was your claim, you should get to choose (imo). Baby steps.
Well, I guess.
I fail to see how anyone can object. For Darwinism just does propose to get to CSI, but in little steps, not big ones.
Yes. Re 276:
On the proposed simulation
My intention is to circumvent both those objections firstly by providing no fitness function, and relying on the intrinsic “fitness function” embodied in any self-building-replicator (i.e. a self-replicator that sees to its own self-replication) namely, the more efficiently it produces offspring, the more prevalent the traits it passes to those offspring will be represented in the next generation.
I see some potential problems you may face. 1. Will you have a fixed population size, or have you decided?
Yes, I have decided, and no, there will not be a fixed population size, and I'm not even starting with any self-replicators at all, just a sea of materials from which they may form. How many self-replicators emerge will depend on how the vMonomers combine, and the population size will not be constrained in any way.
2. There will be no intrinsic fitness function, because you won’t actually have any self-building-replicators.
Well, my challenge is to set up my virtual world so that they emerge from the conditions in that world (the chemistry and the physics).
3. If and when you get one, how will you decide how efficient it is without a fitness function?
I won't decide. Fitness will be an intrinsic property in the sense that it is an intrinsic property of living self-replicators. Individuals that self-replicate better are fitter than those that self-replicate less efficiently. Natural selection is differential reproduction. I expect my self-replicators to self-replicate with differential efficiency.
No particular need to reply, just some things to think about.
Yes, indeed :)

Elizabeth Liddle
June 11, 2011 at 12:07 PM PDT
Mung:
Hi Lizzie, Let me just talk, hopefully briefly. On the one hand I think perhaps my attempts to contribute have actually hindered the debate. I think you probably feel pulled in different directions and that you’re not really getting a coherent message from us.
Well, things are a little divergent right now, but that is often the dark before dawn :)
So in one sense I feel I should shut up and let you and Upright BiPed work things out. It was my intent to see if you two could come to an agreement on the challenge to be met and not introduce my own qualifications. But on the other hand I find this all so intriguing. And I think it could be fun to know the results of your experiment just for the sake of seeing what happens and then debating the meaning, if any, of the results.
Me too :)
So I don’t see myself bowing out. But I will try to make myself clear about whether I am being critical of your project or just talking about concepts and ideas. If I am talking about your virtual chemical world I’ll try to make it clear.
Well there are a lot of strands to this issue, and sorting them out is part of the process of solving the problems. I always go on about how the essence of problem-solving is a good problem statement :) So if I seem as though I am niggling, it's not evasiveness, just terminal commitment to nailing down stray loose concepts before proceeding.
My suggestion is that first and foremost you talk to UPB and try to understand what the goal of the project is and whether step by step you are even addressing the issue raised. I think you were on the right path when you were talking about sender and receiver, but that's just my opinion. Does it make sense to talk about information apart from communication?
Personally, I don't think so. But it probably makes sense even if we postulate non-intelligent senders and receivers (e.g. a cell and its progeny). I do think the fidelity of the transmission is an important aspect of the concept.
Best Wishes
reciprocated :)

Elizabeth Liddle
June 11, 2011 at 10:52 AM PDT
Mung:
Also, naive question: what does F/N stand for? I’ve been wondering!
Footnote. ;) But I think it’s great that you can ask the question. Says good things about you.
Thanks for the information and the kind remarks *blush* Oddly enough someone said the same thing to me yesterday about some questions I'd asked at a meeting :) I've never minded asking silly questions, and sometimes I find that I'm not the only one who doesn't know the answer! Not always, though. Still, I don't mind looking silly if I get the information in the end, even if I'm last to know :) Curiosity can be a powerful and under-rated drive :)

Elizabeth Liddle
June 11, 2011 at 10:27 AM PDT
On the Information Content of a Randomly Generated Sequence

kairosfocus @260:
I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals.
I think you are being redundant in the use of the term "meaningful signal." What would a meaningless signal look like? Can there be a sign that is not about anything at all?
...to then take this anomaly of the metric and use it to pretend that a random bit or symbol string more generally is thus an instance of real meaningful information, is to commit an equivocation and to misunderstand why Shannon focused on the weighted average H-metric.
I think we're in agreement here. I think this is what I have been trying to say for some time. At first my objection was intuitive, but now I think I am beginning to have real understanding.
To then take this and try to infer that a random bit string is informational in any meaningful sense, is clearly a basic error of snipping out of context and distorting, often driven by misunderstanding.
Once again let me quote MacKay:

Shannon's analysis of the 'amount of information' in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning.

It appears to me that we are in agreement. So once again I raise the question: does a randomly generated sequence contain maximal Shannon Information? Donald Johnson said it does. I said it didn't. You seemed to side with Johnson. I am thinking I am more right now than then. But have you changed your mind? Are you now saying that to make that claim requires an equivocation? If so, I agree. In a follow-up post I'll address what I think about how random sequences obtain their Shannon Information content.

Mung
June 11, 2011 at 9:55 AM PDT
On the proposed simulation
My intention is to circumvent both those objections firstly by providing no fitness function, and relying on the intrinsic “fitness function” embodied in any self-building-replicator (i.e. a self-replicator that sees to its own self-replication) namely, the more efficiently it produces offspring, the more prevalent the traits it passes to those offspring will be represented in the next generation.
I see some potential problems you may face.

1. Will you have a fixed population size, or have you decided?
2. There will be no intrinsic fitness function, because you won't actually have any self-building-replicators.
3. If and when you get one, how will you decide how efficient it is without a fitness function?

No particular need to reply, just some things to think about.

Mung
June 11, 2011 at 9:29 AM PDT
Information, or Complex Specified Information (CSI)

iirc, the original "challenge" said nothing about CSI.

1. Clarify whether Information will meet the challenge, or whether it needs to be Complex Specified Information.
2. Don't we need to get Information first, before we can get to Complex Specified Information? If you can't generate Information you sure as heck can't generate CSI, so why not start with Information.
3. It was your claim, you should get to choose (imo). Baby steps.

I fail to see how anyone can object. For Darwinism just does propose to get to CSI, but in little steps, not big ones.

Mung
June 11, 2011 at 9:01 AM PDT