
At Some Point, the Obvious Becomes Transparently Obvious (or, Recognizing the Forrest, With all its Barbs, Through the Trees)


At UD we have many brilliant ID apologists, and they continue to mount what I perceive as increasingly unanswerable assaults on the claimed creative powers of the Darwinian mechanism of random errors filtered by natural selection. In addition, they present overwhelming positive evidence that the only known source of functionally specified, highly integrated information-processing systems, with such sophisticated technology as error detection and repair, is intelligent design.

[Part 2 is here. ]

This should be obvious to any unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline.

Here is my analysis: The Forrests of the world don’t want to admit that there is design in the universe and living systems — even when the evidence bludgeons them over the head from every corner of contemporary science, and when the trajectory of the evidence makes their thesis less and less believable every day.

Why would such a person hold on to a transparently obvious 19th-century pseudo-scientific fantasy, when all the evidence of modern science points in the opposite direction?

I can see the Forrest through the trees. Can you?

Comments
Dr Liddle: The Hartley-based info metric will assign a maximal value of H, the average info per symbol, to a flat random string of symbols, due to the mathematics of weighted averages:

H = – SUM_i (p_i * log p_i)

This is an artifact of the mathematics, and is irrelevant to the issue that meaningful informational strings can be measured on observed frequencies of symbols interpreted as probabilities, yielding a measure of information:

I_k = log(1/p_k) = – log p_k

A flat random string of digits equivalent to 100 coins will take a value on the metric, but that has nothing to do with whether or not it is informational in the functional, meaningful sense. Now, we can in fact address the measurement of meaningful information, on the fact that it will normally be structured per rules of meaning [such as with, say, a compressed A/D conversion rule for an analogue signal] or even codes [ASCII text in English], and an observed event E will therefore come from a confined region T of the set of possibilities for a string or related set of symbols.

You will doubtless recall above the thought exercise of 1,000 coins in a tray with square slots, let's say a 10 X 100-slot array. A tray full of coins simply tossed will, with extremely high probability, be near to 50-50 H/T in no particular order. This is the statistically dominant cluster of configs, or microstates if you will. So if you saw a pattern that reflected that overwhelming dominance, there would be no reason to remark on it. All is as expected.

But, if the same tray were now to be seen as holding the ASCII code for the first 143 characters of this post, that would transform our estimate of the best explanation. Precisely because what we now see is utterly unexpected on the null hyp of chance distributions. Given the scope of the possibilities for 1,000 bits, we could transform the whole observed cosmos into coin trays like that and toss them for its thermodynamic lifespan, and we would have utterly no credible basis for expecting that ANY such tray -- much less the one we have in hand, so to speak -- would do anything like that. For, even the most impossibly fast coin tosses would not be able to sample more than 1 in 10^150 of the config space, so chance is not a credible explanation of such a specific, complex [the space of possibilities is very large] and informationally functional event. A far better explanation is that the coins were configured by choice, the other known causal explanation of highly contingent outcomes. (As was discussed earlier, necessity will produce strongly similar outcomes under similar initial conditions, i.e. this is how we detect and identify natural laws of necessity, like F = ma etc.)

Now, over the past few months, there has been a considerable discussion of the Dembski Chi-metric in and around UD. The upshot of such is that it is best to reduce the metric -- through expanding the log and simplifying the threshold value -- to the following form:
Namely:
define φ_S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. [31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases φ_S(T) and also by the maximum number of binary search-events in our observed universe 10^120]

χ = – log2[10^120 · φ_S(T) · P(T|H)] . . . eqn n1
How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2

3 –> So, we can re-present the Chi-metric [writing D2 for φ_S(T)]:

Chi = – log2(2^398 * D2 * p) . . . eqn n3

Chi = Ip – (398 + K2) . . . eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, [following VJT and the implications of there being about 10^102 possible Planck-time quantum states of the 10^57 or so atoms in our solar system since the big bang] K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . .
Introducing as well the dummy variable S for observed or inferred specificity to a simply describable zone of interest T -- where, if an event E from a space of possibilities W is in such a specific zone T, S = 1, and 0 otherwise:

Chi_500 = I*S – 500, bits beyond the threshold.

As was shown above, this shows how, if we have a randomly generated bit string, I will be high but S will be zero; and if we have a forced orderly repetitive pattern like unit cells in a crystal, we will have I low or zero even though S is 1. (All of this, BTW, is quite similar to the reasoning behind the simple brute-force X-metric that is in the UD WAC's and which produces an equivalent result for the 1,000-bit threshold. (I prefer this as it takes in the resources of the observed cosmos.) When MG first made her guest post, the X-metric was used to show how the CSI can be estimated for such an event.)

The point of the reduced Chi-metric is that it allows us to identify events from a narrowly specific zone that has an observed or inferred meaning or function, and to then address the challenge of getting to the configuration on a random-walk-driven trial and error search, the benchmark search. For, it has been shown that on average searches will do no better than this, if a search is picked at random from the set of possible algorithms. As has also been shown [cluster of papers by Evo Info Lab], a simple subtraction will then suffice to show a value for intelligently injected, active and problem-specific information that allows a search to outperform the benchmark.

GEM of TKI
kairosfocus
June 10, 2011 at 05:32 AM PDT
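[A minimal sketch in Python of the two metrics kairosfocus contrasts above, for readers who want to try them. The formulas are the H and Chi_500 expressions he gives; the function names and the toy inputs are illustrative assumptions, not his code.]

import math
import random
from collections import Counter

def avg_info_per_symbol(s):
    # Shannon/Hartley average information per symbol,
    # H = -sum_i p_i log2 p_i, with each p_i estimated from the
    # observed frequency of symbol i in the string.
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def chi_500(info_bits, specified):
    # The reduced metric Chi_500 = I*S - 500, where S is the dummy
    # specificity variable (1 if the event lies in a simply
    # describable zone of interest T, else 0).
    return info_bits * specified - 500

# A flat-random 1,000-bit string maximises H (about 1 bit/symbol),
# yet S = 0, so it never crosses the design-inference threshold:
random_bits = ''.join(random.choice('01') for _ in range(1000))
print(avg_info_per_symbol(random_bits))   # ~1.0 bit/symbol
print(chi_500(1000, specified=0))         # -500: below threshold
# A 1,000-bit functionally specific string (S = 1) lands well past it:
print(chi_500(1000, specified=1))         # +500 bits beyond threshold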
Mung:
Well, Upright BiPed may think I’m strange, but so far I’ve heard no disagreement, so I have no reason to think we’re at odds. Why not just ask UB if it would be ok to use Shannon’s measure to operationalize whatever you propose to offer as information? You knew that your original “information” was not about anything, and none of us disagreed with you. We all saw right away that it wasn’t about anything. So I see no reason to think that we can’t agree whether an example is about something.
But, pending an explicit reference to a method for quantifying information, this is what I propose:
iirc, UB never objected to your use of Shannon’s measure, and even cites Shannon’s paper, and only wanted to know what the “information” was about. You had to admit it wasn’t about anything. It wasn’t information at all. You appear to have abandoned Shannon Information after having first introduced it. Can you explain why?
Because obviously it doesn't work as a measure of the kind of information that either you or UB count (reasonably) as information. Because my information (despite having 100 bits in Shannon terms) wasn't "about" anything, you regard its information content as zero. So what I need is a measure of information that won't give us a false positive. There's a metric for CSI in the glossary, but it seems to me to come with problems, and right now, as UB has agreed to my operationalized version of his/her second definition, I'm happy to go with that. More to the point, I've described how I propose to attempt the challenge. If you both are happy with the proposal, I am happy to start work :) It may take me a while though.
Elizabeth Liddle
June 10, 2011 at 12:54 AM PDT
EL,
Well, that’s potentially a problem, Upright BiPed. Mung says that you can measure Information without knowing what it is about. But you are saying it has to be about something, and in order to know whether it’s about anything we have to have some criterion by which to judge whether it’s about anything.
The issue is that I am not necessarily trying to measure information. I am not asking you to measure information. I am not interested in the ratio of signal to noise, or how many bits of data are relayed, or how much uncertainty is alleviated. None of that. And the question of about-ness was only related to the information having a function; it was never intended to be a stumbling block. I am not suggesting (even for a moment) that these other issues are unimportant; they are just not what I am asking for.

I am trying to get you to demonstrate a natural process whereby a symbolic representation of a discrete state becomes embedded in a discrete medium, then that representation/medium is transferred to a receiver in order for that receiver to become informed by that representation. (And quite frankly, I am giving you a HUGE amount of leeway, given the facets of recorded information and information transfer that have yet to even be discussed.)

Why am I asking for this? Well, primarily because you suggested you could do it. But moreover, because this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation.
Upright BiPed
June 9, 2011 at 04:32 PM PDT
Dr Liddle: A few notes: 1 --> To distinguish signal from noise the signal has to have informational characteristics, not noise characteristics. Signal to noise ratio is in fact a key metric in communications. And you do not need to know the specific meaning to spot a signal from noise. (Yes, this is a design inference.) 2 --> You have gone on to say:
I’m going to start off with a “toy” chemistry – a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, amphiphilic, etc) in a fluid medium where motion is essentially brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I’m not sure. And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve, in which each unit (or “organism” if you like, or “critter”) has, encoded within it, the “recipe” for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don’t even specify a fitness function that isn’t intrinsic to the “chemistry”, that depends entirely on random motion (“Chance” if you like) and “necessity” (the toy chemistry) to create an “organism” with a “genome” that encodes information for making the next generation. Information “about” the next generation that is “sent” to the processes involved in replication.
3 --> Sorry, but this is hand waving. You are essentially calling for the unweaving of diffusion and brownian motion to create complex self-replicating systems, with informational control. The number of dispersed states will so overwhelm the clumped-at-random states, and the random clumped ones the functionally combined ones, that you will run straight into Hoyle's tornado in a junkyard. (Cf my own thought experiment discussion here.)

4 --> In short, you have run straight into the second law of thermodynamics, statistical form.

5 --> Going further, the empirical evidence of the informational polymers of life shows that they tend to combine on standard modules, with sugar-phosphate chains for D/RNA and with COOH-NH2 chains for proteins. IT IS THE SEQUENCING THAT IS HIGHLY CONTINGENT, AND THAT SEQUENCING IS INFORMATIONALLY CONTROLLED USING ALGORITHMIC STEP BY STEP PROCESSES WITH SET-UP, START, STEPS AND HALTING.

6 --> We have not addressed chirality, peptide vs non-peptide bonds, the cross-reaction of other chemicals in a soup, or the high probability of breakdown of highly endothermic molecules.

7 --> There is a reason why we found this exchange [read down a bit from here] between Orgel and Shapiro a few years ago:
[[Shapiro:] RNA's building blocks, nucleotides, contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [[Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
8 --> And, as for the evolution ideas, the basic problem is the origin of the functional information, in a context of high contingency [and I notice you have not responded to the die and 1,000-coin examples above]. Lucky noise, filtered by trial and error, is not a plausible source of information, algorithms, codes, and string data structures.

9 --> As for meaningful info, the way to measure it is to look at its functional specificity, in the context of its complexity -- thence the active info injected to outperform random walk and trial and error, something that the design thinkers are clearly bringing out.

GEM of TKI
kairosfocus
June 9, 2011 at 03:58 PM PDT
Elizabeth Liddle @208:
My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things – it couldn’t possibly have happened by Accident),
"I do not see why a “purposeless, mindless process” should not produce purposeful entities, and indeed, I think it did and does." - Elizabeth LiddleMung
June 9, 2011 at 03:29 PM PDT
Well, Upright BiPed may think I'm strange, but so far I've heard no disagreement, so I have no reason to think we're at odds. Why not just ask UB if it would be ok to use Shannon's measure to operationalize whatever you propose to offer as information? You knew that your original "information" was not about anything, and none of us disagreed with you. We all saw right away that it wasn't about anything. So I see no reason to think that we can't agree whether an example is about something.
But, pending an explicit reference to a method for quantifying information, this is what I propose:
iirc, UB never objected to your use of Shannon's measure, and even cites Shannon's paper, and only wanted to know what the "information" was about. You had to admit it wasn't about anything. It wasn't information at all. You appear to have abandoned Shannon Information after having first introduced it. Can you explain why?
Mung
June 9, 2011 at 03:18 PM PDT
Hi, Chris, @ #180! I do apologise for keeping you waiting. You wrote:
Greetings Lizzie, I mean for the terms Accident and Design to be as all-encompassing as possible – hence the initial capital letters. For me personally (going beyond the remit of ID science), either the Universe, and everything in it, is a product of the Grand Architect, or it just made itself without any kind of design whatsoever. With that emphasis in mind, I put it to you that *all* explanations that are on offer to explain existence ultimately fall into one of those two categories. The two main contenders in science – neo-Darwinian Evolution and Intelligent Design – are just competing explanations for Accident versus Design. There is no third way. Just saying there is one, without providing any details, doesn’t count by the way!
Fair enough! And my answer is a sideways one, I'm afraid (though I don't really apologise for that - sometimes problems can be solved by turning them sideways!) My position is not that Stuff (events, phenomena, complexity, whatever) is either caused by Accident (things bumped into each other in such a way that something amazing and improbable occurred) or Design (someone deliberately planned these amazing things - it couldn't possibly have happened by Accident), but that where stochastic processes involve feedback loops, Design, and even, what I call Intentional Design (which I find less ambiguous than Intelligent Design) emerges. In other words, I think that Intentional agents are one of the possible results from chaos (in the technical sense of non-linear stochastic processes). Now I'm still not giving you any details, although I'm happy to elaborate (though it's a long story....) - but that's my take. It certainly doesn't rule out a creator God (you could certainly make a case for the genius required to realise that a feedback system eventually, given eternity, will generate intelligent, intentional, moral creatures :)) but it does, I submit, make postulating that such a being might need to tinker with the thing unnecessary (and, IMO, bad theology!)
This thread and others are littered with definitions of information. Either every single definition provided fails because the cell is just “a simple homogenous globule of plasm”. Or, actually, we all know what we’re talking about here (we can even see it in the video in the top right hand corner of this very webpage) and all this talk of ‘gathering dust’ and 1s and 0s is, at best, missing the point (at worst, deliberately avoiding it). Why choose ‘gathering dust’ as a starting point for information (particularly in the cell) when supercomputers and superfactories are far more obvious and accurate associations?
Because if I (or anyone else) is to make the case that "information" can arise from simple beginnings, we need to know what the simplest possible "information" example is. I hope it goes without saying (though from responses on another thread I guess it doesn't) that no "Darwinist" thinks that the first modern cell formed "accidentally" from a fortuitous coming together of lipids, polymers, amino acids and proteins. That is clear nonsense. The issue is not whether that might have happened (vanishingly unlikely) but whether precursors of that first modern cell are possible, and whether the very early precursors might have been simple enough to have formed spontaneously under plausible scenarios for the environment on early earth. But that, of course, isn't a Darwinian issue. Darwin knew he hadn't solved that problem. No-one has, yet.

So putting aside that problem, a secondary issue is: given the essentials for Darwinian evolution, namely self-replicating entities whose offspring vary from their parents, and vary in such a way that at least a few of them reproduce more efficiently than their parents, can the "information" we see in modern cells arise? I think the answer is clearly yes, but obviously people here differ. So what I will try to do is to show that even with a very primitive (but of course "toy") chemistry, "organisms" with these properties emerge. It'll be fun, but may take me a while :)

Cheers, Lizzie
Elizabeth Liddle
June 9, 2011 at 02:57 PM PDT
@ Upright BiPed
Lizzie: “because Upright BiPed asked me what my original “message” was about, and of course, it wasn’t “about” anything.” The reason I asked you what it was about is because if information is not about anything then it’s not information – at best, in the Shannon sense, it’s noise. This was exactly Shannon’s point in his schematic Fig. 1 on the second page of his famous paper. It offers a schematic diagram with five individually-named boxes. From left to right there is an arrow which passes through four of the five boxes in a specific order to indicate the flow of information. The flow begins at “Information Source”, then passes through “Transmitter” to “Receiver”, and finally to “Destination”. The fifth of the five boxes is tangentially tied to the flow of information between the “Transmitter” and the “Receiver”. The fifth box is entitled “Noise Source”. This is why I said I don’t care what you want to say the information is about, but it must be about something. Your choice.
Well, that's potentially a problem, Upright BiPed. Mung says that you can measure Information without knowing what it is about. But you are saying it has to be about something, and in order to know whether it's about anything we have to have some criterion by which to judge whether it's about anything. Do you see what I mean? However, I'm not losing heart, because I think that operational definition we hammered out still works, although I'd like something with less wiggle room if possible. But, pending an explicit reference to a method for quantifying information, this is what I propose:

I'm going to start off with a "toy" chemistry - a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, amphiphilic, etc) in a fluid medium where motion is essentially brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I'm not sure. And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve, in which each unit (or "organism" if you like, or "critter") has, encoded within it, the "recipe" for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don't even specify a fitness function that isn't intrinsic to the "chemistry", that depends entirely on random motion ("Chance" if you like) and "necessity" (the toy chemistry) to create an "organism" with a "genome" that encodes information for making the next generation. Information "about" the next generation that is "sent" to the processes involved in replication.

If I succeeded, would you accept that I had met the challenge, or do you foresee a problem? (I have to say, I'm not sure I can do it!)
Elizabeth Liddle
June 9, 2011 at 02:20 PM PDT
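[What the first step of such a "toy chemistry" might look like in code, as a rough illustration only. The grid, the unit kinds, and the affinity table below are my own assumptions, not Dr Liddle's design; the only ingredients taken from her proposal are equiprobable Brownian steps and pairwise affinities.]

import random

# Assumed unit kinds and pairwise affinities (illustrative only).
AFFINITY = {frozenset('AB'), frozenset('BC')}

def brownian_step(x, y):
    # All four directions equiprobable: Brownian motion on a grid.
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return x + dx, y + dy

def tick(units):
    # units: list of (x, y, kind). Move every unit one step, then
    # report adjacent pairs whose kinds have an affinity (candidate
    # bonds from which larger complexes could assemble).
    units = [(*brownian_step(x, y), k) for x, y, k in units]
    bonds = [(a, b)
             for i, a in enumerate(units) for b in units[i + 1:]
             if abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
             and frozenset((a[2], b[2])) in AFFINITY]
    return units, bonds

# Seed a random "soup" and run a few ticks:
soup = [(random.randrange(20), random.randrange(20), random.choice('ABC'))
        for _ in range(50)]
for _ in range(5):
    soup, bonds = tick(soup)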
To test a claim we need a set of definitions that will enable an independent objective observer to evaluate whether the claim has been met.
Why do we need a set of definitions when we already have a measure? ME: We don’t need to know what it [a particular signal] is about in order to measure it, so why do we need to know what it [a particular signal] is about in order to operationalize it? But hey, maybe I'm way off base.
An operational definition defines something (e.g. a variable, term, or object) in terms of the specific process or set of validation tests used to determine its presence and quantity. That is, one defines something in terms of the operations that count as measuring it. http://en.wikipedia.org/wiki/Operational_definition
Lots of good stuff on that wiki page:
In quantum mechanics the notion of operational definitions is closely related to the idea of observables, that is, definitions based upon what can be measured. Operational definitions are the foundation of the diagnostic nomenclature of mental disorders (classification of mental disorders) from the DSM-III onward. An operational definition is a procedure agreed upon for translation of a concept into measurement of some kind
Now I don't speak for Upright BiPed, and if he disagrees he can certainly say so, but I'm willing to see where Shannon Information takes us, since it already exists as an accepted measure of information. The mistake I think you're making, which both UB and I have pointed out, is that you seem to assume that information can be generated simply by flipping a coin and that Shannon Information defines a concept of information per se that is totally divorced from that of meaning.
Shannon’s analysis of the ‘amount of information’ in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning. – Donald M. MacKay, Information, Mechanism and Meaning

You know who MacKay is? So I thought we were all on the same track, that it was agreed that information must be about something, only to find out that apparently we aren't. If it does not reduce the uncertainty at the receiver, is it information? So I was rather hoping you would continue with the coin flipping but let us know the meaning of the various heads or tails, or the meaning of a sequence, say every group of three.

2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16

This tells us how much information, in bits, can be coded in a sequence of heads/tails: log2(8) = 3. Here's why, in binary and in coin language where H = 1 / T = 0:

000 = TTT
001 = TTH
010 = THT
011 = THH
100 = HTT
101 = HTH
110 = HHT
111 = HHH

It is assumed that T/H is equiprobable. The probability of each flip is 1/2, 1/2, 1/2, which, amazingly enough, when multiplied = 1/8. And if we toss three coins in the air, what is the probability of a specific combination of heads and tails? Ain't math fun. Keep it simple please, haha.
Mung
June 9, 2011 at 01:43 PM PDT
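[Mung's arithmetic, checked in a few lines of Python. A sketch only; nothing here goes beyond enumerating the eight outcomes he lists.]

from itertools import product
from math import log2

# The eight three-coin outcomes, H = 1 / T = 0, exactly as listed above.
outcomes = [''.join(c) for c in product('TH', repeat=3)]
print(outcomes)                 # ['TTT', 'TTH', ..., 'HHH'] -- 8 of them
print(log2(len(outcomes)))      # 3.0 bits per three-coin sequence
print((1 / 2) ** 3)             # 0.125 = 1/8, probability of any one outcome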
You are on the Dirt and Time team, so I await your response.
Oh, that made me laugh. Lizzie, tell him you need more time.
Mung
June 9, 2011 at 12:50 PM PDT
No problem, Lizzie. The transfer window is open 'til the end of August!
Chris Doyle
June 9, 2011 at 12:04 PM PDT
#200 "Do you have an analog for the sender, btw?" Well that is what is yet to be determined, isn't it? Some people say that a God or Gods did the arranging. Others say that extra-terrestrials could have been the source. Modern science says that Dirt and Time did it. :) You are on the Dirt and Time team, so I await your response.Upright BiPed
June 9, 2011 at 11:46 AM PDT
Lizzie: "because Upright BiPed asked my what my original “message” was about, and of course, it wasn’t “about” anything." The reason I asked you what it was about is because if informion is not about anything then its not informion - at best, in the Shannon sense, it's noise. This was exactly Shannon's point in his schematic Fig. 1 on the second page of his famous paper. It offers a schematic diagram with five individually-named boxes. From left to right there is as arrow which passes through four of the five boxes in a specific order to indicate the flow of information. The flow begins at “Information Source” then passes through “Transmitter” to “Receiver” and finally to “Destination”. The fifth of the five boxes is tangentially tied to the flow of information between the “Transmitter” and the “Receiver”. The fifth box in entitle “Noise Source”. - - - - - This is why I said I don't care what you want to say the information is about, but it must be about something. Your choice.Upright BiPed
June 9, 2011 at 11:37 AM PDT
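[For readers without Shannon's 1948 paper to hand, the five boxes Upright BiPed describes sit roughly like this (redrawn from his description and the paper):]

Information Source --> Transmitter --------------> Receiver --> Destination
                                        ^
                                        |
                                  Noise Source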
Chris - haven't forgotten you, but my access time is limited right now! Fitting this in between a java class and a rehearsal!
Elizabeth Liddle
June 9, 2011 at 11:27 AM PDT
Upright BiPed
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation
Lizzie, At first glance, I have no particular problem with the definition you propose, save one element which we have both already acknowledged – that is the acknowledgement that there is a receiver to be informed by the representation. In the cell, this is obviously the ribosome (at least in regards to protein synthesis).
OK, that's fine, thanks for clarifying. Do you have an analog for the sender, btw? (My response doesn't depend on it, I'm just curious.)
And just so you remember, you were going to demonstrate how material (neo-Darwinian) processes can account for the rise of information from the start.
Yes indeed. I certainly have not forgotten. If I have any more questions I will let you know, but I think I can work with this. Mung:
Lizzie, I think you’re doing a great job of mangling the meaning of information in order to avoid, well, the meaning of information. :)
…the first problem here is the word “symbol” as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue.
Oh my. Just when I thought we were making progress. I thought that you had admitted that information needs to be about something.
No need for distress, Mung :) As I've said, a few times, I'm not trying to avoid anything. Precisely the reverse - I want us to have an agreed operational definition so that if I succeed, we can all agree that I've succeeded. It's no good if we end up arguing whether I've succeeded after I've done the work, and in any case, I can't attempt the work until I know what I'm trying to do. That's why operational definitions are absolutely crucial to scientific methodology.
We don’t need to know what it is about in order to measure it, so why do we need to know what it is about in order to operationalize it?
Well, if we don't, that's fine. The issue of what the thing was about came up because Upright BiPed asked me what my original "message" was about, and of course, it wasn't "about" anything. However, I think we have captured the "aboutness" to some extent in the current formulation. Nonetheless, if you can point me to a clear metric for how to measure any information I manage to generate, that would be really cool. That's what I'm after, and that would be better than the verbal version above. I won't be able to get to this till Sunday, so if someone can post the metric (or reference the paper in which it can be found) that would be cool. Thanks.
Elizabeth Liddle
June 9, 2011 at 11:26 AM PDT
Shannon's analysis of the 'amount of information' in a signal, which disclaimed explicitly any concern with its meaning, was widely misinterpreted to imply that the engineers had defined a concept of information per se that was totally divorced from that of meaning. - Donald M. MacKay, Information, Mechanism and Meaning
Mung
June 9, 2011 at 10:49 AM PDT
PPPS: A stone carving is not a case of culling by blind contest on differential reproductive success. Indeed, it is a case of -- INTELLIGENT DESIGN: what is chipped off is very carefully chosen based on a specific target outcome that is specified by the artist's intent. (Just think: do forces of erosion -- chance and necessity -- credibly explain the four portrait figures at Mt Rushmore?)

--> I don't have time for a point by point, so let's pick key snips that are the proverbial slices of the cake with the key ingredients.
kairosfocus
June 9, 2011 at 09:28 AM PDT
PPS: Exercise 2. Get 1,000 coins of the same kind, say US pennies. Define H = 1, T = 0. Put in a tray of square slots and toss. Overwhelmingly, through sheer statistical dominance of contingent possibilities, they will tend to settle at near 50-50 H/T, and in no particular discernible order or organisation.

Now, suppose you went away for an hour or so and came back, seeing the coins now starting from slot 1 and following on down, giving the ASCII code for the opening words of this comment. Would you say that the particular outcome is just as likely as any other single outcome, so it cannot be explained by differentiating characteristics of necessity, chance and choice? Or, would you accept that since this event E is a very specific and independently "simply" describable zone of outcomes T, the best explanation is intelligence, as the odds of not being in any such zone T are so overwhelming that the outcome should not be observable even once in the lifespan of our observable cosmos by chance? And, there is plainly no mechanical necessity that forces the coins to that sort of meaningful pattern. Explain your reasoning, and tell us whether that explanation would be persuasive to the House over in Las Vegas, and why.
kairosfocus
June 9, 2011 at 09:24 AM PDT
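[A quick numerical check of the statistics behind Exercise 2, as a sketch; the trial count is arbitrary.]

import random

# Toss 1,000 fair coins many times; the head-count clusters tightly
# near 500, the "statistically dominant" group of configurations.
trials = 10_000
counts = [sum(random.getrandbits(1) for _ in range(1000))
          for _ in range(trials)]
print(sum(450 <= c <= 550 for c in counts) / trials)  # ~0.998

# Meanwhile any ONE specific 1,000-coin configuration -- such as a
# chosen ASCII message -- has probability 2^-1000:
print(2.0 ** -1000)   # ~9.3e-302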
PS: And BTW, the unpredictability of weather in the specific is a case where fine differences in initial conditions -- essentially a chance issue -- make a big difference to overall outcomes. The forces that make winds move, and the conditions under which water will precipitate out of the air, have not changed.
kairosfocus
June 9, 2011 at 08:56 AM PDT
Dr Liddle: Let's start by getting lawlike natural regularity tracing to forces of nature right, re your:
I don’t accept, at least as self-evident, that “natural regularity” is incompatible with “contingency”, or even high contingency. Or at least, I would need to see very clear operational definitions of those terms as used in that claim.
1 --> Get a die, the ordinary 6-sided, non-loaded kind.
2 --> Hold it up above a table, and drop it. Several times.
3 --> Does it reliably fall, and could you measure the initial rate as a constant acceleration?
4 --> After it lands on the table and tumbles, does it settle to a reading?
5 --> Does it always read the same?
6 --> Now, take up the same die and set it on the table to read 1, 2, 3, . . . 6.
7 --> You have just seen the difference between lawlike regularity tracing to mechanical necessity, chance contingency, and choice contingency.
GEM of TKI
kairosfocus
June 9, 2011 at 08:54 AM PDT
It is the choice of words we make that gives our utterances meaning, not the phoneme bank from which they are drawn.
It's the words we choose not to use that give our utterances meaning. The phoneme bank is of huge importance to the number of words which may be left unsaid, without which words would be meaningless.
Mung
June 9, 2011 at 08:46 AM PDT
Lizzie, I think you're doing a great job of mangling the meaning of information in order to avoid, well, the meaning of information. :)
...the first problem here is the word “symbol” as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue.
Oh my. Just when I thought we were making progress. I thought that you had admitted that information needs to be about something. We don't need to know what it is about in order to measure it, so why do we need to know what it is about in order to operationalize it?
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation.
The first problem here is the word “representation”, as it somewhat, IMO, implies an arbitrary assignation of signifier to signified.
Mung
June 9, 2011 at 08:36 AM PDT
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation
Lizzie, At first glance, I have no particular problem with the definition you propose, save one element which we have both already acknowledged - that is the acknowledgement that there is a receiver to be informed by the representation. In the cell, this is obviously the ribosome (at least in regards to protein synthesis).

And just so you remember, you were going to demonstrate how material (neo-Darwinian) processes can account for the rise of information from the start.
Upright BiPed
June 9, 2011 at 08:29 AM PDT
Mel is right, NS is a culler, not a creator.
Yes and no. NS is certainly a culler, but culling can be creative (cf stone carving). However, if by "creative" you mean the provision of stuff from which to cull, then yes, the stuff is not created by NS. However, I'd say that the information lies in the culling, not in the "creating". It is the choice of words we make that gives our utterances meaning, not the phoneme bank from which they are drawn.
And it is by no means a given that the path to successful novelties lies always step by step uphill.
Absolutely.
Indeed evidence of use of codes and presence of irreducible complexity point to islands of function.
To maintain the metaphor, we are talking about "buttes", right? Summits to which there is no gentle path? There I disagree. In a high-dimensioned landscape there are often alternative routes to summits, and some of these may include gentle dips as well as plateaus.
Elizabeth Liddle
June 9, 2011 at 08:17 AM PDT
Upright BiPed:
EL, I gave you a definition two days ago at 143. Echoing an apparent new trend in debate, you have failed to say what it is about the definition you find prohibitive, and why that is so. Instead, it seems possible that the revolving claim for a definition will stand in as a tactic for avoiding the question.
No, it is not a "tactic for avoiding the question", Upright BiPed, and if it is a "trend" then it is a trend with a reason. The problem with your definition, which I quote below, is firstly that it is not operationalized, and secondly that it is potentially circular. So let me have a go at operationalizing it, and see whether you are happy with it. You wrote:
As I already said, my starting point is the historical use of the word; that which gives form, to in-form (from the Latin verb informare), or, from the information processing domain; a sequence of symbols that can cause a transformation within a system. Either is suitable. If these are not sufficient for you, then I will add this: Information is an abstraction of a discrete object/thing embedded in an arrangement of matter or energy. This definition is fully compliant with what is found at the genomic level, as well as inter-cellular transient signaling systems, and every other instance of information I am aware of.
To test a claim we need a set of definitions that will enable an independent objective observer to evaluate whether the claim has been met. My claim was that I could demonstrate how Darwinian processes could generate information. So to show you a Darwinian process that I claim has generated information, we need an operational definition of information that will enable an independent observer to verify my claim is justified. OK, so let's first operationalise the terms of one of your definitions:
a sequence of symbols that can cause a transformation within a system.
Now the first problem here is the word "symbol" as it somewhat, IMO, implies an arbitrary assignation of signifier to signified, and is one of the points at issue. So I suggest that we replace it with something more neutral like "items". The second problem is the word "system". The problem here is that we need a system to transform. If I propose a biological system, then you will be tempted to say to me - but hey! You started with a biological system! Where did that come from! And if I start with a non-biological system, you will be tempted to say: hey! but that's not anything like what we see in living things!

The third problem is the word "transformation" - what kind of transformation would count? Obviously I could drop a brick into a bucket of strawberries and "transform" some nice strawberry systems into a lot of mush. You'd probably (rightly) call that loss of information, but it's information we are trying to define right now! So we could go with something like: "A sequence of items that can cause non-destructive change to a persisting pattern." But I think we have lost the essence of your concept, so I don't think that will do. So let's try your alternative:
Information is an abstraction of a discrete object/thing embedded in an arrangement of matter or energy.
This looks more promising, apart from the word "abstraction". Hmmm. Dictionary definitions of "abstraction" just send us back to "abstract". For "abstract", Merriam-Webster has:
1 a : disassociated from any specific instance b : difficult to understand : abstruse c : insufficiently factual : formal 2 : expressing a quality apart from an object 3 a : dealing with a subject in its abstract aspects : theoretical b : impersonal, detached 4 : having only intrinsic form with little or no attempt at pictorial representation or narrative content
Which is somewhat problematic because these tend to reference ideas and minds, and again, we cannot include this in our definition if we are trying to determine whether a mind is intrinsic to information! However "disassociated from any specific instance" might give us a clue. That could give us something like: "Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation". That seems to work, I think, do you agree?

So I can't, for example, claim that the pattern of raindrops left on sand is creating "information" about the rain, because the representation (dimples in the sand) is not dissociated from the drops (the dimples are rain-drop shaped). I'm not wild about this (it still seems to have potential loopholes) but what do you think? Believe me, I'm not trying to make this easy for myself - exactly the opposite! I'm trying to make it hard! Interested to know what you think.

Cheers, Lizzie
Elizabeth Liddle
June 9, 2011 at 07:32 AM PDT
@kairosfocus, #178
Dr Liddle First, the null hyp is natural regularity [law]. If something is highly contingent, that kills the null. Then, the second null is that the thing is contingent reflective of a stochastic distribution. What kills that is being on a narrow zone of interest in a large enough config space, just as the analysis that supports the second law of thermodynamics highlights. In case you are interested, here is Dembski’s phrasing (and recall, this is to be applied per aspect, as linked above): “Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last” . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” The steps are plainly valid, and are based on the way science commonly works, the difference being that the chance/choice contrast is decided on isolation to a zone of interest in so large a config space that arriving there by chance is utterly implausible on the gamut of the cosmos or at least the solar system.
A couple of points here: Firstly, I don't accept, at least as self-evident, that "natural regularity" is incompatible with "contingency", or even high contingency. Or at least, I would need to see very clear operational definitions of those terms as used in that claim. Most phenomena we see in the non-living world are highly "contingent". Take weather, for instance. Certain weather phenomena only occur when a certain set of conditions are met. Or star or galaxy formation. Indeed the non-biological world is full of intricately patterned phenomena that are highly contingent, and yet an "Intelligent Designer" is not normally inferred from them (although the designer of a world in which such things occur may be).

Secondly, as a result, I don't actually agree with Dembski on this (and indeed I am interested, so thanks for the quotation!). I don't think "natural law" and "chance" are orthogonal causal factors, and even if they were, I don't see any a priori reason for assuming that if they are ruled out, the only alternative is some third factor. Let me try to support my position on this: I think Dembski is using "regularity" in the same sense that he uses "necessity" (as in "Chance and Necessity"). I think contrasting "Necessity" with "Chance" is fraught with difficulty. What is the difference between "Chance" and "Necessity"? Monod, interestingly, does not oppose the two terms in this way. He sees evolution as emerging from the interplay between the highly predictable ("Necessity") and the highly unpredictable ("Chance"). In other words natural events do not arise from one of two separate causal agents, "Chance" or "Necessity"; rather there are events that are highly predictable, and events that are highly unpredictable. Furthermore, highly predictable events are ones that are contingent on few conditions (you drop sodium into water and you will get a hydrogen flame with a high degree of certainty) while highly unpredictable events are contingent on a great many conditions, many of which may be unknown (which is why weather forecasting is so difficult).

So I propose an alternative filter: You have a highly complex pattern. You ask: is this pattern highly predictable? And your answer may be yes. For example, the pattern might be a crystal of some kind, and it might be possible, with a high degree of certainty, to predict the final crystal from known starting conditions. This would be the equivalent of "is it regular?" But it might be highly non-predictable, like a complex weather pattern. In which case you would have to conclude that the pattern was critically dependent on either starting conditions, or very small, possibly even quantum-level, fluctuations. In other words, that the pattern was chaotic, and that feedback loops resulted in highly non-linear relationships between inputs and outputs.

If the answer to the initial question was "non-predictable", i.e. we are dealing with a non-linear, chaotic system, the next question of interest, I suggest, is (back to Monod): does it exhibit teleonomy? Which I will define (with a tweak to Monod's definition) as: do its structures and behaviour contribute to the persistence of the pattern? If so, we may be in the presence of a living thing. If the answer to the last question is Yes, I suggest that we ask a final question: "Does it exhibit intentional behaviour?" By which I mean: do its activities provide evidence that it selects, from a wide repertoire of behaviours, those that further some distal goal?
If the last, we have, I suggest, an Intelligent Designer :) But none of that casts any light on the question as to whether the Intelligent Designer was Intelligently Designed - it does, however, cast some light as to what we should be looking for when looking for an Intelligent Designer. So I reject the validity of Dembski's filter. I don't think that Necessity/Regularity and Chance are orthogonal causal factors; I think they simply describe the degree of contingency that governs an event or phenomenon. And I think that the signature of life is neither predictability nor unpredictability (because living things exhibit both) but teleonomy. The big question, then, is: must teleonomic phenomena be designed by Intelligent Designers? Darwin's answer was: no. I think he was right :)
Elizabeth Liddle
June 9, 2011 at 06:59 AM PDT
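[Dr Liddle's three-question filter, reduced to code as a sketch. The boolean fields stand in for the operational tests she leaves open; the function name and category labels are my own.]

def classify(pattern):
    # pattern: dict of booleans standing in for the three tests above.
    if pattern['predictable']:
        return 'regular / lawlike (crystal-like, few contingencies)'
    if not pattern['teleonomic']:
        return 'chaotic but non-living (weather-like)'
    if pattern['intentional']:
        return 'intentional agent: an Intelligent Designer candidate'
    return 'teleonomic (living), but not demonstrably intentional'

print(classify({'predictable': False, 'teleonomic': True,
                'intentional': False}))   # living, not intentional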
Dr Liddle: Cf also the discussion in 147, which was originally directed to you in a previous thread. Note especially the expression:

Chi_500 = I*S - 500, functionally specific bits beyond the [solar system] threshold

where I is the usual I = - log2 p info metric from Hartley on, and where S = 1 or 0 depending on whether an item or event E is functionally specific from a definable zone T in a space of possibilities Z. The 500-bit threshold sets up a sufficiently isolated threshold that once the value is positive we are credibly entitled to infer to design on the evidence of functional specificity and complexity together. A random walk culled by trial and error is maximally unlikely to access T on the gamut of the solar system's resources, which are 48 orders of magnitude smaller than the set of possibilities for 500 bits. If that is not good enough, jump up to 1,000 bits, which exhausts the resources of the observable cosmos at 1 in 10^150 of the set of possibilities.
GEM of TKI
kairosfocus
June 9, 2011 at 06:31 AM PDT
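[The arithmetic behind those two thresholds, checked in a couple of lines. A sketch; the 10^102 solar-system figure is the one kairosfocus cites from VJT above.]

from math import log10

print(log10(2 ** 500))    # ~150.5: 2^500 is about 3.3 * 10^150
print(150.5 - 102)        # ~48 orders of magnitude beyond 10^102 states
print(log10(2 ** 1000))   # ~301: 2^1000 dwarfs ~10^150 cosmic operations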
Dr Liddle: Intelligent agents -- including beavers and bees for this purpose -- have intentions, but can also cause unintended consequences. The key issue, as has been pointed out, is the localisation of events E in special and describable specific zones T in a wider space S of configs. Beyond a certain point the resources of the observable cosmos are inadequate to credibly hit on an E from a T, on a random-walk-based trial and error. And that limit demonstrably starts within 1,000 bits worth of possibilities, where 125 bytes is a very short span for a program to make a serious difference.

Genetic algorithms, I am afraid, all start in zones T, and seek to climb to hilltops. They thus reveal their root in intelligent design with intent -- here, AKA targeting. That can be seen from the fitness function which, at all points in the zone swept by the vary-and-test-and-cross-breed processes, has a nice trend pointing to a locally accessible peak of performance. Starting in a broad target zone and seeking to optimise by heading for a hilltop is at best about micro-evo; it has nothing to do with the origin of major structural systems that must function from embryogenesis forward, i.e. body plan level macro-evo.

GEM of TKI

PS: Mel is right, NS is a culler, not a creator. And it is by no means a given that the path to successful novelties lies always step by step uphill. Indeed, evidence of use of codes and presence of irreducible complexity point to islands of function.
kairosfocus
June 9, 2011 at 06:20 AM PDT
EL, I gave you a definition two days ago at 143. Echoing an apparent new trend in debate, you have failed to say what it is about the definition you find prohibitive, and why that is so. Instead, it seems possible that the revolving claim for a definition will stand in as a tactic for avoiding the question.
Upright BiPed
June 9, 2011 at 06:19 AM PDT
OK, well, I've checked through the thread, and at this point, I'm not sure who is waiting for responses to specific posts, so I'll try to respond to responses to my own posts that have appeared more recently. Upright BiPed: I'd love to respond to your challenge, but I do need an operational definition of "information" before doing so. I don't mind what it is though (i.e. I'm not arguing about the definition, I just need an operational definition corresponding to the conceptual definition of information you want me to use). @ nullasalus, #177:
Elizabeth Liddle,
But we do not specify the solution. That is what the evolutionary algorithm does, and in that, they are directly comparable to natural evolution.
But we can ‘specify the solution’ in principle, to whatever degree required. Whether or not we do is a reflection of our wants and abilities, not a reflection of GAs themselves. Indeed, we already ‘specify the solution’ to a degree just by employing GAs to begin with. They aren’t utterly unpredictable to us (otherwise their practical use would be far more limited.)
I think it's very important to keep the levels distinct here, otherwise we are in trouble when we try to map the GA model on to life. Yes, of course, we use GAs to solve a problem because we think that GAs might provide us with a solution! And yes, we can constrain the solutions if we want to. But my point is that we, as Intelligent Designers, can carefully define the problem statement in order to ensure that what evolves solves it, but we are NOT designing the solution itself. We are designing the problem. Now I am well aware that getting a good problem statement is, in practical terms, a hugely important step in finding a solution. In addition, as Intelligent Designers we can also define the "solution space" - in other words, we can design our GA so that it varies along the dimensions that we think may bracket a solution. But that is not the same thing as actually solving the problem, and I think it's important to keep the steps clear.

When we define the problem we are defining the fitness function: by what criteria does our program determine whether our critter breeds or dies? The analog of this, in Darwinian terms, is the environment - whether an actual organism breeds or dies depends on how it responds to the opportunities and hazards presented to it by the environment. So that part doesn't need an ID to account for. When we design the "solution space", however, there are a couple of analogs in Darwinian terms: we need to decide the dimensions along which our critters are going to vary (are we randomly adjusting parameters, for instance, or are we adding new terms or operators?), and indeed the kind of critter it is. And both of these, of course, require intelligent input, and are candidates for an Intelligent Designer opportunity in nature.

So I can see the potential argument (cf Behe) that an Intelligent Designer is required a) to design the original critter (usually very simple) and b) to constrain the ways in which its progeny can vary. And I agree that we "neo-Darwinists" or whatever you call us (don't like the term much) need to make the case that these two things can happen in the absence of an Intentional Designer (I use that adjective advisedly :) Then, thirdly, there is the actual evolutionary process. Given the first two, the third is automatic. So, I'd argue we don't need to invoke an Intelligent Designer for that part. Of course that is the fun part - once you've defined your fitness function and designed your virtual biology, you can go home and wait for the system to solve your problem. And the answers can be deeply surprising! On occasions, it's even difficult to figure out how the solution actually works.

So I guess what I'm saying (or trying to pin down) is where the trickiest part of the Darwinian puzzle lies. I'd say there is no problem in accounting, in very "natural" terms, for both the fitness function and the solution-finding process. The tricky part is accounting for how the actual original critter emerged, if not from an Intentional Design process, and how it happens (if it does) that the variance in its descendants brackets viable and useful novelties. Does that seem like a fair problem statement?
However, unlike artificial selection, where even the “solution” may be highly specified, in GAs it often is not. Indeed, some GA outputs it is quite difficult to figure out how they actually solve the problem.
And again – this comes down to a statement about the limitations of abilities of a proximate designer, not the processes themselves. In other words, what’s doing the work here in making these ‘Darwinian’ is not the processes themselves, but statements about the designer’s knowledge (or lack thereof) of them. You’re setting up a comparison where the principal metric to decide whether a designer using a GA ‘designed’ the results of a GA, is if the designer knew and intended the GA’s results. But unless you have an ID style design detection filter, science is unable to determine the answer to the question in play – “Did a designer know and intend these results?” Note that this all comes prior to the question of whether or not the processes (variation and selection as defined by any evolutionary theory, given what we know about nature) are capable of achieving what they did, with or without designer input. Just as a GA, whether or not it was designed, likely couldn’t go from a (to use a biological example) single cell to an elephant in 4 generations.
Yes indeed. I think we are playing the same game now, at least, even if we are on different sides! Cool. Also I do think the distinction between "intentional" and "intelligent" is an important one. But perhaps off topic for this thread.
Cheers, Lizzie
Elizabeth Liddle
June 9, 2011 at 05:43 AM PDT
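[Since much of the exchange above turns on what a GA's programmer does and does not specify, here is a minimal sketch, a toy example of my own rather than anyone's actual code: the programmer writes the fitness function (the problem) and the mutation scheme (the solution space), but the optimum itself appears nowhere in the program.]

import random

def fitness(bits):
    # The "problem statement": maximise x * (31 - x) over 5-bit x.
    x = int(bits, 2)
    return x * (31 - x)

def mutate(bits, rate=0.1):
    # The "solution space": bit strings varied by point mutation.
    return ''.join(b if random.random() > rate else random.choice('01')
                   for b in bits)

pop = [''.join(random.choice('01') for _ in range(5)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]          # selection culls the bottom half
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(10)]   # replication with variation

best = max(pop, key=fitness)
print(best, fitness(best))   # converges on 01111 or 10000 (x = 15 or 16)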
