Uncommon Descent Serving The Intelligent Design Community

At Some Point, the Obvious Becomes Transparently Obvious (or, Recognizing the Forrest, With all its Barbs, Through the Trees)


At UD we have many brilliant ID apologists, and they continue to mount what I perceive as devastating assaults on the increasingly indefensible claims made for the creative powers of the Darwinian mechanism of random errors filtered by natural selection. In addition, they present overwhelming positive evidence that the only known source of functionally specified, highly integrated information-processing systems, with such sophisticated technology as error detection and repair, is intelligent design.

[Part 2 is here.]

This should be obvious to any unbiased observer with a decent education in basic mathematics and expertise in any rigorous engineering discipline.

Here is my analysis: The Forrests of the world don’t want to admit that there is design in the universe and living systems — even when the evidence bludgeons them over the head from every corner of contemporary science, and when the trajectory of the evidence makes their thesis less and less believable every day.

Why would such a person hold on to a transparently obvious 19th-century pseudo-scientific fantasy, when all the evidence of modern science points in the opposite direction?

I can see the Forrest through the trees. Can you?

Comments
F/N: Use Ruby! http://www.ruby-lang.org/en/ ;) It is SO much easier to program in Ruby than in Java. And it is free. Free, Open Source, Interpreted, Object Oriented, Dynamic.
Mung
June 11, 2011 at 08:51 AM PDT
Hi Lizzie,

Let me just talk, hopefully briefly. On the one hand, I think perhaps my attempts to contribute have actually hindered the debate. I think you probably feel pulled in different directions and that you're not really getting a coherent message from us. So in one sense I feel I should shut up and let you and Upright BiPed work things out. It was my intent to see if you two could come to an agreement on the challenge to be met, and not to introduce my own qualifications.

But on the other hand, I find this all so intriguing. And I think it could be fun to know the results of your experiment just for the sake of seeing what happens, and then debating the meaning, if any, of the results. So I don't see myself bowing out. But I will try to make myself clear about whether I am being critical of your project or just talking about concepts and ideas. If I am talking about your virtual chemical world, I'll try to make it clear.

My suggestion is that first and foremost you talk to UPB and try to understand what the goal of the project is and whether, step by step, you are even addressing the issue raised. I think you were on the right path when you were talking about sender and receiver, but that's just my opinion. Does it make sense to talk about information apart from communication?

Best Wishes
Mung
June 11, 2011 at 08:47 AM PDT
Also, naive question: what does F/N stand for? I've been wondering!

Footnote. ;) But I think it's great that you can ask the question. Says good things about you.
Mung
June 11, 2011 at 08:20 AM PDT
kairosfocus:
F/N 3: Perhaps, I need to remind us about where the thought on these things had already reached by the 1970's: ____________ Wicken, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. >>
The above can't be of great help to us, unfortunately, given the part I have bolded, as that would confound the conclusion with the premise! We need an operational definition of the properties of my output that is independent of the concept we want to test.
Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> ______________ I think the distinctions being made here are fundamental and need to be brought on board with further considerations.
Well, the second is more useful than the first, and I agree the distinction is important. So we need a clear operational version of "chi". The problem with all the formulations I have yet seen is that they beg the question of how to formulate the chance hypothesis. This seems to me to be crucial.
Elizabeth Liddle
June 11, 2011 at 07:47 AM PDT
F/N: I am sorry, but it is not a blunder on my part to point out the significance of being in a zone where you have a rising fitness function and an algorithm that knows what to do with it. In short, the key functional complexity has already been built in at that point. You are simply making explicit what is implicit in the inputs [including what is built into the algorithm itself and the code for it]. The key issue that design theory highlights is the need to get to those islands of function. Hill climbing within an island on built-in complex information does not solve that problem.
I'm sorry, but I don't see how this relates to my proposal. Can you explain, with specific reference to the items in my proposal? Also, naive question: what does F/N stand for? I've been wondering!
Elizabeth Liddle
June 11, 2011 at 07:14 AM PDT
kairosfocus:
Dr Liddle: Please take this as a: WARNING, something is seriously amiss at the outset . . . On looking at your just above, it seems to me that the basic problem is that you are going to equivocate between random strings etc that can in effect catalyse copies and the sort of specifically meaningful or functional information that the CSI/FSCI concept addresses.
Oh dear. I'm trying to figure out how to say this in a completely unambiguous manner: I not only do not intend to equivocate about anything, I specifically want to operationalise the definitions we are using before I start, so that no equivocation is possible! The reason I haven't started yet, and am still banging on about definitions, is precisely so that there is no room for equivocation by anyone, least of all me!

tbh, my own view - hunch, at least - is that the UD approach to information is fatally flawed, and the reason it is so difficult to get an operational definition (one that can be applied to open-ended systems, for instance) is that there are intrinsic equivocations within the concept (between intention and intelligence, for instance). But I am willing to be convinced otherwise, if we can hammer out a clear, unequivocal definition that can be applied to the kind of project that I have proposed, namely to start with no more than Chance and Necessity and create some Improbable quantity of information.
There is no issue that random unconstrained strings can be constructed, or even that a copying system or templating can replicate such, perhaps even with variation.
That's fine, I'm glad we agree on that.
And, in the case where strings are pre-programmed through nicely co-ordinated patterns of what will come together and what will not, the organising information was preloaded.
If you count the basic laws of physics and chemistry as "pre-programmed" information, then why look to life as evidence for the hand of a designer? Why not simply say: the laws of Necessity must have been designed? More to the point, if Necessity itself is Designed, then we cannot infer Design by ruling out Necessity. It seems to me that what you have just said undermines the entire UD concept.
Observe please, as has now been repeatedly noted — and I missed if you ever responded to this — the COOH-NH2 bond string for proteins is a standard click-together, and so the string of AAs depends on being informed through the mRNA and ribosome to form the — deeply isolated in config space — sequences that fold and function. Linked to that, the AAs are attached to tRNAs through the COOH end, to a standard CCA end, i.e. chemically any AA could attach to any tRNA; what controls this is that the loading enzymes match the specific tRNA and lock in the right AA, informationally based on a structured key-lock fit. In turn that enzyme forms through the same process [chicken and egg], and is in a functionally isolated fold and function island.
I'm not disputing this - I'm not sure in what sense you want me to respond to it.
Going on, RNAs and DNA similarly have a sugar-phosphate backbone that is a standard click-together. The information is in the sequencing, and is expressed by a key-lock fit on the side chain so to speak, similar to the key-lock fit of a Yale type lock, and of course it is generally accepted and understood that this is done using a 4-state digital info storage system as expressed e.g. in the genetic code and its dialects, with provisions for regulatory codes also. But all of that is distractive from and misdirected relative to the key issue: finding strings etc from specific functionally or meaningfully organised zone in wide config spaces. Cf my thought exercise on the spontaneous assembly of a functional microjet from parts in a vat, which of course bears more than a passing resemblance to your proposed model.
I don't think any of the above bears more than a passing resemblance to my proposed model, except insofar as what I hope will emerge is a population of systems that code for their own replication, which can be fairly easily quantified by evaluating how like their parent each pair of daughters is. It certainly won't do it as complicatedly as a modern cell. But I do propose more than the simple self-replication of strings, because I am now inserting an additional requirement - the content of the strings must contribute to the efficiency with which they are self-replicated (which was not true of my Duplo model). In other words, I am not proposing a "mould" or "stamp" system, in which a specified pattern is replicated because it is stamped out by the pattern, but a system in which the pattern itself specifies the events that must happen in order to result in the faithful self-reproduction of the whole.
Let’s just say, there is a reason why something like an aircraft or even the instruments on its dashboard so to speak, are not designed that way. Notice, work is defined in terms of the product of applied force and distance along line of application. It could be applied broadly and twisted into all sorts of rhetorical puzzles, but that is kept out by a consideration of context: impartation of orderly as opposed to disorderly motion. And, when we see the work to unweave diffusion by clumping and then organising towards a functional whole, we see that this work has to be controlled informationally if it is to credibly succeed. That is why designers plan their work, and why a design is a plan. It informs organising work to effect the plan. Dembski: . . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

Similarly, it is NOT a general consensus that GAs produce novel meaningful information out of the thin air of success-rewarded chance variation. In fact they are set up in carefully designed islands of existing function, and they depend on hill-climbing algorithms that are just as carefully designed, exploiting metrics that are designed, and relying on underlying models that can interpolate to essentially any degree of precision. Such models are making implicit information explicit; they are not creating new function where none existed before, out of the thin air of chance and mechanical necessity without intelligent direction and control. Just think about how a GA knows how to stop. I think a fresh start on a sounder footing is indicated. GEM of TKI
As I said, I am not proposing a GA. It is precisely in order to start on a fresher and sounder footing that my proposal is what it is. There will be no fitness function. There will be no initial population of breeding individuals. There will be a chemistry and a physics, representing both Chance and Necessity. However, if that is what constitutes the "design" I have inserted within my system, then I suggest that UD moves away from the argument that ID can be inferred from living systems, and towards the argument that ID can be inferred from the physics and chemistry that make the emergence of living systems possible (I guess that would be the "fine tuning" argument, and would put you in the same camp as many "theistic evolutionists" :))
Elizabeth Liddle
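The parent/daughter fidelity comparison Dr Liddle describes above (scoring how like their parent each pair of daughters is) can be made concrete. Below is a minimal sketch; the function name and the equal-length string encoding of a "pattern" are assumptions for illustration, not part of her actual proposal:

```python
def replication_fidelity(parent, daughter):
    """Fraction of positions at which the daughter matches the parent.

    Both patterns are modelled as equal-length symbol strings; 1.0 means
    a perfect copy, values near 0 mean the copy carries little of the
    parent's sequence information.
    """
    if len(parent) != len(daughter):
        raise ValueError("patterns must be the same length")
    matches = sum(p == d for p, d in zip(parent, daughter))
    return matches / len(parent)

# A faithful copy with one variation out of ten positions:
print(replication_fidelity("ABBABAABBA", "ABBABAABBB"))  # 0.9
```

A copy-with-variance scheme would then show fidelity slightly below 1.0 in each generation, which is exactly the quantity the discussion proposes to measure.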
June 11, 2011 at 07:10 AM PDT
F/N 3: Perhaps, I need to remind us about where the thought on these things had already reached by the 1970's: ____________ Wicken, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. >> Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> ______________ I think the distinctions being made here are fundamental and need to be brought on board with further considerations.
kairosfocus
June 11, 2011 at 06:54 AM PDT
F/N: I am sorry, but it is not a blunder on my part to point out the significance of being in a zone where you have a rising fitness function and an algorithm that knows what to do with it. In short, the key functional complexity has already been built in at that point. You are simply making explicit what is implicit in the inputs [including what is built into the algorithm itself and the code for it]. The key issue that design theory highlights is the need to get to those islands of function. Hill climbing within an island on built-in complex information does not solve that problem.
kairosfocus
June 11, 2011 at 06:48 AM PDT
Dr Liddle: Please take this as a: WARNING, something is seriously amiss at the outset . . . On looking at your just above, it seems to me that the basic problem is that you are going to equivocate between random strings etc that can in effect catalyse copies and the sort of specifically meaningful or functional information that the CSI/FSCI concept addresses. There is no issue that random unconstrained strings can be constructed, or even that a copying system or templating can replicate such, perhaps even with variation. And, in the case where strings are pre-programmed through nicely co-ordinated patterns of what will come together and what will not, the organising information was preloaded. Observe please, as has now been repeatedly noted -- and I missed if you ever responded to this -- the COOH-NH2 bond string for proteins is a standard click-together, and so the string of AAs depends on being informed through the mRNA and ribosome to form the -- deeply isolated in config space -- sequences that fold and function. Linked to that, the AAs are attached to tRNAs through the COOH end, to a standard CCA end, i.e. chemically any AA could attach to any tRNA; what controls this is that the loading enzymes match the specific tRNA and lock in the right AA, informationally based on a structured key-lock fit. In turn that enzyme forms through the same process [chicken and egg], and is in a functionally isolated fold and function island. Going on, RNAs and DNA similarly have a sugar-phosphate backbone that is a standard click-together. The information is in the sequencing, and is expressed by a key-lock fit on the side chain so to speak, similar to the key-lock fit of a Yale type lock, and of course it is generally accepted and understood that this is done using a 4-state digital info storage system as expressed e.g. in the genetic code and its dialects, with provisions for regulatory codes also.
But all of that is distractive from and misdirected relative to the key issue: finding strings etc from specific functionally or meaningfully organised zone in wide config spaces. Cf my thought exercise on the spontaneous assembly of a functional microjet from parts in a vat, which of course bears more than a passing resemblance to your proposed model. Let's just say, there is a reason why something like an aircraft or even the instruments on its dashboard so to speak, are not designed that way. Notice, work is defined in terms of the product of applied force and distance along line of application. It could be applied broadly and twisted into all sorts of rhetorical puzzles, but that is kept out by a consideration of context: impartation of orderly as opposed to disorderly motion. And, when we see the work to unweave diffusion by clumping and then organising towards a functional whole, we see that this work has to be controlled informationally if it is to credibly succeed. That is why designers plan their work, and why a design is a plan. It informs organising work to effect the plan. Dembski:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
Similarly, it is NOT a general consensus that GAs produce novel meaningful information out of the thin air of success-rewarded chance variation. In fact they are set up in carefully designed islands of existing function, and they depend on hill-climbing algorithms that are just as carefully designed, exploiting metrics that are designed, and relying on underlying models that can interpolate to essentially any degree of precision. Such models are making implicit information explicit; they are not creating new function where none existed before, out of the thin air of chance and mechanical necessity without intelligent direction and control. Just think about how a GA knows how to stop. I think a fresh start on a sounder footing is indicated. GEM of TKI
kairosfocus
June 11, 2011 at 06:43 AM PDT
Kairosfocus:
F/N: Dr Liddle, please beware of beginning your work within or in near proximity to a target zone based on having done the targetting work off-stage. That is in fact the subtle fallacy — and point of injection of intelligently developed active information — in all GAs and similar algorithms that in effect use the idea of a space that tells you warmer/colder, directly or indirectly (e.g. a nice smoothly varying fitness metric that points you conveniently uphill, ignoring the evidence of vast seas of non-functional configs). At least, when they are presented as exemplars of chance plus necessity giving rise to functional, organised complexity without intelligent direction.
Firstly, I think this is in itself a fallacy: the idea that information is somehow "smuggled into" a GA via the fitness function arises from a mistake about the levels at which we are evaluating information. Yes, of course, in a GA, the fitness function is rich in information, and yes, in a GA, that fitness function is clearly "intelligently designed". But the fitness function does not, explicitly does not, contain any information as to how fitness is to be achieved. It is that information (and very useful it can be too) that is created within the GA.

Secondly, as I've explained to Mung, what I am proposing is not a GA, and there will be no "fitness function". The other information that is provided by the designer of a GA, in addition to the fitness function, is the information needed to replicate the individuals - they start off with a population of breeding individuals. I am providing no such information. All I am providing is a chemistry and a physics. Yes, I will select my chemistry in such a way that it is likely to result in my anticipated self-replicating systems, but that is completely kosher. Nobody is suggesting that any old chemistry will result in self-replicating critters. If I succeed, that will not solve the problem of abiogenesis, because my chemistry is only a toy chemistry, and only maps crudely on to real-world chemistry, and we do not even know, for sure, what real chemicals might have been around on the early earth (though we have some ideas). But it will, I suggest, demonstrate that we can push the need for an intelligent designer at least as far back as selecting the initial physics and chemistry, and that the claim that "chance and necessity" cannot produce the kind of information displayed by self-reproducing entities is false.
Elizabeth Liddle
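The levels distinction being argued over here can be illustrated with a deliberately trivial sketch (a generic hill-climbing toy assumed purely for illustration; it is not Dr Liddle's proposal, which has no fitness function at all). The fitness function below scores candidate bit-strings, but nowhere states which string scores best; the winning configuration is found only by the variation-and-selection loop:

```python
import random

random.seed(0)

def fitness(bits):
    # Scores a candidate, but does not say *how* to score well:
    # here, simply the number of 1-bits (the classic "OneMax" toy).
    return sum(bits)

def hill_climb(n_bits=20, generations=200):
    current = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        mutant = current[:]
        i = random.randrange(n_bits)
        mutant[i] ^= 1                       # flip one random bit
        if fitness(mutant) >= fitness(current):
            current = mutant                 # keep the better variant
    return current

best = hill_climb()
print(fitness(best))   # climbs toward the maximum of 20
```

Whether this counts as "creating" the all-ones string or merely "making explicit" what the scoring rule implies is, of course, precisely the point in dispute between the two commenters.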
June 11, 2011 at 06:41 AM PDT
I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals. Thus, signal to noise ratio.
Yes indeed, but only if we have prior knowledge of the characteristics associated with a "meaningful signal". For example, if I attempt to transmit the message: N O W _ I S _ T H E _ T I M E _ F O R _ A L L _ G O O D _ M E N _ T O _ C O M E _ T O _ T H E _ A I D _ O F _ T H E _ P A R T Y down a noisy channel, it may appear like this: Q Q Q _ I Q _ T Q Q _ Q Q Q E _ Q O R _ Q L L _ G Q O D _ Q E N _ Q O _ Q O M E _ T O _ Q Q E _ Q Q D _ O F _ T H Q _ P A R T Y. Because we know there is a meaningful code called English, and because we know something about the probability distribution and contingencies of English letters in English sentences, we can immediately infer that the Qs are noise, not least because they frequently occur without a following U, which is extremely rare in English text. So we can ignore the Qs, or at least assume that at most a very small proportion of them are part of the original signal. That's fine.

However, let's say I was communicating to you in a code in which the total number of Qs in the message was an extremely important piece of information - the number of enemy ships in the Channel, for instance. And I had deliberately disguised this information by randomly selecting sentences from a typing manual to intersperse among the Qs. In that instance, the Qs would be the message, and the other letters would simply be irrelevant noise, albeit deliberately transmitted as a decoy. Or, let's say we have an alarm system, in which I repeatedly send the simple message "Q", which means "execute emergency code Q". However, the message is contaminated by cross-talk from the Party HQ next door. Again, the signal is the Qs and the noise is the other letters.

For this reason I do not find it self-evident that we can distinguish signal from noise without prior knowledge about the signal. We can of course compare the signal sent with the signal received, and quantify the noise in the channel, and quantify transmission fidelity.

And that seems to me to be a reasonable approach to evaluating the results of my proposed project: if I end up with a self-replicating structure, we can compare the "parent" structure with the "daughter" structures and quantify the fidelity of the transmission. Note that I do not suggest that a "random string" contains useful information. I do suggest, though, that evaluating whether a string is "random" is a whole nuther ball game, as the concept of CSI implies. And in the case of my proposed project, the daughter critters' morphology will be, by definition, specified by the parent critter. Not any old nice-looking pattern will do. It has to match, with some, if not total, fidelity, the parent pattern. If it does, I submit that information has been transmitted. Indeed, in my thought experiment with the Duplo Chemistry, baggage carousel and cold store, information in that sense was also transmitted. But possibly not enough :)
Elizabeth Liddle
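The two readings of the Q-laden transmission can be checked mechanically. In this sketch (the decoding choices are the illustrative assumption), the very same received string yields either an English message with the Qs discarded as noise, or a Q-count treated as the signal:

```python
# The corrupted transmission from the example, underscores marking word breaks.
received = ("Q Q Q _ I Q _ T Q Q _ Q Q Q E _ Q O R _ Q L L _ G Q O D _ "
            "Q E N _ Q O _ Q O M E _ T O _ Q Q E _ Q Q D _ O F _ T H Q _ "
            "P A R T Y").replace(" ", "")

# Reading 1: English text is the signal, so the Qs are channel noise.
english_reading = received.replace("Q", "")

# Reading 2: the Q-count is the signal (e.g. a ship count) and the
# English fragments are the decoy.
ship_count = received.count("Q")

print(english_reading)
print(ship_count)   # 20
```

The code itself cannot decide which reading is correct; that decision requires the prior knowledge of the code that the comment argues is indispensable.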
June 11, 2011 at 06:08 AM PDT
F/N: Dr Liddle, please beware of beginning your work within or in near proximity to a target zone based on having done the targetting work off-stage. That is in fact the subtle fallacy -- and point of injection of intelligently developed active information -- in all GAs and similar algorithms that in effect use the idea of a space that tells you warmer/colder, directly or indirectly (e.g. a nice smoothly varying fitness metric that points you conveniently uphill, ignoring the evidence of vast seas of non-functional configs). At least, when they are presented as exemplars of chance plus necessity giving rise to functional, organised complexity without intelligent direction.
kairosfocus
June 11, 2011 at 05:35 AM PDT
Mung (and Dr Liddle): I have previously pointed out that the underlying premise of the Hartley-based metric is that information can be distinguished from noise by its characteristics associated with meaningful signals. Thus, signal to noise ratio. I also pointed out that as an artifact of the definition and the use of a weighted average measure to get avg info per symbol, we see that a flat random distribution will give a value of the metric, and indeed will be a peak of the avg info per symbol metric. [What that really means is that if we could squeeze out all redundancy and associated differences in symbol frequencies, we would get a code that would push through the maximum quantum of information per symbol, but in fact that is not technically desirable, as reliability of signals is a consideration, thus the use of error detection and correction codes. The use of a parity check bit is the first level of this.]

I set that in the context where, to then take this anomaly of the metric and use it to pretend that a random bit or symbol string more generally is thus an instance of real meaningful information, is to commit an equivocation and to misunderstand why Shannon focussed on the weighted average H-metric. As I said before, one of his goals was to identify the carrying capacity of noisy, bandlimited channels such as telephone or telegraph lines. To then take this and try to infer that a random bit string is informational in any meaningful sense is clearly a basic error of snipping out of context and distorting, often driven by misunderstanding. (I also think the desire to defy the infinite monkeys result and draw out complex messages from lucky noise is a factor. But we have very good reason to see that complex messages are unreachable by random noise, and sufficiently complex starts at 143 ASCII characters.) GEM of TKI
kairosfocus
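The claim that a flat random distribution peaks the weighted-average metric is easy to check numerically. A minimal sketch of Shannon's average information per symbol, with a two-symbol alphabet assumed for simplicity:

```python
import math

def shannon_H(probs):
    """Average information per symbol, H = sum(-p * log2(p)), in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A flat distribution over two symbols maximises H at 1 bit/symbol;
# skewed (more "ordered") distributions carry less information per symbol.
print(shannon_H([0.5, 0.5]))   # 1.0
print(shannon_H([0.9, 0.1]))   # ~0.469
print(shannon_H([1.0]))        # 0.0
```

This is exactly the "anomaly" the comment describes: H measures carrying capacity per symbol, so a maximally random source scores highest, without that score saying anything about whether the symbols are meaningful.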
June 11, 2011 at 05:29 AM PDT
OK, thanks for the thoughtful and helpful responses above. I am absolutely serious about my intention to attempt the feat, but obviously I do want to make sure we all agree what success would look like if I achieved it! That's essentially what operationalising a hypothesis is. As I said, it's not an evading tactic (I'm actually dying to get started); it's simply sound methodology. There's no point in demonstrating something if people either think it's trivially true, or alternatively, don't think you've demonstrated what they think you have claimed to demonstrate. So I hope my good faith can now be considered beyond reasonable question. So, where are we?

Mung: I haven't proposed a conventional GA, because I don't think there is any disagreement (is there?) that a GA can result in increased information. Moreover, while the entire GA is not self-replicating, a GA incorporates a population of self-replicating "critters": "individuals" with a genome that potentially encodes a solution to some problem. These individuals are copied, with variance, and the probability with which they are copied is modulated by the degree to which the "solution" they encode succeeds. In this respect they are a good analog of Darwinian evolution, as the evolving population is enriched by the traits that raise the probability of reproduction.

The usual objection to GAs as an analog of Darwinian evolution is that the fitness function is designed by the GA writer (who has her own purpose in writing it) and the copying algorithm is also external to the critters (the critters encode their own offspring, but not the mechanism by which they give birth to those offspring). My intention is to circumvent both those objections: firstly, by providing no fitness function, relying instead on the intrinsic "fitness function" embodied in any self-building replicator (i.e. a self-replicator that sees to its own self-replication), namely that the more efficiently it produces offspring, the more prevalently the traits it passes to those offspring will be represented in the next generation. Secondly, unlike a GA, my virtual world will not start off with a self-replicator at all. I will build in no self-replication machinery, nor any set of starting genotypes. I will simply let these emerge from the binding rules that govern my "vMonomers" and the stochastic kinetic energy they receive from the virtual fluid medium they inhabit (call it heat if you like, and consider it, for the purposes of entropy discussions, as originating from an external source). I hope that addresses your question, but if not, I am eager to know why.

Regarding measuring the "information" my virtual world (I hope) will create: yes, if we can come to an agreement on how we measure the useful/meaningful information embodied in my emergent critters, that would be cool. I am happy to use Shannon information as a starting point. I am concerned, however, as to how we will apply whatever measure we choose to my emergent critters. To be specific, as the rules governing my "chemistry" will include philias and phobias, I anticipate that I will start with amphiphilic lipid-like vMonomers as well as base-like vMonomers, and that these will tend to assemble into vesicles, strings, and other compounds, which in turn will tend to disassemble and re-form. I do not intend to count this as "self-replication", although there is a sense in which patterns may tend to persist.

What I hope, however, is that eventually particular polymer-containing vesicles may fortuitously have contents that tend to result in the vesicle self-dividing into two "offspring" vesicles, each with at least some of the properties of the first, and that these in turn will self-divide, and so on; the ones that fortuitously have the properties most conducive to successful division, and to preservation and transmission of what I would at that point be inclined to call the "Information" embodied in the parent, will come to dominate the population. If I succeed (and as I keep saying, I have no absolute confidence that I will), then I would contend that Information, by any definition also applicable to living cells, would have been generated simply by means of the rules of Chance and Necessity established at the start. Indeed, if I use the same random number seed, I should get the same result (Necessity only), yet I have nowhere encoded in my program the information required to either build or replicate any given emergent structure. Indeed, I simply do not know what vesicle contents, if any, are likely to maximise the probability of vesicle division. What I would like to know (thanks PaV), of course, is, given the final population matrix, how one would estimate its CSI, and whether it exceeds the required threshold. That would be cool :) Cheers LizzieElizabeth Liddle
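[Editor's note: the design sketched above, stochastic kicks from a "heat bath" plus fixed binding rules, with a fixed random seed reproducing the entire history, can be illustrated with a toy. This is a hypothetical sketch, not Dr Liddle's actual program; the monomer kinds, affinity rules, and parameters are all invented for illustration.]

```python
import random

# Toy "virtual world": vMonomers receive stochastic kicks (Chance) and
# bind according to fixed affinity rules on contact (Necessity).
# The BINDS set is an assumed, illustrative rule table.
BINDS = {("lipid", "base"), ("base", "lipid"), ("base", "base")}

def run(seed, n=20, steps=50, width=10):
    rng = random.Random(seed)            # same seed -> identical history
    kinds = [rng.choice(["lipid", "base"]) for _ in range(n)]
    pos = [rng.randrange(width) for _ in range(n)]
    bonds = [set() for _ in range(n)]    # indices of partners ever bound
    for _ in range(steps):
        for i in range(n):               # stochastic kinetic energy
            pos[i] = (pos[i] + rng.choice([-1, 0, 1])) % width
        for i in range(n):               # deterministic binding on contact
            for j in range(i + 1, n):
                if pos[i] == pos[j] and (kinds[i], kinds[j]) in BINDS:
                    bonds[i].add(j)
                    bonds[j].add(i)
    return sorted(len(b) for b in bonds)
```

Calling `run(42)` twice yields identical output, which is the "same seed, same result (Necessity only)" point: nothing in the program encodes which structures will form, yet the outcome is fully reproducible.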
June 11, 2011 02:44 AM PDT
Hi Elizabeth. "Yes, it will be “virtual” chemistry, but that is different to saying it’s a mathematical demonstration of the principle." Ambitious. It'll be very difficult to simulate reality at the molecular level. For example, quantum entanglement has a significant effect in stabilizing the DNA molecule, according to a paper by Vedral and his group: http://arxiv.org/abs/1006.4053 Entanglement has an instantaneous effect and unlimited range. It will be a challenge to take that into a model. Good luck.Eugen
June 10, 2011 06:11 PM PDT
Elizabeth Liddle @237:
All I claimed to be able to do was to produce Information from Chance and Necessity.
Yet you're describing a system which is decidedly non-Darwinian, but UB writes:
UB’s challenge was a demonstration of neo-Darwinian forces that caused the rise of the recorded information in the cell, and has long morphed away to a sim that will have nothing to do with chemical reality.
I read that as neo-Darwinian, not non-Darwinian.Mung
June 10, 2011 05:52 PM PDT
kairosfocus:
In short, such an exercise is a red herring led off to a strawman. It is not along the right track.
Would you say that it's like receiving the same symbol over and over? But it seems to me that Lizzie is proposing to create a symbol-generating system, even if it only generates the same symbol over and over. So it's like a sender with no receiver. It seems to me that Lizzie would need to create a communication system in order to demonstrate the generation of novel information. But I'm not sure how to make the case.Mung
June 10, 2011 05:38 PM PDT
Elizabeth Liddle:
I’m simply going to demonstrate (I hope) that Information can arise spontaneously from nothing more than Chance and Necessity.
What's your reason for not wanting to use a GA? Now supposedly evolution itself is a non-teleological process of nothing but Chance and Necessity, and is purported to be able to generate information, and not just information, but Complex Specified Information. So if you could show that using a GA, I don't know what the objection would be, or why any objections would differ from those to what you are proposing in your virtual world. IOW, I personally don't understand precisely what difference it makes whether it's pre- or post-Darwinian.Mung
June 10, 2011 05:22 PM PDT
#213 “I’ve described how I propose to attempt the challenge. If you both are happy with the proposal, I am happy to start work.” Don't let me stop you. As far as I am concerned this is between you and Upright BiPed and I'm just along for the ride.Mung
June 10, 2011 05:05 PM PDT
On the Simulation of the Generation of Information Elizabeth, I don't see why you couldn't use a GA. GAs aren't self-replicators. In fact, Schneider's claim is precisely that ev can generate information de novo. He goes further and claims that it can even generate CSI. And it sounds like what you're trying to create is similar in some ways to ev, with its binding sites. Or if you think that ev is misguided, you might want to look at it anyway, so as not to repeat the same mistakes.Mung
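[Editor's note: the conventional GA structure Mung alludes to, individuals copied with mutation, with copying probability weighted by fitness, can be sketched minimally. This is a hypothetical toy, not Schneider's ev; the bit-matching fitness function here is exactly the externally supplied element Dr Liddle proposes to do without.]

```python
import random

# Minimal GA sketch: a population of bit-string "critters" is copied
# with mutation; copying probability is weighted by fitness (here,
# number of bits matching a fixed, externally chosen target).
TARGET = [1] * 16

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mut_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(g) + 1 for g in pop]   # +1 avoids zero weight
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [[(1 - b if rng.random() < mut_rate else b) for b in g]
               for g in parents]
    return max(fitness(g) for g in pop)
```

Running `evolve()` drives the best genome close to the 16-bit maximum, which is the uncontroversial sense in which a GA "increases information" relative to the target its author specified.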
June 10, 2011 04:54 PM PDT
I’m taking a look at the formulations for CSI now...
Let me know if you find any relationship between CSI and Shannon Information. :) Wouldn't that just be amazing!Mung
June 10, 2011 04:45 PM PDT
Sorry for the mistakes. Was in a hurry and didn't proof. So, the question isn't, really, if chance and necessity could give rise to information.PaV
June 10, 2011 04:08 PM PDT
Elizabeth Liddle [241]:
I’ve said, explicitly, the role I plan to give to Chance, and the role I plan to give to Necessity. I’ve also set a fairly high bar for Information, as, without any self-replicating algorithm or starter critter, I plan to let my self-replicators emerge, then evolve.
If we set twenty monkeys in front of typewriters, I bet we could get them to type out what we would recognize as English words, like "zebra-crossing". Well, actually, nothing quite that long. And so we would have chance (the monkeys) and necessity (the mechanical structure of the typewriters). So information would arise. So the question isn't, really, whether chance and necessity can give rise to information. The question is, how much information can they give rise to? That's why there's a UPB. And that's why it would take the entire lifetime of the known universe for the monkeys to type out, at random, this sentence. Does this help to give perspective?PaV
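[Editor's note: PaV's waiting-time point can be made concrete with back-of-envelope arithmetic. The 27-symbol alphabet (26 letters plus space) and the string lengths below are illustrative assumptions.]

```python
import math

# Probability of typing a given string by chance on a 27-key typewriter:
# (1/27)^L per attempt, i.e. log10(p) = -L * log10(27).
def log10_prob(length, alphabet=27):
    return -length * math.log10(alphabet)

word = log10_prob(5)       # a 5-letter word: roughly 1 in 10^7 attempts
sentence = log10_prob(60)  # a 60-character sentence: roughly 1 in 10^86
```

A 5-letter word is reachable by a modest number of random trials, but 10^86 trials for a short sentence already dwarfs, for instance, the roughly 10^80 atoms in the observable universe, which is the style of comparison behind Dembski's universal probability bound.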
June 10, 2011 04:06 PM PDT
ME: You appear to have abandoned Shannon Information after having first introduced it. Can you explain why? Elizabeth Liddle:
Because obviously it doesn’t work as a measure of the kind of information that either you or UB count (reasonably) as information. Because my information (despite having 100 bits in Shannon terms) wasn’t “about” anything, you regard its information content as zero. So what I need is a measure of information that won’t give us a false positive.
ok, thanks for your response. I appreciate it. So perhaps one way forward is not to abandon Shannon Information, but rather to see if perhaps we (one or both of us) were mistaken. Can we make it work?Mung
June 10, 2011 03:45 PM PDT
Mung, the confusion has arisen because I was trying to establish what criterion UB wanted to use for information. I wasn't offering a definition at all. What I want is an operationalized definition as it is used in the claim that Chance and Necessity cannot result in it (or not Complex, Specified Information, anyway). I'm taking a look at the formulations for CSI now, but I did hope that my description of what I anticipated would emerge from my virtual world would clearly qualify (being a structure that embodied the information necessary to duplicate itself). If so, then it should also exhibit CSI.Elizabeth Liddle
June 10, 2011 03:41 PM PDT
kairosfocus:
1 –> To distinguish signal from noise the signal has to have informational characteristics, not noise characteristics. Signal to noise ratio is in fact a key metric in communications. And you do not need to know the specific meaning to spot a signal from noise. (Yes, this is a design inference.)
Pretty amazing coincidence, lol. I had written up a comment yesterday on how it might be information if it was subject to distortion/loss by noise and correction, but deleted it because I did not want to muddy the waters concerning the nature of information. But you make a very good point.Mung
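[Editor's note: the signal-to-noise ratio kairosfocus cites as a key communications metric has a standard form worth stating: SNR in decibels is 10·log10(signal power / noise power). A minimal sketch:]

```python
import math

# Standard signal-to-noise ratio in decibels:
# SNR_dB = 10 * log10(signal_power / noise_power).
def snr_db(signal_power, noise_power):
    return 10 * math.log10(signal_power / noise_power)

# Equal powers give 0 dB; a signal 100x the noise power gives 20 dB.
```

The point in the thread is that a signal can be distinguished from noise by its statistical character even before its meaning is known, and SNR is the quantitative handle on that distinction.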
June 10, 2011 03:37 PM PDT
^sorry, messed up the tags. I hope it is clear where the words are UB's.Elizabeth Liddle
June 10, 2011 03:35 PM PDT
kairosfocus:
9 –> As for meaningful info, the way to measure it is to look at its functional specificity
I'm a bit surprised to learn that you believe in non-meaningful information. Seems to me to be an oxymoron. :)Mung
June 10, 2011 03:30 PM PDT
Upright BiPed: yes, indeed I missed this post, I do apologize (#229)
Lizzie, And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself. …and what shall we do with the observed discreteness? Lizzie, as much of an achievement as it might be, the issue is not if you can concoct a realistic simulation with parameters where self-replicating structures spontaneously appear. That schtick has already been done with intelligent agents feeding energy and pre-programmed units into an intelligently constrained system (yawn).
But I do not propose to feed "pre-programmed units" into an intelligently constrained system. I thought I had made that clear. All I am providing is a "chemistry", not "preprogrammed units". The "units" will not be "programmed" at all. They will simply have a set of properties, as real compounds do. As for "feeding energy": well, yes, of course the system will need energy. I do not understand your objection to this. "The issue is can you get an encoded symbolic abstraction of a discrete state embedded into a discrete medium, whereby that representation/medium is physically transferred to a receiver in order that the receiver become informed by the decoded representation." Well, I think the word "symbolic" is problematic, as I've already said, because I do not regard biochemistry as "symbolic". But inasmuch as biochemistry is symbolic, mine will be too, in the terms I set out above, and to which I think you agreed. In other words the self-replication will not simply be a "negative" of the original. I anticipate that what I will end up with is something that contains elements that "code for" the replication of the whole. And moreover, this will not be coded by me, in any sense. All I will provide is the chemistry and the energy. "As you can see, the rise of recorded information entails the rise of the abstraction, the symbol, and the translation apparatus/receiver. To approach it otherwise would be to attempt a book prior to the onset of paper, ink, the alphabet, or the reader. I wonder if you are failing to truly appreciate the conceptual issues you face. I know you are enamored with some idea of a mechanical representation (like a shadow, for instance), but that is not what is observed. Even the leading materialist researchers on this issue (Yarus, Knight, etc.) concede the observed indirect nature of translation. 
It is this prescriptive quality which you are shooting to mimic, and it is very much related to Pattee's "epistemic cut" or Abel's "cybernetic cut", and even Polanyi's "boundary condition". This is where the mechanism of the mind asserts itself in the causal chain, and for you to be successful, it is that quality (and its observed effects) you must reproduce without a mind." Well, I have told you how I propose to do it, and what I anticipate the result will be. My challenge to you is: if, having provided no more than Chance energy and Necessary (deterministic) rules, what emerges is a structure that embodies the coding for its own replication, not as a shadow, or a mould of each part, but a copy (with variance) of the whole, on what grounds could you say that I had not supported my claim? I should probably say at this point, and I hope people here agree, that I do not regard DNA as a code for a whole organism. It simply does not contain sufficient information. The information necessary to make a whole organism (or even another cell) is embodied not just in the DNA, but in the entire cell. Denis Noble, rightly IMO, regards DNA not as a program but a database. I agree, and what I envisage is that eventually, if I succeed, my virtual world will be populated by cell-like structures containing database-like structures that supply the materials necessary for the maintenance and self-replication of the whole. Yes, it will be "virtual" chemistry, but that is different to saying it's a mathematical demonstration of the principle. Nothing wrong with math.
As I stated in my previous post: “…this is what information is, and it is also what is found in the living cell. Information is being used to create function, and that is an observed reality. I am not interested in a loose example that truly only fulfils the need to complete a game; I am interested in an example relevant to the observation.”
So am I. Although I am also interested in the principle, as ID is based on a principle. If that principle is found to be flawed, then ID has to think again, I suggest. And if a principle of ID is that Complex Specified Information cannot result from mere Chance and Necessity, then the context is irrelevant if I can demonstrate that it can. Yes?Elizabeth Liddle
June 10, 2011 03:28 PM PDT
p.s. It should follow from our discussion above that the mere ability to measure it is not what makes it information.Mung
June 10, 2011 03:26 PM PDT