
Ranking the information content of platonic forms and ideas


Consider the following numbers and concepts (dare I say platonic forms):

1
1/2
1/9 or 0.111111…..
PI
PI squared
The book War and Peace by Tolstoy
approximate self-replicating von Neumann automaton (i.e., living cells)
Omega Number, Chaitin’s Constant
Chaitin’s Super Omega Numbers

I have listed the above concepts in the order in which I estimate their information richness, going from lower to higher.

The curious thing is that even though we can't really say exactly how many bits each concept contains, we can still rank the concepts by estimated complexity. PI can be represented by an infinite number of digits, and thus by a far greater number of bits than are contained in Tolstoy's War and Peace, but PI is conceptually just the circumference of a circle divided by its diameter.

PI can have an infinitely complex representation (an infinite number of places past the decimal), but this does not make it conceptually more complex than War and Peace, provided we have in our conceptual repertoire the notions of circle, circumference, and diameter. Thus I rank PI as simpler than War and Peace. An amusing conjecture is whether somewhere in the digits of PI there is a representation of Tolstoy's War and Peace. 🙂 But from an algorithmic information standpoint, using human math and natural language, most would say PI is algorithmically simpler than War and Peace.

As an aside, the digits of PI can be compactly represented via the Chudnovsky algorithm. Notice that the Chudnovsky algorithm presumes the concept of infinity (an infinite series) to make the representation compact. The presumption of infinity, however, does not help us express Chaitin's Omega number more compactly, because by definition there is no compact representation of the Omega number: it is incompressible and non-computable.
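To make the "compact representation" point concrete, here is a minimal sketch in Python of the Chudnovsky series, using the mpmath arbitrary-precision library; the function name and the choice of five terms are mine, purely for illustration:

```python
from mpmath import mp, mpf, sqrt, factorial

def chudnovsky_pi(digits=50, terms=5):
    """Approximate PI with a few terms of the Chudnovsky series.

    Each term contributes roughly 14 correct decimal digits, so a handful
    of terms pins PI down far beyond everyday needs: the whole infinite
    expansion is compressed into this short description.
    """
    mp.dps = digits + 10  # working precision, with guard digits
    s = mpf(0)
    for k in range(terms):
        numerator = (-1) ** k * factorial(6 * k) * (13591409 + 545140134 * k)
        denominator = factorial(3 * k) * factorial(k) ** 3 * mpf(640320) ** (3 * k)
        s += numerator / denominator
    inverse_pi = 12 * s / (640320 * sqrt(mpf(640320)))
    return 1 / inverse_pi

print(chudnovsky_pi())  # 3.14159265358979323846...
```

A short program like this is a finite description of an infinite digit string, which is the sense in which PI is algorithmically simple.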

The above list can be said to be platonic forms, or concepts, or ideas. ID literature is very friendly to the notion of platonic forms. However PI is represented, there is the sense that it is immutable. We can write a hundred textbooks with representations of PI; it doesn't add to or detract from the conceptual amount of information in the number PI. The concept of PI is immutable, hence the concept might be said to be a conserved quantity. Printing more digits of PI on a piece of paper does not increase the information content of the concept of PI. The information content of PI, if we assume platonic worlds are real, is conserved. To me, this illustrates one aspect of conservation of information.

This discussion raises the philosophical issue of whether there can be a platonic world of concepts without minds in the first place, or whether platonic forms are themselves an illusion of human minds. Darwinists and materialists tend to loathe the notion of platonic forms, but ironically, if they use math and computers, they succumb to believing in platonic forms in practice, if not in philosophy.

The notion of "human," like the classes of species that Linnaeus identified, was also considered a platonic form. Even though the world might be filled with 7 billion humans, there is the fundamental platonic form of "human" in the creationist view. Creationists viewed the essentials of the human form as invariant, much like the essentials of a rectangle are invariant even though there are infinite varieties of rectangles. In contrast, evolutionists classify species according to some presumed phylogeny. I tried to highlight the folly of classifying organisms according to prevailing phylogenetic views rather than platonic forms in Two-faced Nick Matzke.

Penrose, when pondering intelligence, suggested that when humans "produce" information in the form of great works of art, human consciousness is actually accessing the world of immutable, eternal platonic forms. Penrose rightly observed that when a composer composes music, it almost seems that some notes are more right than others, that the ideal form just happens to be there. The right notes seem to pre-exist the composer himself; the composer merely discovers the beautiful forms.

The ID debate frequently centers on the question of whether random processes can generate instances of such immutable forms when the forms are sufficiently complex, like the approximate Turing machines and information-processing systems in biology.

In the question of OOL (origin of life), a living organism approximates a platonic form we call a Turing machine. Can a random process generate such an approximation? If a random process by definition does not have the complex platonic form built into it, why should it be expected to create an instance (a copy) of that form in geological time?

The algorithmic complexity of a self-replicating von Neumann automaton is substantially greater than what we would expect a random process to generate. If a process manufactured such a system, we would presume the system was already resident in some form within the process, and that the process is merely decompressing pre-existing conceptual information to create such a marvel. Of course, that is exactly what happens when a chicken makes an egg which becomes a chicken. The question then is where the first chicken came from.

Empirical studies of plausible prebiotic environments suggest they were not information-rich enough to make the first cell, much less a chicken. Empirical studies of evolution in real time and in the field suggest evolution in the wild does not have access to the requisite information to construct a chicken from primitive unicellular organisms. It stands to reason that an information source outside of what we observe in the wild made the first chicken.

ID literature sometimes refers to the problem of chicken evolution as a problem posed by conservation of information. Perhaps one might say it is a problem posed by common sense.

NOTES:

HT: gpuccio, whose discussion about PI at UD inspired this thread

HT: Michael Denton, who championed the pre-Darwinian conception of platonic forms in biology

Comments
The specification is a concept, and it has no special "measure".
We agree on something!

scordova
November 27, 2013, 09:31 AM PDT
For proteins, the natural specification to be considered is their function. So, for an enzyme, the function will be defined as the ability to accelerate some specific biochemical reaction at some specified minimal level of detection. Given the definition, the target space can, in principle, be measured, and its ratio to the search space will be the CSI (dFSCI) of that protein family.
You're still falling into the uniqueness trap. How do you know there are not other, unrelated protein sequences that could have the same or better specific catalytic activity? We all know the set of all theoretical protein sequences is beyond vast. You assume it is also completely barren. The work that has been done so far, pioneered by Jack Szostak and carried on by others, does not indicate barrenness.

Alan Fox
November 27, 2013, 09:30 AM PDT
gpuccio, niwrad, and colleagues,

I thank you for your patience and forbearance in this discussion. The reason some of this comes up is that I occasionally give talks on these topics or interact with others, and difficulties in the workability of the definitions arise. The final conclusion of design is not in doubt, because of the improbabilities involved, but understanding the formalisms and explaining them does create difficulties. Up until now (8 years have passed at UD), I've just steered clear of the complications. I've also learned some things I didn't know 8 years ago. :-) But now that it is becoming painfully evident the Darwinists are losing, and news article after news article just gives bad news for their side, I felt freer to explore some things that have caused a little consternation. Thank you again for your participation and interest in these discussions.

Sal

scordova
November 27, 2013, 09:30 AM PDT
Sal: The specification is a concept, and it has no special "measure". It is a binary value: either it is present, or it is not. In the case of a function that can be measured, we can give a threshold of activity (as for an enzyme) above which the specification is present.

The complexity is measured as the ratio of the target space (specification present) to the search space. If the specified outcome is compressible, we may use the complexity of the algorithm which can output it, if there is a chance that the algorithm originated randomly in the system and then outputted the result by necessity, and if the complexity of the generating algorithm is lower than the complexity of the outcome. Even in this case, we have a probability linked to the outcome, but it is the probability of the algorithm, not the probability of the outcome.

In your example, the probability of a robot arising in the room spontaneously, and then ordering the coins, is certainly lower than the probability of 200 heads from random tossing. Therefore, I would not consider that possible mechanism. But the probability of having unfair coins cannot be ignored, and it should be treated by checking whether the coins are really fair. You will always have that problem with highly compressible outcomes: the possibility of a necessity mechanism cannot be ignored, because necessity mechanisms very easily produce ordered results.

Luckily, protein sequences are random-like: they are similar to language or software, not to the 2000 coins. That's why, even though your example of homochirality is certainly very interesting and convincing, I prefer to argue about protein sequences. They are digital, they are random-like and scarcely compressible, and the only algorithm which has been conceived as capable of originating them is the RV + NS neo-Darwinian model, and it is obviously wrong.

CSI, or dFSCI, is a complexity value linked to a specification. It needs no "adjustment" at all. It is simple and clear, and it works. For random-like digital sequences, which are the rule in biology, and using the functional specification correctly, there is no problem with possible necessity mechanisms: they simply don't exist. I have challenged our friends at TSZ for a long time to offer a single example of dFSCI originated by RV or RV + any kind of non-designed algorithm: nobody has ever been able to provide one. dFSCI allows us to empirically infer design with 100% empirical specificity. That is the simple, beautiful truth.

gpuccio
November 27, 2013, 09:22 AM PDT
Excuse me, I don't understand why CSI is needed. Can't I just look at the probability of a protein sequence, etc., compare it to the Universal Plausibility Metric, and decide the sequence is not possible? Thanks in advance to whoever will answer.
The complication is that every possible sequence of an amino acid polymer is extremely unlikely as well, even those polymers that aren't proteins. Every shuffle of a deck of cards gives a sequence that is as improbable as the next. We need specification to distinguish the sequences that are special to humans. It turns out biology naturally has specifications that are special to humans. This coincidence cannot be explained by:

1. chance
2. law
3. evolutionary algorithms

It is philosophically accepted in the ID community that the reason for this coincidence is that an intelligence created biology. I think that is reasonable, and it is something I believe.
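As a rough numerical illustration of the card-shuffle point above (my own sketch, not a formal ID calculation): every specific ordering of a 52-card deck carries the same surprisal of roughly 226 bits under the chance hypothesis, so improbability alone cannot distinguish a "special" ordering from a random one; that is what the specification is for.

```python
import math
import random

# Surprisal (in bits) of any one specific ordering of a 52-card deck,
# assuming all 52! orderings are equally likely under the chance hypothesis.
deck_orderings = math.factorial(52)
per_ordering_probability = 1 / deck_orderings
surprisal_bits = math.log2(deck_orderings)

print(f"Number of orderings: 52! = {deck_orderings}")
print(f"Probability of any one specific ordering: ~{per_ordering_probability:.2e}")
print(f"Surprisal of any one specific ordering: ~{surprisal_bits:.1f} bits")

# A factory-fresh deck and a freshly shuffled deck are equally improbable
# outcomes; the probability above applies to both. Only a specification
# ("factory order") marks one of them as special.
factory_order = list(range(52))
random_shuffle = random.sample(range(52), 52)
print("Factory order (first 5 cards):", factory_order[:5])
print("Random shuffle (first 5 cards):", random_shuffle[:5])
```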
scordova

November 27, 2013, 09:05 AM PDT
I have tried to point out that all Sal’s difficulties can be solved if one considers Kolmogorov complexity instead of generic complexity.
But then that precludes including the probability score in CSI, as I have shown in the previous comment. That's likely something we don't want to do. Bill himself suggested the phrase "specified improbability"; in that case, what you call generic complexity would be favored instead of Kolmogorov complexity. The problem is we can't have it both ways in one concept. That's why I suggest separating the concepts:

1. Algorithmic Information (Kolmogorov)
2. Specified Improbability (what we currently call CSI)

Sal

scordova
November 27, 2013, 08:25 AM PDT
Unfortunately your view:
1. robots create new algorithmic information: NO
2. robots create new CSI in physical artifacts: YES
is incoherent. In CSI the "S" part (specification) is a pointer or link to a pattern, a certain target in a huge pattern space. This pointer can be known only by a conscious agent, whose intelligence synthetically overarches the entire pattern space and knows the qualitative differences among patterns.
The incoherency is the fact that CSI is stated in terms of Shannon bits, whereas platonic concepts like "all heads" are not. The platonic concept in the robot is "all coins heads". However many bits it actually takes in its memory banks, however many bits it takes to actually manufacture the robot, etc., the sum total of all CSI in evidence in the robot is decoupled from the algorithmic information of the concept of "all coins heads". "All coins heads" can have the following physical specifications:
1. H
2. HH
3. HHH
500. HHHHH....HHH
1,000,000. HHHH.............HHHH....HHH
1,000,000,000. HHH............
1,000,000,000,000. H.......
Even though each of these, through the lens of Kolmogorov, has the same or almost the same algorithmic complexity ("all coins heads"), they don't have the same probability of occurrence (Shannon entropy) under the chance hypothesis for coins. The algorithmic specification (all coins heads) is the same, but the physical improbability is not. Bill unfortunately uses the word complexity to mean improbability. Bill somewhere, either in correspondence or elsewhere, said he considered using the phrase "Specified Improbability". If we followed that convention, the CSI scores would be different for each example. This again shows the 2000-coin paradox. It is resolvable if one says:

1. algorithmic information is invariant in the process (the robot doesn't create new concepts)
2. physical information (Shannon entropy) can increase (as the algorithmic information decompresses over physical objects)

It's not a matter of who is right or wrong about the final conclusion (design); it is a matter of which convention is more workable. By conflating CSI with algorithmic information we end up either saying:

1. we can't calculate CSI, or
2. we end up with the 2000-coins-all-heads paradox unresolved.

Another way to resolve the paradoxes, maybe less elegant but at least consistent, is to say the robot algorithmically compresses an infinite number of specifications for all coins heads. Thus it has an implicit specification for:
1. H
2. HH
3. HHH
500. HHHHH....HHH
1,000,000. HHHH.............HHHH....HHH
1,000,000,000. HHH............
1,000,000,000,000. H.......
Each of the implicit specifications has a separate CSI score. The question is whether this is an elegant resolution.

One might complain I'm obsessing over formalisms. Well, UD is as good a place to have a discussion about formalisms as anywhere else. We could of course just go back to bashing Darwin, as I've done for 8 years here, and arguing in circles with Darwinists, or we can have a discussion about these topics. But I'm delving into these formalisms because I want to have the 2000-coin paradox resolved.

My answer to the paradox:
1. The algorithmic information (Kolmogorov complexity) is low
2. The CSI score is 2000 bits

The way I'd answer the robot paradox:
1. The algorithmic information in the coin+robot system remains the same
2. The CSI increases for the system

The analogy from physics that I suggested is that energy is conserved, but entropy increases. In this case, the algorithmic information is conserved, but the Shannon entropy increases as the robot accesses more and more coins. It does not really make sense to say algorithmic information increases or decreases by X amount of bits, because the number of bits needed to implement a platonic concept varies depending on the machine and representation method. It is fair to say, however, that for a given machine or representation suite, we can rank concepts in terms of complexity.
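As a rough numerical companion to the resolution sketched above (my own illustration, with zlib's compressed length standing in imperfectly for Kolmogorov complexity, which is not computable in general): the Shannon surprisal of an all-heads sequence grows linearly with the number of coins, while the compressed description grows far more slowly.

```python
import zlib

def shannon_surprisal_bits(n_coins):
    """-log2 of the probability of one specific outcome of n fair coin flips."""
    return n_coins  # each fair flip contributes exactly 1 bit

def compressed_description_bits(sequence):
    """Length of a zlib-compressed description of the sequence, a crude
    stand-in for Kolmogorov complexity, which is uncomputable in general."""
    return 8 * len(zlib.compress(sequence.encode(), 9))

for n in (1, 3, 500, 1_000_000):
    all_heads = "H" * n
    print(f"{n:>9} coins: surprisal = {shannon_surprisal_bits(n):>9} bits, "
          f"zlib description ~ {compressed_description_bits(all_heads):>6} bits")
```

The surprisal column tracks what I'm calling physical information (Shannon), while the compression column tracks the roughly constant conceptual description ("all coins heads"), which is the distinction the two scores are meant to capture.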
scordova

November 27, 2013, 08:20 AM PDT
niwrad: I agree with you that Sal's position is inconsistent on that point. The simple truth is that robots cannot create new, original CSI. I have tried to point out that all of Sal's difficulties can be solved if one considers Kolmogorov complexity instead of generic complexity. I have also repeatedly stated that only conscious agents can create new, original CSI. I believe that this is true and supported by all known facts.

Non-conscious algorithms can do only the following:

a) generate new complexity about some functional output, if the function definition has already been coded in the algorithm, directly or indirectly, and the computing algorithms for that function have been coded too. In this case, the Kolmogorov complexity does not increase, and no new CSI is generated.

b) do the same as in a), incorporating some new information from the environment. In this case, the new information from the environment contributes to the output, together with the complexity coded in the algorithm. No new CSI is generated. A good example, as I have stated many times, is antibody maturation after the first immune response.

I should emphasize that when we say that no new CSI (or dFSCI) can be generated without a conscious intervention, we are not saying that no new specified/functional information can arise. We are just saying that no new complex specified/functional information can arise. Simple functional information can always arise from random variation.

A conscious agent can instead generate new, original CSI/dFSCI, because consciousness allows the experience of meaning and purpose, which is denied to non-conscious algorithms. Therefore, a conscious agent can have representations of what is meaningful and useful. That allows him to represent and define new functions, and to compute the information necessary to implement those functions, by any available means. But the conscious representation is essential. A new function can be recognized as such only in the consciousness of an agent.

gpuccio
November 27, 2013, 04:22 AM PDT
coldcoffee:
Excuse me, I don't understand why CSI is needed. Can't I just look at the probability of a protein sequence, etc., compare it to the Universal Plausibility Metric, and decide the sequence is not possible?
No. The reason is simple. What you need is not the absolute probability of some sequence, but the probability of the target space, that is, the probability of getting a functional sequence. The target space is not 1, at least for proteins. It can be rather big. Durston's numbers show that the target space is big indeed, but not so big that the CSI of a protein family becomes accessible to random variation.

Now, to define a subset of the search space as a target space, you need some form of specification. For proteins, the natural specification to be considered is their function. So, for an enzyme, the function will be defined as the ability to accelerate some specific biochemical reaction at some specified minimal level of detection. Given the definition, the target space can, in principle, be measured, and its ratio to the search space will be the CSI (dFSCI) of that protein family.

A word about the threshold, too. A threshold of complexity must be given to transform a continuous value (dFSI) into a boolean binary value (dFSCI). The threshold must be appropriate for the system and the time span we are considering, because it critically depends on the probabilistic resources of the system. Dembski has suggested 500 bits as a universal threshold. That is perfectly reasonable. I have argued that 500 bits is definitely too much for a biological context, and I have proposed 150 bits as an appropriate threshold for biological information on our planet. If we are considering some more restricted system or time span (for example, the specific appearance of some basic protein domain in natural history, which can probably be restricted to much smaller time spans), then an even lower threshold could be applied.
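As a toy numerical sketch of the target-space-to-search-space calculation and thresholding described above (the target-space figure below is invented purely for illustration; it is not one of Durston's values):

```python
import math

def dfsi_bits(target_space, search_space):
    """Functional information in bits: -log2(target space / search space)."""
    return -math.log2(target_space / search_space)

def dfsci(bits, threshold_bits):
    """Boolean verdict: is the functional complexity above the chosen threshold?"""
    return bits > threshold_bits

# Hypothetical protein family of length 150 over the 20-letter amino acid
# alphabet, with an invented target space of 10^40 functional sequences.
search_space = 20 ** 150   # all possible 150-residue sequences
target_space = 10 ** 40    # assumed size of the functional target space

bits = dfsi_bits(target_space, search_space)
print(f"Functional complexity: {bits:.0f} bits")                    # ~515 bits
print("Above 500-bit universal threshold:", dfsci(bits, 500))       # True
print("Above 150-bit biological threshold:", dfsci(bits, 150))      # True
```

The only biological work is in estimating the target space; the rest is a ratio, a logarithm, and a comparison against the chosen threshold.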
gpuccio

November 27, 2013, 04:10 AM PDT
Excuse me, I don't understand why CSI is needed. Can't I just look at the probability of a protein sequence, etc., compare it to the Universal Plausibility Metric, and decide the sequence is not possible? Thanks in advance to whoever will answer.

coldcoffee
November 27, 2013, 02:37 AM PDT
scordova #7 Unfortunately your view:
1. robots create new algorithmic information: NO
2. robots create new CSI in physical artifacts: YES
is incoherent. In CSI the "S" part (specification) is a pointer or link to a pattern, a certain target in a huge pattern space. This pointer can be known only by a conscious agent, whose intelligence synthetically overarches the entire pattern space and knows the qualitative differences among patterns. If robots create new CSI, as you claim, they can specify, i.e., grasp specifications. When you say "robots cannot create new algorithmic information (platonic forms, concepts, ideas) of any complexity," you say that robots cannot specify. In fact, what is a platonic form, a concept, an idea, but a specification, a pattern/target in the infinite pattern space called the Platonic World (called the "infinite information source" by me)?

Do you see the contradiction? In #1 you say robots cannot specify; in #2 you say robots specify. You cannot have it both ways: robots create CSI but don't create algorithmic information. Either robots create both CSI and algorithmic information, or robots create neither. Either robots have real intelligence or they haven't. If they have real intelligence (as you think), they can create anything (as humans can). If they have only false intelligence (as I think), they can create only faked CSI and faked algorithmic information.

niwrad
November 27, 2013, 02:20 AM PDT
But it seems not 100% consistent with your previous claim that “robots can create CSI”.
I believe robots can create CSI in physical artifacts, but they can't create new algorithmic information in their memory banks, nor create other robots with new algorithmic information, unless the robot has access to sources of information outside of itself. Robots cannot create new algorithmic information (platonic forms, concepts, ideas) of any complexity.

The convention so far in the ID community is to describe both algorithmic information and physical-artifact information with CSI. I've objected to this. The two notions should be separated. I have no problem saying algorithmic information of any complexity cannot increase beyond what is front-loaded in the process. The disagreement is whether CSI in physical artifacts can be increased by a robot. I say yes. To distinguish my view from that of my ID colleagues:

Standard ID view:
1. robots create new algorithmic information: NO
2. robots create new CSI in physical artifacts: NO
My view:
1. robots create new algorithmic information: NO
2. robots create new CSI in physical artifacts: YES
scordova
November 26, 2013, 05:26 PM PDT
scordova: When you speak this way, I agree with you (and Penrose), no problem. But it seems not 100% consistent with your previous claim that "robots can create CSI".

niwrad
November 26, 2013, 11:43 AM PDT
Penrose has good reason to suppose what he supposes. A computer cannot do more than what it is programmed to do. If it is "creative" in any sort of way, it must have non-deterministic inputs. So too human intelligence, it seems to me: if it is not a computer, it must have some access to inputs outside itself to have truly new insights; otherwise "new insights" are just the consequence of random processes combined with computation. I agree with Penrose. His book The Emperor's New Mind presented an anti-computational view of human intelligence. This is a spiritual view of intelligence.

scordova
November 26, 2013, 11:30 AM PDT
scordova
Penrose, when pondering intelligence, suggested that when humans "produce" information in the form of great works of art, human consciousness is actually accessing the world of immutable, eternal platonic forms.
Ahh. When, in my "response to scordova", I said that man is connected to the infinite information source, I said *exactly* the same thing as Penrose's "accessing the world of immutable eternal platonic forms". But if niwrad says something, then it is "superstition and pseudo-science"; if the same thing is said by Penrose, it is a truth.

niwrad
November 26, 2013, 08:07 AM PDT
scordova
We do not have the computer science background to even conceive of the correct architecture (that is, we couldn't even write the blueprints from scratch), much less actually achieve the architecture in practice for such self-replicators in Earth-like environments. We can only build such replicators in our toy-cyberspace worlds.
Here is an analysis that someone with a computer science background put together on exactly this topic: The Design of the Simplest Self-Replicator. The PowerPoint slides of the presentation are here. The presentation is about creating a top-level design for a real, concrete, material self-replicator. The study creates a functional diagram of the capabilities that need to be present in such a system. It also identifies the main difficulties and technical problems that need to be resolved in order to achieve this goal. The presentation concludes with an estimate of whether the current, most advanced engineering technologies will be able to solve these challenges.

InVivoVeritas
November 25, 2013, 06:02 PM PDT
InVivoVeritas, Self-replication like the kind found in biology is achieved via computation and information processing (unlike the self-replication of salt crystals). Exceptional minds indeed worked on the problem, and to this day none can solve it from scratch; they have to look at biological systems as an example. To this day, we can't make robots that can make identical copies of themselves from scratch (that is, a robot that builds a factory that will build copies of the robot). Living cells are unusual in that they are factories that can build identical copies of themselves. We do not have the computer science background to even conceive of the correct architecture (that is, we couldn't even write the blueprints from scratch), much less actually achieve the architecture in practice for such self-replicators in Earth-like environments. We can only build such replicators in our toy-cyberspace worlds. When we make artificial life in the lab, we borrow existing parts from living organisms.

The complexity of a self-replicating von Neumann automaton that can actually replicate in physical environments like Earth is almost beyond comprehension. It has more complexity than Tolstoy's War and Peace, maybe far more than most operating systems. Darwinists trivialize the human genome by saying it has only 3.3 billion or so base pairs, but if we include all the parts of the cell that are also necessary, the number of components approaches 100 trillion atoms. We don't know exactly how many are absolutely essential, but given that an embryo can expand (decompress) into a full human over time, it's fair to say lots of the parts have primary or redundant function.

Humans build self-replicators in our toy universes (we call them computer viruses). Such man-made computer viruses are toys compared to the computation inside living cells. Indeed, the blueprint for a computational self-replicator such as life comes from a mind beyond human ability by several orders.

But there is another miracle in play. It seems the Designer intended that human technology would some day arrive at the point where humans could (if they were willing to open their eyes) realize they were made by a God-like intelligence. We, myself included, could have lived our lives in the dark about the fact that we were made by God, but providentially, God has given us the tools to discover the fact that we are designed.

Some of the studies of the platonic form of self-replicators do indeed come from the geniuses of human civilization, the foremost probably being von Neumann. There is a highly slanted, pro-evolution article on von Neumann's work, but reading it, one gets a sense of the difficulty in the blueprints of a self-replicator: http://en.wikipedia.org/wiki/Von_Neumann_universal_constructor

My guess is that a blueprint describing the computational details of a human cell would easily exceed the complexity of War and Peace, and, to quote Francis Collins, the blueprint is written in the Language of God.

scordova
November 25, 2013, 04:02 PM PDT
scordova, this is a very interesting topic, and you put challenging thoughts on the table. One of my preoccupations is with self-replication, as appears to be the case in your text also. Recently I started to think about this topic from a new perspective. The essence of my thought is this:
Self-replication is a divine thought. The self-replication concept is a transcendental idea that could not have originated in a human mind (even an exceptional one). How then can we even think that the laws of physics and chemistry and happenstance may have put together a self-replicating thing?
We are now familiar with this concept of self-replication. But this is just because we have observed it in nature, and it is easy to "grasp it from a concrete example". In essence, however, it bears the divine "signature" which appears in Genesis clearly and repeatedly:
"And God said, Let the earth bring forth grass, the herb yielding seed, and the fruit tree yielding fruit after his kind, whose seed is in itself, upon the earth: and it was so." Genesis 1.11
Another way to say this: Life could NOT have a materialistic origin for another simple reason: self-replication, which is a signature of living things, is a divine idea and concept. Not only could nature not have achieved it randomly, but even a human 'untainted' with the knowledge of nature (if that is possible) would not even have conceived such a thought (i.e., the concept of self-replication).

InVivoVeritas
November 25, 2013, 02:50 PM PDT
