# The cause of incompleteness

November 3, 2009 | Posted by niwrad under Intelligent Design |

In a previous post I promised to start at UD a discussion about the incompleteness of physics from an ID point of view. Here is the startup article.

At Aristotle’s time “physics” was the study of nature as a whole. In modern times physics has a more specific meaning and is only one field among others that study nature. Nevertheless physicists (especially materialist ones) claim that physics can (or should) explain all of reality. This claim rests on the gratuitous assumption that all macroscopic realities can be deduced entirely from their microscopic elements or states. Even if this assumption were true, there would remain the problem of understanding where those fundamental objects or states came from in the first place. Many physicists even envision a “Theory of Everything” (ToE), able to explain all of nature, from its lower aspects to its higher ones. If a ToE really existed, a system of equations would be able to model every object and calculate every event in the cosmos, from atomic particles to intelligence. The question many ask is: can a ToE exist in principle? If the answer is positive, we could consider the cosmos a giant system whose evolution is computable. If the answer is negative, there is a fundamental incompleteness in physics, and the cosmos cannot be considered a computable system. An additional question is: what is the relation between this problem and ID?

Stephen Hawking in his lecture “Gödel and the end of physics” seems to think that Kurt Gödel’s incompleteness theorems in metamathematics can be a reason to doubt the existence of a ToE:

“Some people will be very disappointed if there is not an ultimate theory, that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind. I’m now glad that our search for understanding will never come to an end, and that we will always have the challenge of new discovery. Without it, we would stagnate. Gödel’s theorem ensured there would always be a job for mathematicians. I think M-theory will do the same for physicists. I’m sure Dirac would have approved.”

In brief, Gödel’s incompleteness theorems say that any mathematical formal system beyond a certain expressive power is either inconsistent or incomplete. Hawking’s reasoning runs roughly like this: every physical theory is a mathematical model, and since, according to Gödel’s incompleteness theorems, there are true mathematical statements that cannot be proven, there must be physical statements that cannot be proven as well, including those contained in a ToE. Gödel’s incompleteness applies to all mathematical theories with expressive power greater than or equal to arithmetic. Since any mathematically described physical theory has expressive power greater than arithmetic, it is necessarily incomplete. So we face a fundamental impossibility of a complete ToE that comes from results in metamathematics.
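The impossibility invoked here has a well-known computational cousin that can be sketched directly: no program can decide, for every program and input, whether it halts. The following Python sketch shows the diagonal move behind that proof; the names `halts`, `make_paradox` and `toy_halts` are illustrative only, not from any real library:

```python
# Diagonal argument behind undecidability (a relative of incompleteness).
# Assume, for contradiction, a decider halts(program, data) that answers
# whether program(data) halts.

def make_paradox(halts):
    """Build a program that defeats any claimed halting decider."""
    def paradox(program_source):
        # Do the opposite of whatever the decider predicts:
        # if it says "halts", loop forever; if it says "loops", halt.
        if halts(program_source, program_source):
            while True:
                pass
        return "halted"
    return paradox

# Any concrete decider must be wrong on some input. For a toy decider
# that always answers False, paradox does halt on its own source,
# so the decider mispredicted it.
toy_halts = lambda prog, data: False
paradox = make_paradox(toy_halts)
print(paradox("paradox"))  # prints "halted", contradicting toy_halts
```

The same self-reference that defeats `toy_halts` defeats any candidate decider, which is why a single finite theory cannot settle every question about what programs (or, on Hawking’s analogy, physical systems) will do.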

Computability theory and its continuation, Algorithmic Information Theory (AIT), are mathematical theories that can be considered a sort of meta-informatics, because they are able to prove statements about algorithms and what they can or cannot output. A basic concept of AIT is compressibility: an output that can be generated by a computer program whose binary size is much smaller than the output itself is called “compressible” or “reducible”. Given that a mathematical formal system and its theorems are comparable to an algorithm and its outputs, incompleteness in mathematics (unprovable theorems exist in a formal system) has its equivalent in incompressibility in AIT (there exist outputs that no shorter algorithm can generate). For these reasons, by means of the tools of AIT it is possible to prove theorems equivalent to Gödel’s. According to Gregory Chaitin (the founder of AIT):

“It is sometimes useful to think of physical systems as performing algorithms and of the entire universe as a single giant computer” (from “Metamathematics and the foundations of mathematics”). – “A theory may be viewed as a computer program for calculating observations. This provides motivation for defining the complexity of something to be the size of the simplest theory for it, in other words, the size of the smallest program for calculating it” (from “On the intelligibility of the universe and the notions of simplicity, complexity and irreducibility”).
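Chaitin’s “size of the smallest program for calculating it” can be given a rough, hands-on feel. True Kolmogorov complexity is uncomputable, so this Python sketch uses `zlib` as a crude stand-in for smallest-program size; the contrast between patterned and random data is still visible:

```python
import os
import zlib

# A patterned string: a tiny program ("print '01' five thousand times")
# suffices to generate it, so it should compress drastically.
patterned = b"01" * 5000

# Typical random bytes: with overwhelming probability they admit no
# description much shorter than themselves.
random_bytes = os.urandom(10000)

print(len(patterned), "->", len(zlib.compress(patterned)))
print(len(random_bytes), "->", len(zlib.compress(random_bytes)))
# The patterned string shrinks to a few dozen bytes; the random one
# stays at (or slightly above) its original size.
```

In Chaitin’s terms, the first line of output is a “theory” far smaller than its observations, while the second admits no theory shorter than the raw data.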

A physical theory is composed of laws (i.e. algorithms). If the universe is a giant computer, then the incompressibility results of AIT apply: incompressible outputs exist that no algorithm can create, so no complete physical theory describing those outputs exists. If the universe is not a giant computer, then a complete physical theory describing it does not exist by definition. In both cases we arrive at the incompleteness of physics. Chaitin’s conclusions are somewhat similar to Hawking’s:

“Does [the universe] have finite or infinite complexity? The conventional view on this held by high-energy physicists is that a ToE, a theory of everything, a finite set of laws of nature that we may someday know, which has only finite complexity. So that part is optimistic! But unfortunately in quantum mechanics there is randomness. God plays dice, and to know the results of all God’s coin tosses, infinitely many coin tosses, necessitates a theory of infinite complexity, which simply records the result of each toss!” (“From Philosophy to Program Size” 1.10)
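The existence of incompressible outputs invoked above needs nothing beyond a counting argument, which is simple enough to check directly. A small Python illustration:

```python
# There are 2**n binary strings of length n, but only
# 1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1 binary descriptions strictly
# shorter than n bits. Even if every short description generated a
# distinct string, at least one n-bit string would be left without a
# shorter description, i.e. incompressible.
n = 20
strings_of_length_n = 2 ** n
shorter_descriptions = sum(2 ** k for k in range(n))  # equals 2**n - 1
leftover = strings_of_length_n - shorter_descriptions
print(leftover)  # prints 1: at least one n-bit string is incompressible
```

The argument holds for every n, which is why incompressible outputs exist at every scale, not just for some special length.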

About the infinite complexity Chaitin is correct. But his language is a bit misleading where he says that “God plays dice”. In reality even all apparently random results are willed by God. Otherwise His will would be limited by dice, which is nonsense. Randomness too is under the governance of God. To deny this would be to deny His Omnipotence, and hence the Total Possibility (which is another name for what theology calls God’s Omnipotence). From this point of view, any result that appears random to us is simply an event whose unique cause is directly God Himself (the First Cause), while a result due to a physical law is obviously an event willed by the Law-Giver too, but by means of an intermediary law (which works as secunda causa). So events divide into two sets: those willed by God that are not compressible into laws, and those willed by God that are compressible into laws. After all, there is no reason to believe that God should limit Himself to the latter only.

There is another point of view from which a physical ToE is incomplete. We might call this argument the “physics-incompleteness argument from ID”. If a ToE is to be what it claims to be, a theory describing all aspects of reality, it must also deal with reality’s higher aspects, those related to intelligence. But intelligence is what creates theories. In fact a ToE is an intelligent design, and the physicists who develop it are designers. A ToE is incapable of computing the decisions of its designer. Put another way, the free will of the designer of a ToE entirely transcends it. You can also look at the problem from this point of view: if a physicist decides to modify the ToE, the ToE cannot account for it, because it is impossible for a thing to modify itself. As a consequence, since a ToE doesn’t compute all things in the universe, it is incomplete and not a ToE at all.

To sum up: metamathematics proves the incompleteness of mathematics, and AIT proves the incompressibility of informatics. Both results reverberate on physics, causing its irreducibility. In turn, ID shows that a ToE is incomplete because it cannot compute its designer. These three fields agree in showing the incompleteness of physics and compose a consistent final scenario.

The important thing to grasp is that all incompleteness results in specific fields are only particular cases of a more general truth. To understand it we must start from the fundamental concept of the aforesaid Total Possibility, which has no limits because it leaves outside only the impossible and the absurd, which are pure nothingness. For this reason, the Total Possibility is not reducible to a system. In fact any defined system S leaves outside everything that is ‘non-S’. This ‘non-S’ limits the system S. Since S has limits, it cannot be the Total Possibility, which is unlimited. As Leibniz said: “a system is true for what it affirms and false for what it denies”. Large-enough subsets of the Total Possibility are likewise not reducible to systems. For Gödel, “large-enough” means having expressive power greater than or equal to arithmetic. Mathematics and the cosmos are large-enough in this sense, and as such are irreducible to systems. They are simply too rich to be compressed into a system, because they are aspects or functions of the Total Possibility. The Total Possibility has nothing to do with simple infinites (mathematical or of other kinds). Any particular infinite has its own limits, in the sense that it leaves something outside (e.g. the infinite series of numbers does not contain what was before the Big Bang, galaxies, elephants, your past and future thoughts, what will remain when the universe collapses … while the Total Possibility does). While there is only one Total Possibility, there are many infinites, all infinitesimal compared to it. To confuse the two concepts, the Total Possibility and the infinites, is a serious error and causes a total misunderstanding of what the former is.

Systematization (the reduction or compression to a system) also represents, epistemologically, all the bottom-up approaches to total knowledge. The fundamental failure of systematization, when applied to rich-enough subsets of the Total Possibility, is also the failure of all bottom-up reductionist and positivist approaches to knowledge. Of course this failure appears negative only to those who host the naive illusion that more comes from less. To those who understand the Total Possibility, the in-principle failure of systematization is only a logical consequence of the fact that less always comes from more.

To use a term from computer jargon that everyone understands, mathematics and the cosmos are “windows” on the Total Possibility. Just as a window on our display is an aperture onto the operating system of our computer and allows us to know something of it, so mathematics and the cosmos are large-enough apertures onto the Total Possibility. This is sufficient to make them non-systematizable. This holds for the cosmos despite the fact that it is infinitesimal with respect to the Total Possibility. It is easy to see that this “window” symbol is equivalent to the symbolism of Plato’s cave, from which the prisoners can see only the shadows of the realm of Ideas or Forms (Plato’s equivalent of the eternal possibilities contained in the Total Possibility). Plato, although he surely didn’t need scientific confirmations for his philosophy, would be glad to know that thousands of years after him, fundamental results in science support his correct top-down philosophical worldview.

Given its fundamental incompleteness, mathematics implies the necessity of the intelligence of mathematicians for its endless study. Since informatics is basically incompressible, computers (and Artificial Intelligence in general) will never be able to fully substitute for human intelligence. Given its fundamental incompleteness, physics implies the necessity of the intelligence of physicists for its endless development. In turn, ID says exactly the same thing about complex specified information: its generation will always need intelligent designers. In a sense, the ID concept of irreducible complexity also agrees with the above results: in all cases there is a “true whole” whose richness cannot be reduced, precisely because it represents a principle of indivisible unity. The final victory of ID over evolutionism will be only the unavoidable consequence of the fact that the former is a correct top-down conception while the latter is a bottom-up illusion. Bottom-up doesn’t work for the simple reason that reality is an infinite hierarchy of information layers, from the Total Possibility all the way down to the most infinitesimal part of the cosmos.

A believer asked God: “Lord, how can I approach You?” God answered: “By means of your humility and poverty”. Maybe in this teaching there is a message for us about our topic (a message that Gödel, Hawking and Chaitin seem to have humbly acknowledged): precisely by recognizing the radical incompleteness (“humility and poverty”) of our systems, we have a chance to understand the “Infinite Completeness” of God.

### 55 Responses to *The cause of incompleteness*


“But unfortunately in quantum mechanics there is randomness. God plays dice, and to know the results of all God’s coin tosses, infinitely many coin tosses, necessitates a theory of infinite complexity, which simply records the result of each toss!” (“From Philosophy to Program Size” 1.10)

I think that this is wrong. Quantum mechanics doesn’t deal with the result of an individual “coin toss” any more than a statistician is bothered about each individual vote in a post-electoral survey.

What counts is the pattern: for example the fact that 50 percent of electrons come with spin-up and the other 50 percent with spin-down. We don’t care about the behavior of each individual electron. A ToE can be complete without having to decide the state of each and every single atom in the universe, especially when current interpretations of quantum mechanics affirm that such a thing is not possible and therefore of no interest.

Regarding the theological bit – God acting at the quantum level – well, that’s a bit hard to swallow. Although some systems are chaotic and very dependent on initial conditions, this is not the case for most phenomena that the Bible (and here I assume that the author is Christian) describes as being made by “the hand of God” (Jesus’s birth, Gideon’s sun, the Red Sea, the plagues, etc.). Also, the Bible describes Satan as the god of randomness/“fortune”, not God himself.

However, I did enjoy the Gödel incompleteness discussion. But I think that this should be understood differently: modern mathematics is founded on axioms that need to be changed in order to prove some theorems. A bit like the way we now use non-Euclidean geometry to understand our Universe.

I feel that the existence of math itself finds the explanation of its origin in God. And thus I feel all “true” math will ultimately lead back to its source, which is God.

Euler’s Number – God Created Mathematics – video

http://www.youtube.com/watch?v=0IEb1gTRo74

This related website has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages:

http://www.biblemaths.com/pag03_pie/

Michael Denton – Mathematical Truths Are Transcendent And Beautiful – video

http://www.youtube.com/watch?v=h3zcJfcdAyE

As well, I find science, math and thus reality to conform to this Theistic Postulation

I find it extremely interesting that quantum mechanics tells us that instantaneous quantum wave collapse to its “uncertain” 3-D state is centered on each individual “conscious” observer in the universe (it will never collapse for the artificial intelligence of a computer), whereas, 4-D space-time cosmology tells us each 3-D point in the universe is central to the expansion of the universe. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that I exist?

This is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics). A congruence they seem to be having an extremely difficult time “unifying” mathematically into a “Theory of Everything” (Einstein, Penrose). Yet, a unification which Jesus apparently seems to have joined together with His resurrection:

The Center Of The Universe Is Life – video

http://www.youtube.com/watch?v=do2KUiPEL5U

St. Augustine

Kyrilluk, I am sorry but you are wrong when you state,

Why should science care if you find reality “a bit hard to swallow”?

The plain fact is that a sufficient transcendent cause must exist in order to explain quantum wave collapse to its “uncertain” 3D state, since the “hidden variable” is now crushed as a coherent explanation of reality.

(of note: hidden variables were postulated to remove the need for “spooky” forces, as Einstein termed them—forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)

i.e. the only “cause” left with sufficient explanatory power to explain what we observe in reality is God. There simply is no “unintelligent” material “cause” left to explain the overarching non-chaotic wave collapse we find in reality with any degree of rational coherence.

Nicely done niwrad. I think perhaps you may have gone a bit overboard on some things though.

In my mind, a ToE would be a theory that allows us to accurately predict literally everything that happens. This will not happen for two reasons:

1) Free will, which is not necessarily the same as intelligence, for instance artificial intelligence.

2) Quantum randomness, which follows the laws of probability. It is true that the aggregate result of large numbers of random particles can be predicted with some accuracy. However, when dealing with successive events, such as a long chain of nonrandom cause and effect influenced by random fluctuations, eventually the probability of any possible outcome decreases toward nothing, because all the probabilities are less than 1 and must be multiplied together to express the level of confidence.
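The commenter’s arithmetic point can be made concrete: multiplying many probabilities below 1 drives the joint probability toward zero. A quick Python illustration (the 0.99 figure is just an assumed per-step confidence, not from the post):

```python
# Even with 99% confidence in each step of a causal chain, the joint
# confidence in the whole chain is the product of the per-step values
# and collapses as the chain grows.
p_step = 0.99
for steps in (10, 100, 1000):
    joint = p_step ** steps
    print(f"{steps} steps: joint confidence {joint:.2e}")
# 10 steps stay above 0.90; by 1000 steps the product is about 4e-5.
```

So a predictor that is nearly certain about every individual step can still be nearly certain of nothing about a long enough chain.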

That’s pretty much it. I don’t see what reductionism or compressibility has to do with it. It seems to me that if a ToE existed then compressibility would be absolutely necessary, but a ToE doesn’t have to be infinitely compressible.

Also I don’t like it when people attack reductionism. This is sort of like attacking the political system of a country for not being perfect, when the goal of political systems is not and never has been to be perfect, but to provide a stable system of governance. Reductionism is required for a scientist to operate. In order to solve a problem, all other variables must be controlled for to expose the one variable you are after. Science is necessarily reductionist. It does not follow that once all problems have been solved, reductionism would prevent a ToE.

You say, “it is impossible that a thing self-modifies.” This is demonstrably untrue. Just look at DNA. In order for this statement to be absolutely true, you would have to reduce the meaning of “thing” down to the level of the quantum particle, and no ToE would be that small. Furthermore, if a ToE could be attained, it goes without saying that it would no longer be modified, since it’s a ToE and is complete and accurate. That’s the whole point of the concept.

We are getting to the point in science where quantum randomness appears to prove that a ToE is impossible. There is now evidence that stochastic processes do exist at the cellular level in the form of cellular signals that depend on molecules which are in a very low concentration. These molecules must hit their target in a very busy and crowded cellular environment. Whether they do or not pretty much depends on quantum randomness, since we are down at the level of “What does this electron do in this situation?” etc, etc. These situations appear to be stochastic for all intents and purposes, and thus inherently unpredictable.

Simplified, a ToE is something that would determine the cause of everything to be natural law. Recall Dembski’s three possible causalities:

Chance, necessity and agency. For a ToE to exist, necessity, or natural law, must explain everything. Since both chance and agency also exist as causes, a ToE is impossible.

This is because both chance and agency are inherently unpredictable, and any prediction based on either involves a certain level of uncertainty. This is death to a probabilistic system of prediction, for the reason I already stated.

tragic mishap #4

The relation “X acts on Y” is true only for X different from Y. X never acts on X. In Scholasticism this truth was expressed as “nihil agit in seipsum”. All cases of apparent self-action are in reality of the form “X acts on Y”. For example, I don’t shave myself: a part of my body (the hand) shaves another part of my body (the face). Hand and face are two different parts.

DNA is no exception: portions of it (X) are modified by some machinery or process (Y).

When I wrote:

I meant: it is impossible that a [passive] thing, as a theory is, can become active and even act on itself. The relation between the designer and his design is a subject-acts-on-object relation. The former has the active role and the latter the passive role. The action of the designer can never be the action of his design.

To Bornagain77 at #2: I don’t think you are helping the cause.

Biblemaths.com. come on now.

WOW Graham, you are so right (heavy satire). Instead of believing what is clearly a powerful watermark in the Bible corroborating its authenticity, with the two most important foundational “transcendent constants” of math, I will just hop in your clown car and believe my Great Great Grandpappy was a proton-powered rock in a mud puddle somewhere, with absolutely no corroborating evidence whatsoever.

The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009

To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.

http://www.mdpi.com/1422-0067/10/1/247/pdf

http://mdpi.com/1422-0067/10/1/247/ag

Graham, If you notice the very first entry of the second link for Abel’s Null Hypothesis:

Neither Spontaneous Combinational Complexity nor “The Edge Of Chaos” can generate:

1. Mathematical Logic

Mr BA^77,

A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.

If I offered life as we see it around us as the counter-example, I think Abel’s only response could be “But you haven’t shown it to be ‘unaided’.” Whereas Behe often chooses specific example systems and claims positively “This is irreducible,” Abel is trying to shift the burden of proof in OOL to “Prove this was unaided to my satisfaction.”

Get your refutation published in peer review and then I may give it the time of day Nak.

Nak, contrary to your bottomless faith in Darwinian processes, I offer the life around us as a solid example that Abel’s null hypothesis applies not only to origin-of-life research but to all life on earth!

“The Edge of Evolution: The Search for the Limits of Darwinism”

http://www.amazon.com/Edge-Evo.....0743296206

A review of The Edge of Evolution: The Search for the Limits of Darwinism

The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-mic.....-evolution

Dr. Behe states in The Edge of Evolution on page 135:

“Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite.”

That order of difficulty is put at 10^20 replications of the malarial parasite by Dr. Behe. This number comes from direct empirical observation.

Richard Dawkins’ The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe – Oct. 2009

Excerpt: The rarity of chloroquine resistance is not in question. In fact, Behe’s statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact.

http://www.evolutionnews.org/2.....est_s.html

An Atheist Interviews Michael Behe About “The Edge Of Evolution” – video

http://www.in.com/videos/watch.....34623.html

THIS IS MY FAVORITE SUBJECT! I know that Hawking is driving people like Michio Kaku insane with this, because Kaku and others are positivist materialists (though I hope they will change).

But let me give you all a scene from a movie that shows the brilliance of what Gödel was saying philosophically about the nature of reality.

In the 2001 movie “Swordfish” starring John Travolta, there is a scene at the beginning of the movie (which is actually a flash-forward to near the end of the story) where he is talking to these cops about his plan and demands involving the money he is trying to steal and the hostages he is using for extortion and leverage. The character Travolta is playing critiques all these “real” heist movies (that is, his fictional character is talking about actual movies) that he has watched and enjoyed, and then the cops critique his critique by saying that his plan would not work because in the movies there is always a “morality tale”, at which point Travolta says

“well some things are stranger than fiction.”

The theme of the movie is about “misdirection.”

The application or analogy to physics might be: “is a universal theory of physics correct if it proves true but is not about what it claims to be about?” That is, what if the proof is good but the axioms are flawed? How do you prove the quality of axioms? This takes a certain “qualitative” judgment that is not mathematical in nature.

Or as Einstein said

“Not everything that can be counted counts and not everything that counts can be counted.”

Which, essentially, Heisenberg proved was a law of physics with his uncertainty principle.

This is classical Platonism, and enough of it to make your head spin. That is what Gödel was proving with incompleteness. He was saying that all statements are ONLY statements. That is, the term “fictional” has no meaning when we “cannot decide” the truth of a proposition.

So Gödel used the liar’s paradox to create an undecidable proposition for set theory and formal logic. This resulted in “a true statement” that was at the same time unprovable within the system.

Logic did not work.

We see this with other simpler questions like

“Could God create a rock so heavy that he cannot lift it?”

There is no answer to this question. If yes then God is not God, if no, then God is not God.

But God can still exist because both answers yes, and no, are independently correct.

That is: yes, God can create a rock as heavy as he wants, to infinity; and at the same time, no, he cannot lift one too heavy for him (because he cannot create such a rock).

This is also known as a circular argument.

The creepy part is that this is as much a statement about logic as it is about physics. All of this sets a limit of what we can know about physics. The rock God makes would have a weight even if it is infinite- unless “the axioms are transcendent.”

And physics does not “do” the business of transcendence well at all.

It proves there will always be a proposition greater than the ultimate one. Sound strange? Even fictional? Yes, but a necessary truth of formal proofs.

Put simply as an example again- Here is the analogy to the God’s rock paradox.

God is omnipotence = Universal string theory of everything

Fact: God cannot make a rock heavier than he can lift

=

Fact: String theory cannot make a universal theory containing all theories that cannot be universally theorized about.

God is not God = String Theory is not String theory.

So going back to the quote from Swordfish-

The imagination would have you believe that physics must be logically consistent to warrant being a true universal theory, and that it could only be “fictional” to accept the possibility that a theory of universal physics could be false simply “because” it is universal.

But this is The Truth according to formal logic.

Now you theists out there might be sad to hear that “God is not God”. I have good news for you though. What was proved was not that God is not God, nor that string theory could not be string theory, but that you could never prove either to be what they are.

We have proven you cannot prove anything.

Hence I give you

Mysterium fidei. The Mystery of faith.

Or what Platonists call the truth about reality- that mind makes reality and not the other way around. Or what modern philosophy calls the problem of induction.

And for the materialist it is stranger than their most fictional fantasy. Yet it is Fact, and Truth. And what allows for it is the mystery of mind, and the freedom of man’s rationalization.

Btw, at the end of Swordfish Travolta gets away with it all, robbing the bank and stealing the money, and turns out to be a good guy all along, using the money to fight international terrorism. Both Travolta and the police were right. His plan worked out not just because of his plan but because the “morality tale” ended up being different. His axioms were different, and stranger, than the police could have imagined.

“Not only is the Universe stranger than we think, it is stranger than we can think.”

-Heisenberg

Stranger than fiction. And there is a subtle point here one can make about things like physics. Even when you know you’re right, sometimes all the things you think you know are not what you thought they were all along.

Proofs are only as good as the quality of the axioms, taken for granted, that are used in them.

The quality of the notion of “God” to me though is strong, so long as we don’t “expect the impossible“.

This is all about a higher intellectual moral judgment on “man’s ability to reason“.

Proving that all proofs are provisional.

So there is no way to prove whether a universal theory is correct or not. It requires faith. Therefore people need to use their “intuitions” to judge things. This calls upon the necessity of personal spiritual guidance.

If Hawking is disillusioned over accepting the fact of Gödel, he will be in really bad shape when he accepts Heisenberg.

Peace.

Nak, just in case you think Behe may have softened his stance since he published “The Edge”:

Very interesting and probing post. Lots of fruitful lines of thought there to explore. Well done.

I certainly agree with the general thrust of the post; that is, that there are in mathematics (and physics) “in principle” logical hiatuses, and (without getting too specific or dogmatic about just what it is) this hints at something irreducible lurking out there.

However, I’m not so comfortable with the jump to biological irreducible complexity at the end of the post. I’m very interested in the debate as to whether known physics contains sufficient “endogenous” information to lead to the generation of living structures, or whether some “second creative dispensation” (on top of what is given by known physics) is needed to bootstrap the formation of life. But at this stage the question is not settled in my mind as to whether or not living structures have an “in principle irreducibility” that puts them on a mathematical par with such things as incompressibility and Gödel’s theorem.

Fair enough, I think it’s fairly clear from Niwrad’s post that the outermost logical frames of mathematics point to an ultimate “in principle irreducibility”, but I’m not so sure that there is an “in principle irreducibility” that prevents the evolution of complex organization of any kind from a basic physics. If this is the case then the question of life’s irreducible complexity would have to be settled experimentally rather than via pure theoretical reflection; which, from the point of view of theoreticians who like their armchairs rather than the lab, can feel rather tedious!

This may be of interest: I hold that for math to even be true in the first place, Truth (God) must exist. (This is a topic in itself.)

Yet, when we say something is true in physics we inevitably allude to something that is “more true” in its degree of unchangingness. Thus, in establishing “truthfulness” in physics we can only appeal to a higher constant which has “changed the least” in order to establish whether something else lower in physics is true or not.

Here is a clear example from Dr. Hugh Ross:

Testing Creation Using the Proton to Electron Mass Ratio – Nov. 2009

Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.

http://www.reasons.org/Testing.....nMassRatio

cont excerpt: For the first time, limits on the possible variability of the electron to proton mass ratio are low enough to constrain dark energy models that “invoke rolling scalar fields,”10 that is, some kind of cosmic quintessence. They also are low enough to eliminate a set of string theory models in physics. That is these limits are already helping astronomers to develop a more detailed picture of both the cosmic creation event and of the history of the universe. Such achievements have yielded, and will continue to yield, more evidence for the biblical model for the universe’s origin and development.

For materialism this is problematic to its core, since there sits a transcendent constant right between these two “material particles” telling them exactly what ratio to hold, for as long and as precisely as we can measure. Yet from a Theistic point of view this is exactly what we would expect. It is also interesting to note that materialism offers no foundational standard by which we can expect transcendent truths to exist in the first place; thus if materialism were actually true, science would not be possible.

Bornagain,

You should also check out the Platonic solids and how there is a severe fundamental mathematical constraint there too- which seems to illustrate the geometrical intelligibility of design in the creation.

Also for another trip look up Rene Descartes’ “topological invariant” regarding those Platonic solids.

All of this, alongside other mathematical patterns of symmetry, shows a universe that is very specifically designed and intelligible, beyond any chance-based multi-verse model’s explanatory power.

Timothy V Reeves #16

In a previous post I wrote:

There are processes in a biological cell that work as a TM. In the known physics no law is able to create those five things separately, and for greater reason nothing is able to create and assemble them simultaneously as IC requires. We must conclude that, for this single reason alone, life is not reducible/compressible into the system of the known physical laws. There is more: life is underivable from whatsoever set of laws/algorithms. As Turing would put it: life is non-computable.

niwrad @7:

The point remains that if a ToE were really a ToE, it would not need to be modified any further.

Frost122585 #14

As I said, God’s Omnipotence is the Total Possibility, which contains all but does not contain impossibility. A case of impossibility is when one possibility contradicts another possibility. The case in point is exactly this sort of impossibility. In fact the possibility A = “God creating a rock so heavy” conflicts with possibility B = “that he cannot lift it”. A and B are incompatible, i.e. the coexistence of A and B at the same time and in the same context is impossible and as such has absolutely no reality. What has no reality cannot belong to the Total Possibility, which is the Supreme Reality. God’s Omnipotence is perfectly intact, of course.

To clarify the concept, here is a geometrical example. The impossibility of the God-and-the-rock antinomy is quite similar to the impossibility of a circle that is at the same time a square. A circle and a square can well coexist in space as two different objects, but they cannot be the same object at the same time (a single geometric figure can be circular OR square, but cannot be circular AND square). To claim that the Total Possibility is not infinite (God not omnipotent) because it cannot contain a circle that is at the same time a square is pure nonsense. The God-and-the-rock impossibility is exactly of the same kind.

God is never illogical. The key point is to understand that non-logic, absurdity, nonsense and impossibility are pure nothingness. So, not containing them doesn’t decrease God’s Omnipotence by a single bit. The Unlimited cannot be limited by nothingness.

This old anti-theological (and anti-metaphysical) antinomy was invented by atheists to deny God’s Omnipotence. Like any antinomy it is impossible and without reality. Unfortunately this cheat may convince those who have no sufficient background in logic.

Thanks very much Niwrad, (at #19) I’ll take those points away with me.

I notice that your argument rather depends on the assumption of the irreducible complexity of Turing-like machines. Irreducible complexity and its inverse of reducible complexity interest me from a theoretical point of view. In particular I’m interested in whether or not reducible complexity at least has a mathematical existence (even if it is not true of our cosmos). Let me briefly try to express my problem.

As I am sure you are aware, reducible complexity requires stable or self-maintaining structures to be juxtaposed in morphospace in such a way that they form a linked set (a bit like the way the Mandelbrot set is linked into a whole). Assuming this set has fibrils linking it to the appropriate initial conditions, then this linkage allows a random walk to eventually walk the set – effectively, in this scenario, the tendency of thermodynamic agitation to expand to fill the greatest “volume” is paradoxically the engine that motivates the drive toward life (and presumably its eventual demise as the random walk overstretches itself to fill what may be a very large or quasi-infinite “volume”).

But all this very much depends on the right physical regime supplying sufficient constraint and richness of pattern in order to define this abstract linked structure which provides pathways allowing thermodynamic access to self-maintaining structures. This conjectured pattern in morphospace is a static, non-dynamic platonic object that we can’t see or touch; it is not itself a living structure and would be an implication of the physics of the system. If it exists then it is computable, for the reasons that a) it is a finite pattern (albeit very large), and b) all finite patterns can eventually be generated.

But here’s the suspected problem: my guess is that such an object as this linked set (if it has a mathematical existence) is one of those computationally irreducible objects that Stephen Wolfram refers to. That is, there is no analytical way of showing its existence other than to carry out the computation and watch it. If we were in a cosmos where such a computation was taking place, analytical results proving whether the process was actually happening (or not happening, as the case may be) would be difficult to come by. We would be rather stuck as passive bystanders witnessing the process unfold, and describing it would be a very narrative-intense business rather short of those elegant in-principle “one-liners” beloved of theoreticians.

Now obviously the question arises as to whether our own cosmos is actually providing us with the opportunity to witness such a process in the form of “evolution”. Trouble is, of course, ID theorists are quite adamant that the self-maintaining biological structures in our cosmos are irreducibly complex. So it looks as though I am stuck!

Afterthought:

As a theist I am not at all averse to the ID community’s insistence that life comes as a second creative dispensation – this is something I am carefully considering. However, given the presumed capabilities of a posited Creator, the creative dispensation we see in physics is not to be underestimated, because it is associated with that selfsame Creator. It is ironic, then, that the very notion of ID leads me to consider evolution as at least a possibility. Physics cannot be dismissed as a mere mindless “natural” process – that smacks of dualism.

niwrad:

Non-computability is a very strong claim. What makes you think that life is non-computable?

Timothy V Reeves #22 #23

Thank you for your interest and contributions. Well, you wrote:

It is true that physical laws are associated with the Creator, but they remain what they are: equations, and equations don’t generate CSI. Not to consider physical processes as mindless (as you seem to do) means to consider them somehow intelligent (able to generate CSI). Aside from the fact that in current physics nothing supports this last possibility, where would the hypothetical intelligence of physical processes come from in the first place? Surely it wouldn’t come from nothingness, but rather from the Creator himself; then the ID claim that a first information source is in any case necessary remains true. The evolution that you consider possible is not the Darwinian one, but rather a sort of front-loading evolution embedded in the cosmos at its beginning. Front-loading evolution (FLE) is an ID option because, after all, it doesn’t deny the necessity of the information source. Personally I don’t believe in FLE, for reasons that maybe I will explain in the future.

The necessity of an information source holds true also for your “morphospace that is not itself a living structure and would be an implication of the physics of the system [where] thermodynamic agitation to expand to fill the greatest volume is paradoxically the engine that motivates the drive toward life”. The question is always the same: where is the intelligent engine producing CSI? In the scenario, the “physics of the system” provides laws while the “thermodynamic agitation” provides randomness. ID theory shows that chance, necessity and their mix are not CSI engines. After all, the basic actors at play in the cosmos are not so numerous: intelligence, chance and necessity. We can mix the latter two however we like, but if their individual contribution to CSI is zero then their compositions don’t provide CSI either, just as an arithmetic expression composed of zeroes doesn’t give a non-zero result.

R0b #24

When I speak of life I always mean “intelligent life”, because even lower living forms show at least a reverberation (so to speak) of the effects of intelligence. So your question becomes: “why is intelligence non-computable?”. The topic is so important (in fact it involves the deep nature of intelligence and its relation to life) that I will dedicate an entire article to it. I ask for your patience to wait for it. Thank you.

However, even if by “life” we mean a mechanical process (and in doing that we are applying a reductionist approach I don’t agree with), in a sense we can say that life is non-computable. In fact in my #19 comment I said that life implies Turing machines and their IC. According to Turing, computability means “to be generable by a TM”. Then the question is: can a TM create a TM? My answer is “no” because a TM, with respect to its outputs, is a meta-concept of creation. A meta-concept cannot be generated as if it were a simple output of itself. In other words, ontologically a TM cannot create the concept of TM. From this point of view the concept of TM is non-computable. Since life entails the concept of TM, life too is non-computable. Hence even from a merely mechanical perspective we arrive at the apparently paradoxical conclusion that life is non-mechanical.

Thanks for the reply Niwrad. Here are some points:

1. You say

The necessity of an information source holds true also for your morphospace ….. Yes, it certainly does! I’m not disputing the ID view that the universe is sourced in some kind of a priori complexity; in this connection I am personally committed to the notion of a personal God.

2) Yes, I am positing some kind of front loading. In fact I think that Dembski’s recent ideas show that however we try to cut the cloth (evolution or no evolution), somewhere in the great space of possibility an informational skew must be contrived in the form of heavily weighted probabilities (barring multiverse speculations, which attempt to spread probability evenly/symmetrically – a view which has problems of its own!)

3) I can’t see how some kind of frontloading can be escaped. After all, elementary and disorganized matter is constantly being annexed into new organic structures without violation of thermodynamics or natural processes. This works, of course, because of the physical constraint inherent in the initial physical conditions in the form of preexisting biological machinery. This preexisting biological material, with the wherewithal to organize elementary and disorganized matter almost indefinitely, to my mind constitutes a form of front loading that even the most ardent second dispensationalist cannot deny. I suppose it all hinges on just what one means by “frontloading”. I’ll be interested to see your post on the subject.

4) Equations don’t generate CSI? No, I agree, but my question is actually this: can CSI be implicit in the equations themselves, in as much as they fix the appropriate connected morphospace pattern of self-sustaining structures? The attempts by ID theorists that I have looked at to show that CSI cannot be generated by equations have made two assumptions: a) that the CSI needs to be generated as opposed to being implicit in the equations from the outset; b) that those equations should directly imply biological structures, when in fact they should be looking at the layout of morphospace.

5) There is no “Darwinism” without a connected morphospace pattern of self-sustaining structures – it is an implicit requirement of “Darwinism”, like it or not.

6) May I remind you of the potentially big problem of computational irreducibility. It may not be possible to get an analytical handle on any linked pattern in morphospace (assuming it exists) unless the computation is actually done in front of us. And if it was executed we may still have a big problem: Because of human limitations we are likely only to be able to sample a small part of it; thus the question of whether evolution is happening or not may be humanly undecidable.

7) I’m not here getting involved with the question of the evolution of higher forms of intelligent life, which via some form of self-reference may entail non-computability à la Roger Penrose, for example. I’ve tried to pare the problem down to the simpler question of the mathematical existence of a class of reducibly complex self-sustaining/maintaining structures.

8) If life is a mix of “chance and necessity” (what I call “law and disorder”), then don’t forget Who is doing the mixing!

9) Are you a YEC?

Thanks for your points Timothy V Reeves, which show that our positions are not so distant after all.

1. Ok. But I am afraid that on that adjective “personal” we could discuss a lot…

2) Ok.

3) Agreed. Frontloading in the sense you describe is undeniable. It is almost a tautology that any system is frontloaded with its own potentiality. A biological embryo is frontloaded with the potentiality of developing into a living being. A biological cell is frontloaded with the potentiality of sustaining/maintaining itself, reproducing, differentiating, etc. These potentialities must be perfectly designed bit by bit.

4) Sorry, I am not sure I understand you here. However: a) the equations at play cannot be other than those of current science, and these don’t contain biological CSI; b) I don’t see how the layout of morphospace can provide CSI either.

5) Yes.

6) Uhmm, I am not so sure that [frontloaded macro]evolution may be humanly undecidable. I think there are good reasons to doubt frontloaded macroevolution.

7) You speak about “reducibly complex self sustaining/maintaining structures” but from these to arrive at organisms there is a long way that has to be filled with CSI. Whoever mixes “chance and necessity” only, without adding CSI, simply obtains by-products of “chance and necessity”, nothing more.

9) For me the timing problem is of secondary importance. My primary issue is to know *how* life and species arose, not *when* they arose. Timing is important for those who base their arguments on probability and resource spaces. I tend to base my arguments on matters of principle (and something tells me you do too).

niwrad:

Your views regarding computability seem quite foreign to computing theory. Anything that’s finite is computable, including a finite automaton like a TM. The question of computability has nothing to do with ontology, since computing theory deals only with abstract representations.

R0b #29

Say x is your answer. It is finite because it is 280 bytes long. Let y=f(x) be the Boolean function with value y=1 if x is the output of a TM and value y=0 if x is not. This function f has both a finite domain and a finite codomain, and its definition is finite too, but f is not computable. In fact it is impossible by means of simple calculations to determine its value. Of course there can be other methods to know its value, but they are not computations. Therefore there can be finite non-computable things.

Consider a TM t with a finite definition. One could think of another finite TM q whose output is t. This can work syntactically. But what if we consider the thing from a semantic point of view (in my previous comment #26 I used the adverb “ontologically” in this sense)? If a TM could semantically compute a TM, we would have a TM able to compute the meaning of “TM”. But this is a self-reference fallacy.

In “The Emperor’s New Mind” Roger Penrose says that a computer cannot answer self-reflexive questions such as “What does it feel like to be a computer?” Here we face a similar impossible problem: a TM cannot compute the self-reflexive question “What is a TM?”.

The problem can be expressed in an ID framework. A TM must be designed. It cannot design itself and cannot be designed by another TM (because a TM is not intelligent). Since life implies TMs, life is designed.

niwrad:

Any function with a finite domain is trivially computable using a lookup table.
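R0b’s lookup-table point can be made concrete with a short sketch. This is a minimal illustration, not anyone’s actual function: the domain strings and Boolean values below are invented stand-ins for niwrad’s f over finite inputs.

```python
# Sketch of R0b's claim: a function with a finite domain is computable
# by exhaustive table lookup. The entries here are hypothetical; they
# stand in for whatever finite set of inputs f is defined over.
FINITE_DOMAIN_TABLE = {
    "abc": 1,
    "xyz": 0,
    "hello": 1,
}

def f(x):
    """Compute f(x) by looking up the precomputed value for x."""
    if x not in FINITE_DOMAIN_TABLE:
        raise ValueError("x is outside the (finite) domain of f")
    return FINITE_DOMAIN_TABLE[x]
```

Whatever the values are, once they are tabulated the function is trivially computed by the lookup; the dispute in the thread is really about how the table’s values could be known in the first place.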

I had never heard the term “semantically compute” until I read your comment. So I googled it and found what might be a relevant reference in Bram Van Heuveln’s philosophy dissertation. Is his usage of the term the same as yours? If so, then I’ll read the paper to find out exactly what the term means, and then respond to your point. I suspect we’re stepping outside of computing theory, and into an area of imprecisely defined concepts.

A TM is certainly capable of outputting canned answers to those questions. The question of whether a TM can understand the questions and answers is a philosophical one (strong vs. weak AI), and thus falls outside of computing theory.

R0b #31

I provided a function f with a finite domain and said it was incomputable. You said it is trivially computable using a lookup table, but you didn’t give a hint about such a computation.

It seems to me that Van Heuveln’s distinction between semantic computation and syntactic computation is based on the fact that the latter doesn’t represent real causation; it simulates real causation only. IOW, syntactic computation processes symbols without understanding their meaning, as semantic computation does. This distinction makes sense and could also be a useful concept in the AI topic. For example, we can say that computers work out syntactic computation while the human mind works out semantic computation. As a consequence AI machines can only simulate human intelligence, without being real intelligence. One of the key points here is the distinction between simulation and reality.

The beautiful thing about the ID/evo debate (and the reason I like it) is indeed that it covers a lot of interrelated philosophical and scientific fields. To return to the above issue and try to apply it to the real case of biological complexity: this complexity arose thanks to real causation, hence semantic “computation”. Hence the fact that a TM, as you say, “is capable of outputting canned answers” (i.e. syntactic computation) doesn’t refute the ID inference about biological complexity.

We can also look at the problem from another point of view. Let’s suppose per absurdum that the TMs present in cells are syntactically computed by another TM. Where did this “parent” TM arise from in the first place? This way we have only shifted the problem of the arising of TMs without resolving it. Therefore we return to the point I tried to explain in my previous post: eventually a machine can produce (syntactically) other machines, but at the start of the process an intelligent agent must exist, who knows the semantics of what he is doing when creating the first machine (and its embedded potentiality to produce offspring).

Thanks very much for the reply Niwrad.

Yes I agree, our positions aren’t very different; I accept ID’s kernel idea of a design source. I think the main difference between myself and many of the correspondents on UD is that I haven’t been able to clear evolution off my desk by consigning it to the “obviously false” waste paper bin; its status in this respect is still unsettled in my mind. Thus, for me evolution remains a favourable candidate under consideration, a candidate that presents no outright contradiction with ID’s notion of design. So bear with me.

Frontloading in the sense I mentioned in my last post is almost a trivial truism, but I suppose it’s the more subtle forms of frontloading that are the bug in the rug. This is where the controversy arises: If one can’t see the front loading because it is buried deep in the convoluted logic of the system then one might think it not to be present at all, thus seeing no need for some sort of creative dispensation (cue atheism). Alternatively, one might want to posit a more obvious form of frontloading in order to make the case for special creation less equivocal. (As per some forms of very “in yer face” ID)

On the subject of being “implicit”, I’ll get back to you on point 4 as briefly as I can and try to be clearer. So watch this space.

niwrad:

I’m not sure what hint you’re looking for. Why do you think that your function can’t be implemented with a lookup table?

Regarding your function, when you say “output of a TM”, do you mean a particular TM, or is the TM also an argument of the function? If the latter, then the number of allowed TMs must be finite in order for the domain of f(x, tm) to be finite. The halting problem would render this function non-computable only if f had to work for all TMs.

I wasn’t trying to. I was merely pointing out that TMs can output profound English text, other TMs, or any other finite output, contrary to your claims. Your premises involving non-computability are not true, so while computing theory lends an appearance of rigor to your philosophical arguments, your application of it isn’t correct.

R0b #34

My function f is Turing-incomputable because it is a priori impossible to know syntactically whether your answer x (the argument of f) was output by, e.g., a pseudo-random text generator or a spam engine or whatever mechanical system, rather than by a human mind. It is possible to know it semantically, for example if you declare that answer x is your own design. But your declaration would not be a computation at all, but rather ontological agency.

The abstract definition of a TM is finite. Also the definition of the construct “output of a TM” is finite. The definition of the “Boolean function with value y=1 if x is the output of a TM and value y=0 if x is not” is finite too. The halting problem is not the only case of incomputability.

I didn’t claim that “TMs cannot output profound English text, other TMs, or any other finite output”. Indeed, above I just said that a random generator (given enough time) might mechanically output your previous answer x (which is an English text). I said that a TM cannot semantically compute another TM. To be precise, a TM can semantically compute exactly nothing, for the simple fact that TMs have no understanding, and without understanding there is no semantics.

My premise was that life/intelligence is non computable. This was the starting point of the discussion between you and me. To show that “my application of computing theory isn’t correct” you could try to prove that life/intelligence is computable.

Hi Niwrad,

Let me attempt to expand a little on point 4 as promised (as briefly as I can).

In promoting the notion of irreducible complexity the ID community has all but spilt blood at the hands of some very bloody-minded people, and so I understand their emotional investment in IC. No need to worry, I’m not challenging IC here, but I am simply making the opposite assumption of reducible complexity and seeing where it takes me. As far as you’re concerned the following is a counterfactual argument.

Firstly let me make the general observation that, as far as human logic goes, our particular cosmic physical regime seems to have been selected from an infinitely larger space of possibility. If we assume equal a priori probabilities over this huge space (as I believe is Dembski’s practice) then the apparently contingent configurations and properties of our particular observable universe have an absolutely minute probability, thus displaying a high information content. (Using Dembski’s concept of information, −log(p).)

Reducible complexity (a requirement of any form of evolution including “Darwinism”) demands at least the following conditions in morphospace:

a) A class of forms across a wide spectrum of complexity that have a high probability of persisting.

b) That this class is fully connected like a Mandelbrot set.

Given these conditions then random thermal agitation allows for a random walk of this connected set, with the network of persistence probabilities effectively acting as channels, depressions, wells and traps that have the effect of considerably enhancing the probability of this class of configurations by these configurations accumulating and damming up the “flow of probability”.
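The well-and-random-walk picture above can be sketched as a toy simulation. This is purely illustrative: the states, the choice of which states are “wells”, and the stickiness parameter are all invented, and a 1-D chain stands in for the vastly larger morphospace being discussed.

```python
import random

# Toy sketch: a random walk over a linked set of states, where a few
# "persistent" states act as wells that dam up the flow of probability.
# All parameters here are hypothetical, chosen only for illustration.
random.seed(0)

N_STATES = 20
WELLS = {5, 12}      # states with a high persistence probability
STICKINESS = 0.9     # chance of staying put when sitting in a well

def step(state):
    """One move of the walk: wells usually hold the walker in place."""
    if state in WELLS and random.random() < STICKINESS:
        return state
    return max(0, min(N_STATES - 1, state + random.choice([-1, 1])))

visits = [0] * N_STATES
state = 0
for _ in range(100_000):
    state = step(state)
    visits[state] += 1

# The well states accumulate far more dwell time than their neighbours,
# mimicking the "channels, depressions, wells and traps" of the text.
print("well 5 vs neighbour 6:", visits[5], visits[6])
```

The point of the sketch is only that undirected thermal agitation plus a network of persistence probabilities concentrates dwell time in the persistent configurations; it says nothing about where those persistence probabilities come from, which is exactly the question the thread goes on to debate.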

Now, there are two ways of assigning these persistence probabilities. One way is to simply put them in by hand on an item-by-item basis. This contrived method of assigning persistence probabilities is what, I think, ID theorists would identify as an obvious and explicit form of frontloading. (Dembski identified this form of frontloading in the Avida experiment in computational evolution. See here; many thanks to Kairosfocus for alerting me to this paper.)

In doing this job by hand, the configurations selected for high persistence probability swap a very low probability for a relatively high probability, thus effectively losing information. These enhanced probabilities presumably result from conditions further back in the source doing the selecting, conditions that give rise to this highly improbable distribution of persistence probabilities. The source bears the low probability, and thus the information appears to “come from” the source (presumably intelligence in this case) that assigns the persistence probabilities.

There is however, to my mind, another conjectured way in which the source may assign persistence probabilities and thus bear the low probability. That source might distribute the persistence probabilities using a succinct mathematical function or functions and these functions constitute the “physics” of the system. Trouble is, it seems fairly clear that the persistence probabilities required to do the job would form a very complex pattern in morphospace. So the question is can such a pattern be defined by a set of relatively simple equations?

Frankly I don’t know the answer to that question. We do know, of course, that there is a relatively small class of large complex patterns that can be generated in relatively fast time from elegant mathematics/short algorithms, e.g. fractals and highly complex disordered sequences. These complex forms with a fast-time map to relatively simple mathematical functions are very rare, and so, applying Dembski’s assumption of equal a priori probabilities, they are thus able to bear the information burden.
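A minimal instance of the “complex pattern from a short algorithm” point is the chaotic logistic map: a one-line update rule that generates a highly disordered sequence. The starting value and sequence length below are arbitrary choices for illustration.

```python
# The logistic map x -> r*x*(1-x) at r = 4.0 is chaotic: a very short
# algorithm whose output is a complex, disordered sequence. This is a
# stand-in for the "fractals and highly complex disordered sequences"
# mentioned above; x0 and n are arbitrary.
def logistic_sequence(x0, r=4.0, n=10):
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_sequence(0.2, n=10)
print(seq[:3])
```

For any starting point in [0, 1] the iterates stay in [0, 1] while wandering erratically, which is why such maps are the standard example of large apparent complexity with a very small generating description.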

Hence at this stage in my thinking the following two questions are unsettled in my mind:

a) Is it in principle possible to distribute persistence probabilities using succinct algorithms/mathematics?

b) Is the physics of our world one of those succinct systems?

As long as I remain unsure about the answers to these questions, evolution is a “design” candidate. I agree there is nothing we know that obliges the configurations of this world to have high probabilities, as it seems those configurations have been selected from an enormous space of possibility, thus implying some high information source. But I am in doubt about how that source bears the information burden.

My comment about computational irreducibility is a pessimistic “what if” worst-case scenario. If the computation from an elegant set of laws to a complex assignment of persistence probabilities is computationally irreducible then, as I have already suggested, analysis is going to be difficult and the burden will be on experiment to show the way. Trouble is, simply taking a few experimental samples isn’t going to prove much either way. If this is the case then it looks to me that the argument will run and run.

Note: On that adjective “personal” I probably have quite a lot in common with Brian McLaren, but I don’t want to get into any “fights” on that score! I’ve got enough on my plate with this evolution question!

niwrad:

Okay, I misunderstood your description of f(x) in [30]. You’re saying that f(x) tells us whether x is the output of any TM, not just a given TM.

In this case, f(x) is not a valid function, as there is not a unique value of f(x) for each value of x. TMs, sub-TMs, and super-TMs can all output finite sequences. Since f(x) isn’t a well-defined function, it makes no sense to say it’s computable or non-computable.

You said:

According to Turing, computability means “to be generable by a TM”. Then the question is: can a TM create a TM? My answer is “no” because a TM, with respect to its outputs, is a meta-concept of creation.

You’re explicitly talking about Turing computability, and saying that a TM cannot create a TM. But it can.

And you have yet to establish that premise. If, by non-computable, you mean something other than the established definition, then perhaps a different word would be more appropriate.

As for me proving that life/intelligence is computable, the burden is on you to prove non-computability, since the premise is yours, not mine. But if I were to take your challenge, you would need to tell me your formal representation of life/intelligence in order for the proposition to even make sense.
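R0b’s claim that a machine can output another machine can be sketched with ordinary programs standing in for Turing machines. Both program texts below are invented for illustration; this shows only the syntactic point under discussion, not anything about “semantic” computation.

```python
import contextlib
import io

# A "child" program (stand-in for a TM t) and a "parent" program
# (stand-in for a TM q) whose entire output is the child's source code.
child_source = 'print("hello from the child program")'
parent_source = f'print({child_source!r})'

# Run the parent and capture its output: it emits exactly the child's
# source, i.e. one machine has (syntactically) produced another.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(parent_source)

emitted = buf.getvalue().strip()
print(emitted == child_source)
```

This is the uncontroversial half of the exchange: program descriptions are finite strings, so one program can emit another. Whether that emission counts as “creating” a TM in niwrad’s semantic/ontological sense is the part the thread disputes.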

Timothy V Reeves, I appreciate your work to elaborate concepts from an ID point of view. Here are my comments.

I agree that reducible complexity is a requirement of any form of evolution. In fact I am convinced that IC denies any evolution (the Darwinian one but also theistic evolution). As for your class of fully connected forms, it seems to me a system that is already very complex.

I don’t know if I follow you here; anyway, for sure random thermal agitation doesn’t increase the information content by a single bit. It is more likely that random thermal agitation increases entropy and destroys information.

No objection here.

Sorry, for me this is a hard passage. However, what I like is information coming from an intelligent source.

If with “to distribute persistence probabilities” you mean “to create CSI”, then this creation cannot be obtained by equations.

Anyway fractals are not examples of gratis creation of information.

To (a) my answer is always the same: there is information that is not compressible. To (b): it is indeed the goal of my present article to claim that “the physics of our world” is not a “succinct system”, to use your own terms.

I agree that a lot of research work has to be done on these topics.

R0b #37

Why do you say that f(x) does not have a unique value for each value of x? It is Boolean: its value is y=1 OR y=0. Having multiple values would mean that some y=g(x), for a certain x, takes both y=m AND y=n as values at the same time. Such a g(x) is said to be not univocal. But this is not the case for my f(x). It makes sense to ask whether your answer x is computable or non-computable. That is exactly the question f(x) answers.

It can syntactically but not semantically.

In #26 I wrote that I would write a specific article about intelligence, and I will. Unfortunately I have too much work in the pipeline. This discussion between us is, in a sense, only preparatory. Yes, the premise is mine, but the claim that “my application of computing theory isn’t correct” is yours, and so far I don’t see valid arguments from you supporting it.

niwrad:

Okay, so f(x) tells us whether x is computable. Now I understand, hopefully.

Assuming that x is a finite-length string, f(x) is always 1, because any finite-length string is computable. I’m going to stop here and see if we agree on this fundamental fact.

R0b #40

If the x in f(x) is your 280-byte answer in comment #29, there are three possible cases:

(1) you officially declare yourself to be the writer of x; then f(x)=0 because x was not written by a TM (but rather by a guy named R0b). Note that your declaration would not be a computation, but an entirely different thing;

(2) x was written by a TM (or other mechanical system); then f(x)=1. To infer this we must witness this mechanical writing. Again, our testimony would not be a computation;

(3) the “else” clause in the control flow: you don’t declare yourself to be the writer of x, and we do not witness its generation by a TM. In this “else” case f(x) is incomputable because it is impossible to decide its Boolean value a priori.

Notice that even in cases #1 and #2, f(x) is not properly computed, because its values are found by means of actions other than computation.

It is true, as you say, that the question “is any finite-length string computable?” has the answer “yes”. In fact, in the worst case (if the string is incompressible, in the sense of algorithmic information theory) there exists at least the trivial program “a=…; print a;” able to output it. But the question that f(x) must answer is a different one: “was the 280-byte answer in R0b’s comment #29 written by a TM or not?”. That f(x) is incomputable.
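The trivial “a=…; print a;” program mentioned above can be rendered concretely. Here is a minimal Python sketch (the sample string and function name are purely illustrative) of the point that every finite string is the output of at least one program:

```python
# Worst case for compression: store the string as a literal and print it.
# Such a "trivial printer" exists for every finite string, which is why
# every finite string is computable in Turing's sense.
def trivial_printer_source(s: str) -> str:
    """Return the source of a program whose sole output is s."""
    return f"a = {s!r}; print(a)"

src = trivial_printer_source("incompressible-looking string")
exec(src)  # running the generated program reproduces the string exactly
```

For an incompressible string, the printer’s source is necessarily a bit longer than the string itself, which is exactly what incompressibility means in algorithmic information theory.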

Hi Niwrad,

Thanks very much for a careful consideration of my points. Sorry to keep banging on about this, but I am simply using this as an opportunity to articulate my position.

I have endeavored to use the term “information” in order to try to maintain compatibility with the ID community’s concepts, but I sometimes find it a rather slippery and awkward term. Part of the problem is that measures of “information” bundle the observer and the system observed into a joint system, and the observer himself becomes a variable in that system, with the potential to be a depository of “hidden” information.

For example, consider the case of an algorithm that generates a highly disordered sequence or a complex fractal. To the uninitiated the pattern is very information rich because each bit of the pattern has some surprise element and thus is able to inform. However, if the observer should learn the algorithm the same pattern is no longer informative; the observer can predict each bit. What then has happened to the “information” in the pattern? The pattern hasn’t changed, so what has changed? My reading of the situation is that the change is in the observer; an improbable pattern in the form of an algorithm has now been implanted in the observer’s head. The same thing happens with a sequence of coin tosses: From the outset the sequence is information rich, but as soon as the observer sees and learns the sequence the sequence is no longer able to inform and thus loses its information.
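The observer-dependence described above can be illustrated with a pseudo-random sequence. This is a hedged sketch (the seed and length are arbitrary): to an observer who doesn’t know the generator, the bits look maximally surprising; an observer who has “learned the algorithm” can reproduce every bit in advance, and the pattern no longer informs.

```python
import random

def coin_flips(seed: int, n: int) -> list[int]:
    """A 'disordered' bit sequence, fully determined by (generator, seed)."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

observed = coin_flips(seed=42, n=16)   # looks like random coin tosses
predicted = coin_flips(seed=42, n=16)  # the informed observer's prediction
assert observed == predicted  # no surprise left for the informed observer
```

The pattern itself is unchanged between the two calls; what changes is only what the observer carries in his head, namely the generator and the seed.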

Some of the confusion seems to trace back to use of the rubric “chance and necessity”. I much prefer my own rubric “law and disorder”, because so-called “necessity” is not necessarily necessary, and so-called “chance” may be more necessary than we think. Consider again the Mandelbrot set. For those initiated into the algorithm, each bit of the set has a probability of 1 and thus seems to classify as “necessity”. But this necessity is conditioned on the use of the Mandelbrot algorithm. Hence we have in fact P(bit|Mandelbrot)=1 (and not P(bit)=1), and therefore, because the algorithm has been chosen from who knows what huge space of possible algorithms, “necessity” should read as “conditional necessity”. If we use Dembski’s assumption of equal a-priori probabilities, then, from the point of view of someone who doesn’t know which algorithm is being used, so-called “necessity” suddenly snaps over into highly informative improbability.
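The conditional nature of this “necessity” is easy to see in code. Below is the standard escape-time membership test (the iteration cap is an arbitrary choice): given the algorithm, each membership bit is fully determined, i.e. P(bit | Mandelbrot algorithm) = 1, yet nothing about the bit is “necessary” without that condition.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: deterministic, so conditional on this algorithm
    every membership 'bit' has probability 1."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: c is outside the set
            return False
    return True  # did not escape within max_iter iterations

# Each query yields the same fixed bit every time it is run.
assert in_mandelbrot(0 + 0j) is True
assert in_mandelbrot(2 + 2j) is False
```

Run twice, the function gives identical answers; the “surprise” lies entirely in the choice of this algorithm out of the space of all possible algorithms.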

This apparent appearance, disappearance and reappearance of information can make information a very frustrating concept to use.

Another little issue I have with information as a metric is that its use of the log operator results in a differentiated product of probabilities being lumped into a single undifferentiated sum, with the consequence that “information” is a metric which is not very sensitive to configurational form.

Yet another issue is this: when one reaches the boundary of a system with “white spaces” beyond its borders, how does one evaluate the probability of the system’s options and therefore the system’s information? Is probability meaningful in this contextless setting? I am inclined to follow Dembski’s approach here of using equal a-priori probabilities, but I am sure this approach is not beyond critique.

Anyway, persevering with the concept of information, here are my replies to some of your points.

ONE) You say:

“About your class of fully connected forms it seems to me a system already very complex.”

Yes, it is very complex, I’m not denying that. And I certainly agree that Irreducible Complexity, elevated to the level of some “catch all” principle, would cast grave doubts on the workability of conventional notions of evolution.

TWO) Clearly, random agitation doesn’t degrade the “physical constraints” imposed on the actual “material stuff” of a system, whether these constraints have been put in by hand or by equation; they reside on a kind of meta level above and beyond thermal agitation.

If evolution is to work, then it is the information contained in these constraints that does the “heavy lifting” (to use an expression I have seen on UD), unaffected by thermodynamic agitation. The thermodynamic agitation has the effect of facilitating an exploration of the space of possibility, a space limited and narrowed by the “physical constraints”, constraints whose integrity remains untouched by decay. But, of course, it’s one thing to speculate about the possible mathematical existence of physical constraints that so narrow the space of possibility as to considerably enhance the chances of life evolving; it is quite another to assert that the particular physics of our universe is one such system of constraints. If the physical constraints are too slack, the resulting unharnessed thermal agitation is simply a destructive force.

THREE) On the piece you said was a hard passage: whatever the details here, I think we agree on the essential idea of intelligence (or at least some kind of a-priori complexity) ultimately being the source of information. The real issue is how that intelligence applies that information. In a nutshell, what I was trying to say is this: if we claim some complex event to have a high probability, we are in fact claiming that it has a high conditional probability; that is, P(Event|Condition) ~ 1. The high probability (with a concomitant loss of information) is presumably gained at the expense of a “condition” which then bears the low probability. But if it is claimed that this condition has a high probability, then this high probability in turn is gained at the expense of yet another low-probability condition, call it condition 2. That is, P(condition|condition 2) ~ 1, where condition 2 now bears the low probability. And so on to condition n. This, I think, is basically Dembski’s concept of the “conservation of information” that he explores more rigorously in his papers.

FOUR) You say:

“Anyway fractals are not examples of gratis creation of information.”

I agree, but my reason for agreeing is this: the information effectively resides in the fractal algorithm itself because, being taken from a presumably large space of possibilities, it has a high improbability (assuming equal a-priori probabilities). As I have said, it is wrong to attribute unconditional probabilities of 1 to fractal calculations.

SIX) You say:

“If with ‘to distribute persistence probabilities’ you mean ‘to create CSI’, then this creation cannot be obtained by equations.”

Yes and no.

“Yes” because, as I have said, if it is possible to distribute life-enhancing persistence probabilities using succinct equations, then the improbability is found in the choice of equations, because they entail a rare (i.e. improbable) mapping of a “fast time” algorithm to a complex pattern, thus shifting the information to the equations selected; the equations would not create the information, but rather be the bearer of that information.

“No” because, unlike yourself, my thinking has not yet reached a stage where I can confidently claim that there are no fast-time maps from some succinct systems of equations to the required distribution of persistence probabilities. Although in the far greater majority of cases complex structures are effectively incompressible strings, there is a small class of complex forms that do map to fast-time succinct (i.e. compressed) algorithms; as we know, a small number of complex and disordered sequences can be generated in fast time by a relatively small algorithm. Of course I’m not claiming that this is any strong reason to contradict your view that equations are not enough to implicitly embody life’s complexity (it merely sets a precedent), and therefore I look forward to your postings on the subject.

niwrad, does f(“Hello”) have a unique value?

Timothy V Reeves, for now I have no serious objections to what you wrote. However, the topic you put on the table — fractals, equations, algorithms, etc. vs. information — is so interesting that it is worth a specific UD article, which I am going to put on my agenda. At UD I always try to separate different arguments into different discussions, to be as focused and reader-friendly as possible.

Please continue to frequent UD and I am sure we will have other nice discussions. It would be a pity if UD lost a commenter like you. Thank you.

R0b #44

My function f(x) needs as its argument a single specific string written somewhere. On the Internet alone there are 382 million instances of “hello”. Which of them do you mean? If you don’t specify the particular “hello”, f(x) is not valid because x is not univocal. If x is univocal, f(x) has a unique value.

Okay, so the argument is not the string itself, but rather information about a certain physical instantiation of the string. No problem. If the function is well-defined and the domain is finite, f(x) can be implemented with a lookup table. This is not a controversial statement. If we can’t agree on that, then we mean different things by the term “computability” and we have no foundation for a discussion.
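R0b’s lookup-table claim can be made concrete. Here is a minimal sketch (the domain keys and their values are hypothetical, since the real domain of f is precisely what the thread disputes): any function with a finite domain can be hardwired as a table, and the “computation” is just a lookup.

```python
# Hypothetical finite domain: identifiers of specific physical
# instantiations of strings, each mapped to a Boolean f(x) value.
F_TABLE = {
    "string-instance-1": 1,  # (hypothetically) produced by a TM
    "string-instance-2": 0,  # (hypothetically) typed by a human
}

def f(x: str) -> int:
    """Compute f by constant-time table lookup. How the table came to be
    populated is a separate question from whether some TM implements f."""
    return F_TABLE[x]
```

The point of contention in the thread is not whether this lookup runs, but whether populating the table counts as computation.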

R0b #47

I don’t think we are really on different pages about computability. Perhaps the misunderstanding is the following. Given a domain {x1, x2, … xn} for f(x) and the codomain {1, 0}, the lookup table has n rows: y1=f(x1); y2=f(x2); … yn=f(xn).

The problem is not with that finite lookup table, about which we agree; the problem is that filling any single row of it, i.e. the attempt to know mechanically a single value of f (say y2=f(x2)), necessarily fails. It is this failure that makes me say that f(x) is incomputable, not the lookup table per se.

Faced with a string on a computer screen, we cannot know a priori, by means of a computation, whether it was written by a guy hitting a keyboard or by a TM. It is this impossibility that my function describes.

niwrad:

The TM determines the answer by looking it up in the table, which is incorporated in the TM. The question of how the table got populated with the correct answers — i.e. how the TM was made — is irrelevant to the computability issue. The only relevant question is whether some TM, out of the space of all possible TMs, implements the function.

That’s like saying that a TM can’t say whether a given shirt came from Macy’s or JCPenney. But of course a TM can do this. A TM can contain any information whatsoever, as long as it’s finite. If the information contained in the history of the universe is finite, as Dembski argues, then a TM can “know” everything there is to know about the physical history of the universe.

R0b #49

Sorry, but I disagree. It is a tautology to say that if I insert the answers into a TM then the TM outputs them. The problem is precisely to know whether a TM can compute the answers from data other than the answers themselves, without having them hardwired inside itself. This is relevant to the computability issue.

niwrad:

Actually, no. The definition of computability does not disqualify TMs with hardwired answers. You can search any computing theory text and you will find no definition of computability that matches your understanding of the term. Nor will you find any examples of non-computability that are finite-domain functions.

R0b #51

By means of your method of hardwired values, there is no incomputable problem. Even the problem of knowing the future outcomes of the lottery becomes computable: you just provide us with a TM containing the hardwired answers. Your method is too good to be true.

Computable functions are only a subset of functions/problems. You seem to believe that all finite problems are computable. I provided a finite problem, but you didn’t compute it.

That’s incorrect. You can’t hardwire answers to the Halting problem because a TM has only a finite description, while the Halting problem has an infinite domain. And there are an uncountably infinite number of problems that are Turing equivalent to the Halting problem.

In contrast, the number of computable problems is countably infinite. Which means that virtually all problems are non-computable.

Absolutely. If you think that tomorrow’s winning lottery number is non-computable, you misunderstand what computability is all about.

But I explained why there is a TM that computes it.

R0b #53

You seem to believe that any problem finitely defined over a finite number of objects can be resolved by means of a finite series of instructions or operations (a computation or algorithm). Frankly, I don’t understand what your belief is based on, considering the huge range the concept of “problem” covers. This is even more unbelievable to me given that you have rightly stated that “virtually all problems are non-computable” (which, by the way, perfectly agrees with the general thrust of my OP).

How you can claim that tomorrow’s winning lottery number is obtainable by means of a series of instructions is beyond me. TMs cannot know the future.

I am interested in the effective calculability and solution of problems. You seem to be interested in a sort of illusory and abstract calculability. As a consequence, I fear we will never converge on an agreement.

Anyway, these kinds of situations are typical when an evolutionist (you) and an IDer (me) discuss: the former inclines to oversimplify and reduce things, while the latter inclines to see things from an engineering, problem-solving viewpoint.

This doesn’t mean that our discussion has been useless, and I thank you for your active participation.

You put forth an argument based on incompleteness and computability theory, neither of which deals with effective calculability and solution of problems. Regardless, this isn’t a question of our respective interests; it’s a question of whether your claims wrt computability are true or false.

Computability is a well-defined mathematical concept, so it’s not subject to opinion. At least one of us is simply wrong.

I thank you too for your graciousness.