No Free Lunch in physics

In his 2003 paper “How far are we from a quantum theory of gravity?” (see especially p. 49), Lee Smolin invokes the no free lunch theorem to argue that, in searching for the minimum of a complicated but unknown potential, there is no chance of doing better than a random search unless the search algorithm has built into it some very definite information about the function itself. The paper is widely available online (e.g., http://arxiv.org/PS_cache/hep-th/pdf/0303/0303185.pdf).
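Smolin’s claim can be illustrated with a toy sketch in Python (not from his paper; the bowl-shaped potential, dimension, and evaluation budgets are assumptions chosen only to make the contrast visible): a search that exploits definite information about the function, here its gradient, drives the value down quickly, while blind random sampling in the same space does not.

```python
import random

# Toy sketch (not from Smolin's paper): a search using definite
# information about the function (its gradient) versus blind sampling.
# The potential, dimension, and budgets are illustrative assumptions.

DIM = 20  # dimension of the search space


def potential(x):
    """A smooth bowl-shaped potential in DIM dimensions."""
    return sum(xi * xi for xi in x)


def gradient(x):
    """Analytic gradient of the bowl: the 'built-in' information."""
    return [2 * xi for xi in x]


def random_search(n_samples):
    """Keep the lowest value among n_samples blind draws."""
    best = float("inf")
    for _ in range(n_samples):
        x = [random.uniform(-5, 5) for _ in range(DIM)]
        best = min(best, potential(x))
    return best


def gradient_search(n_steps, lr=0.1):
    """Plain gradient descent from a random starting point."""
    x = [random.uniform(-5, 5) for _ in range(DIM)]
    for _ in range(n_steps):
        g = gradient(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return potential(x)


if __name__ == "__main__":
    random.seed(0)
    print("random search, 1000 evaluations:", round(random_search(1000), 4))
    print("gradient descent, 1000 steps:   ", round(gradient_search(1000), 4))
```

Of course, a bowl is about as far from a “complicated but unknown potential” as one can get; the sketch only illustrates what building information about the function into the algorithm buys, not how hard the string-theory landscape actually is.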

There’s not much difference here between Smolin’s argument against string theory and ID arguments against natural selection. I have yet to read the new books by Woit and Smolin, but I’m told it is astonishing how closely the controversies over string theory mirror the controversies over neo-Darwinism. In this regard, Ed Witten’s role in physics parallels Richard Dawkins’s in biology.

Comments
Dr. Dembski, I don't see the tasks as very similar. Smolin is asking how we can find a string theory that we should have a good reason to prefer over a large number of other theories. A large number of law-like theories is not the same as an infinity of all possible theories, so I agree with a previous poster that Smolin's invocation of NFL is probably inappropriate. From there, it is easy to criticize his assumption that we must give the search procedure as much domain knowledge as possible, though we certainly could do that simply to get the answer faster.

NDE, on the other hand, is the assumption that, from any contingent point on any fitness surface, if optimization can happen, it _will_ happen. Baked into the particular fitness landscape of our reality is the optimization procedure of gradient following. As the fitness surface changes over time and as populations are perturbed away from local optima by random effects, which local optima are occupied, and to what degree, is both inevitable and contingent on the starting point. If the surface changed faster than optimization occurs, we might expect to find many objects in far-from-optimal states at any point in time. Setting aside catastrophes, our particular fitness surface changes slowly enough that we find most or all objects near some local optima. Also, in our particular case, it must be said that the search procedure has co-evolved with the fitness landscape over time. This is to be expected, as the bulk of the responsibility for forming the fitness landscape shifted from the abiotic environment to the ecology surrounding and supporting each life form.

In these terms, ID has the reasonable task of pointing out that there are populations occupying local optima into which they could not have fallen; they had to have been placed. If ID had a model of expected distributions of populations over co-evolving fitness/search pairs, it could compare our biosphere to that model and say "we are an outlier" or "the class of models in which we are not an outlier is extremely small".

David vun Kannon
October 12, 2006 at 08:21 AM PDT
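The gradient-following picture in the comment above can be sketched in a few lines (my own illustration, not the commenter's model; the fitness surface, starting points, and step sizes are assumptions): simple hill climbing settles into whichever local peak lies uphill of its starting point, so which optimum is occupied is contingent on where the climb begins.

```python
import math
import random

# Minimal sketch (illustrative only): hill climbing on a fixed fitness
# surface ends at whichever local peak is uphill from the starting point.


def fitness(x):
    """A fitness surface on [0, 10] with several local peaks."""
    return math.sin(x) + 0.5 * math.sin(3 * x)


def hill_climb(x, steps=2000, step_size=0.05):
    """Accept small random moves only when they increase fitness."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if 0.0 <= candidate <= 10.0 and fitness(candidate) > fitness(x):
            x = candidate
    return x


if __name__ == "__main__":
    random.seed(1)
    for start in (1.0, 4.5, 8.0):
        peak = hill_climb(start)
        print(f"start x={start:.1f} -> settles near x={peak:.2f}, "
              f"fitness={fitness(peak):.2f}")
```

The surface here is fixed; the comment's further point about slowly changing surfaces and co-evolving fitness/search pairs is not modeled.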
I said:
Global optimization is such a non-issue for evolutionary biologists, as well as for engineers. The issue is whether an optimizer can find a satisfactory solution in an acceptable amount of time.
Dr. Dembski said:
Satisficing solutions are equivalent to global optima simply by suitably flattening portions of the objective function.
True. But a satisficer operating on the original objective function is not equivalent in behavior to a global optimizer operating on the transformed objective function. A satisficer knows what constitutes a satisfactory solution, and halts as soon as it finds one (if it indeed finds one in the bounded time available to it). A global optimizer ordinarily does not know the globally optimal value, and must evaluate feasible solutions until none remains. Thus a global optimizer generally does not stop as soon as it finds a globally optimal solution, but continues searching for one that is better.

I can see how you got the notion of satisficing from what I wrote, but what I had in mind was that the (not global) optimizer uses the limited time available to it to find the best solution it can, and that the judgment of whether a solution is satisfactory or not is external to the optimizer. An example is the evolution of bent-wire antenna designs (discussed in No Free Lunch). No one knows the global optimum of the fitness function, and exhaustive exploration of the design space is infeasible. The evolutionary optimizer used the limited time available to it to obtain the best designs it could. There was no guarantee that the best would be good enough for actual application. In the end, the investigators studied the evolved designs and found that several were not merely satisfactory, but outstanding. A satisficing algorithm would not have produced this result.

By the way, transforming the objective function as you describe is a good trick for certain mathematical analyses of optimization. It facilitates answering questions like:

1. What is the probability of obtaining a satisfactory solution with a fixed number of evaluations?
2. What is the expected number of evaluations to obtain a satisfactory solution?

But the transformation of objective functions has an important limitation. While an optimization algorithm behaves identically on an objective function and its transformation up to observing a satisfactory value, it may behave differently on the two thereafter. Thus we cannot use the transformation to address, for instance:

3. What is the expected number of satisfactory solutions obtained with a fixed number of evaluations?

DharmaBum
October 10, 2006 at 12:14 PM PDT
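The behavioral difference described in the comment above can be made concrete with a small sketch (illustrative only; the objective function, satisfaction threshold, and candidate pool are assumptions): clipping the objective at the threshold makes every satisfactory point a global optimum of the transformed function, yet a satisficer halts at the first such point while an exhaustive global optimizer keeps evaluating.

```python
import random

# Illustrative sketch (not the commenters' code): a satisficer on the
# original objective versus an exhaustive global optimizer on the
# flattened objective. Both treat the same points as good enough, but
# the satisficer halts early while the optimizer evaluates everything.

THRESHOLD = 0.05  # assumed satisfaction level (lower objective is better)


def objective(x):
    """Original objective to be minimized on [0, 1]."""
    return (x - 0.7) ** 2


def flattened(x):
    """One way to 'flatten' the objective: clip values below the
    threshold, so every satisfactory point becomes a global minimum."""
    return max(objective(x), THRESHOLD)


def satisfice(candidates):
    """Evaluate candidates until one is satisfactory, then halt."""
    evaluations = 0
    for x in candidates:
        evaluations += 1
        if objective(x) <= THRESHOLD:
            return x, evaluations
    return None, evaluations


def global_optimize(candidates):
    """Evaluate every candidate of the flattened objective, keep the best."""
    best_x, best_val, evaluations = None, float("inf"), 0
    for x in candidates:
        evaluations += 1
        value = flattened(x)
        if value < best_val:
            best_x, best_val = x, value
    return best_x, evaluations


if __name__ == "__main__":
    random.seed(0)
    candidates = [random.random() for _ in range(1000)]
    _, n_satisficer = satisfice(candidates)
    _, n_optimizer = global_optimize(candidates)
    print("satisficer evaluations:      ", n_satisficer)   # stops at first good point
    print("global optimizer evaluations:", n_optimizer)    # always all 1000
```

The evaluation counts make the point: equivalence of the optima does not imply equivalence of the search behavior, which is what question 3 above turns on.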
Satisficing solutions are equivalent to global optima simply by suitably flattening portions of the objective function.

William Dembski
October 9, 2006 at 08:35 PM PDT
A striking result of complexity theory is somewhat worrying in this regard. Called the “no free lunch theorem” this states that no specific search algorithm is likely to do better than random search in finding the global minimum of a randomly chosen complicated potential[168]. To do better than random search, a search procedure must be based on an algorithm which is crafted taking into account some properties of a given potential.
This is muddled. The NFL work of Wolpert and Macready addresses optimization, not global optimization. The difference is important. The NFL results relate to distributions of functions and distributions of search results (histograms of observed values), not results on a particular function. For a uniformly drawn function, it is possible for an arbitrary search algorithm to locate a global optimum long before a random search does. The claim that the search algorithm must be crafted to outperform random search on a particular function is simply wrong.

The uniform distribution on objective functions is an analytical construct in Wolpert and Macready's work, important because it is the average of all distributions. But the uniform distribution does not model reality. Almost all potentials have no compact description: even their most compact descriptions are too big to fit in the observed universe. Thus the probability of almost all potentials is, as a practical matter, zero. Similar considerations apply to fitness functions. In reality, optimizers operate on highly compressible functions.

Global optimization is such a non-issue for evolutionary biologists, as well as for engineers. The issue is whether an optimizer can find a satisfactory solution in an acceptable amount of time. The problem Smolin sets up for string theorists is quite different.

DharmaBum
October 9, 2006 at 06:28 PM PDT
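The averaging point in the comment above can be sketched as follows (my own illustration; the domain size, budget, and example function are assumptions): averaged over functions drawn uniformly at random, two very different non-repeating searches return statistically identical best values, while on one particular compressible function their results diverge sharply.

```python
import random

# Sketch of the averaging point (illustrative assumptions throughout):
# over uniformly drawn lookup-table functions, any two searches that
# each visit BUDGET distinct points do equally well on average; on a
# single structured (compressible) function they need not.

N = 256       # size of the search domain
BUDGET = 16   # number of distinct points each search may evaluate


def enumerate_first(f):
    """Deterministic search: evaluate points 0 .. BUDGET-1 in order."""
    return min(f(x) for x in range(BUDGET))


def random_search(f):
    """Blind search: evaluate BUDGET distinct points chosen at random."""
    return min(f(x) for x in random.sample(range(N), BUDGET))


def average_over_random_functions(search, trials=5000):
    """Mean best value when the function is a fresh uniform lookup table."""
    total = 0.0
    for _ in range(trials):
        table = [random.random() for _ in range(N)]
        total += search(lambda x: table[x])
    return total / trials


if __name__ == "__main__":
    random.seed(0)
    # Averaged over uniformly drawn functions: essentially identical.
    print("enumeration, averaged: ", round(average_over_random_functions(enumerate_first), 3))
    print("random search, averaged:", round(average_over_random_functions(random_search), 3))
    # One particular compressible function: very different results.
    structured = lambda x: abs(x - 200) / N
    print("enumeration on structured f:  ", round(enumerate_first(structured), 3))
    print("random search on structured f:",
          round(sum(random_search(structured) for _ in range(100)) / 100, 3))
```

The first pair of numbers comes out essentially equal (the NFL-style average over a distribution of functions); the second pair does not, which is why a statement about the uniform distribution licenses no conclusion about any particular compressible potential.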
