Uncommon Descent Serving The Intelligent Design Community

Questioning Information Cost


Di.. Eb.., or Dieb, has posted on his blog DiEbLog a number of questions and objections relating to the paper "A General Theory of Information Cost Incurred by Successful Search."

Firstly, Dieb objects that the quasi-Bayesian calculation on Page 56 is incorrect, although it obtains the correct result. However, the calculation is called a quasi-Bayesian calculation because it engages in hand-waving rather than presenting a rigorous proof. The text in question is shortly after a theorem and is intended to explicate the consequences of that theorem rather than rigorously prove its result. The calculation is not incorrect, but rather deliberately oversimplified.

Secondly, Dieb objects that many quite different searches can be constructed which are represented by the same probability measure. However, even if searches were represented as a mapping from the previously visited points to a new point (as in Wolpert and Macready's original formulation), algorithms which derive the same queries in different ways would be represented identically. Giving multiple searches the same representation is neither avoidable nor inherently problematic.
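To make this concrete, here is a small sketch (mine, not the paper's; the procedure names are invented for illustration) of two internally different searches over a four-element space that induce the same probability measure on outputs:

```python
import random
from collections import Counter

OMEGA = list(range(4))  # a toy search space

def single_draw(rng):
    # Queries exactly one point, chosen uniformly at random.
    return rng.choice(OMEGA)

def shuffled_walk(rng):
    # Visits every point in a random order, then reports the first point visited.
    path = OMEGA[:]
    rng.shuffle(path)
    return path[0]

def empirical_measure(search, trials=100_000, seed=0):
    rng = random.Random(seed)
    counts = Counter(search(rng) for _ in range(trials))
    return {w: counts[w] / trials for w in OMEGA}

# Both procedures, though internally different, induce (approximately) the
# uniform measure on OMEGA, so both receive the same representation.
m1 = empirical_measure(single_draw)
m2 = empirical_measure(shuffled_walk)
```

The two procedures visit different numbers of points in different orders, yet as measures over the search space they are indistinguishable, which is the sense in which the representation is many-to-one.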

Thirdly, Dieb objects that the discriminator will bias a search towards selecting elements in the target, rather than yielding a uniform distribution. However, Dieb's logic depends on assuming that we have a good discriminator. As the paper states, we do not assume this to be the case. If choosing a random search, we cannot assume that we have a good discriminator (or any other component). The search for the search assumes that we have no prior information, not even the ability to identify points in the target.
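The consequence can be illustrated with a toy simulation (my sketch, not code from the paper): an enumeration that visits every point, paired with a discriminator drawn with no target knowledge, succeeds only at the baseline rate 1/N of a uniform guess.

```python
import random

def exhaustive_search(n):
    # The search path records every point in the space: a full enumeration.
    return list(range(n))

def random_discriminator(path, rng):
    # A discriminator chosen with no prior information: it cannot tell which
    # visited point lies in the target, so it picks one at random.
    return rng.choice(path)

def success_rate(n=10, target=3, trials=200_000, seed=1):
    rng = random.Random(seed)
    path = exhaustive_search(n)
    hits = sum(random_discriminator(path, rng) == target for _ in range(trials))
    return hits / trials

rate = success_rate()  # close to 1/10, despite the target having been visited
```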

Fourthly, Dieb doesn't see the point of the navigator's output, since in his view it can be seen as just the next element of the search path. However, the navigator produces information such as a distance to the target. That distance is helpful in determining where to query, but it does not determine the next element of the search path. So it cannot be seen as just the next element of the search path.

Fifthly, Dieb objects that the inspector is treated inconsistently. However, the output of the inspector is not inconsistent but rather general. The information extracted by the inspector is the information relevant to whether or not a point is in the target. That information will take different forms depending on the search: it may be a fitness value, a probability, a yes/no answer, etc.

We conclude that Dieb's objections derive from a misunderstanding of our paper. Despite five blog posts related to this paper, we find that Dieb has failed to raise any useful or interesting questions. Should Dieb be inclined to disagree with our assessment, we suggest that he organize his ideas and publish them as a journal article or in a similar venue.

Comments
Elizabeth:
I think if the “evolutionary informatics” lab are going to do “evolutionary informatics” they need a better model of evolutionary optimisation.
What is "evolutionary optimization"? Natural selection doesn't optimize, so please do tell what you are talking about. BTW Lizzie, unguided evolution isn't a search at all. It's more like a drunk stumble. Ya see, blind and undirected processes cannot search, and that is what Darwinian and neo-Darwinian evolution posits.
keiths:
That assumes that the folks at the EIL are actually trying to model evolution, versus caricaturing it.
No one can model unguided evolution, not even evolutionary biologists.
Joe
July 17, 2013 at 04:53 AM PDT
As to accurately trying to model evolution, versus caricaturing it, this is the best program thus far developed trying to accurately model evolution:
Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load:
Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances... Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space.
http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf

MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE
http://mendelsaccount.sourceforge.net

Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory - 2008
Abstract: Evolutionary genetic theory has a series of apparent “fatal flaws” which are well known to population geneticists, but which have not been effectively communicated to other scientists or the public. These fatal flaws have been recognized by leaders in the field for many decades—based upon logic and mathematical formulations. However population geneticists have generally been very reluctant to openly acknowledge these theoretical problems, and a cloud of confusion has come to surround each issue. Numerical simulation provides a definitive tool for empirically testing the reality of these fatal flaws and can resolve the confusion. The program Mendel’s Accountant (Mendel) was developed for this purpose, and it is the first biologically-realistic forward-time population genetics numerical simulation program. This new program is a powerful research and teaching tool. When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person.
http://www.icr.org/i/pdf/technical/Using-Numerical-Simulation-to-Test-the-Validity-of-Neo-Darwinian-Theory.pdf

Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle - "Haldane's Ratchet" - Christopher L. Rupe and John C. Sanford - 2013
Excerpt: We then perform large-scale experiments to examine the feasibility of the ape-to-man scenario over a six million year period. We analyze neutral and beneficial fixations separately (realistic rates of deleterious mutations could not be studied in deep time due to extinction). Using realistic parameter settings we only observe a few hundred selection-induced beneficial fixations after 300,000 generations (6 million years). Even when using highly optimal parameter settings (i.e., favorable for fixation of beneficials), we only see a few thousand selection-induced fixations. This is significant because the ape-to-man scenario requires tens of millions of selective nucleotide substitutions in the human lineage. Our empirically-determined rates of beneficial fixation are in general agreement with the fixation rate estimates derived by Haldane and ReMine using their mathematical analyses. We have therefore independently demonstrated that the findings of Haldane and ReMine are for the most part correct, and that the fundamental evolutionary problem historically known as "Haldane's Dilemma" is very real. Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon "Haldane's Ratchet".
http://creationicc.org/more.php?pk=46

Of course Darwinists deceptively deny that this computer model which uses realistic parameters has anything to do with Darwinism, but what type of honesty should we expect from people who deceptively claim that intelligently designed computer algorithms provide convincing proof for unguided Darwinian evolution in the first place?

In the following podcast, Dr. Robert Marks gives a very informative talk as to the strict limits we can expect from any evolutionary computer program (evolutionary algorithm):
Darwin as the Pinball Wizard: Talking Probability with Robert J. Marks II - video
http://www.youtube.com/watch?v=Kxv3Q0VaX9E

7:00 minute mark: "For many years I thought that it is a mathematical scandal that we do not have a proof that Darwinian evolution works." Gregory Chaitin - Proving Darwin 2012 - Highly Respected Mathematician
bornagain77
July 17, 2013 at 02:41 AM PDT
Lizzie,
I think if the “evolutionary informatics” lab are going to do “evolutionary informatics” they need a better model of evolutionary optimisation.
That assumes that the folks at the EIL are actually trying to model evolution, versus caricaturing it.
keiths
July 17, 2013 at 12:34 AM PDT
Exactly, Alan. This all seems a little like shifting deckchairs on the Titanic to me. A static search space with a static target and static feedback seems a very odd way to simplify evolutionary optimisation. It seems to me that the very things that make it so powerful have been simplified out.

The thing about evolution is that it isn't a search for a hidden target at all, or even several hidden targets. It's a search for solutions - any solutions - to the problem of persisting in the current environment, and that environment, both its resources and threats, will change as a function of the solutions found. So for a start it is massively non-linear. I think if the "evolutionary informatics" lab are going to do "evolutionary informatics" they need a better model of evolutionary optimisation.
Elizabeth B Liddle
July 17, 2013 at 12:29 AM PDT
Quoting from the paper:
There are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve different biological functions. Most of the ways open to evolution (chemical as well as biological evolution) are dead ends. Evolution may therefore be characterized as the search for alternative “live ends.” In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.
The authors simply assert that the process of cumulative variation and selection is a targeted search, when this is not how evolutionary biologists portray evolution (I note that a comment in a popular book written thirty years ago is not the only evidence that exists indicating that evolution stumbles upon viable solutions near to already viable solutions to earning a living). I wonder whether arguing about whether the mathematical model works is rather premature. Should we not first be convinced that the model presented is at all appropriate?
Alan Fox
July 17, 2013 at 12:09 AM PDT
DiEb, Yes. If you pick a random inspector/discriminator you are going to get the equivalent of a random guess regardless of the rest of your search. And? Yes, that's different from previous work. But we are not inconsistent with those results. We are just considering more cases.
Winston Ewert
July 16, 2013 at 07:56 PM PDT
Sorry, but if I pick a search at random out of the subset of the exhaustive searches, I end up with a search which does exactly as well as a random guess. That's quite different from Macready's and Wolpert's ideas...
DiEb
July 16, 2013 at 02:42 PM PDT
This average is taken over all inspectors and discriminators.
No. No. No. You only take the average over all inspectors and discriminators if you are picking a search at random. If you are performing an exhaustive search you aren't picking a search at random.
Winston Ewert
July 16, 2013 at 02:19 PM PDT
What is quite irritating in the scenario above: For each element μ in M(Ω) you can take our enumeration, take a random inspector and then find a discriminator which maps our exhaustive search onto this μ...
DiEb
July 16, 2013 at 11:08 AM PDT
1) Thanks, that will be appreciated by all those who share my confusion.

2) Well, it doesn't matter really. And it doesn't matter that the archive and the archiver are only tools which keep a small part of the sample path, as they are realizations of computational restrictions, while your discriminator works with the whole sample path / search matrix. For me, it's a totally different animal, but we don't have to go there.

3) "What makes you think that an exhaustive search succeeds only as often as a random guess?" That's not exactly what I said: I'm stating that exhaustive searches will on average succeed only as often as a random guess. This average is taken over all inspectors and discriminators. Take for example a search which enumerates the elements \omega_1, \omega_2, \dots. The second line of your search matrix contains the output of the inspector: if this one is chosen randomly, any combination of values \alpha_1, \alpha_2, \dots, \alpha_N will occur with the same probability; in other words, the output is independent of the search and can be ignored. At this point, we who are performing the search know nothing about the target: it could be FOO, it could be BAR :-) But now comes the random discriminator. If it has no "independent knowledge", it has to perform a random search on the search matrix, thereby being as good as a random guess. If it has "independent knowledge", i.e., if it returns the target "with greater certainty than is possible simply on the basis of the information delivered by the inspector and the navigator", then we can combine it with a permutation of the elements of \Omega to get another discriminator which prefers another element. If we take the average over all permutations, everything levels out, and we are again left with a random guess.
DiEb
July 16, 2013 at 08:16 AM PDT
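DiEb's averaging argument in the comment above can be checked exactly on a toy space (my own encoding; the function names are invented): conjugating a fixed, biased discriminator by every relabelling of Ω gives an average hit probability of exactly 1/|Ω| for every target.

```python
from itertools import permutations
from fractions import Fraction

OMEGA = (0, 1, 2, 3)

def biased_discriminator(points):
    # A discriminator with "independent knowledge": it always prefers
    # the smallest label it has seen.
    return min(points)

def conjugated(perm, points):
    # Relabel Omega by perm, apply the biased discriminator, relabel back.
    inv = {perm[w]: w for w in OMEGA}
    return perm[biased_discriminator([inv[p] for p in points])]

def average_hit_probability(target):
    perms = [dict(enumerate(p)) for p in permutations(OMEGA)]
    hits = sum(conjugated(perm, list(OMEGA)) == target for perm in perms)
    return Fraction(hits, len(perms))

# Averaged over all 24 relabellings, every target is hit with probability 1/4:
# the bias "levels out", leaving the equivalent of a random guess.
```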
1) Given your confusion on this point, I'll be looking to emphasize this in future work.

2) Your reading makes no sense to me. You seem to be inventing a restriction on archivers that the authors do not state when they define them. Regardless, it's not an important point here. What makes you think that an exhaustive search succeeds only as often as a random guess?
Winston Ewert
July 16, 2013 at 06:47 AM PDT
1) Indeed, at the beginning of section 2 you state "Let T = {\omega_1, \omega_2, \dots, \omega_k} be the target and define the probability p = K/N." So I've been imprecise when I implied that the target isn't formalized in your framework - I do apologize. I'm afraid that readers who are familiar with the definitions of, e.g., Wolpert and Macready, will approach your framework with a similar misconception to the one I had, and interpret the sentence "The inspector O_{\alpha} is an oracle that, in querying a search-space entry, extracts information bearing on its probability of belonging to the target T" as meaning that the inspector somehow depends on T - as it does in the usual definition, where the target is the place where the cost function takes its optimum - while in your framework it can be fully independent of T. When I asked for a formalization, I meant that I'd appreciate it if this independence were stressed in your phrasing of the framework.

2) As for the article of Corne and Knowles: my reading of the text differs from yours. That can happen. I think that there are various possible archives in the case of a "multiobjective optimizer", while in the single objective case, you don't have much choice (and the archiver is unnecessary as it is trivial). It's interesting that the authors choose as the usual performance measure in single objective optimisation "max(d_m^y) (assuming maximasation, of course)". I'll look for other papers of the authors to bolster this view (which must be a trivial point to them), and we always have the possibility to just ask the authors.

3) I hope that you change your mind about continuing this discussion: if it helps, I sometimes get frustrated, too. You have certainly cleared up some of the misconceptions which I held about the framework!

Unfortunately, at this point of the discussion I'm left with the following conclusion: In the general framework of Dembski, Ewert, and Marks, searches which have enumerated the complete search space will on average find the target with the same probability as a random guess.
DiEb
July 15, 2013 at 10:28 PM PDT
DiEb, Please read the start of section 2 of the paper again. It's pretty clear that we do formalize the target. The fact that you choose to ignore that we've done so, and continue to insist on restrictions to archivers that the authors of that paper never impose, has left me too frustrated to continue this discussion.
Winston Ewert
July 15, 2013 at 07:37 PM PDT
1) An archiver in a "single objective optimisation" "plays the role of the 'best so far' placeholder", too - so it is the 'best so far' placeholder. There is nothing arbitrary about this in this case. Yes, it is a map A_N x Z -> A_N, but in the case of the "single objective optimisation", the map is fully characterized by its role. So I don't think that you are going further; your discriminator and the archive are only loosely related.

2) The target is not unimportant in your concept: you should formalize it. "The target is known to the person analyzing the search, but it may or may not be known to the search itself" doesn't fit well into your 6-tuple approach.
DiEb
July 15, 2013 at 02:46 PM PDT
1) They define an archiver as a mapping: A_N x Z -> A_N. It's a perfectly valid archiver to map everything to the empty archive. It is only the requirement of being precise that makes it keep the best point in memory in the single objective case. Your quote states that archives play the role of the best-so-far record. They are effectively a generalization of best-so-far records. It states that maintaining a best-so-far record is trivial, but makes no statement that an archiver will maintain a correct best-so-far record. But I'm not disputing that the discriminator is more random than the archiver. The discriminator gives the algorithm more leeway than an archiver. My point is merely that we are taking what was done there but going further with it.

2) The framework defines the target as being a subset of the search space. It is assumed to be a given. A search succeeds if it picks a point which is in the target. So the target is assumed to be defined before you start analyzing the search. It is not some sort of unknowable. The target is known to the person analyzing the search, but it may or may not be known to the search itself.
Winston Ewert
July 15, 2013 at 02:36 PM PDT
3) You said "An exhaustive search will fare better than a random guess." How so? On average, it will not; it will be as bad as a random guess.
DiEb
July 15, 2013 at 01:57 PM PDT
1) In all the "single objective optimisation" problems, the archiver will return the best-so-far value, whether the archiver is precise or not. There is nothing to accommodate: the quote which I gave above is meant for all kinds of archives. Again: archives are not as random as discriminators, as they are biased toward good values - even archives which are not precise.

2) The whole thing becomes a little bit too random for my taste: who knows what the target is? The discriminator may have this knowledge - or may not have it. Where is the objective criterion of having met the target? Where is it formalized? If I have minimized the cost function with FOO, who will tell me that I was indeed looking for BAR? If this game is played at the village fair, you will have a brawl in no time, accompanied by accusations of cheating!
DiEb
July 15, 2013 at 01:28 PM PDT
DiEb, Yes, as far as I know we are the only group who considers searches with counter-productive discriminators. The Corne paper discusses archives of varying quality but assumes a "precise" archiver. However, searches without precise archives and with counter-productive discriminators exist, and thus I think our model should be able to accommodate them. Right, to describe the set of all possible searches, you'd have to be randomly choosing the inspector from the set of possible inspectors, which may or may not correlate with your target. As I've tried to express, access to the characteristic function (or fitness function) is not assumed when picking a search at random.
Winston Ewert
July 15, 2013 at 12:07 PM PDT
"I fail to see why that's baffling." It is baffling as your group is the only one who does so. Everyone else is trying to find the optimum of the fitness function - or, in the case of multiobjective optimizers, a set of non-dominated points (as David Corne and Joshua Knowles do in the paper you have mentioned).

If you look into the paper, you will see that in the case of one-dimensional optimization the archive will always contain the best element found so far, so at the end of the search it will return just the element with the best fitness value in your search matrix (to quote Corne and Knowles: "The archive plays the role of the 'best so far' placeholder in single objective optimisation. However, whereas in single-objective optimisation it is entirely trivial to maintain this best-so-far record, the story is quite different in multiobjective optimisation.") So when we are talking about "single objective optimisation" - as in finding the maximum of a characteristic function - there is exactly one possible archiver, though there is an abundance of discriminators.

What am I trying to do? I want to describe the set of all possible searches for a single element out of a set of four elements. In accordance with the general thinking, this means that I want to maximize the characteristic function of that element. But I get the impression that that isn't the case in your framework, as you state that you "have separated the cost function from the search problem".
DiEb
July 15, 2013 at 08:51 AM PDT
DiEb, Yes, as you've put it, we have separated the cost function from the search problem. It'd be possible to run a Hamming search for FOO when you really wanted BAR. I fail to see why that's baffling. Obviously, there are possible searches with poor discriminators. Actually, there is a paper, "Some multiobjective optimizers are better than others," which looks at a case of that. (Of course, they aren't calling it a discriminator.) Are you correct so far? I'm not sure what you are trying to do. Are you trying to pick a random search? Are you trying to model a particular search?
Winston Ewert
July 15, 2013 at 07:51 AM PDT
If I’m studying a particular search, such as exhaustive search or easter egg search, I’m not picking the components at random. Picking the components at random is brought up in response to your objection that the discriminator should bias the search towards the target. It was not intended to answer your objection that a search cannot be represented as a probability measure.
Can we exemplify this? Let's look for a single element in a small set - say, for the element 1 in the set of four elements {1,2,3,4} - and let's say that the fitness function is just the characteristic function of the element 1 and that that is all we know. Then:
1) The "initiator" is a random variable taking values in {1,2,3,4}
2) The "terminator" is a randomly chosen stopping time for the stochastic process given by the search
3) The "inspector" is the output of the fitness function - or are there any alternatives to choose from?
4) The "navigator" doesn't come into play, as no additional knowledge is assumed
5) The "nominator" is again randomly chosen
6) The "discriminator" is a random variable taking values in the output of the stopped process.
Am I correct so far?
DiEb
July 15, 2013 at 01:54 AM PDT
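One way to read DiEb's toy set-up as code (a sketch under my own informal encoding of the six components, not the paper's formal definitions): five components fixed roughly as he describes, with the discriminator drawn at random from the visited points.

```python
import random

OMEGA = [1, 2, 3, 4]
TARGET = {1}

def inspector(w):
    # The characteristic (fitness) function of the target element.
    return 1 if w in TARGET else 0

def run_search(rng):
    start = rng.choice(OMEGA)                    # initiator: uniform starting point
    rest = [w for w in OMEGA if w != start]
    rng.shuffle(rest)                            # nominator: random order, no repeats
    path = [start] + rest                        # terminator: stop after exhausting Omega
    matrix = [(w, inspector(w)) for w in path]   # the search matrix
    # discriminator drawn at random: it ignores the inspector's column
    return rng.choice(matrix)[0]

rng = random.Random(7)
trials = 100_000
rate = sum(run_search(rng) in TARGET for _ in range(trials)) / trials
# rate is close to 1/4: the exhaustive enumeration is wasted on a random discriminator
```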
Winston Ewert, perhaps you can clarify the following point: If you look at the literature, you will find that search problems are seen as a special case of optimization problems: you have a cost function, and you try to find its optimum. Sometimes you are interested only in the value of the optimum, sometimes you are interested in the place where this optimum occurs - then you have a search. The main point is: if you have found the optimum, you have solved the problem - the X marks the spot.

Such a cost function can be straightforward - take, e.g., f being the Hamming distance to "BOB". It can be misleading: let g be the Hamming distance to "TIM", but set g("TIM")=3 and g("BOB")=0. What are we looking for? Right, again for "BOB".

Am I right to think that you have separated the cost function from the search problem? You seem to allow for an unaltered g, the Hamming distance to "TIM", while in reality searching for "BOB". If you do so, it becomes very difficult to compare your results with anything of the classical theories. Indeed, take exhaustive searches which are identical in their first five components, but differ in their "discriminator". It is quite baffling that such searches will be "represented" by quite different measures giving all probabilities for hitting the target between 0 and 1, while every performance measure in the sense of Macready and Wolpert will have the same result on these searches.
DiEb
July 15, 2013 at 01:26 AM PDT
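DiEb's two cost functions can be written out explicitly (a sketch; the patched values for g follow his description):

```python
from itertools import product

def hamming(a, b):
    # Number of positions at which two equal-length strings differ.
    return sum(x != y for x, y in zip(a, b))

def f(s):
    # Straightforward: distance to "BOB"; minimized exactly at "BOB".
    return hamming(s, "BOB")

def g(s):
    # Misleading: looks like the distance to "TIM", but patched so that
    # the optimum sits at "BOB" after all.
    if s == "TIM":
        return 3
    if s == "BOB":
        return 0
    return hamming(s, "TIM")

# Over all three-letter strings from this small alphabet, g's unique minimum
# is still "BOB" - the X marks the spot, despite the misleading shape of g.
alphabet = "ABIMOT"
strings = ["".join(t) for t in product(alphabet, repeat=3)]
best = min(strings, key=g)
```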
General Remark: I got the impression that there aren't many mathematicians out there who won't dismiss your work out of hand, but are willing to read your texts, follow your calculations, and really try to understand what you are doing. When this text was presented in Ithaca, how many mathematicians were there in the audience? Were the mathematics discussed, or only the philosophical implications?

Compare this situation to 2010, when Vinay Deolalikar proposed his proof of P != NP: you had an instant buzz in the blogosphere, a back and forth of arguments, and at the end of it all, everyone involved could write a paper :-) This thread is the first one I have seen in which the mathematical content of the article is discussed, and even here, at the blog founded by William Dembski himself, only very few commentators have actually read the article.

True, I'm very critical when it comes to the work of the EIL, but I'm eager to discuss it. You don't have to be thrilled by this, but surely, you must see the opportunity: at the end of the day, at the very least we should be able to state our arguments more clearly. Indeed, I think your framework can be worse than counterintuitive: at the moment, I'm inclined to say that it is not even very useful. But perhaps you will change my mind.
DiEb
July 15, 2013 at 01:24 AM PDT
OT: Darwin as the Pinball Wizard: Talking Probability with Robert J. Marks II http://www.youtube.com/watch?v=Kxv3Q0VaX9E
bornagain77
July 14, 2013 at 01:44 PM PDT
WE: It seems that in many relevant cases, exhaustion of the space is not possible on accessible resources, a key point in design-related issues. It means that unless a search is aided by an oracle or the like, it is overwhelmingly unlikely to succeed, once relevant target zones are sufficiently rare. KF
kairosfocus
July 14, 2013 at 11:47 AM PDT
DiEb, I guess I wasn't finished.

When you say things like "it would be flattering to describe our work as counter-intuitive," I take that as an accusation of bad faith. Perhaps it was not intended as such.

It seems that two things have gotten conflated here. There are two questions. Firstly, can all searches be represented as a probability measure? Secondly, if we pick randomly from all searches, is that equivalent to picking uniformly over all probability measures? When I say that you pick the components of the search randomly, I'm only referring to answering the second question. That has nothing to do with the answer to the first question. A search can be modelled as a probability distribution regardless of your choice of the search components.

If I'm studying a particular search, such as exhaustive search or Easter egg search, I'm not picking the components at random. Picking the components at random is brought up in response to your objection that the discriminator should bias the search towards the target. It was not intended to answer your objection that a search cannot be represented as a probability measure.

As for a search being represented as a probability measure: the search is defined to include both the fitness function and the search algorithm. Together, these clearly form a stochastic process with no external inputs. Thus, the search can be represented as a probability distribution over the search space.
Winston Ewert
July 14, 2013 at 08:08 AM PDT
a) Generally, I think it is nice to know which specific real-world objects you are drawing conclusions about when you are looking at the representation. But that may be just a personal preference.

b) How in the world do you choose the inspector at random? You have the output of an oracle, or a fitness function, or this guy who shouts warmer and colder. In all of these cases a complete enumeration discloses the optimum. The S4S should work in these cases, too.

c) "An exhaustive search will fare better than a random guess." No, not really - not in your framework, not after you have applied a random discriminator to the search matrix of an exhaustive search.

d) "I'm finished with this discussion, DiEb. You've still failed to raise any interesting questions or demonstrate comprehension of what the paper is doing." I agree that some of the questions aren't very interesting, but they seem to be helpful. "What's more, you've chosen to accuse us of bad faith" - I'm quite critical of your work, but I do believe that you stand behind your mathematics, and I take it as a sign of good faith that you are willing to help your readers to understand the paper.
DiEb
July 14, 2013 at 06:56 AM PDT
DiEb, A representation is a many-to-one mapping from real-world objects to mathematical representations. Many real-world objects will be collapsed into one mathematical object. It won't, in general, be possible to go from the mathematical object to the real-world object. You are objecting to properties that every representation has.

I've explicitly stated that all six components are chosen randomly. Why in the world would you decide that I didn't really mean that the inspector was chosen randomly?

Even exhaustive search requires access to some information about the target. You've got to at least have the ability to determine whether a point is in the target. NFL assumes that you have that, but S4S does not. An exhaustive search will fare better than a random guess. But we are searching for a search. We are picking a search from all possible searches. Exhaustive search is much better than a random guess, and as a consequence of the paper's results, much rarer.

I'm finished with this discussion, DiEb. You've still failed to raise any interesting questions or demonstrate comprehension of what the paper is doing. What's more, you've chosen to accuse us of bad faith, which is a sign of someone looking to find fault rather than understand. As stated in my post, if you want responses from us on your ideas, you are going to have to publish them.
Winston Ewert
July 14, 2013 at 04:26 AM PDT
Addendum: If we wish to eliminate the condition of not having repetitions in our searches, we can observe just those searches which are terminated only after the whole search space has been visited: terminators with this property exist. Such searches may have length N, but can be much longer. The result is the same: the probability of finding the target during a complete enumeration of the search space is (much) less than one. I have to ask: What good is a model in which an exhaustive search doesn't fare much better than a single guess?
DiEb
July 14, 2013 at 02:50 AM PDT
Thank you for trying to clarify my simple misunderstandings of your theory. Here is another one:

So, let's take a finite search space with N elements. We are looking for a single element, so we try to maximize its characteristic function f. To keep everything finite, we don't allow repetitions, i.e., in our search each place can only be visited once. This is - as Macready and Wolpert observed - always possible by keeping a look-up table and thus doesn't change the set-up. Therefore, our search is completed in at most N steps.

(BTW: The claim that "each of the six components [is picked] at random" seems not to apply to the inspector: this is a fixed function for a search - in our case, the inspector returns the value of the fitness function. Of course, you can say that we pick the inspector at random out of the set of the one possible inspector.)

Let's take a look at all the searches which are ended by their terminator only after the N-th step, i.e., the subset of all exhaustive searches. The prize question: What is the probability of finding the target in such an exhaustive search?

Until now, everyone looking at such problems would have thought that this probability is one: we certainly visited the target and spotted that the function f takes its maximum there. But in the world of Dembski, Ewert, and Marks it is not, as a random discriminator takes its toll - and discriminators aren't obliged to return the target even if it was found and identified...

Counterintuitive? That is a flattering description: the discriminator's purpose seems to be to turn even a search which is successful by all human standards into a guess, to fit the idée fixe that each search can be "represented" by a measure on the search space.
DiEb
July 14, 2013 at 01:39 AM PDT
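The contrast behind DiEb's complaint can be made explicit (my sketch; "precise" versus random discriminator are informal labels here): the same exhaustive enumeration always succeeds under a discriminator that trusts the inspector, and drops to a 1/N guess under a randomly drawn one.

```python
import random

def search_matrix(n, target):
    # Exhaustive enumeration: every point is visited and inspected.
    return [(w, int(w == target)) for w in range(n)]

def precise_discriminator(matrix):
    # Trusts the inspector column: returns the best-inspected point.
    return max(matrix, key=lambda row: row[1])[0]

def random_discriminator(matrix, rng):
    # Drawn with no prior information: ignores the inspector column.
    return rng.choice(matrix)[0]

n, target = 8, 5
m = search_matrix(n, target)
rng = random.Random(0)
always = precise_discriminator(m) == target                      # True
rate = sum(random_discriminator(m, rng) == target
           for _ in range(100_000)) / 100_000                    # near 1/8
```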