Uncommon Descent | Serving The Intelligent Design Community

Robert Marks interviewed by Tom Woodward

Tom Woodward, author of DOUBTS ABOUT DARWIN and DARWIN STRIKES BACK, interviewed Robert J. Marks about his work at the Evolutionary Informatics Lab. For the podcast, go here:

Darwin or Design?
(program starts at 5:08 | actual interview starts at 7:52)

Comments
PaV, The fundamental problem, I think, is that Dembski and Marks have gotten the interpretation of NFL backwards. In the interview, Prof. Marks indicates that Wolpert and Macready said, "With a lack of any knowledge of anything, one search was as good as any other search." What they actually said was that one search algorithm may outperform another, but if the practitioner does not exploit prior knowledge of the problem in selecting an algorithm, then he or she has no formal justification for believing that the algorithm will outperform random search. Thus Prof. Marks misunderstands when he goes on to say,
What Wolpert and Macready said was, my goodness, none of these algorithms work as well as [any better than, he means to say] any other one on the average if you have no idea of what you're doing. If indeed that is true, and an algorithm works, then that means that information has been added to the search.
NFL says that random search works as well, on average over all functions, as any other algorithm. It is obviously wrong to say that information has been added when random search succeeds.
Noesis
January 24, 2011, 06:54 AM PDT
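To see the averaging claim on a toy scale, here is a minimal Python sketch (not from any published NFL paper): objective functions are drawn uniformly at random over a 32-point domain, both strategies get the same budget of distinct queries, and success means the global maximum was among the queried points. All parameter values and function names are illustrative.

import random

N = 32        # search space: integers 0..31, viewed as 5-bit strings
K = 8         # distinct queries allowed per run
TRIALS = 20000

def random_function():
    # A "black box" drawn uniformly at random: an independent value at each point.
    return [random.random() for _ in range(N)]

def blind_search(f):
    # Query K distinct points chosen uniformly, ignoring observed values.
    queried = random.sample(range(N), K)
    return max(f[x] for x in queried) == max(f)

def greedy_neighbor_search(f):
    # Query a random start, then keep querying an unvisited one-bit neighbor
    # of the best point seen so far; jump to a random unvisited point if none remain.
    queried = {random.randrange(N)}
    while len(queried) < K:
        best = max(queried, key=lambda x: f[x])
        options = [best ^ (1 << b) for b in range(5) if best ^ (1 << b) not in queried]
        if not options:
            options = [x for x in range(N) if x not in queried]
        queried.add(random.choice(options))
    return max(f[x] for x in queried) == max(f)

hits = {"blind": 0, "greedy": 0}
for _ in range(TRIALS):
    f = random_function()
    hits["blind"] += blind_search(f)
    hits["greedy"] += greedy_neighbor_search(f)

for name, h in hits.items():
    # Both rates hover around K/N = 0.25: averaged over all functions,
    # exploiting past queries buys nothing over blind sampling.
    print(f"{name:6s} success rate ~ {h / TRIALS:.3f}")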
PaV (10): First, note that the search space is just the domain of the function -- the set of possible inputs to the "black box." It is the function that may be algorithmically random, not the search space. It holds that almost all functions are algorithmically random even when the domain is modest in size. The only reason I stipulated that the domain and codomain be large (realistic) was to ensure that many search algorithms would outperform random search by a wide margin, and would be assigned high active information. This merely highlights how wrong it is to claim that high performance requires information. For smaller search spaces, algorithms that inexplicably outperform random search (i.e., on functions for which design is impossible) are associated with positive active information. The magnitude relative to random search is just not as great. More to come.
Noesis
January 24, 2011, 06:52 AM PDT
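The claim that almost all functions are algorithmically random is a counting argument: there are vastly more functions on even a modest domain than there are short descriptions that could compress them. A back-of-the-envelope sketch in Python, assuming a 100-point domain with binary outputs and a 70-bit description length (numbers chosen only for illustration):

# Functions from a 100-point domain to {0, 1}: there are 2**100 of them.
n_functions = 2 ** 100

# Binary descriptions (programs) shorter than 70 bits: fewer than 2**70 of them,
# so at most 2**70 of those functions can be described in under 70 bits.
n_short_descriptions = 2 ** 70

fraction_compressible = n_short_descriptions / n_functions
# At most a 2**-30 fraction of the functions are compressible by 30 or more bits.
print(f"Fraction compressible to under 70 bits: at most {fraction_compressible:.1e}")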
Noesis:
Almost all functions are devoid of order expressible in an algorithmic language. If the objective is to find the absolute maximum of such a function, and the domain and codomain are large, then many algorithms outperform random search by a wide margin. But it is impossible to identify those algorithms a priori. And it is impossible to explain a posteriori why they work well. High performance on an algorithmically random function is, for want of a better term, happenstance. Yet you and Prof. Marks convert the high performance into high active information.
You've earlier referenced NFL theorems. From my limited understanding, most NFLT deal with huge search spaces, and, indeed, a priori or a posteriori, it's hard to know much about how, or why, an algorithm would work well---or better than a random search---when dealing with such huge search spaces. However, it seems to me that Dr. Dembski and Dr. Marks deal with algorithms that have tractable, or nearly tractable, search spaces, within which such a delineation of active information has applicability. (And this is measurable, I believe.) Certainly Dr. Dembski can answer for himself, but it seems like you're equating two very different types of search spaces. I would think that when it comes to NFLT, "active information" has no relevancy (just as no observer, no matter how long he "observes," can come up with an algorithm that helps to "order" the search space).
PaV
January 23, 2011, 08:00 PM PDT
Dr. Dembski, The kind of search addressed by the no-free-lunch theorem interacts with a function strictly as a "black box" that transforms inputs from the domain (search space) to outputs in the codomain. A search begins with no information as to the workings of a box realizing function f. It does not gain information exploitable in extension of the search from observations f(x_1) = y_1, ..., f(x_n) = y_n, because they pose no logical constraint whatsoever on f(x) for any x yet to be input to the box. (The ordering of the codomain in quality is irrelevant.) Thus I have to say that a search algorithm that uses the observations methodically exhibits bias.

It is the job of the search practitioner to acquire knowledge of exploitable order in the "inner workings" of the box through physical observation, and to select a correspondingly biased search algorithm. There is no free lunch in the sense that such design is required to justify belief that an algorithm will outperform random search. You and Prof. Marks turn this around to indicate that an algorithm greatly outperforms random search only if it is designed to solve the problem, and this is not the case.

Almost all functions are devoid of order expressible in an algorithmic language. If the objective is to find the absolute maximum of such a function, and the domain and codomain are large, then many algorithms outperform random search by a wide margin. But it is impossible to identify those algorithms a priori. And it is impossible to explain a posteriori why they work well. High performance on an algorithmically random function is, for want of a better term, happenstance. Yet you and Prof. Marks convert the high performance into high active information.

In sum, a search process, in itself, never has exploitable information about the black box. And when a search process performs well, that in itself does not justify a claim that the "search-forming process" was informed. In theory, there is almost always no reason for superior performance. Thus I believe that log-probabilities of success in search should not be regarded as information.
Noesis
January 23, 2011, 12:48 PM PDT
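The "no logical constraint" point above can be checked by simulation: if the box realizes a function drawn uniformly at random, conditioning on any observed input-output pairs leaves the value at an unseen input as unpredictable as before. A minimal Monte Carlo sketch, assuming a ten-point domain with binary outputs (all specifics are illustrative):

import random
from collections import Counter

DOMAIN = 10
TRIALS = 100000

# Observations the search has already made at inputs 0, 1, 2 (values chosen arbitrarily).
observed = {0: 1, 1: 0, 2: 1}
unseen_input = 7

matches = Counter()
for _ in range(TRIALS):
    # Draw a function uniformly at random over all 2**DOMAIN binary functions.
    f = [random.randint(0, 1) for _ in range(DOMAIN)]
    # Keep only the functions consistent with what the search has seen so far.
    if all(f[x] == y for x, y in observed.items()):
        matches[f[unseen_input]] += 1

total = sum(matches.values())
for value, count in sorted(matches.items()):
    # Both conditional probabilities come out near 0.5: the observations tell the
    # search nothing about inputs it has not yet tried.
    print(f"P(f({unseen_input}) = {value} | observations) ~ {count / total:.3f}")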
tragic, as to your postulate #2 -- "2) Computational Frontloading - Organisms are given computational power by which they can compute solutions to problems that are intractable for blind search" -- quantum computation has indeed been hypothesized within DNA:

Quantum Computing in DNA - Hameroff
Excerpt: Hypothesis: DNA utilizes quantum information and quantum computation for various functions. Superpositions of dipole states of base pairs consisting of purine (A,G) and pyrimidine (C,T) ring structures play the role of qubits, and quantum communication (coherence, entanglement, non-locality) occur in the "pi stack" region of the DNA molecule. ... We can then consider DNA as a chain of qubits (with helical twist). Output of quantum computation would be manifest as the net electron interference pattern in the quantum state of the pi stack, regulating gene expression and other functions locally and nonlocally by radiation or entanglement.
http://www.quantumconsciousness.org/views/QuantumComputingInDNA.html

And indeed, quantum computation would go a very long way towards explaining the very fast "search and repair" mechanisms witnessed in DNA:

Quantum Dots Spotlight DNA-Repair Proteins in Motion - March 2010
Excerpt: "How this system works is an important unanswered question in this field," he said. "It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It's akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour." - Dr. Bennett Van Houten. Of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time. These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
http://www.sciencedaily.com/releases/2010/03/100311123522.htm

Yet in spite of this evidence of, and necessity for, quantum computation within DNA to solve such search and repair problems, tragic, it should also be noted that quantum computation, though very fast at "search and repair" problems, is known to be much more limited when searching for a novel functional protein. Despite some very optimistic claims, it seems future quantum computers will not fare much better at finding functional proteins in sequence space than even an idealized "material" supercomputer of today could:

The Limits of Quantum Computers - March 2008
Excerpt: "Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today's computers only modestly. This realization may lead to a new fundamental physical principle"
http://www.scientificamerican.com/article.cfm?id=the-limits-of-quantum-computers

The Limits of Quantum Computers - Scott Aaronson - 2007
Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to "solve impossible problems in an instant" by trying exponentially many solutions in parallel. In this talk, I'll describe four results in quantum computing theory that directly challenge this view. ... Second, I'll show that in the "black box" or "oracle" model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states."
http://www.springerlink.com/content/0662222330115207/

Here is Scott Aaronson's blog, in which he refutes recent claims about P=NP (of note: if P were found to equal NP, a million-dollar prize would be awarded to the mathematician who provided the proof that NP problems could be solved in polynomial time):

Shtetl-Optimized
Excerpt: Quantum computers are not known to be able to solve NP-complete problems in polynomial time.
http://scottaaronson.com/blog/?p=456

Protein folding is found to be an intractable NP-complete problem by several different methods. Thus protein folding will not be able to take advantage of any speedup that quantum computation may offer to other problems that can be solved in polynomial time:

Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results - 2009
Excerpt: Protein Folding: Computational Complexity. 4.1 NP-completeness: from 10^300 to 2 Amino Acid Types. 4.2 NP-completeness: Protein Folding in Ad-Hoc Models. 4.3 NP-completeness: Protein Folding in the HP-Model
http://www.cs.brown.edu/~sorin/pdfs/pfoldingsurvey.pdf

OT: Creed, a Christian band that does their absolute best to try to upset old church ladies:
Creed - Overcome (Official Music Video)
http://www.youtube.com/watch?v=8ocxkmdadPE
bornagain77
January 23, 2011, 08:40 AM PDT
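One way to put rough numbers on the quantum computing caveat above: for unstructured "black box" search, the known generic quantum speedup is Grover's, which is only quadratic. A rough Python calculation, assuming a 150-residue protein and treating sequence space as an unstructured search for a single target (all figures are illustrative; real fitness landscapes and target sizes differ):

import math

residues = 150
sequence_space = 20 ** residues                # 20 amino acids per position
classical_queries = (sequence_space + 1) // 2  # expected queries for blind search without replacement
grover_queries = math.isqrt(sequence_space)    # on the order of sqrt(N) oracle calls

print(f"Sequence space size:    about 10^{round(math.log10(sequence_space))}")
print(f"Blind classical search: about 10^{round(math.log10(classical_queries))} expected queries")
print(f"Grover-style search:    about 10^{round(math.log10(grover_queries))} oracle calls")
# Even with the quadratic quantum speedup, the count stays near 10^98 oracle calls,
# still an astronomically large number.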
Noesis, There's no circularity here, but there are two senses of active information at play. Think of kinetic energy -- there's the measurement of kinetic energy of a system, which can be defined operationally, and then there's the underlying entity -- the actual kinetic energy of the system. Active information is, in the first instance, defined relationally: there's an inherent difficulty of a problem as gauged by blind search; there's the reduced difficulty of a problem as gauged by an alternate search. This reduction in difficulty or improvement in search capacity is measurable and, indeed, is measured by active information. But then the question arises: what enabled the second search (the alternate search) to perform better than the first search (the blind search)? Here we posit that the second search had an infusion of active information, now treated as a hypothesized entity. This is in keeping with the standard scientific practice of treating measurements as reflecting underlying entities.
William Dembski
January 23, 2011, 08:29 AM PDT
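In terms of the measurements described above: if p is the probability that blind search hits the target and q the probability that the alternate search does, the endogenous information is -log2(p), the exogenous information is -log2(q), and the active information is their difference, log2(q/p). A minimal Python sketch with made-up values for p and q:

import math

def active_information(p_blind, q_alternate):
    # Active information in bits: the reduction in search difficulty, log2(q/p).
    endogenous = -math.log2(p_blind)      # difficulty of the problem for blind search
    exogenous = -math.log2(q_alternate)   # remaining difficulty for the alternate search
    return endogenous - exogenous

# Illustrative numbers only: blind search hits the target with probability 1e-9,
# the alternate search with probability 1e-3.
p, q = 1e-9, 1e-3
print(f"Endogenous information: {-math.log2(p):5.1f} bits")
print(f"Exogenous information:  {-math.log2(q):5.1f} bits")
print(f"Active information:     {active_information(p, q):5.1f} bits")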
I'm an "In the beginning" guy myself, but I've thought about frontloading. It seems to me there are three possible types of frontloading mechanisms:

1) Mutational Frontloading - Mutational pathways are laid out in advance through which the organism advances basically by Darwinian means. This would require intelligent input for the organism to start at exactly the right spot so Darwinian processes can do the job.

2) Computational Frontloading - Organisms are given computational power by which they can compute solutions to problems that are intractable for blind search.

3) Programmed Frontloading - The genetic program contains all information right from the start but is programmed to "advance" species through some sort of very long term timing mechanism.
tragic mishap
January 23, 2011, 07:43 AM PDT
Dr. Dembski, Comment 4 was addressed to you.
Noesis
January 23, 2011, 06:20 AM PDT
You define the active information of an algorithm in terms of its performance. Then you say that an algorithm performs well because it has active information, when it actually has active information because it performs well. It seems to me that you're caught up in circular reasoning. What have I missed?
Noesis
January 23, 2011, 05:59 AM PDT
Thanks for your response. I don't believe all targeted searches require "gradual, narrowing down, step by step, on the target". It all depends on how they are written. For example, with "weasel" Dawkins could have had a high mutation rate such that the target is reached in, say, just a few steps. And with genetic recombination along with gene duplication, followed by function-changing mutations and integration into the system, there can be profound effects in a short period of time, i.e., one or a few generations. In that scenario saltations would/could rule.

As for the evidence, well, the same evidence that points to design would mean that there was/is some process(es) that the designer(s) used. And instead of coming around and tinkering with the original design to get the desired outcome, it would be much easier to design all that in -- "In the beginning..." so to speak. And a targeted search does not have to start with some jumble of chemicals. It could start with whatever the designer(s) choose -- designers choose the starting point(s), the resources, the end point and the algorithm to "make it so".

Something else for you to consider: when contractors build a building, that is a targeted search. When cars are designed and manufactured, that would be a targeted search. When you assemble a piece of furniture that came unassembled, that is also a targeted search. The intermediates of the buildings are no longer there, just the final product. The intermediates of the cars are gone, as are the intermediates of that assembled piece of furniture.
Joseph
January 23, 2011, 05:29 AM PDT
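The "weasel" point above is easy to experiment with. Here is a minimal sketch of the familiar cumulative-selection toy (not Dawkins's original program; the mutation rate and offspring count are illustrative parameters), showing how strongly the generation count depends on those choices:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate):
    # Copy the parent, changing each character independently with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(phrase):
    # Number of positions that already match the target phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(rate, offspring=100):
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        generations += 1
        # Keep the best of the parent and its mutant copies (cumulative selection).
        candidates = [current] + [mutate(current, rate) for _ in range(offspring)]
        current = max(candidates, key=score)
    return generations

# With this particular scheme, the generation count is very sensitive to the rate:
# moderate rates converge quickly, while very high rates slow the final stages,
# because mutants then rarely preserve every letter already matched.
print("Generations at  5% per-character mutation:", weasel(rate=0.05))
print("Generations at 20% per-character mutation:", weasel(rate=0.20))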
Sure, it's possible, Joseph. But what's the evidence? Searches tend to be gradual, narrowing down, step by step, on the target, and thus imply lots of intermediaries. Do we see such chains of intermediaries in the history of life? Darwinists count the fossil record as a wonderful vindication of their theory. From a less biased stance, the fossil record doesn't look nearly so good.
William Dembski
January 22, 2011, 07:01 PM PDT
Dr. Dembski, What do you think of the idea that a targeted search was/is a plausible design mechanism employed by the designer(s) of living organisms? (Yes, I know ID is not about specific mechanisms, but even you said no one is prevented from looking into it.)
Joseph
January 22, 2011, 05:17 PM PDT
