Uncommon Descent Serving The Intelligent Design Community

“Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information”


Here’s our newest paper: “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” by William A. Dembski and Robert J. Marks II, forthcoming chapter in Bruce L. Gordon and William A. Dembski, eds., The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, Del.: ISI Books, 2009).

Click here for a PDF of the paper.

1 The Creation of Information
2 Biology’s Information Problem
3 The Darwinian Solution
4 Computational vs. Biological Evolution
5 Active Information
6 Three Conservation of Information Theorems
7 The Law of Conservation of Information
8 Applying LCI to Biology
9 Conclusion: “A Plan for Experimental Verification”

ABSTRACT: Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law. LCI characterizes the information costs that searches incur in outperforming blind search. Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function—information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success. More formally, LCI states that raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). LCI shows that information is a commodity that, like money, obeys strict accounting principles. This paper proves three conservation of information theorems: a function-theoretic, a measure-theoretic, and a fitness-theoretic version. These are representative of conservation of information theorems in general. Such theorems provide the theoretical underpinnings for the Law of Conservation of Information. Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
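The abstract's q/p bound can be made concrete with a few lines of arithmetic. A minimal sketch (mine, not from the paper): active information is the log-ratio of the assisted and blind success probabilities, and LCI asserts that at least this many bits must be supplied to the search from outside.

```python
import math

def active_information(p, q):
    """Bits of active information displayed by a search that succeeds
    with probability q over a blind search that succeeds with probability p."""
    return math.log2(q / p)

# A blind search over 2^20 equally likely outcomes succeeds with p = 2^-20.
# An assisted search succeeding half the time displays 19 bits of active
# information; per LCI, at least that much information was paid upstream.
p = 1 / 2**20
q = 0.5
print(active_information(p, q))  # 19.0
```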

Comments
R0b, Thank you for the reply. You wrote:
Consider an algorithm that finds the WEASEL target with the following logic: It randomly selects points in the search space until it finds a point whose fitness plus the number of the query is even. In other words, if it’s the 3rd query and the fitness is 127, then the condition is satisfied. After finding such a point, it immediately goes to “METHINKS IT IS LIKE A WEASEL”.
This strategy would no longer be using a standard evolutionary strategy, which could find different targets simply by using different fitness functions, but would constitute a new search strategy/algorithm. We could also say "What about an algorithm that simply tries one query, no matter what the fitness function, then goes to the target?" or any other variation of that. But these are different search strategies, so the fitness function method I outlined isn't directly applicable, since they aren't really evolutionary strategies in the normal sense of the word. However, your example wouldn't escape the LCI. Going with your new set-up, we can see that there exists a similar set-up for every target in your lower level search space: for example, it could go to "Meblinks it is like a weasel" after satisfying the condition. So why did we choose the one algorithm that goes to our target rather than to "Meblinks...", "Rethinks...", "hstjdins..." or any other of the 10^40 permutation choices we have? More importantly, what is the minimum informational cost incurred by going from the set of all such algorithms (bounded by our original search space) to the set that chooses "Methinks..." with the same efficiency as the algorithm you constructed? As I mentioned, the "goto" target of your algorithm could have been any of the roughly 10^40 permutations in the original search space, so we have at least 10^40 algorithms to choose from. The search for your particular algorithm (or one that performs equivalently well) is as hard as, and likely much harder than, our original search. The LCI still holds. Atom
May 12, 2009 10:40 AM PDT
Atom, Okay, I think I've finally got it. Sorry it took so long to sink in. I think your idea for defining a higher-order baseline is a good one, but I don't believe it works with Marks and Dembski's framework. First of all, back in [168] where I agreed with your point about all algorithms performing equally over the whole set of fitness functions, I was wrong. Marks and Dembski's model is not, in general, NFL-compatible. The problem is that Wolpert and Macready define the goodness of a search in terms of the codomain of the fitness function, but Marks and Dembski define the target independent of the fitness function, as Tom English pointed out above. Consider an algorithm that finds the WEASEL target with the following logic: It randomly selects points in the search space until it finds a point whose fitness plus the number of the query is even. In other words, if it's the 3rd query and the fitness is 127, then the condition is satisfied. After finding such a point, it immediately goes to "METHINKS IT IS LIKE A WEASEL". No matter what fitness function we use, this algorithm will likely find the target within a few queries. So how do we apply your condition that the higher-order space of fitness functions must have the same average performance as the null search? I think that coming up with generally applicable constraints on the higher-order space definition is harder than it appears. As it says in the paper, the ways to search and to metasearch are endlessly varied, and the higher-order space definition can include or exclude any aspect of any conceivable search. R0b
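R0b's hypothetical algorithm can be sketched as follows (my reading of his comment; the character-match fitness function is a stand-in, which is his point: the parity condition is satisfied about half the time no matter what fitness function is plugged in, so the hard-coded jump to the target fires within a few queries).

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(s):
    # Stand-in fitness function; the search below barely consults it.
    return sum(a == b for a, b in zip(s, TARGET))

def robs_search(max_queries=1000):
    """Sample random points until (fitness + query number) is even,
    then go straight to the hard-coded target."""
    for query in range(1, max_queries + 1):
        point = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        if (fitness(point) + query) % 2 == 0:
            return TARGET, query
    return None, max_queries

found, queries = robs_search()
print(found, queries)  # the target, almost always within a handful of queries
```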
May 12, 2009 09:11 AM PDT
" in a search performance on the lower level search " = " in improved search performance on the lower level search" Sorry, I type too fast sometimes. AtomAtom
May 11, 2009 03:18 PM PDT
R0b wrote in the next post:
1. Information cost [of the higher order reduction] depends on the definition of the higher-order search space.
Correct and agreed.
2. We can define the higher-order search space to contain only good searches, thus making the information cost zero and falsifying the LCI.
No we can't, since doing so would result in improved search performance on the lower level search. If a reduction leads to improved search performance on the lower level search, then we cannot ignore that cost. If it leads to no search improvement (and no hindrance, since we can contribute negative active information), then we can ignore it.
3. In response to the objection that this higher-order search space must incur an information cost from an even higher-order search space, we can point out that this is true for all search spaces that have a non-zero probability of yielding a good search. If the LCI requires us to regress probabilities all the way up, then we’re stuck with an infinite information cost in every case.
Either that, or a source that can generate information without relying on search spaces. But this is a side issue. Atom
May 11, 2009 03:16 PM PDT
R0b wrote:
One counterintuitive aspect of Marks and Dembski’s framework is that information cost is not based on the average performance of elements in the higher-order search space. Rather, it’s based on the fraction of those elements that perform at a level of at least q. Information cost does not tell us whether the average performance of the higher-order space is better or worse than the null search. It only tells us what the odds are of randomly selecting a search that performs at least as well as the given alternate search.
R0b, You've almost got it. I didn't say that the average performance of the higher level search was used to calculate the incurred cost, only that it can be used as an objective basis for deciding which informational costs are relevant, and hence, must be accounted for. It also provides a handy method for setting an objective baseline for the higher level informational cost measure. My reply has been consistent and I fail to see any issue with using the method I outlined to define the higher order space in a non-ad hoc way. Atom
May 11, 2009 03:07 PM PDT
Atom, Hopefully we've gotten past the confusion about average performance vs. information cost. I can't remember my train of thought from a few days ago, so I'll just reiterate the point that you're disputing: 1. Information cost depends on the definition of the higher-order search space. 2. We can define the higher-order search space to contain only good searches, thus making the information cost zero and falsifying the LCI. 3. In response to the objection that this higher-order search space must incur an information cost from an even higher-order search space, we can point out that this is true for all search spaces that have a non-zero probability of yielding a good search. If the LCI requires us to regress probabilities all the way up, then we're stuck with an infinite information cost in every case. R0b
May 11, 2009 10:03 AM PDT
Atom, You dropped my qualifier in "physical probability." See more on this in the new thread Bill started. I'm guessing that you, like me, are more engineer than philosopher. I accuse myself of a serious error in neglecting computational complexity in my investigation of NFL. Dembski and Marks are making the same error in focusing entirely on information costs. There are huge distinctions in search programs when time and memory are limited. I don't have to go with Seth Lloyd in saying that the universe literally is a computer to say that there are analogous distinctions in nature. This discussion has turned interesting at just the wrong time. I really need to put on the blinders and deal with the end-of-semester drudge work. T M English
May 10, 2009 04:36 PM PDT
"no logically reason"* => "no logical reason"Atom
May 9, 2009 01:24 PM PDT
Dr. English, an addendum, I have been thinking about my response and wanted to make a distinction. When I say we could possibly deal with the physical constraints (which I referred to as a form of "necessity"), what I meant was physical necessity, given the number of particles in the universe. I don't want this confused with logical necessity, which wouldn't make sense to treat as contingent (obviously, by definition). I just wanted to make sure I was clear on that point. Given that there is no logical reason we're aware of that the universe has this number of particles, which causes a reduction to take place, then measuring a cost on that reduction could be meaningful (via the tri-level search outlined above). If however there is a logical necessity to that number of particles, the reduction requires no explanation, as necessary entities are their own explanation. Atom
May 9, 2009 01:20 PM PDT
Atom, I think we're getting pretty close to the same page. I think a major discrepancy in our thinking is your association of information cost with performance averaged over all of the functions in the higher-order space. For instance:
So if Reduction A results in a subset that still only performs as well as blind search, either a) the reduction didn’t improve search performance, and so incurs no informational cost...
[Emphasis mine] One counterintuitive aspect of Marks and Dembski's framework is that information cost is not based on the average performance of elements in the higher-order search space. Rather, it's based on the fraction of those elements that perform at a level of at least q. Information cost does not tell us whether the average performance of the higher-order space is better or worse than the null search. It only tells us what the odds are of randomly selecting a search that performs at least as well as the given alternate search. Consider that the set of functions that indicate proximity to a target performs no better on average than the larger set mentioned in endnote 49, i.e. they both perform on average the same as the null search. Yet Marks and Dembski say that the reduction from the latter to the former entails a heavy information cost. More later, probably after Mother's Day. R0b
May 9, 2009 12:50 PM PDT
Dr. English, Thank you for your contribution. I feel that your exasperation, however, has led you to make a couple leaps towards the end of your comment. You wrote:
We are within the universe, and to posit the existence of an entity that can observe a succession of states of the universe begs the question of the existence of a supernatural entity.
Whoa whoa whoa. No one I'm aware of was discussing a "supernatural entity" nor assuming one. Dembski and Marks's paper is about the mathematics underlying conservation of information; to begin discussing metaphysical interpretations is beyond this thread, as the contents of the paper itself have barely begun to be discussed. The universe can only instantiate at most a fraction of the total number of possible fitness functions; on that point you are correct. So a reduction has already taken place due to the physical constraints. But this reduction, inasmuch as it improves the performance of our original search, would incur an information cost of at least the active information, if the math in the paper holds. You have not criticized the math, only its application, so I'll assume it does hold. Now, you may argue "You cannot calculate this informational cost, since we don't know the 'probability' of the universe." It is true that we don't know the probability of the universe. But the paper also provides a measure theoretic version of the theorem, which would apply if the probability distribution differed substantially from the uniform in a way that eventually assisted our lowest level search. (Even if the constraints are necessary, that is probability of 1 for that one state and zero for the others.) In short, you'd have a search-for-a-search-for-a-search. The universe would be assigned a (non-?)uniform probability (reduction 1, measure theoretic version) for assigning the set of possible fitness functions (reduction 2, measure theoretic version), from which we choose our actual fitness function (reduction 3, fitness theoretic version). Unless I'm missing something (which is always a possibility) the LCI would seem to also hold vertically, for your tri-layered search. Atom
May 9, 2009 11:25 AM PDT
Atom (and R0b), It seems that the key claim is that a regress of mechanism gets materialistic explanation nowhere in accounting for active information because there are at least as many alternatives in a higher-order space of material configurations as in the lower-order space. Dembski and Marks seem not to object to treating the known universe as a finite computing machine, and I'm going to proceed more or less along those lines. A huge fraction of alternatives we can allude to in mathematics have no physical realization simply because they require excessive resources to "fit in the universe." In the third theorem, there are (M + 1) ^ K fitness functions, where K is the size of the base-level search space Omega. For any binary representation (e.g., the machine language of the computer you're using), almost all fitness functions have no description of length much less than K log (M + 1). Even though Dembski and Marks indicate that M is large, I set M = 1 for simplicity. Now the typical fitness function requires K bits to describe. As Dembski and Marks observe, if Omega is the set of all length-100 sentences over a 20-amino-acid alphabet, then K is about 10^130. But Seth Lloyd estimates that the observed universe registers at most 10^120 bits of information. The upshot is that if the entire known universe were searching a space of descriptions of fitness functions, only a minuscule fraction of the descriptions would be sufficiently compact to arise: 1 / 2^10000000000 [10 zeros]. I have to note the absurdity of this scenario. We are within the universe, and to posit the existence of an entity that can observe a succession of states of the universe begs the question of the existence of a supernatural entity. Similarly, we cannot regard the evolution of the universe as a search process. We cannot frame the universe that has in its unfolding included us as an alternative to a null universe. There is no way to assign a physical probability to the universe. 
Thus we cannot associate active information with the universe. Some fitness functions are physically possible, and others are not -- and you cannot attribute the mere existence of physical constraints to intelligence. T M English
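English's counting argument above is easy to reproduce (a sketch of his arithmetic only; the length-100 protein example and the ~10^120-bit figure for Lloyd's estimate are from his comment):

```python
import math

# Base-level space Omega: all length-100 sequences over a 20-amino-acid
# alphabet, as in English's example. K = |Omega|.
K = 20 ** 100
print(math.log10(K))   # ~130.1, i.e. K is about 10^130

# With M = 1, a typical fitness function needs about K bits to describe,
# far exceeding Lloyd's ~10^120-bit estimate for the observed universe.
print(K > 10 ** 120)   # True
```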
May 8, 2009 07:53 PM PDT
Addendum, I should make explicit that I'm not considering the trivial set of only one fitness function that assigns the same value to all permutations as the baseline set, or for that matter, any set that is smaller than our reduced set. For a reduction to make sense, the reduced set needs to be a subset of the baseline set. Sorry I didn't spell that out explicitly. Atom
May 8, 2009 05:24 PM PDT
R0b, I agree that there may have been some talking past points. I'll try to get this back on track. I thought you had expanded the higher order space to include different search strategies (this wasn't clear to me), so if I mischaracterized your argument, I apologize. Let us assume an evolutionary strategy is a given. We will further assume a base search space, which will be the permutation space of all 24 letter long base 27 (per our alphabet) strings. We agreed this space had 27^24 elements. Now our evolutionary strategy will have to use a fitness function (following the standard implementation of an evolutionary search). What is the search space of possible fitness functions? First, we'd want to use deterministic functions that assign only one fitness value to each element in our original space. Furthermore, we want to limit the number of fitness functions, which we will do in two ways. First, we limit the function to only take as inputs the elements of our original search space (in other words, the domain is all x such that x is a permutation in our original set.) Secondly, we will limit the possible output values of the function to integers between 0 and n, so that our search space becomes well defined. This I will label Reduction A. Given this set-up, we can now calculate the informational cost of choosing one fitness function from that new set in a straightforward manner. (Call this Reduction B.) But the question becomes, if I understand you correctly, why do we include the informational costs of Reduction B and not of Reduction A (which is infinite)? More importantly, if we can ignore the informational cost of Reduction A, why can't we also ignore the cost of Reduction B? If this is not your position, then please clarify, because I have misunderstood you. If so, I will reiterate my earlier response.
Reduction A is the reduction from the set of all possible fitness functions (setting n to ∞, effectively) which as you correctly point out is a reduction of an infinite set of possibilities to a finite set, which would incur an infinite informational cost. But as I correctly pointed out, Reduction A does not improve search performance over blind search. Showing this is easy. Imagine you perform your search using fitness function 1 of your reduced set (assuming that you can order the fitness functions in our reduced set, which you can), then use fitness function 2, then fitness function 3, etc, until you've performed the same search using all of the fitness functions. You then average the performance of all the functions and will find that your evolutionary strategy performed only as well as blind search. So if Reduction A results in a subset that still only performs as well as blind search, either a) the reduction didn't improve search performance, and so incurs no informational cost, or b) it did improve search performance, which means that the original set of all possible functions (the set prior to Reduction A, which is infinite) somehow performs worse than blind search. But since that set includes all possible fitness functions, it will perform as well as blind search, per the NFL theorems. (I could be wrong on this point, since my understanding of the NFL isn't as strong as some of the other commenters, but I think the NFL would apply in this case as well.) So if Reduction A didn't improve search performance, it is irrelevant to our calculation. We can go further than Dembski (I believe) and define our higher order search baseline as the smallest set that 1. assigns a value to each and every permutation in the original space and 2. still, when averaged, performs only as well as blind search. That would be an objective baseline to measure our subsequent reductions from. 
If you find this disagreeable, then please show a reduction from a set that performs as well as blind search to one that performs better, and show how this reduction does not incur an informational cost of at least the active information. Atom PS You are correct in your point on Shannon info about reduction in receiver uncertainty. However, I don't think you understood my larger point, being that we could inflate any uncertainty/probability/information measure, even that of an observer, by including irrelevant reductions. (What about the reduction for the receiver to limit him to his current state, from all possible messages he could have expected, to just a few? We only consider the issue from a baseline, being defined for us in the Shannon case as the receiver's current state of uncertainty, but implicitly defined for us in the Dembski case, using the criterion I outlined.) Regardless, it was a side issue which isn't necessary to understanding my argument and I brought it up only as a way of hopefully getting you to see my original point. I will drop it.
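Atom's averaging claim can be checked exhaustively at toy scale. This is my sketch, not anything from the thread: a 4-point search space and a single-query "pick the fittest point" strategy stand in for the full 27^24 space and a real evolutionary search, and every fitness function in the reduced set is enumerated.

```python
from itertools import product
from fractions import Fraction

N = 4        # toy search space {0, 1, 2, 3}
VALUES = 3   # fitness values {0, 1, 2}
TARGET = 0

def success_prob(f):
    """Chance that 'query the fittest point, breaking ties uniformly at
    random' hits TARGET, for one particular fitness function f."""
    best = max(f)
    winners = [i for i, v in enumerate(f) if v == best]
    return Fraction(int(TARGET in winners), len(winners))

# Enumerate every fitness function f: {0..N-1} -> {0..VALUES-1}
# (VALUES^N of them) and average the strategy's success probability.
probs = [success_prob(f) for f in product(range(VALUES), repeat=N)]
average = sum(probs) / len(probs)
print(average)  # 1/4: exactly the single-query blind-search probability
```

By the symmetry Atom appeals to, for every function steering the strategy toward the target there are functions steering it away, and the average lands exactly on blind search.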
May 8, 2009 04:57 PM PDT
Joseph (#153):
You don’t need Dembski to respond. All you need to do is take something that is alleged to have active information and show it can arise via nature, operating freely.
I could be wrong here, but IF I read Dembski and Marks correctly, then if active information was found to arise "via nature, operating freely", then this would, in fact, have been caused by intelligence smuggling the information in somehow. IF this is the case, then I guess that the oft repeated argument that ID would be falsified if natural processes could produce CSI is wrong. Hoki
May 8, 2009 04:48 PM PDT
Atom, with regards to Shannon information: Shannon info is a relative measure based on epistemic probability. It measures the reduction of uncertainty in the receiver, so it's explicitly relative to the receiver's prior knowledge. The active info framework, on the other hand, attempts to provide absolute measures by regressing probabilities to a point of no prior conditions. But it can't do so, because an ultimate, unconditional search space is an undefined search space, and there's no way to derive probabilities from it. R0b
May 8, 2009 01:44 PM PDT
Atom:
BTW, we can always arbitrarily inflate any probability calculation (”what are the chances of getting heads on a coin flip…with that particular coin out of all the coins of history?”), so if your objection were viable, we could never meaningfully calculate the probability of anything.
But that's not inflating a probability calculation, it's two different probabilities, i.e. the probability of flipping a heads given that I have a fair coin and I flip it, and the probability of flipping a heads with that particular coin, where me having that coin is not a given. But elaborating on your point, and hopefully not distorting it too much, let's imagine two different conversations. Conversation #1. Atom: 1982 pennies are slightly biased. If I flip this 1982 penny, there is a 51% chance that it will come up heads. R0b: But what are the odds of this being a 1982 penny? Assuming that you got it from my coin jar, the odds are 1 out of 367. (I fastidiously keep track of the dates on all of the coins in my jar.) Or assuming that you got it from a penny collection that has 1 penny from every year from 1970 to 2009, the odds are 1 in 40. Or assuming-- Atom: Uh, I gotta go. Conversation #2. Dawkins: This WEASEL search has a much better probability of success than random sampling. There's a 50% chance that it will find the target in 40 generations. Marks and Dembski: But what are the odds of it being that efficient? Assuming that you chose the fitness function from all functions that map 28-letter sequences to an 8-bit fitness value, then the odds are 10^-3742. Or assuming that you chose the search from all possible searches uniformly distributed according to the distribution they confer on the search space, the odds are 10^-8930. Or assuming-- Dawkins: Uh, I gotta go. The point being that the information cost depends on what we assume about the origin of the search. If the LCI doesn't hold for all possible assumptions, which it doesn't, then it should place explicit restrictions on those assumptions. R0b
May 8, 2009 01:34 PM PDT
Atom, we've definitely talked past each other, and I'm not sure how to get back on the same page. In [157] you described a higher-level space of just fitness functions, not fitness functions and strategies. I had assumed that an evolutionary strategy was a given, and the higher-level search consisted of finding a good fitness function. My X+Y example describes a higher-order space of functions only, not strategies, so there is no reduction of strategies involved. And since the strategy is a given, there isn't any NFL-like comparison of different strategies' performance over all fitness functions. I'll have to wait until my next comment to address the question of whether this issue applies to all probability calculations and to Shannon info. R0b
May 8, 2009 12:34 PM PDT
jerry:
The active information is the map and boat in the treasure hunt. In evolutionary biology it is ??? My guess it is intelligence. Because intelligence can certainly do it just as it provides the map and boat for the treasure hunters.
But if an intelligent entity makes a map based on pre-existing information, she is simply "shuffling around pre-existing information" as Marks and Dembski say, which unintelligent entities are also capable of doing. And I know of no evidence that an intelligent entity can make a map to a treasure without having pre-existing information regarding the location of the treasure. If anyone thinks that intelligence does have that capability, I'll reissue a challenge: Can someone use their intelligence to find a 32-character (capital letters and spaces) string with an MD5 hash of cb6ba5a8daf75b7d50fef95cecae78d7? If intelligent agents create active info, as Marks and Dembski claim, then they can only do so by luck, not by any inherent ability to create active info. Active info is defined as better-than-random probability of something occurring successfully. If there is a better-than-random probability that active info will be created, then that active info already exists, and its so-called creation is actually a mere shuffling of it. Therefore, it's self-contradictory to speak of something that has a better-than-random chance of successfully creating active info. R0b
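R0b's challenge is checkable but not invertible: verifying a guess is one line, while finding the preimage means blind search over 27^32 (about 6e45) candidate strings. A sketch (the hash is from his comment; the candidate below is just an illustration, not a guess at the answer):

```python
import hashlib

CHALLENGE = "cb6ba5a8daf75b7d50fef95cecae78d7"
ALPHABET_SIZE = 27  # capital letters plus space

def check(candidate):
    """True iff a 32-character candidate has the challenge MD5 hash."""
    if len(candidate) != 32:
        raise ValueError("candidate must be 32 characters")
    return hashlib.md5(candidate.encode()).hexdigest() == CHALLENGE

print(ALPHABET_SIZE ** 32)  # size of the search space: ~6.3e45
print(check("METHINKS IT IS LIKE A WEASEL    "))
```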
May 8, 2009 11:50 AM PDT
R0b, I understood your point but I don't know if you understood mine. By reductions that improve performance, I meant relative to the performance of blind search. You wrote:
I submit that if you can ignore the infinite information cost incurred by reducing X+Y to X, then I can ignore the infinite information cost incurred by reducing X+Y to the set of only good searches. In doing so, the LCI is falsified.
In going from X+Y to X, you didn't improve the performance of the search over blind search, for this simple reason: the set of all possible fitness functions / strategies, when averaged over all problems, is no better than blind search, and the set of all n^(27^24) fitness functions, which is a proper subset of X+Y, still performs only as well as blind search when averaged on our problem (meaning you average the performance of all the fitness functions in our set on our problem). You can see this intuitively as well, since for every fitness function in our reduced set that encodes correct information about the target, we have a negated fitness function that would encode incorrect information about the target. The reduction from all strategies to just the evolutionary strategy with all fitness functions of n^(27^24) did not improve search performance. Both sets average to blind search performance. So your reduction actually didn't improve search performance, so is irrelevant in terms of informational cost. We're only concerned with reductions that improve the lower-level search performance over blind search. Further, you propose that we can create an artificially inflated space which isn't implicitly defined, but is created to cause a problem. This may be true (I'll have to go over your reasoning again), but you ignored my point that we could do this with any probability calculation, and arbitrarily inflate the search space (see my coin example in the previous post). If your point were valid, we'd never be able to calculate information from any probability, since we can always say "You're ignoring the reduction of uncertainty from the contrived/inflated space to the 'real' space, which is infinite." Remember, Shannon information relies on the probability of receiving a set of characters to calculate information content... so if your point is valid, you've "falsified" not just Active Information, but information theory itself.
So your point probably isn't valid. Atom
May 8, 2009 11:49 AM PDT
Mr Kairosfocus, 1] the complexity of organization of the cell and its genetic component defies blind search, starting with the need to get to the shores of islands of function. Agreed, blind search is a complete mischaracterization of OOL and evolution. 2] posing hill-climbing algorithms that use randomness to get variations that fitness functions reward or punish begs this prior question. What was prior was not a question, it was an assertion. However, it should be noted that hill-climbing algorithms rely much more on history than randomness to achieve results. So your combination of "blind search" and "randomness" is getting you off on the wrong foot in understanding evolutionary processes. 3] Active info relates to the info needed to get TO islands of function in combinatorially explosive config spaces. (Notice that ratio of odds on blind random search landing in the target vs a search that gets you there in a much more feasible span of effort.) Agreed. My reading and questions to Dr Dembski still leave me fuzzy on some of the details of active information, but I think I have a basic understanding of the concept. You can see some of my questions above, and some of Dr Dembski's responses. 4] We observe 2 sources of high contingency: chance/stochastic and purposeful. Known cases of complex organised function uniformly trace to design — for the obvious reason. If you look at the Human Competitive Awards, you might rethink that. 5] The relevant large increments in biofunctional info to explain first life or body plans — 100'sK to 10's or 100's M bits — thus on inference to best explanation credibly come from design, i.e. teleology. Sir, these are numbers you repeat over and over again. This is handwaving, not fact. But even if I took it seriously, let's assume that the very large number of cells populating the early oceans of Earth, replicating perhaps as fast as every half an hour, could in a year only fix one bit of information.
Then it would take them only 100 million years to reach your number of bits necessary to develop diverse body plans. Since history is much more important than randomness, Deep Time has to be taken very seriously as the source of active information. 6] Can you — or any other supporter of the power of chance to get us to the shores of function, supply an observed case of chance doing the lucky noise trick? I have, in universes with simpler laws of nature, such as this Evoloops CA example shown here. Doing the same in our universe is what scientists have been doing ever since Miller-Urey. 7] If not, you have not got a good empirical case, I am afraid. I am always ready to listen to a good empirical case. If there is a suite of show-stopping experiments demonstrating how OOL is doomed to failure, by all means point them out to me. I am quite willing to listen to incremental results, because all that the conventional scientific community has at this point is incremental results. Nakashima
May 8, 2009 at 10:55 AM PDT
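The log(q/p) bookkeeping that runs through this thread (and the paper's abstract) can be checked with a short calculation. This is only a sketch; the probabilities p and q are invented for illustration, not taken from the paper:

```python
import math

# Illustrative numbers (not from the paper):
p = 1 / 2**20   # blind search: ~1-in-a-million chance of success per query
q = 1 / 2**5    # assisted search: 1-in-32 chance of success per query

# Active information, per the abstract: raising the probability of
# success by a factor of q/p incurs a cost of at least log2(q/p) bits.
active_info = math.log2(q / p)
print(active_info)  # 15.0 bits
```

On the LCI accounting, whatever process produced the assisted search must have paid at least those 15 bits up front.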
Atom, I'm not sure what you mean by performance-improving reductions, so I'll go back to your "I AM AN ENGLISH SENTENCE" example to concretize the discussion. You propose a higher-level space consisting of all functions f:A->B where A is the set of all 24-letter sequences and B consists of n fitness values. As you point out, there are n^(27^24) such functions.

I submit that, in proposing that space, you are already incurring an infinite information cost. I'll call your space X, and note that it is a subset of space X+Y ("+" meaning union). I'm going to define Y in a thoroughly contrived way: Y is the set of functions f:C->B for which condition0 is true, where C is the set of all sequences of any length, B consists of n fitness values, and condition0 is the condition that all 24-letter sequences in f map to a fitness of 0. Assuming that your algorithm searches the space of 24-letter sequences, the functions in Y provide no benefit to the search, since they always return 0 for 24-letter sequences.

Since Y is infinitely large, it overwhelms the finite number of "good" functions in X, so the percentage of good functions in X+Y is 0. Reducing the space from X+Y to X improves performance in the sense that it increases the percentage of good functions in the higher-order space. (I don't know any other sense in which a higher-order space reduction can improve performance.) But this is an infinite reduction, incurring an infinite information cost.

I submit that if you can ignore the infinite information cost incurred by reducing X+Y to X, then I can ignore the infinite information cost incurred by reducing X+Y to the set of only good searches. In doing so, the LCI is falsified.
R0b
May 8, 2009 at 9:57 AM PDT
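The size of the higher-order space that R0b and Atom are arguing over is easy to make concrete. This sketch uses the comment's own numbers (a 27-character alphabet, 24-letter sequences); n is an assumed, illustrative count of fitness values:

```python
import math

alphabet = 27        # 26 letters plus space, per the example
length = 24          # "I AM AN ENGLISH SENTENCE" has 24 characters
n = 100              # assumed number of distinct fitness values

num_sequences = alphabet ** length   # |A| = 27^24, about 2.25e34
# There are n^(27^24) fitness functions f: A -> B -- far too many to
# enumerate, but the information needed to single out one of them is just:
bits_to_specify_one = num_sequences * math.log2(n)
print(f"{bits_to_specify_one:.3e} bits")   # roughly 1.5e35 bits
```

The point at issue in the thread is who pays for that specification, not its size, which both sides accept is astronomical.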
Here we go again. Just what is a search function, a fitness landscape, active information? Evolution supposedly works by creating a modification in the genome through, let's say, Allen MacNeill's 50+ engines of variation. Each such change is a search. It is a blind search for nothing. It has no objective; it may be stimulated by some environmental pressure, but in most cases it will be the result of a random event in the genome with no particular environmental pressure behind it. The "search" (a misnomer, because no actual searching is going on) is really a wandering into various combinations of DNA. And these various combinations of DNA can affect the organism. This is fairly simple to understand. The easiest change to understand is that a protein will be modified, and this modification can have some effect on the organism. The fur may now be white and not brown. If the effect results in more offspring, then we say that natural selection has "selected" this change for keeping. Most likely it will be damaging or neutral and be eliminated, though some neutral ones may be kept through a random process or for other reasons.

This is a little like looking for buried treasure. The big treasure is buried on Bora Bora, an island quite out of the way even if you somehow made it to Tahiti. But in the analogy, there is buried treasure all over the world, and while most of the digging will come up empty, occasionally a dig will find something valuable, and near these valuable finds will probably be some useful items that the treasure buriers included, which make the original treasure more desirable and which the diggers can easily get to. The analogy is not quite the same for the genome, though, because while nearby treasure is usually considered additional loot, a nearby genomic element is just a possible replacement for what has already been found. The treasure analogy would have to be modified so that any additional treasure just replaces one you already found.

What are the odds of a treasure seeker getting to Bora Bora when he is digging in Omaha or Barcelona or Manila? Not very likely in a trillion years, let alone the 3.5 billion years since life appeared on earth. Especially when the treasure seekers are constrained to moving on foot and will essentially continue to dig around the original find, and might obsessively dig in places where they already dug. After a while it becomes obvious to the diggers that there is nothing more to dig for, because each new hole produces less and less, and almost all produce nothing. The search pattern then is to go back to the original site and dig some more around it. They do not widen the search, because they find no value in digging empty holes when the original site may provide some additional trinkets. But if someone gave the treasure diggers a map and instructions on how to build a boat, then they might get to Bora Bora and many other places where other large treasures are buried.

Modern evolutionary biology says that a few loners will defy the masses and go out alone, digging holes all over the place in a random fashion, and by doing so a few of these loners have found some new treasure. But there is no evidence that any of these wandering treasure hunters ever found anything of much value, let alone a jackpot. The point of this little analogy is that there isn't any specific treasure, but that the search finds many minor small treasures while being really unlikely to find the really big ones, which are buried quite far away from where they are digging, maybe on another continent or even under the sea.

The active information is the map and boat in the treasure hunt. In evolutionary biology it is ??? My guess is that it is intelligence, because intelligence can certainly do it, just as it provides the map and boat for the treasure hunters. Maybe others can define the active information differently. And just how does a fitness landscape fit into this scenario?
jerry
May 8, 2009 at 8:15 AM PDT
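jerry's treasure-digging analogy, a local searcher that never widens its radius, can be sketched as a toy hill climber on a one-dimensional landscape with a small peak nearby and a large one far away. Everything here (the landscape, peak positions, step size) is invented for illustration:

```python
import random

def fitness(x):
    # Small treasure near x=10; the big one is far away near x=1000.
    if abs(x - 10) < 5:
        return 50 - abs(x - 10)
    if abs(x - 1000) < 5:
        return 500 - abs(x - 1000)
    return 0

def hill_climb(start, steps=10_000, step_size=3, seed=0):
    rng = random.Random(seed)
    x, best = start, fitness(start)
    for _ in range(steps):
        cand = x + rng.randint(-step_size, step_size)
        if fitness(cand) >= best:   # only accept non-worsening digs
            x, best = cand, fitness(cand)
    return x, best

x, best = hill_climb(start=8)
print(x, best)   # settles on the small peak (x=10, fitness 50); the
                 # distant 500-point peak is unreachable in steps of 3
```

The "map and boat" in the analogy correspond to information that would let the climber jump across the dead zone between the peaks; the local rule alone never supplies it.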
Nakashima-San:

1] the complexity of organization of the cell and its genetic component defies blind search, starting with the need to get to the shores of islands of function.

2] posing hill-climbing algorithms that use randomness to get variations that fitness functions reward or punish begs this prior question.

3] Active info relates to the info needed to get TO islands of function in combinatorially explosive config spaces. (Notice that ratio of odds on blind random search landing in the target vs a search that gets you there in a much more feasible span of effort.)

4] We observe 2 sources of high contingency: chance/stochastic and purposeful. Known cases of complex organised function uniformly trace to design -- for the obvious reason.

5] The relevant large increments in biofunctional info to explain first life or body plans -- 100'sK to 10's or 100's M bits -- thus on inference to best explanation credibly come from design, i.e. teleology.

6] Can you -- or any other supporter of the power of chance to get us to the shores of function -- supply an observed case of chance doing the lucky noise trick?

7] If not, you have not got a good empirical case, I am afraid.

GEM of TKI
kairosfocus
May 8, 2009 at 6:54 AM PDT
Mr Joseph, As I read Mr Chu-Carroll, he is agreeing with Dr Dembski that evolution works by active information. He has situated that active information in the structure of the search space: evolution is searching physico-chemical systems in three spatial and one temporal dimension, at a certain range of temperatures and pressures, but still far from equilibrium. That is a subset of search spaces that is far easier for evolution than other arbitrary searches, most of which resemble white noise. Dr Dembski's differences with Mr Chu-Carroll only begin with questioning the source of this active information. Dr Dembski has said the source is teleological; Mr Chu-Carroll is silent on the issue.
Nakashima
May 8, 2009 at 6:31 AM PDT
beelzebub, Can either you or Mark Chu-Carroll demonstrate nature, operating freely, giving rise to active information? THAT is the only way to refute the paper. Nobody cares who is impressed or not. People care what can be demonstrated. IDists can demonstrate agencies giving rise to active information....
Joseph
May 8, 2009 at 4:35 AM PDT
Mark Chu-Carroll of Good Math, Bad Math has reviewed the Dembski/Marks paper. He is not impressed, to put it mildly. Link
beelzebub
May 8, 2009 at 2:58 AM PDT
R0b, Thanks for your response. You wrote:
Given the lack of constraints in the LCI, we can model it such that the information cost is less than the performance gain, thus falsifying the LCI.
In my post I emphasized the minimal information cost for reductions that lead to lower-level search improvement. As I said explicitly, not all reductions will aid in your lower-level search, and those that don't are irrelevant for LCI purposes. But those that do aid in search improvement will incur an informational cost not less than the q/p improvement measure, or, following the paper, the active information. Just because we can arbitrarily expand/inflate the information cost as much as we'd like by including irrelevant (non-performance-improving) reductions does not mean we can reduce it below the active information when we include the relevant reductions.

BTW, we can always arbitrarily inflate any probability calculation ("what are the chances of getting heads on a coin flip... with that particular coin out of all the coins of history?"), so if your objection were viable, we could never meaningfully calculate the probability of anything. I believe Dembski's paper makes clear the relevant subset, since it is always in reference to an improved original search; therefore, only search-improving reductions need to be included, but they all must be included.

Again, the emphasis is on choices/reductions that improve the search performance. Even if those are the only reductions we include in our total (and we could always include more), the LCI still holds. So I don't believe you can "model [the higher-level search] such that the information cost is less than the performance gain" when you include all the costs associated with performance-improving reductions; if you can, I bet it would be pretty easy to show where you're ignoring relevant (performance-improving) reductions.

Go ahead and give it a shot; see if you can select a subset in a higher-order space such that this subset improves search performance by the ratio q/p and yet has an informational cost less than the active information. I don't think you can. Either that choice results in a search improvement, in which case the reduction must be added to the cost (by definition) and will be greater than or equal to the active information (per the theorems), or it doesn't result in an improved search, which contradicts our premise that it does.

Atom
Atom
May 7, 2009 at 4:42 PM PDT
Tom:
But there is a hugely important philosophical issue begging to be addressed, and Dembski and Marks have ignored it: What is a probability?
I think that this question begs to be addressed in all of Dembski's ID work. It's a rock that has broken many a shovel while digging through the mathematical, empirical, and metaphysical layers of Dembski's arguments.
R0b
May 7, 2009 at 4:21 PM PDT
Atom:
So your question becomes, in essence, “What if we define our higher order search as just the elements that are ‘good’ fitness functions?” In this case we are still looking at a subset of the possible functions, so the cost is there.
Yes, but my point is that the cost is always there, because every set is a proper subset of another set. According to Marks and Dembski, the appropriate way to define the higher-level space of fitness functions is to fix the domain and codomain and vary the mapping. But that space is part of a larger space in which the domain and codomain also vary. And that's part of a larger space that includes things other than functions. Marks and Dembski ignore the information costs associated with selecting a space from these superspaces, so why can't we ignore the cost of selecting the "only good functions" set from a superset?

Your point that our choice of algorithm has no effect on performance is a good one, assuming that the fitness function is not fixed and that all available algorithms sample without replacement. In such a case, it makes no difference whether the algorithm is fixed or variable. But we could, if we wanted, fix the fitness function and vary the algorithm.

Different definitions of the higher-level space will yield different information costs. If you apply the three different CoI theorems to the WEASEL example, you'll get three different information costs. So information cost is not just a function of the problem at hand; it's also a function of how we model it. Given the lack of constraints in the LCI, we can model it such that the information cost is less than the performance gain, thus falsifying the LCI.
R0b
May 7, 2009 at 3:16 PM PDT
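For readers who haven't seen the WEASEL example that R0b applies the three CoI theorems to: the gap between fitness-guided and blind search is easy to demonstrate. This is a minimal sketch with assumed parameters (mutation rate, population size, elitist selection), not the exact algorithm analyzed in the paper:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def weasel(mut_rate=0.05, pop=100, seed=0):
    """Fitness-guided search: each generation keeps the string closest
    to TARGET (the parent stays in the running, so fitness never drops)."""
    rng = random.Random(seed)
    score = lambda s: sum(a == b for a, b in zip(s, TARGET))
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while parent != TARGET:
        children = ["".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                            for c in parent)
                    for _ in range(pop)]
        parent = max(children + [parent], key=score)
        gens += 1
    return gens

print(weasel())   # typically a few hundred generations, versus an
                  # expected 27**28 guesses for blind random sampling
```

The per-character comparison against TARGET is where the active information enters: remove it and the search collapses back to blind sampling over 27^28 strings.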
