
This Site Gives me 150 Utils of Utility; Panda’s Thumb Gives me Only 3

Any effort to give precise gradations of quantification to CSI is doomed to failure.  It reminds me of certain economists’ effort to quantify “utility” through a measurement called a “util.”  See here.

The more I think about it, the more I am convinced that the concepts are very much the same.  We can all agree that the concept of “utility maximization” is very important and represents a real phenomenon.  But while we can say of utility there is a lot, there is a little, or there is none at all, there is no way to measure it precisely.  The “util” is useful as a hypothetical measure of relative utility, but it has no value as an “actual” unit of measurement, such as inches, pounds, meters, or grams.

Similarly, of CSI we can say it is present or it is not present.  That is what the explanatory filter does.  In some cases we can estimate relative CSI if we are able to calculate the bits of information present in the two instances.  But not usually.  Consider a space shuttle and a bicycle.  Both obviously show CSI and a design inference is inescapable with respect to each.  It is also obvious that the space shuttle contains vastly more CSI than the bicycle.  But if one asks me “how much more CSI is there in a space shuttle than in a bicycle?” the only satisfactory answer it seems to me is “a lot more.”  I could posit a measure of CSI – call it an “info” – and say the space shuttle contains 100 infos of CSI and the bicycle contains only 10 infos.  But this is certainly a meaningless game.  Actually, it is more than meaningless.  It is affirmatively harmful, because the game gives an illusion of precise measurement where there can be none.

Why am I going on about this?  Because many materialists commenting on this site frequently say, essentially, that if one cannot quantify CSI then it is a meaningless concept.  This is false.  “Utility” cannot be quantified, but surely no one would suggest it does not exist or that it is not a useful concept in the field of economics.  Similarly, the fact that CSI cannot always be precisely quantified is no reason to suggest that it does not exist or that it is not a useful concept in the study of objects to determine whether design is the most plausible explanation for their features.


54 Responses to This Site Gives me 150 Utils of Utility; Panda’s Thumb Gives me Only 3

  1. The problem is that, while you can easily quantify lower levels of information in bits (which is useless for our purposes), higher levels of information (which is what we are interested in) are usually described in terms of rules, not quantities. Semantics are defined by interface rules, state transitions, etc. I see no way that such could usefully be reduced to a number.

  2. I could posit a measure of CSI – call it an “info” – and say the space shuttle contains 100 infos of CSI and the bicycle contains only 10 infos.

    Actually, there’s already a unit of measurement for it: the bit. But, other than that, you’re right. It’s impossible to figure out a precise measure of it for just about any complex object.
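    For concreteness, here is a minimal sketch of what the bit measure looks like for a single pattern (a toy example of mine, not anything from the post): the self-information of an outcome with probability p is -log2(p).

        from math import log2

        def self_information_bits(probability: float) -> float:
            """Shannon self-information of a single outcome, in bits."""
            return -log2(probability)

        # A specific 10-letter string typed uniformly at random from 26 letters:
        p = (1 / 26) ** 10
        print(f"{self_information_bits(p):.1f} bits")  # ~47.0 bits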

  3. When you say that utility cannot be quantified, you are conflating two very different notions of utility:

    1) Utility as a measure of overall happiness as used by Jeremy Bentham or (arguably) John Stuart Mill

    2) Utility as a tool to represent revealed preferences as used by modern economists

    The second kind of utility absolutely can be quantified (although it is only defined up to monotonic transformations) and that is the whole basis of modern economics. You’re absolutely right that comparisons based on 1) are unscientific and properly belong to the realm of philosophy. But this is not the case for comparisons based on 2) – utility of this sort absolutely can be quantified and economists quantify it all the time. Without making additional assumptions we can’t make judgments like, “Taking $100 from Jon and giving it to Peter would improve social welfare”, but we can make judgments like, “Lowering the gas tax is inefficient in the sense that alternative methods of redistribution would give more consumption to those who would benefit from such a move while also leaving more for everyone else.”

  4. That’s an interesting observation—how can you quantify CSI?

    Maybe it’s like quantifying logic and beauty and virtue. These come in greater or lesser quantities but may not be precisely quantifiable—perhaps because of the uniqueness of each instance.

    Reminds me of measuring information. We can measure the number of bits in a text, but it’s more difficult for the linguist to measure the quantity of what he calls “new information.” Discourse proceeds with something old and something new in every proposition or foregrounded clause. The old information provides coherence but the new information is why we speak. Imagine a machine which could scan texts for new information—information not already in its data banks. All it could do is look for identical strings of symbols, but new information is identifiable only by understanding both the old and the new—which no machine could ever do.

    I know a girl from Africa who remembers the people back home trying to describe snow. She remembers what they said but has forgotten the language in which they said it. This happens all the time. If you’re bilingual you will remember in one language what you were told in another. You can always paraphrase, simplify or expand. Information is not precisely measurable—that is, not unless there is some universal language (of which mathematics is a subset) to which everything can be reduced, a least common denominator.

    It would be interesting if we could logically demonstrate that there are things which cannot in principle be precisely measured—not just that they are too complex for us to measure with our limited technology.

  5. (I forgot to mention the main point: the fact that CSI cannot be quantified very much does count against it as a scientific theory. CSI is at best as scientific as Benthamite utilitarianism and not in the same league as modern utility theory as used in economics)

  6. But why would we need to quantify something in order to identify it? I can pick my wife out of a crowd without quantifying her.

    And as for this constant fretting over whether we’re doing science or nonscience—remember that physicists often class biology as nonscience—it’s all observation and no theory. Whatever we do—gardening, checkers, theoretical physics—employs varying degrees of observation, reason and authority. The “scientific method” is a pernicious myth.

    Anyway, must we be able to exactly quantify CSI for it to be “scientific”? Of course not! It’s like prototype semantics. All we need are a sufficient number of identifying features.

  7. Great observation Barry. The argument about the requirement of a precise measure of CSI as a refutation of arriving at a design inference has always baffled me. This is really a pathological case of not seeing the forest for the trees.

    I tried to demonstrate this with my Hello World program example here at UD. This little 66-character program represents as many possible combinations as there are subatomic particles in 10 trillion universes. When one considers even a single protein, the numbers become so large so quickly that they cause a cerebral short circuit. Then consider that proteins must interact with other proteins to form machines that must interact with other machines, etc. One soon needs exponents so large that they must themselves be expressed with exponents. It is for such purposes that the googolplex was invented.
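    A back-of-the-envelope check of those numbers (my arithmetic, on the assumptions of a 26-letter alphabet and the common rough estimate of 10^80 subatomic particles per universe):

        from math import log10

        alphabet_size = 26      # assumed character set
        program_length = 66     # characters in the Hello World program

        # Work in log10 space to avoid astronomically large integers.
        combinations_exp = program_length * log10(alphabet_size)  # ~93.4
        particles_exp = 80 + 13  # 10^80 particles x 10^13 universes
        print(f"10^{combinations_exp:.1f} combinations vs 10^{particles_exp} particles")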

    If I can infer that an old-fashioned mechanical adding machine is designed, I can infer that a modern microprocessor with its millions of transistors is designed, without supplying a precise number that represents its CSI. With such skyrocketing organized complexity a design inference becomes essentially a trivial exercise. A great deal of fancy footwork, rationalization, and excuse-making is required to avoid the obvious, which is done for obvious reasons.

  8. Barry, interesting post. I don’t know if you were also poking around at Telic Thoughts, but I was just there looking at Mike Gene’s latest thread “Artificial or Natural” and then jumped over here and, lo and behold, I see that you are indirectly responding to some of the posts there (they are talking about a computer program being able to identify design through an algorithm, which, presumably, means an ability to quantify in some way the design characteristics).

    I think this is a very interesting issue. If you are correct, then perhaps some of Dembski’s efforts to precisely identify design from a mathematical perspective will not pan out?

  9. Gil, I don’t think we should completely discount the effort to quantify/specify what goes into a design inference in a way that might help to make it more objective and algorithmic. I do, however, largely agree with your observation that, in practice, with most of what we see around us:

    “With such skyrocketing organized complexity a design inference becomes essentially a trivial exercise. A great deal of fancy footwork, rationalization, and excuse-making are required to avoid the obvious, which is done for obvious reasons.”

  10. I both agree and disagree. As far as I understand, when it comes to measuring CSI, one needs to know the probabilistic resources available (number and length of trials), the probability (measured in bits) of the independent formulation of the specified pattern, and the number of specified patterns within all possible patterns. Furthermore, one needs to observe that the pattern doesn’t flow as a necessity from the properties (attractive or repulsive) of the material, in which case the information content would be extremely low (if there is any at all). Necessity/law = high probability and low information.
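    A rough sketch of that bookkeeping (my own illustration with invented numbers; the simple subtraction of resources is an assumption, not anyone’s official formula): measure the pattern’s improbability in bits, then discount the bits the probabilistic resources can account for.

        from math import log2

        def residual_csi_bits(p_target: float, trials: float) -> float:
            """Bits of specified information left after discounting the
            probabilistic resources (independent trials available)."""
            return -log2(p_target) - log2(trials)

        # A pattern of probability 2^-400 searched by 10^20 trials:
        print(f"{residual_csi_bits(2**-400, 1e20):.1f} bits")  # ~333.6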

    The reason why it is hard, if not impossible, to measure CSI in some instances is that some patterns are hard to measure in bits. Take that “chair” example, where the tree is grown in a chair shape. When it comes to artistic shapes that merely “look” like something, I don’t think that you can apply an objective measure of CSI, since the search space and the number of potentially specified targets is somewhat subjective and ambiguous. Let’s take clouds, for instance. How many possible shapes *look* specified, and what is the total shape space?

    Now, don’t get me wrong, as I do think that a somewhat subjective, “Design-Matrix-esque” filter can be used to gauge the potential necessity of intelligence as a cause when analyzing some patterns. And yes, these shapes that have a high design inference associated with them are also highly specified or pre-specified and are complex as in having a low probability. In fact, specificity and complexity are criteria which an archaeologist would use to determine if a rock was not just a rock. Does its shape look highly improbable (complex), and does it fit within a functional target space (specificity — does it match an independently given functional pattern)? These observations in this case are somewhat subjective but still useful. And in these cases, the finer the resolution, the stronger the inference.

    However, that does not mean that all inference to intelligent design has to be subjective. As I briefly explained above, CSI can be measured objectively when the pattern itself permits. Some examples include all codes/ciphers and languages, number sequences and shapes which are regular and can be briefly described mathematically, the probability of arriving at any small target amid a huge search space at consistently better than chance performance, and even such complex things as circuits and potentially even functional systems of integrated units. Those functionally specific objects may be able to be measured objectively in bits (as an information theoretic measure of probability) since they are objectively created from high information external diagrams (known as blueprints or instructions) and must be created component by component. As long as there is an objective information measure associated with these external diagrams — which as far as I understand can be seen as the independently formulated patterns — and if we can estimate the search space and probability of arriving at the said configuration, then there is no hindrance to measuring for CSI.

    ” ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random;’ but whereas ordered systems are generated according to simple algorithms and therefore lack complexity, organized systems must be assembled element by element according to an external ‘wiring diagram’ with a high information content.”

    Jeffrey S. Wicken, “The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): 353, 349-65.

    “In the face of the universal tendency for order to be lost, the complex organization of the living organism can be maintained only if work – involving the expenditure of energy – is performed to conserve the order. The organism is constantly adjusting, repairing, replacing, and this requires energy. But the preservation of the complex, improbable organization of the living creature needs more than energy for the work. It calls for information or instructions on how the energy should be expended to maintain the improbable organization. The idea of information necessary for the maintenance and, as we shall see, creation of living systems is of great utility in approaching the biological problems of reproduction.”

    George Gaylord Simpson and William S. Beck, Life: An Introduction to Biology, 2nd ed. (London: Routledge and Kegan, 1965), 145.

    Even though not everything can be measured objectively as having CSI content, the things that can be measured as such are most probably the effects of previous intelligence.

  11. “Utility” is a useful concept in economics, but economics is not science. Likewise CSI might be a useful concept for, say, philosophy, but that doesn’t mean it’s a useful concept for science.

  12. Venus Mousetrap:

    I’ve often had doubts about CSI as you know – my main problem is that if we can write a program to detect CSI, then CSI is probably algorithmically reproducible (otherwise, how could a program know when something meets the criteria?), which means that a natural law could produce it. However, I can see how this can be handwaved away so I’m not going to argue the point.

    I shall have to argue with Gil’s Hello World program, however, since this is an area that interests me. :) It is true that mutating a C program is very unlikely to make one that compiles, but in my opinion this is a bad example. DNA is not like C – it’s more like a compiled program, that is, machine code.

    Mutating one letter of AGCTAGCACAACAGT won’t necessarily wreck it. In some cases it will make no change, because of the redundancy of the code. Sometimes it puts a stop in early, and the result is an altered protein. This also sometimes has the effect of turning all the stuff after the stop into junk, if the altered protein turns out to be acceptable.

    Similarly, when you get down to the low level of code, it looks like 1011010101101011… . The operations are represented by strings of binary, so 10110101 could be a JUMP instruction, for example, and 01101011 could be an address to jump to. A mutation in the address would just send the program to a different place, which won’t necessarily wreck it – the idea of genetic algorithms is that the code is flexible enough to allow odd tweaks to be made to it, which sometimes improve it.
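    A toy version of that flexibility (my assumptions: 8-bit jump addresses, with the first 200 of 256 slots holding valid code):

        import random

        random.seed(1)
        CODE_BYTES = 200  # assume addresses 0-199 point into runnable code

        def flip_one_bit(address: int) -> int:
            """Mutate an 8-bit jump address by flipping one random bit."""
            return address ^ (1 << random.randrange(8))

        trials = 10_000
        ok = sum(flip_one_bit(random.randrange(CODE_BYTES)) < CODE_BYTES
                 for _ in range(trials))
        print(f"{ok / trials:.0%} of mutated jumps still land inside the code")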

    DNA has this same property of mutational robustness… whether by design, I leave you to decide.

    I’d also like to note that junk DNA forms naturally this way – if you have a jump command, or a way of disregarding nonsensical operations, or operations which cancel each other out, then they will accumulate simply as a result of random mutation.

  13. CJYman, I don’t know why you believe you are in partial disagreement with me. I said exactly the same thing you said:

    “In some cases we can estimate relative CSI if we are able to calculate the bits of information present in the two instances. But not usually.”

  14. Jason1083, I disagree that “util” is an exact measure. See the link in the post for the reasons why.

  15. Jason1083, for example, in my title I say this site gives me 150 utils of utility. What does 150 utils mean? Nothing except in relation to my further statement that PT gives me only 3.

  16. Quoting Gil (comment 7): “The argument about the requirement of a precise measure of CSI as a refutation of arriving at a design inference has always baffled me. This is really a pathological case of not seeing the forest for the trees.”

    I am new around here, so forgive me if I am treading on covered ground. But, without a calculated value of CSI, how does the EF provide anything different than a subjective assessment?

  17. I have to slightly disagree with you, Barry. While the true information content of many designed objects is way out of our current reach and precise definition, the true information content of a properly defined “simple” sequence of data may well be within our reach.

    It From Bit Excerpt:

    But Zeilinger and Brukner noticed that it (Shannon Information) doesn’t take into account the order in which different choices or measurements are made.

    This is fine for a classical hand of cards. But in quantum mechanics, information is created in each measurement–and the amount depends on what is measured when–so the order in which different choices or measurements are made does matter, and Shannon’s formula doesn’t hold. Zeilinger and Brukner have devised an alternative measure that they call total information, which includes the effects of measurement. For an entangled pair, the total information content in the system always comes to two bits.

    me again:

    Thus it seems that if true information content can indeed be satisfactorily defined for any given “simple” sequence of data down to the very foundation of reality itself (indeed, it looks as if “true information” is the ultimate foundation of our reality), then when making an inference to design, the CSI explanatory filter may be able to be accurately quantified and brought into play in greater detail.
    Indeed, it seems reasonable to refine the current CSI probability bound of 10^150 to a more precise and lower number (an actual quantification of CSI), for example by establishing a more realistic and concrete bound than 10^150 for small protein molecules using such a precise method.
    I believe this is a reasonable expectation on our part since, instead of starting from the flawed Shannon starting point to deduce the total information content of a simple sequence, the search for CSI will actually start with the true “reality” information content of a known sequence.
    Refining the basic element of information, the bit, to its true definition in reality is a prime necessity when trying to determine the actual threshold of CSI involved in a “simple” designed sequence.
    An essential element in this process will be separating the simple sequence of threshold CSI from its functional neighbors; i.e., the specific CSI information content of a “required simple protein” will most likely be very different from the information content of an entire functional protein machine, such as the flagellum.

  18. Todd Berkebile:

    “Likewise CSI might be a useful concept for, say, philosophy, but that doesn’t mean it’s a useful concept for science.”

    CSI is extremely useful within the science of information theory. It is a quantifiable measure of the information content (measured probabilistically in bits) of a specified (or pre-specified) pattern measured against all available patterns and probabilistic resources. It also deals with the difficulty (again measured in bits) of finding a small target within a vast search space.

    CSI also sets the stage for a conservation of information — a 4th law of thermodynamics — which deals with the objective flow of information. Therefore, it is useful in science.

  19. soplo–

    I sometimes wonder the same thing about standard evolutionary theory. It purports to be about adaptation and the propagation through populations of genes conferring enhanced fitness. Population genetics tells us precisely how incremental quantities of a parameter known as “fitness” spread themselves via differential reproduction. Zoology and other observational sciences tell us about existing specific adaptations that confer this “fitness”. However, given that “fitness” is a central concept of the theory that is purported to be the crown jewel of all biology, it seems odd that there is no method for calculating or measuring its value. “Fitness” as a quantity is crucial to the entire theory, but it remains a strictly metaphysical concept, eluding empirical measurement. Oh, we can always find out who were the “fit” after the fact. They’re the ones who survived. Just as theory predicts!
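    For reference, the textbook machinery being alluded to is easy to state even if s is hard to measure in the field (a standard one-locus haploid selection recurrence; the particular numbers below are mine):

        def next_frequency(p: float, s: float) -> float:
            """One generation of haploid selection: the focal allele has
            relative fitness 1+s, so mean fitness is 1 + s*p."""
            return p * (1 + s) / (1 + s * p)

        p, s = 0.01, 0.05  # rare allele with an assumed 5% advantage
        for _ in range(200):
            p = next_frequency(p, s)
        print(f"frequency after 200 generations: {p:.4f}")  # ~0.9943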

    Putting a value on CSI seems much more tractable to me.

  20. A good measure of the amount of CSI in something is the size of the instruction set needed to produce it. We can easily compare the difference in the amount of CSI in a bicycle and a space shuttle by weighing the manuals necessary to build each. The former likely can be written up in a monograph, while the latter likely requires a small library!
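    Joking aside, one crude way to cash that intuition out is compressed description length, in the Kolmogorov-complexity spirit (a sketch of mine: zlib stands in for the ideal, uncomputable measure, and the “manuals” are obviously placeholder strings):

        import zlib

        def description_bits(text: str) -> int:
            """Compressed size in bits: a rough upper bound on the
            description length of a text."""
            return 8 * len(zlib.compress(text.encode("utf-8")))

        bicycle_manual = "attach the wheel to the frame. " * 50
        shuttle_manual = "torque fastener to spec and log telemetry. " * 5000

        print(description_bits(bicycle_manual))   # small
        print(description_bits(shuttle_manual))   # larger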

  21. Hello Barry,

    The only reason I partially disagreed with you is that you *seemed* to be overly negative about the prospect of measuring for CSI. However, as far as I can tell, the only theoretical limitations in its use would be in regards to measuring art or things that merely subjectively “look” like something complex and specified such as “faces in clouds” for reasons I gave above.

    As I explained, an objective information theoretic measure of CSI for the objective blueprint or instructional information necessary to create a system of functionally integrated units is possible.

    In fact, the CSI measure of arriving at an hospitable planet may even theoretically be able to be worked out, taking into consideration the work of Dr. Gonzalez (sp?).

    As far as I can tell, the only significant limitation may be when it comes to “artistic” or subjective specified patterns.

  22. BarryA – what both the site and your example show is that utility is only defined up to monotonic transformations. This is quite different from not being a quantitative measure! If CSI were capable of giving us an ordinal comparison between any two objects, then it would be immensely useful as a scientific concept – the problem is that CSI can’t be made quantitative at all, not that it is ordinal rather than cardinal. This is the difference between philosophy and science.

    If I observe that you prefer A to B and that is the only choice I’m modeling, it doesn’t matter if I say you get 150 utils from A and 5 utils from B or 15 utils from A and 5 from B, but it sure does matter if I say you get 3 utils from A and 5 from B! That would contradict our observation.

    This might seem like a pedantic point, but the entire edifice of economic theory is built on it. Your example just happens to be extremely trivial. If we consider instead the problem of you choosing from a vector of n goods (x_1,…,x_n) subject to the constraint that your total expenditures are less than your wealth (p_1 x_1 + … + p_n x_n

  23. bornagain77, for the same reason as I set forth in post [12] I don’t know why you think you disagree with me. If information content can be expressed in bits, then it can be quantified exactly. But tell me, using my first example, how many bits of information are in the space shuttle and how many are in a bicycle?

  24. Sorry, looks like my last post got cut off because the blog doesn’t like the “less than or equal to” sign. Here is the post in full:

    BarryA – what both the site and your example show is that utility is only defined up to monotonic transformations. This is quite different from not being a quantitative measure! If CSI were capable of giving us an ordinal comparison between any two objects, then it would be immensely useful as a scientific concept – the problem is that CSI can’t be made quantitative at all, not that it is ordinal rather than cardinal. This is the difference between philosophy and science.

    If I observe that you prefer A to B and that is the only choice I’m modeling, it doesn’t matter if I say you get 150 utils from A and 5 utils from B or 15 utils from A and 5 from B, but it sure does matter if I say you get 3 utils from A and 5 from B! That would contradict our observation.

    This might seem like a pedantic point, but the entire edifice of economic theory is built on it. Your example just happens to be extremely trivial. If we consider instead the problem of you choosing from a vector of n goods (x_1,…,x_n) subject to the constraint that your total expenditures are less than your wealth (p_1 x_1 + … + p_n x_n less than or equal to w), we quickly find that utility theory allows us to put a great deal of structure on the problem and make surprisingly general inferences if we make some weak assumptions. For instance, we can use this structure to infer that everyone can be given higher utility if we replace a marginal tax which distorts the price of one of the consumption goods with a lump-sum tax that raises the same revenue under some weak and testable restrictions on utility.
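    In standard notation, that consumer problem reads:

        \max_{x_1,\dots,x_n} u(x_1,\dots,x_n)
        \quad \text{subject to} \quad
        p_1 x_1 + \cdots + p_n x_n \le w

    and the invariance point is that for any strictly increasing f, maximizing f(u(x)) yields exactly the same choices as maximizing u(x), which is what “defined up to monotonic transformations” means.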

    To obtain more policy relevant conclusions we’d have to develop more theory, but the key idea is that by observing your choices we can construct a utility function (defined up to monotonic transformations) which allows us to make out-of-sample inferences about what you would choose in counterfactual states of the world using weak restrictions on the formal structure of utility functions (which can in turn be tested empirically).

    Jason

  25. Barry, are you saying that ratios might be more achievable than absolute numbers?

    Then with CSI you must determine how much CSI is required to specify one item as opposed to another …

    I wouldn’t suggest beginning with life forms, as arbitrarily large numbers will result.

    For example, somebody mentioned the googolplex.

    Just as I feared! Let’s not go there. All numbers are inherently evil, but numbers with names are monsters.

    Perhaps we might begin with the question of how much information is required to specify a brick? A soap bubble?

    Also: Theories about CSI are not needed to dismiss the Darwinist superstition. The Darwinist superstition is that natural selection is a creative force. It isn’t, and it obviously isn’t.

    Anyone can see this by looking at the difference between animals subjected to natural selection and animals protected by humans and artificially bred. Natural selection produces sameness; breeding (intelligent selection) produces creative differences.

    So we do not know the source of the huge level of information in naturally occurring life forms, and it is probably too much to begin a project like this with.

  26. Here is what I have just said on another thread. There are some kinds of CSI that are easy to measure and some for which maybe one should forget the concept.

    It is meaningless for a space shuttle or Mt. Rushmore but is very applicable for a computer program and machine operations, an alphabet and language, and DNA and proteins.

    Separate the two types of concepts and then have the same discussion.

  27. Todd Berkebile – what in the world are you talking about?

    If economics is not a science, then what makes physics, chemistry and biology sciences?

    Here is an example provided by Al Roth:

    “Rather than quibbling about definitions, it may help to consider how laboratory experiments complement other kinds of investigation in economics, as they do in those other sciences. Let me give an example.

    One strategy for looking at field data (as opposed to laboratory data) is to search out “natural experiments,” namely comparable sets of observations that differ in only one critical factor. The benefit of using field data is that we are directly studying markets and behavior we are interested in, but the disadvantage is that in natural markets we can seldom find comparisons that permit sharp tests of economic theory.

    In a 1990 paper (in the informatively named journal, Science) I studied such a natural experiment, involving the markets for new physicians in different regions of the U.K. in the 1970s. The markets in Edinburgh and Cardiff succeeded while those in Newcastle and Birmingham failed, in ways that can be explained by how these markets were organized. But as will be apparent to readers of the Economist, there are other differences than market organization between Edinburgh and Cardiff on the one hand and Newcastle and Birmingham on the other. So, how are we to know that the difference in market organization, and not those other differences, accounts for the success and failure of the markets?

    One way to approach this question is with a laboratory experiment. In a paper in the Quarterly Journal of Economics, John Kagel and I report such an experiment, in which we study small, artificial markets that differ only in whether they are organized as in Edinburgh and Cardiff or as in Newcastle and Birmingham. Unlike in those naturally occurring markets, the market organization is the only difference between our laboratory markets. And our laboratory results reproduce, on a smaller scale and despite far smaller incentives, the results we see in the natural markets. So the experiments show that the differences in market organization by themselves can have the predicted consequences.

    Does this “prove” to a mathematical certainty that the different market organizations are the cause of the differences observed in the natural markets? Of course not. Does it provide powerful additional evidence in favor of that hypothesis? Of course it does.”

  28. Barry,
    The problem is in the size of the bite you are trying to take. As O’Leary in 24 and Jerry in 25 somewhat pointed out, the problem of quantifying CSI to a more concrete level may very well be solvable if strictly limited in its scope and definition. This (quantifying CSI) is a realistic expectation, especially considering Zeilinger’s breakthroughs in quantifying “true information” at the foundation of reality itself.

  29. If lack of quantification is a problem for CSI and ID, I suggest logical inference has the same problem. How many scientific theories are based on inference?

  30. O’Leary asks: “Barry, are you saying that ratios might be more achievable than absolute numbers?”

    No, a ratio is achieved by putting one absolute number in the numerator and another absolute number in the denominator. As with my example of the space shuttle and the bicycle, we can frequently say “more CSI” or “less CSI,” but we usually cannot quantify precisely what we mean by “more” and “less.”

    My point is rather modest. For any given object that exhibits CSI, it is not USUALLY possible to quantify the bits of information exactly. Calls for the exact quantification of CSI are in my experience disingenuous distractions.

  31. An engineer, a physicist and an economist are stranded on an island, and all they have to eat are cans of beans they dragged in from the shipwreck. The problem: how to open the cans.

    The engineer says, let’s bang the cans with rocks until they open.

    That’s stupid, says the physicist; it will make the cans’ edges jagged and rock pieces will contaminate the food. Let’s build a fire under the cans and the steam building in the cans will eventually cause the cans to break open.

    That’s great if by “break open” you mean “explode and spread the beans all over the ground.”

    So the engineer and the physicist go ‘round and ‘round, and all the while the economist is sitting back with a smug self-satisfied smirk on his face.

    Finally, the engineer says, “What are you grinning about? How do YOU propose to solve this problem?”

    “It is simplicity itself,” says the economist. “We simply assume we have a can opener.”

  32. Yes, it’s certainly true that economists often make unreasonable assumptions to make problems tractable. But that doesn’t engage with the point that utility theory is a quantitative empirical theory which has proven extremely useful in predicting economic behavior across a wide range of settings (I agree that we have a long way to go in predicting the behavior of the macroeconomy, but we’re pretty good at calculating things like, “How much will the income of the average person rise with an additional year of schooling?”) whereas CSI is not a quantitative theory and has not correctly predicted anything or explained any previously inexplicable phenomenon.

  33. Actually BarryA, the engineer would just use the trusty Swiss Army knife he or she carries at all times ;P

  34. “Utility” is a useful concept in economics, but economics is not science. Likewise CSI might be a useful concept for, say, philosophy, but that doesn’t mean it’s a useful concept for science.

    Fallacies employed:

    Straw man, Weak Analogy, argument from what appears to be an ignorance of what CSI is.

  35. Jason1083:
    “… whereas CSI is not a quantitative theory and has not correctly predicted anything or explained any previously inexplicable phenomenon.”

    You should do some due diligence and then come back to the discussion when you actually know what you are talking about. I could understand if you didn’t understand what CSI was and then asked questions, but to blatantly spread misinformation … that’s a different story.

    Here, why don’t we start with this question: “How does one calculate the CSI of a line of computer code?” If you actually know and explain it properly, then you will see that your statement above is negated. CSI is a quantification of some objective quantities measured against each other. Now, if you actually understand the concept behind CSI, you will be able to easily tell me what those quantities are.

    Second, an understanding of CSI leads into a conservation of information theorem (a 4th law of thermodynamics), where specified or pre-specified targets within an overwhelming search space will not be arrived at better than chance or any target will not be arrived at better than chance on average unless there is previously existing information to guide the search. These basic concepts have been discussed by information theorists for a while now.

    A “learner … that achieves at least mildly better-than-chance performance, on average, … is like a perpetual motion machine – conservation of generalization performance precludes it.”

    –Cullen Schaffer on the Law of Conservation of Generalization Performance. Cullen Schaffer, “A conservation law for generalization performance,” in Proc. Eleventh International Conference on Machine Learning, eds. W. W. Cohen and H. Hirsh. San Francisco: Morgan Kaufmann, 1994, pp. 259-265.

    Yet EAs perform better than chance at producing complex and specified targets. How do they do this? Understanding where CSI originates will help to solve that problem.

  36. Barry, I just reread your posts and I apologize for my misunderstanding of what you were saying. So, to be on the same page: you are quite correct that in most instances CSI cannot be given an exact number even though it is known, without a doubt, to be present in a certain system.
    Where I missed your point is that I thought you were saying an exact CSI quantification is impossible in ALL instances. So again, I apologize for misunderstanding you.

  37. BarryA:
    “As with my example of the space shuttle and the bicycle, we can frequently say “more CSI” or “less CSI,” but we usually cannot quantify precisely what we mean by “more” and “less.””

    It may be true that we “usually” can’t quantify CSI. However, that’s probably just due to our own ignorance and lack of “information” about the information. The ONLY place where I see CSI as unmeasurable *in principle* is within subjective or artistic patterns for the reasons I have given above.

    As long as we can calculate the improbability (measured in bits) of the IC core of instructions for building a spacecraft or a bicycle, then there is no reason why we cannot calculate its CSI — as long as we have an approximation of the probabilistic resources and how constrained the functions are.

    BarryA, I do understand what you are saying here. There are many examples of objects, such as Mount Rushmore, that exhibit CSI in that they are highly constrained and improbable organizations (complex) which seem to match a pre-specification that is not defined by the physical properties of the material used (one type of specificity), yet we are hard pressed to actually provide a measure of CSI. In these cases, I agree that some common sense (and the Design Matrix) is to be utilized as the investigator continues to study finer resolutions of the pattern to see if it continues to exhibit CSI characteristics.

    My only qualm was that you seemed to be somewhat overly pessimistic, since I don’t think that these artistic patterns are the “usual” case when discussing CSI patterns. It’s just that these are the cases (such as archways and water-carved “elephants”) that the critics want to focus on, not realizing (or not fully realizing) that there are other actual patterns that are more readily quantified as having an objective measure of CSI.

  38. BarryA wrote: “My point is rather modest. For any given object that exhibits CSI, it is not USUALLY possible to quantify the bits of information exactly. Calls for the exact quantification of CSI are in my experience disingenuous distractions.”

    This does, I think, raise three important questions:

    1. How can we distinguish between cases where CSI can and cannot be calculated exactly?
    2. When CSI can’t be calculated exactly, what (exactly) can be calculated? And how?
    3. In the cases where CSI cannot be calculated, how does one infer design?

  39. 1. How can we distinguish between cases where CSI can and cannot be calculated exactly?

    This is a very good question, and I’ve thought a lot about it after reading the latest works by Dembski on his designinference site.
    Surely putting CSI in strict relation with the Chaitin-Kolmogorov theory of algorithmic complexity has been a real advancement towards CSI’s quantification.
    However, IMHO the real problem lies in the fact that an exact computation (not a maximization) of CSI strictly requires knowledge of ALL the possible semiotic agents and their knowledge. This is in my opinion the crucial point. For example, let us consider a compressed file such as an MP3 sound file. We cannot really recognize its artificial nature without having the “key” to interpret it, i.e. the decompression algorithm, which in turn represents the knowledge originally provided by the authors of the algorithm itself.

    In other words, I think that to compute CSI, or at least provide a floor for it, it is necessary to know some of these “keys”; but in any case this computation is not definitive, because we don’t know whether some simpler “key” might exist.

  40. Footnote: The 2 LOT thread of 3rd March 2008 contains an excerpt of a quantitative metric of CSI [From Research ID], at comment-post 53, here.

  41. BarryA: “As with my example of the space shuttle and the bicycle, we can frequently say ‘more CSI’ or ‘less CSI,’ but we usually cannot quantify precisely what we mean by ‘more’ and ‘less.’”

    So, I ask again, how is this different from a subjective assessment? I had thought CSI was a computational system developed by Dr. Dembski. I will be a bit discouraged if it has only advanced to the “yes”, “no”, “more”, “less” level of formalization.

  42. #39 KF

    The 2 LOT thread of 3rd March 2008, contains an excerpt of a quantitative metric of CSI [From Research ID], at comment-post 53, here.

    I will have a look at the definition of FSCI. At first glance it seems to be on the right track to provide a more stable metric for specified complexity.

  43. SC:

    In re:

    I ask again, how is this different than a subjective assessment? I had thought CSI was a computational system developed by Dr. Dembski. I will be a bit discouraged if it has only advanced it to the “yes”, “no”, “more”, “less” level of formalization.

    ALL measurements are digitisable. So, in principle [and in praxis too . . .] ALL measurements are a chain of yes/no, more/less decisions.

    Equally — and as pointed out above — ALL measurements incorporate a subjective element. Indeed, ALL knowledge inevitably incorporates a subjective element. Further to this, every quantity is also about a quality: how much of X is, in the end, partly about recognising the presence/absence of X. Moreover, once we address information, as opposed to mere concatenations of elements forming a contingent whole, we are dealing with issues of intent, purpose, context, etc. — i.e. the active mind, thus again the subjective.

    Objectivity is about whether there is credibly more than the merely subjective, and CSI — especially FSCI — far and away passes that test.

    In short you may be falling into dismissive, selective hyperskepticism; which is inevitably incoherent.

    GEM of TKI

  45. PS: On more/less CSI, a key point is that there is more/less COMPLEXITY in addressing a bicycle vs a 787, and complexity can be measured by K-compressibility of descriptions, effectively the number of bits needed for the most sparse but adequate specification.

    But, once we see an object that is highly contingent, it is not determined by mechanical necessity, as such would produce not contingency but natural regularity. Only chance or intelligence have been observed as sources of such high contingency.

    When the resulting configuration is complex beyond the Dembski type bound [i.e. we have effectively more than 500 - 1,000 bits of information storage capacity] AND it is especially functionally specified, exhibiting complex organisation, it is credibly so isolated in the config space that chance or similar processes would be overwhelmingly likely to fruitlessly exhaust the probabilistic resources of the observed cosmos without arriving at the shores of any of the islands of function in the config space.

    But, we know that intelligent agents, using insight, thus active information [which is measurable], are able to routinely exceed the performance of chance or the like. So, when we see FSCI, we reliably infer to intelligence. And, this reliability is amply supported by direct observation of a great many cases where we do directly know the causal story.

    (In short, per basic scientific methods, we are entitled to shift the burden of proof to those who object to the use of FSCI as a criterion for objectively detecting agency. In fact, with much lower confidence levels, similar explanatory filters are routine in statistics and experimental science. We are looking at selective hyperskepticism again.)

  45. Interesting handle, Soplo Caseosa,
    Would you mind translating it for my curiosity?
    Thanks.

  46. kairosfocus,
    Thanks for your lucid explanation. Your clear, concise manner has cleared up a few questions I had about CSI. Yet I still have one more nagging question, which may or may not be pertinent to this topic, arising from the following excerpt.

    It From Bit Excerpt:

    But Zeilinger and Brukner noticed that it (Shannon Information) doesn’t take into account the order in which different choices or measurements are made.

    This is fine for a classical hand of cards. But in quantum mechanics, information is created in each measurement–and the amount depends on what is measured when–so the order in which different choices or measurements are made does matter, and Shannon’s formula doesn’t hold. Zeilinger and Brukner have devised an alternative measure that they call total information, which includes the effects of measurement. For an entangled pair, the total information content in the system always comes to two bits.

    So my question is, “When will Zeilinger’s definition of total information come into play when quantifying CSI as opposed to how information is “normally” defined?”

  47. BA 77:

    There are many metrics of information, and some of them have different uses.

    In situations where sequence of choice is important, the metric you discuss may be important. [There are such things as sequential, memory-embedding systems, and combinational, sequence-independent ones. Feedbacks with lags are one way to get such effects, and systems where state changes and internal state affect the response to the next input will be sequential -- check up on finite state machine algebra. Oddly, a combination lock is sequential, and an ordinary key-lock is combinational in this sense!]

    GEM of TKI

  48. As pointed out, every system, whether in the world of biology, engineering, or business, can be modeled or simulated. Hugely complex simulation models are designed and developed in all sorts of fields.

    So what are we waiting for? Why not establish a pilot program by identifying a small number of biological functions, organs, and/or organisms. Then we design and develop the most efficient models possible, and we have a quantity in terms of bits.

    Of course critics will claim that what is most efficient is subjective. Excellent, they and everyone else are welcome to design and develop their own simulation models. Why not offer awards for the most efficient? For example, trophies of Charles Darwin with his famous hat and beard, along with a totally puzzled and confused expression on his face.

    Now of course the simulation models will utilize processes found in nature, e.g., random number generators. Great, these “calls” will be subtracted out in order to arrive at a truer CSI measure.

    Critics want predictions, do they? Fantastic, once several simulation models are built we will become very good at predicting CSI measures for additional target processes.

    The simulation models will provide an additional benefit. Each point or “node” in the model can be analyzed as to the probability that it was derived by natural means. It will be loads of fun to then multiply the probabilities together; the numbers will be astronomical beyond all plausibility. What a hoot it will be!! We will then establish a lottery with the same odds, and publicly challenge our Materialist friends to play the lottery with their own personal funds. Maybe we can embarrass and impoverish them all in one sweet and grand gesture!!!

    Oh, of course there is one wrinkle in this entire proposal. And that is that we have no idea how some of the greatest functions in biology work. The human mind, for example. Hmmm. Well, we can say one thing for sure: the CSI elevator ain’t anywhere near the top floor, if there is a top floor.

    Going up????

  49. KF: “Equally — and as pointed out above — ALL measurements incorporate a subjective element.”

    Perhaps so, but should that be a reason to not undertake the effort, as Barry seems to be suggesting?

    “In short you may be falling into dismissive, selective hyperskepticism; which is inevitably incoherent.”

    I am just asking a question so I can understand better. You shouldn’t be so dismissive of me just because I don’t know everything there is to know about ID.

  50. To the evolution supporter:

    How do you quantify fitness with an exact measurement and test it against reality as it relates to survival?

    How is co-option quantified with exact measurements that can lead to predictability?

    Who has quantified relatedness and its determination?

    When someone sees two fossils in the ground, what quantitative analysis is done to show change over time? How does this quantitative analysis get tested against reality to show change? Has this been applied to body plans, tissue types, organs, cell types, and the machinery within the cell? Does this lead to predictions that can be determined to happen in the future?

    Are there exact measurements of quantity associated with change over time?

    Does a forensic detective need to know mathematical models and statistical analysis to detect intelligent agency at a crime scene?

  51. Re RRE:

    You’re joking, right?

    There are scads of equations dealing with measurements of fitness (not exact, mind you, but useful models). Probably the most famous of these would be the Hardy-Weinberg equations.
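    For readers following along, Hardy-Weinberg in its simplest form says that with allele frequencies p and q = 1 - p, the equilibrium genotype frequencies are

        p^2 + 2pq + q^2 = 1

    so with p = 0.6 and q = 0.4 the model predicts 36% AA, 48% Aa, and 16% aa, which can be checked directly against genotype counts in a sample.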

    When comparing two fossils, morphological characteristics are compared quantitatively. I.e., is the brain case sufficiently different in volume to a point where we might suspect this is a separate species? Does this correlate with other changes in morphology (e.g., femur size, pelvic tilt, whatever) to bolster this hypothesis?

    And if you want really quantitative stuff for change over time, then look at mutations. Synonymous, nonsynonymous, indels, etc.

    Nothing in biology is a clear line – this individual has this quantitative fitness. Or this line separates species X from species Y. But most concepts in biology can be applied to modeling. Fitness, mutation rate, morphological change – all those things are modeled daily.

  52. zylph says: “And if you want really quantitative stuff for change over time, then look at mutations. Synonymous, nonsynonymous, indels, etc.”

    Excellent, I am really excited. Our wishes and desires have taken wings!! Could you please provide us links to the published models for how the human eye evolved? And the human mind, with its varied capabilities? And blood clotting? And the built-in GPS units found in various birds and fish? And the echolocation found in bats? And why humans love lots of chili peppers and jalapenos, not to mention chocolate?

    Wow, I must have slept through the biggest scientific breakthroughs since the theories of relativity and the discovery of DNA. Did anyone else miss it as well, or am I alone in this?

  53. kairosfocus (#44): “When the resulting configuration is complex beyond the Dembski type bound … AND it is especially functionally specified, exhibiting complex organisation, it is credibly so isolated in the config space that chance or similar processes would be overwhelmingly likely to fruitlessly exhaust the probabilistic resources of the observed cosmos without arriving at the shores of any of the islands of function in the config space.”

    Well stated, and I agree, but this of course assumes an isolation of these islands that is denied by the Darwinists, who always claim there actually are countless “islets” of function in a constantly changing configuration space that allows a long series of relatively short jumps to reach the highly functionally specified organization containing total complexity beyond the Dembski bound. In other words, supposedly there is always a chain of islets where each one slightly increases its CSI, ending up with the final CSI beyond the Dembski bound.

    So it comes back to demonstrating that this profusion of “islets” of function doesn’t really exist. This is really an alternate statement of Behe’s irreducible complexity argument.

  54. zylph (51),

    Show me where you can quantitatively determine when a novel trait has been produced. Unfortunately, the equations you have told me about, the Hardy-Weinberg equations, only deal with predictions about alleles, not new traits. That one is not hardy enough.

    An excerpt by Dennis O’Neil, a professor in the Behavioral Sciences Department at Palomar College, San Marcos, California, admits there’s a problem:

    Despite the fact that evolution is a common occurrence in natural populations, allele frequencies will remain unaltered indefinitely unless evolutionary mechanisms such as mutation and natural selection cause them to change.
    http://anthro.palomar.edu/synthetic/synth_2.htm

    In other words, evolution sure is happening, but not here.

    Morphology deals with examining similar physical traits and features. I wish to see this branch of science do a quantitative analysis showing how traits change over time to produce new traits, and then verify those results by testing them against reality. So far, no go, though.

    See if you can answer this for me:
    How do you quantitatively determine a new trait has come into existence by changing over time? In other words: When does a new trait get determined after X amount of time has passed? How do you quantify a characteristic?

    The answer is not under any of the coconut shells on the table of science.

    Where you said:

    But most concepts in biology can be applied to modeling. Fitness, mutation rate, morphological change – all those things are modeled daily.

    Modeling is great. You can make your model do whatever you want on the computer. It’s funny how you have to invoke an intelligent cause in order to make artificial organisms on computers come into existence. A mind and machine have to physically place the algorithm in the form of code (machine code) onto another machine (computer) to make a simulated organism, proving that an intelligent cause must be present to produce the effect. Evolutionary modeling is pretty much proving intelligent design. An intelligent cause is followed by action (programming) in order to actualize (Intelligent Design) an artificial organism at the beginning stages of life’s supposed history, or whatever (because you can program it to do whatever).

    The main point of the whole article was to show that measurements do not have to be represented as exact quantifiable amounts. What I was trying to reinforce is that terms like “fitness” and “adaptation”, which are arbitrary and not absolutely quantifiable in terms of measuring change, are used by evolutionists to determine and explain the existence of novel traits. This means that CSI should not be accepted as valid only on the condition that it can determine the exact quantity of information or complexity in the structure in question.

    As BarryA said about CSI, “it is present or it is not present”. This means CSI can be represented more as a boolean expression based on specific criteria (the steps in the EF).
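    Read that way, the filter is a short cascade of yes/no tests; a minimal sketch (the three predicates are placeholders for the actual analyses, not anyone’s published code):

        def explanatory_filter(event, by_law, is_complex, is_specified) -> str:
            """Dembski-style explanatory filter as a boolean cascade.
            The predicate arguments stand in for the real analyses."""
            if by_law(event):          # high probability under necessity?
                return "law"
            if not is_complex(event):  # within easy reach of chance?
                return "chance"
            if is_specified(event):    # matches an independent pattern?
                return "design"
            return "chance"

        # Toy usage with stub predicates:
        print(explanatory_filter("some pattern",
                                 by_law=lambda e: False,
                                 is_complex=lambda e: True,
                                 is_specified=lambda e: True))  # design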
