Uncommon Descent Serving The Intelligent Design Community

This Site Gives me 150 Utils of Utility; Panda’s Thumb Gives me Only 3


Any effort to give precise gradations of quantification to CSI is doomed to failure.  It reminds me of certain economists’ effort to quantify “utility” through a measurement called a “util.”  See here.

The more I think about it, the more I am convinced that the concepts are very much the same.  We can all agree that the concept of “utility maximization” is very important and represents a real phenomenon.  But while we can say of utility there is a lot, there is a little, or there is none at all, there is no way to measure it precisely.  The “util” is useful as a hypothetical measure of relative utility, but it has no value as an “actual” unit of measurement, such as inches, pounds, meters, or grams.

Similarly, of CSI we can say it is present or it is not present.  That is what the explanatory filter does.  In some cases we can estimate relative CSI if we are able to calculate the bits of information present in the two instances.  But not usually.  Consider a space shuttle and a bicycle.  Both obviously show CSI and a design inference is inescapable with respect to each.  It is also obvious that the space shuttle contains vastly more CSI than the bicycle.  But if one asks me “how much more CSI is there in a space shuttle than in a bicycle?” the only satisfactory answer it seems to me is “a lot more.”  I could posit a measure of CSI – call it an “info” – and say the space shuttle contains 100 infos of CSI and the bicycle contains only 10 infos.  But this is certainly a meaningless game.  Actually, it is more than meaningless.  It is affirmatively harmful, because the game gives an illusion of precise measurement where there can be none.
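For the cases where the bits of information *can* be calculated, the standard move is to take -log2 of the probability of the configuration. A minimal sketch (the 64-symbol alphabet and 100-character string are illustrative assumptions, not a measurement of any real artifact):

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content of an event of probability p, in bits: -log2(p)."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# One specific 100-character string drawn uniformly from a 64-symbol alphabet:
p = (1.0 / 64.0) ** 100
print(surprisal_bits(p))  # 600.0 -- each symbol contributes log2(64) = 6 bits
```

Note that this yields a number only when a probability model for the object is in hand, which is precisely what is usually missing for a space shuttle or a bicycle.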

Why am I going on about this?  Because many materialists commenting on this site frequently say, essentially, if one cannot quantify CSI then it is a meaningless concept.  This is false.  “Utility” cannot be quantified, but surely no one would suggest it does not exist or that it is not a useful concept in the field of economics.  Similarly, simply because CSI cannot always be precisely quantified is no reason to suggest that it does not exist or that it is not a useful concept in the study of objects to determine whether design is the most plausible explanation for their features.

Comments
Zylphs (51), show me where you can quantitatively determine when a novel trait has been produced. Unfortunately, the equations you have told me about, the Hardy-Weinberg equations, only deal with predictions about alleles, not new traits. That one is not hardy enough. An excerpt by Daniel O'Neil, a professor in the Behavioral Sciences Department at Palomar College, San Marcos, California, admits there is a problem: "Despite the fact that evolution is a common occurrence in natural populations, allele frequencies will remain unaltered indefinitely unless evolutionary mechanisms such as mutation and natural selection cause them to change." http://anthro.palomar.edu/synthetic/synth_2.htm In other words, evolution sure is happening, but not here.

Morphology deals with examining similar physical traits and features. I wish to see this branch of science do a quantitative analysis showing how traits change over time to produce new traits, and then verify those results by testing them against reality. So far, no go. See if you can answer this for me: how do you quantitatively determine that a new trait has come into existence by change over time? In other words, when is a new trait determined to exist after X amount of time has passed? How do you quantify a characteristic? The answer is not under any of the coconut shells on the table of science.

Where you said, "But most concepts in biology can be applied to modeling. Fitness, mutation rate, morphological change - all those things are modeled daily": modeling is great. You can make your model do whatever you want on the computer. It's funny how you have to invoke an intelligent cause in order to make artificial organisms on computers come into existence. A mind and a machine have to physically place the algorithm, in the form of machine code, onto another machine (a computer) to make a simulated organism, proving that an intelligent cause must be present to produce the effect.

Evolutionary modeling is pretty much proving intelligent design. An intelligent cause is followed by action (programming) in order to actualize (intelligently design) an artificial organism at the beginning stages of life's supposed history, or whatever (because you can program it to do whatever). The main point of the whole article was to show that measurements do not have to be represented as exact quantifiable amounts. What I was trying to reinforce is that terms like "fitness" and "adaptation" are used by evolutionists to determine and explain the existence of novel traits, yet these are arbitrary terms and are not absolutely quantifiable in terms of measuring change. This means that CSI should not have to be judged true if and only if the exact quantity of information or complexity in the structure in question can be measured. As BarryA said of CSI, "it is present or it is not present." This means CSI can be represented more as a boolean expression based on specific criteria (the steps in the EF).
RRE
May 23, 2008 at 06:41 PM PDT
kairosfocus (#44): "When the resulting configuration is complex beyond the Dembski type bound ... AND it is especially functionally specified, exhibiting complex organisation, it is credibly so isolated in the config space that chance or similar processes would be overwhelmingly likely to fruitlessly exhaust the probabilistic resources of the observed cosmos without arriving at the shores of any of the islands of function in the config space." Well stated, and I agree, but this of course assumes an isolation of these islands that is denied by the Darwinists, who always claim there actually are countless "islets" of function in a constantly changing configuration space, allowing a long series of relatively short jumps to reach the highly functionally specified organization whose total complexity is beyond the Dembski bound. In other words, supposedly there is always a chain of islets, each one slightly increasing its CSI, ending up with final CSI beyond the Dembski bound. So it comes back to demonstrating that this profusion of "islets" of function doesn't really exist. This is really an alternate statement of Behe's irreducible complexity argument.
magnan
May 23, 2008 at 01:00 PM PDT
zylph says: "And if you want really quantitative stuff for change over time, then look at mutations. Synonymous, nonsynonymous, indels, etc." Excellent, I am really excited. Our wishes and desires have taken wings!! Could you please provide us links to the published models for how the human eye evolved? And the human mind, with its varied capabilities? And blood clotting? And the built-in GPS units found in various birds and fish? And the echolocation found in bats? And why humans love lots of chili peppers and jalapenos, not to mention chocolate? Wow, I must have slept through the biggest scientific breakthroughs since the theories of relativity and the discovery of DNA. Did anyone else miss it as well, or am I alone in this?
Ekstasis
May 23, 2008 at 06:27 AM PDT
Re RRE: You're joking, right? There are scads of equations dealing with measurements of fitness (not exact, mind you, but useful models). Probably the most famous of these would be the Hardy-Weinberg equations. When comparing two fossils, morphological characteristics are compared quantitatively. I.e., is the brain case sufficiently different in volume that we might suspect this is a separate species? Does this correlate with other changes in morphology (e.g., femur size, pelvic tilt, whatever) to bolster the hypothesis? And if you want really quantitative stuff for change over time, then look at mutations. Synonymous, nonsynonymous, indels, etc. Nothing in biology is a clear line - this individual has this quantitative fitness, or this line separates species X from species Y. But most concepts in biology can be applied to modeling. Fitness, mutation rate, morphological change - all those things are modeled daily.
zylphs
May 22, 2008 at 04:34 PM PDT
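For concreteness, the Hardy-Weinberg relation debated in this thread (p² + 2pq + q² = 1) takes only a few lines to compute; a minimal sketch, with an arbitrary example allele frequency of 0.7 (note it predicts genotype frequencies at equilibrium, which is exactly why the thread disputes whether it says anything about *new* traits):

```python
def hardy_weinberg(p: float) -> tuple[float, float, float]:
    """Expected genotype frequencies (AA, Aa, aa) at equilibrium,
    given frequency p of allele A (and q = 1 - p of allele a)."""
    q = 1.0 - p
    return (p * p, 2.0 * p * q, q * q)

hom_dom, het, hom_rec = hardy_weinberg(0.7)
print(round(hom_dom, 2), round(het, 2), round(hom_rec, 2))  # 0.49 0.42 0.09
```

The three frequencies always sum to 1, and a measured departure from these expected proportions is the usual signal that some evolutionary mechanism is acting on the population.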
To the evolution supporter: How do you quantify fitness with an exact measurement and test it against reality as it relates to survival? How is co-option quantified with exact measurements that can lead to predictability? Who has quantified relatedness, and how is it determined? When someone sees two fossils in the ground, what quantitative analysis is done to show change over time? How does this quantitative analysis get tested against reality to show change? Has it been applied to body plans, tissue types, organs, cell types, and the machinery within the cell? Does it lead to predictions that can be determined to happen in the future? Are there exact measurements of quantity associated with change over time? Does a forensic detective need to know mathematical models and statistical analysis to detect intelligent agency at a crime scene?
RRE
May 22, 2008 at 02:07 PM PDT
KF
Equally — and as pointed out above — ALL measurements incorporate a subjective element.
Perhaps so, but should that be a reason to not undertake the effort, as Barry seems to be suggesting?
In short you may be falling into dismissive, selective hyperskepticism; which is inevitably incoherent.
I am just asking a question so I can understand better. You shouldn't be so dismissive of me just because I don't know everything there is to know about ID.
soplo caseosa
May 22, 2008 at 11:20 AM PDT
As pointed out, every system, whether in the world of biology, engineering, or business, can be modeled or simulated. Hugely complex simulation models are designed and developed in all sorts of fields. So what are we waiting for? Why not establish a pilot program by identifying a small number of biological functions, organs, and/or organisms? Then we design and develop the most efficient models possible, and we have a quantity in terms of bits. Of course critics will claim that what is most efficient is subjective. Excellent, they and everyone else are welcome to design and develop their own simulation models. Why not offer awards for the most efficient? For example, trophies of Charles Darwin with his famous hat and beard, along with a totally puzzled and confused expression on his face. Now of course the simulation models will utilize processes found in nature, e.g., random number generators. Great, these "calls" will be subtracted out in order to arrive at a truer CSI measure. Critics want predictions, do they? Fantastic, once several simulation models are built we will become very good at predicting CSI measures for additional target processes. The simulation models will provide an additional benefit. Each point or "node" in the model can be analyzed as to the probability that it was derived by natural means. It will be loads of fun to then multiply the probabilities together; the numbers will be astronomical beyond all plausibility. What a hoot it will be!! We will then establish a lottery with the same odds, and publicly challenge our Materialist friends to play the lottery with their own personal funds. Maybe we can embarrass and impoverish them all in one sweet and grand gesture!!! Oh, of course there is one wrinkle in this entire proposal. And that is that we have no idea how some of the greatest functions in biology work. The human mind, for example. Hmmm.

Well, we can say one thing for sure: the CSI elevator ain't anywhere near the top floor, if there is a top floor. Going up????
Ekstasis
May 22, 2008 at 07:46 AM PDT
BA 77: There are many metrics of information, and some of them have different uses. In situations where sequence of choice is important, the metric you discuss may be important. [There are such things as sequential, memory-embedding systems, and combinational, sequence-independent ones. Feedbacks with lags are one way to get such effects, and systems where state changes and internal state affects the response to the next input will be sequential -- check up finite state machine algebra. Oddly, a combination lock is sequential, and an ordinary key-lock is combinational in this sense!] GEM of TKI
kairosfocus
May 22, 2008 at 07:01 AM PDT
kairosfocus, Thanks for your lucid explanation. Your clear, concise manner has cleared up a few questions I had about CSI. Yet I still have one more nagging question that may or may not be pertinent to this topic, arising from the following "It from Bit" excerpt: "But Zeilinger and Brukner noticed that it (Shannon information) doesn't take into account the order in which different choices or measurements are made. This is fine for a classical hand of cards. But in quantum mechanics, information is created in each measurement--and the amount depends on what is measured when--so the order in which different choices or measurements are made does matter, and Shannon's formula doesn't hold. Zeilinger and Brukner have devised an alternative measure that they call total information, which includes the effects of measurement. For an entangled pair, the total information content in the system always comes to two bits." So my question is, "When will Zeilinger's definition of total information come into play when quantifying CSI, as opposed to how information is 'normally' defined?"
bornagain77
May 22, 2008 at 06:11 AM PDT
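For reference, the classical Shannon measure that the excerpt above says breaks down in the quantum case is straightforward to compute; a minimal sketch (the two distributions are standard textbook examples, not anything from Zeilinger's papers):

```python
import math

def shannon_entropy(probs: list[float]) -> float:
    """Shannon entropy H = -sum(p * log2 p), in bits; zero-probability terms are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 -- a fair coin carries one bit
print(shannon_entropy([0.25] * 4))  # 2.0 -- four equally likely outcomes carry two bits
```

The formula depends only on the probability distribution, not on the order of measurements, which is exactly the property Zeilinger and Brukner argue fails for quantum systems.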
Interesting handle, Soplo Caseosa. Would you mind translating it, for my curiosity? Thanks.
Charlie
May 22, 2008 at 04:43 AM PDT
PS: On more/less CSI, a key point is that there is more/less COMPLEXITY in addressing a bicycle vs a 787, and complexity can be measured by the K-compressibility of descriptions, effectively the number of bits in the most sparse but adequate specification. But, once we see an object that is highly contingent, it is not determined by mechanical necessity, as such necessity would produce not contingency but natural regularity. Only chance or intelligence have been observed as sources of such high contingency. When the resulting configuration is complex beyond the Dembski-type bound [i.e. we have effectively more than 500 - 1,000 bits of information storage capacity] AND it is especially functionally specified, exhibiting complex organisation, it is credibly so isolated in the config space that chance or similar processes would be overwhelmingly likely to fruitlessly exhaust the probabilistic resources of the observed cosmos without arriving at the shores of any of the islands of function in the config space. But, we know that intelligent agents, using insight, thus active information [which is measurable], are able to routinely exceed the performance of chance or the like. So, when we see FSCI, we reliably infer to intelligence. And this reliability is amply supported by direct observation of a great many cases where we do directly know the causal story. (In short, per basic scientific methods, we are entitled to shift the burden of proof to those who object to the use of FSCI as a criterion for objectively detecting agency. And in fact, with much lower confidence levels, similar explanatory filters are routine in statistics and experimental science. In short, we are looking at selective hyperskepticism again.)
kairosfocus
May 22, 2008 at 04:06 AM PDT
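The K-compressibility idea invoked in this thread can be illustrated with an off-the-shelf compressor standing in for the (uncomputable) Kolmogorov measure — any real compressor only gives an upper bound, and the byte strings here are arbitrary examples, not models of any biological object:

```python
import os
import zlib

def compressed_bits(data: bytes) -> int:
    """Bits in the zlib-compressed form: a crude upper-bound proxy
    for the descriptive complexity of the data."""
    return 8 * len(zlib.compress(data, 9))

regular = b"AB" * 500          # 1000 bytes of simple repetition: a sparse description exists
random_ish = os.urandom(1000)  # 1000 bytes of noise: essentially incompressible

print(compressed_bits(regular) < compressed_bits(random_ish))  # True
```

The repetitive string compresses to a tiny fraction of its raw size while the random bytes do not, which is the "sparse but adequate specification" contrast in miniature.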
SC: In re:
I ask again, how is this different than a subjective assessment? I had thought CSI was a computational system developed by Dr. Dembski. I will be a bit discouraged if it has only advanced it to the “yes”, “no”, “more”, “less” level of formalization.
ALL measurements are digitisable. So, in principle [and in praxis too . . .] ALL measurements are a chain of yes/no, more/less decisions. Equally -- and as pointed out above -- ALL measurements incorporate a subjective element. Indeed, ALL knowledge inevitably incorporates a subjective element. Further to this, every quantity is also about a quality: how much of X is, in the end, partly about recognising the presence/absence of X. Moreover, once we address information, as opposed to mere concatenations of elements forming a contingent whole, we are dealing with issues of intent, purpose, context, etc. -- i.e. the active mind, thus again the subjective. Objectivity is about whether there is credibly more than the merely subjective, and CSI -- especially FSCI -- far and away passes that test. In short, you may be falling into dismissive, selective hyperskepticism; which is inevitably incoherent. GEM of TKI
kairosfocus
May 22, 2008 at 03:46 AM PDT
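The claim above that every measurement reduces to a chain of yes/no, more/less decisions is exactly how analog-to-digital conversion works; a minimal bisection sketch (the value and range are arbitrary examples):

```python
def digitize(value: float, lo: float, hi: float, n_bits: int) -> list[int]:
    """Encode a continuous value in [lo, hi) as n_bits successive
    more/less decisions: each bit answers 'is it in the upper half?'"""
    bits = []
    for _ in range(n_bits):
        mid = (lo + hi) / 2.0
        if value >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

print(digitize(6.5, 0.0, 10.0, 4))  # [1, 0, 1, 0]
```

Each additional bit halves the remaining interval, so n bits pin the value down to within (hi - lo) / 2**n.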
#39 KF The 2 LOT thread of 3rd March 2008 contains an excerpt of a quantitative metric of CSI [from Research ID], at comment-post 53, here. I will have a look at the definition of FSCI. At first sight it seems to be on the right track to provide a more stable metric for specified complexity.
kairos
May 22, 2008 at 02:31 AM PDT
As with my example of the space shuttle and the bicycle, we can frequently say “more CSI” or “less CSI,” but we usually cannot quantify precisely what we mean by “more” and “less.”
So, I ask again, how is this different from a subjective assessment? I had thought CSI was a computational system developed by Dr. Dembski. I will be a bit discouraged if it has only advanced to the "yes", "no", "more", "less" level of formalization.
soplo caseosa
May 22, 2008 at 02:09 AM PDT
Footnote: The 2 LOT thread of 3rd March 2008 contains an excerpt of a quantitative metric of CSI [from Research ID], at comment-post 53, here.
kairosfocus
May 22, 2008 at 01:33 AM PDT
1. How can we distinguish between cases where CSI can and cannot be calculated exactly? This is a very good question, and I've thought a lot about it after reading the latest works by Dembski on his designinference site. Surely putting CSI in strict relation to the Chaitin-Kolmogorov theory of algorithmic complexity has been a real advancement towards CSI's quantification. However, IMHO the real problem lies in the fact that an exact computation (not a maximization) of CSI strictly requires knowledge of ALL the possible semiotic agents and their knowledge. This is in my opinion the crucial point. For example, let us consider a compressed file such as an MP3 sound file. We cannot really recognize its artificial nature without having the "key" to interpret it, i.e. the decompression algorithm, which in turn represents the knowledge originally provided by the authors of the algorithm itself. In other words, I think that to compute CSI, or at least provide a floor for it, it is necessary to know some of these "keys"; but in any case this computation is not definitive, because we don't know whether some simpler "key" might exist.
kairos
May 22, 2008 at 12:48 AM PDT
My point is rather modest. For any given object that exhibits CSI, it is not USUALLY possible to quantify the bits of information exactly. Calls for the exact quantification of CSI are in my experience disingenuous distractions.
This does, I think, raise three important questions: 1. How can we distinguish between cases where CSI can and cannot be calculated exactly? 2. When CSI can't be calculated exactly, what (exactly) can be calculated? And how? 3. In the cases where CSI cannot be calculated, how does one infer design?
Bob O'H
May 21, 2008 at 10:58 PM PDT
BarryA: "As with my example of the space shuttle and the bicycle, we can frequently say “more CSI” or “less CSI,” but we usually cannot quantify precisely what we mean by “more” and “less.”" It may be true that we "usually" can't quantify CSI. However, that's probably just due to our own ignorance and lack of "information" about the information. The ONLY place where I see CSI as unmeasurable *in principle* is within subjective or artistic patterns, for the reasons I have given above. As long as we can calculate the improbability (measured in bits) of the IC core of instructions for building a spacecraft or a bicycle, there is no reason why we cannot calculate its CSI -- as long as we have an approximation of the probabilistic resources and of how constrained the functions are. BarryA, I do understand what you are saying here. There are many examples of objects, such as Mount Rushmore, that exhibit CSI in that they are highly constrained and improbable (complex) organizations which seem to match a pre-specification not defined by the physical properties of the material used (one type of specificity), yet we are hard pressed to actually provide a measure of CSI. In these cases, I agree that some common sense (and the Design Matrix) is to be utilized as the investigator continues to study finer resolutions of the pattern to see if it continues to exhibit CSI characteristics. My only qualm was that you seemed somewhat overly pessimistic, since I don't think these artistic patterns are the "usual" case when discussing CSI patterns. It's just that these are the cases (such as archways and water-carved "elephants") that the critics want to focus on, not realizing (or not fully realizing) that there are other actual patterns that are more readily quantified as having an objective measure of CSI.
CJYman
May 21, 2008 at 07:41 PM PDT
Barry, I just reread your posts and I apologize for my misunderstanding of what you were saying. So, to be on the same page: you are quite correct that in most instances CSI cannot be given an exact number, even though it is known, without a doubt, to be present in a certain system. Where I missed your point is that I thought you were saying an exact CSI quantification is impossible in ALL instances. So again, I apologize for misunderstanding you.
bornagain77
May 21, 2008 at 07:40 PM PDT
Jason 1083: "... whereas CSI is not a quantitative theory and has not correctly predicted anything or explained any previously inexplicable phenomenon." You should do some due diligence and then come back to the discussion when you actually know what you are talking about. I could understand if you didn't understand what CSI was and then asked questions, but to blatantly spread misinformation ... that's a different story. Here, why don't we start with this question for yourself: "How does one calculate the CSI of a line of computer code?" If you actually know, and explain it properly, then you will see that your statement above is negated. CSI is a quantification of some objective quantities measured against each other. Now, if you actually understand the concept behind CSI, you will be able to easily tell me what those quantities are. Second, an understanding of CSI leads into a conservation of information theorem (a 4th law of thermodynamics), where specified or pre-specified targets within an overwhelming search space will not be arrived at better than chance, or any target will not be arrived at better than chance on average, unless there is previously existing information to guide the search. These basic concepts have been discussed by information theorists for a while now. "A learner ... that achieves at least mildly better-than-chance performance, on average, ... is like a perpetual motion machine - conservation of generalization performance precludes it." --Cullen Schaffer on the Law of Conservation of Generalization Performance. Cullen Schaffer, "A conservation law for generalization performance," in Proc. Eleventh International Conference on Machine Learning, W. Cohen and H. Hirsh, Eds. San Francisco: Morgan Kaufmann, 1994, pp. 259-265. Yet EAs perform at better than chance to produce complex and specified targets. How do they do this? Understanding where CSI originates will help to solve that problem.
CJYman
May 21, 2008 at 06:50 PM PDT
"Utility" is a useful concept in economics, but economics is not science. Likewise, CSI might be a useful concept for, say, philosophy, but that doesn't mean it's a useful concept for science. Fallacies employed: straw man, weak analogy, and argument from what appears to be an ignorance of what CSI is.
F2XL
May 21, 2008 at 05:27 PM PDT
Actually BarryA, the engineer would just use his/her trusty Swiss Army knife, which he/she has in his/her pocket at all times ;P
JJS P.Eng.
May 21, 2008 at 03:21 PM PDT
Yes, it's certainly true that economists often make unreasonable assumptions to make problems tractable. But that doesn't engage with the point: utility theory is a quantitative empirical theory which has proven extremely useful in predicting economic behavior across a wide range of settings (I agree that we have a long way to go in predicting the behavior of the macroeconomy, but we're pretty good at calculating things like "How much will the income of the average person rise with an additional year of schooling?"), whereas CSI is not a quantitative theory and has not correctly predicted anything or explained any previously inexplicable phenomenon.
Jason1083
May 21, 2008 at 02:31 PM PDT
An engineer, a physicist and an economist are stranded on an island, and all they have to eat are cans of beans they dragged in from the shipwreck. The problem: how to open the cans. The engineer says, let’s bang the cans with rocks until they open. That’s stupid, says the physicist, it will make the cans’ edges jagged and rock pieces will contaminate the food. Let’s build a fire under the cans and the steam building in the cans will eventually cause the cans to break open. That’s great, if by “break open” you mean “explode and spread the beans all over the ground.” So the engineer and the physicist go ‘round and ‘round, and all the while the economist is sitting back with a smug self-satisfied smirk on his face. Finally, the engineer says, “What are you grinning about? How do YOU propose to solve this problem?” “It is simplicity itself,” says the economist. “We simply assume we have a can opener.”
BarryA
May 21, 2008 at 02:07 PM PDT
O'Leary asks: "Barry, are you saying that ratios might be more achievable than absolute numbers?" No, a ratio is achieved by putting one absolute number in the numerator and another absolute number in the denominator. As with my example of the space shuttle and the bicycle, we can frequently say “more CSI” or “less CSI,” but we usually cannot quantify precisely what we mean by “more” and “less.” My point is rather modest. For any given object that exhibits CSI, it is not USUALLY possible to quantify the bits of information exactly. Calls for the exact quantification of CSI are in my experience disingenuous distractions.
BarryA
May 21, 2008 at 01:56 PM PDT
If lack of quantification is a problem for CSI and ID, I suggest logical inference has the same problem. How many scientific theories are based on inference?
Lurker
May 21, 2008 at 01:56 PM PDT
Barry, The problem is in the size of bite you are trying to take. As O'Leary in 24 and Jerry in 25 somewhat pointed out, the problem of quantifying CSI, to a more concrete level, may very well be solvable if strictly limited in its scope and definition. This (quantifying CSI) is a realistic expectation, especially considering Zeilinger's breakthroughs in quantifying "true information" to the foundation of reality itself.
bornagain77
May 21, 2008 at 01:26 PM PDT
Todd Berkebile - what in the world are you talking about? If economics is not a science, then what makes physics, chemistry and biology sciences? Here is an example provided by Al Roth: "Rather than quibbling about definitions, it may help to consider how laboratory experiments complement other kinds of investigation in economics, as they do in those other sciences. Let me give an example. One strategy for looking at field data (as opposed to laboratory data) is to search out "natural experiments," namely comparable sets of observations that differ in only one critical factor. The benefit of using field data is that we are directly studying markets and behavior we are interested in, but the disadvantage is that in natural markets we can seldom find comparisons that permit sharp tests of economic theory. In a 1990 paper (in the informatively named journal, Science) I studied such a natural experiment, involving the markets for new physicians in different regions of the U.K. in the 1970's. The markets in Edinburgh and Cardiff succeeded while those in Newcastle and Birmingham failed, in ways that can be explained by how these markets were organized. But as will be apparent to readers of the Economist, there are other differences than market organization between Edinburgh and Cardiff on the one hand and Newcastle and Birmingham on the other. So, how are we to know that the difference in market organization, and not those other differences, accounts for the success and failure of the markets? One way to approach this question is with a laboratory experiment. In a paper in the Quarterly Journal of Economics, John Kagel and I report such an experiment, in which we study small, artificial markets that differ only in whether they are organized as in Edinburgh and Cardiff or as in Newcastle and Birmingham. Unlike in those naturally occurring markets, the market organization is the only difference between our laboratory markets.

And our laboratory results reproduce, on a smaller scale and despite far smaller incentives, the results we see in the natural markets. So the experiments show that the differences in market organization by themselves can have the predicted consequences. Does this "prove" to a mathematical certainty that the different market organizations are the cause of the differences observed in the natural markets? Of course not. Does it provide powerful additional evidence in favor of that hypothesis? Of course it does."
Jason1083
May 21, 2008 at 01:18 PM PDT
Here is what I have just said on another thread. There is some CSI that is easy to measure, and some for which maybe one should forget the concept. It is meaningless for a space shuttle or Mt. Rushmore, but is very applicable for a computer program and machine operations, an alphabet and language, and DNA and proteins. Separate the two types of concepts and then have the same discussion.
jerry
May 21, 2008 at 01:15 PM PDT
Barry, are you saying that ratios might be more achievable than absolute numbers? Then with CSI you must determine how much CSI is required to specify one item as opposed to another ... I wouldn't suggest beginning with life forms, as arbitrarily large numbers will result. For example, somebody mentioned the googolplex. Just as I feared! Let's not go there. All numbers are inherently evil, but numbers with names are monsters. Perhaps we might begin with the question of how much information is required to specify a brick? A soap bubble? Also: Theories about CSI are not needed to dismiss the Darwinist superstition. The Darwinist superstition is that natural selection is a creative force. It isn't, and it obviously isn't. Anyone can see this by looking at the difference between animals subjected to natural selection and animals protected by humans and artificially bred. Natural selection produces sameness; breeding (intelligent selection) produces creative differences. So we do not know the source of the huge level of information in naturally occurring life forms, and it is probably too much to begin a project like this with.
O'Leary
May 21, 2008 at 01:09 PM PDT