
Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0


There are two versions of the metric for Bill Dembski's CSI. One version can be traced to his book No Free Lunch, published in 2002. Let us call that "CSI v1.0".

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, in which he includes the version identifier "v1.22", but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and yields different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.

It was very easy to estimate CSI numbers in version 1.0 and then argue later whether the subjective patterns used to deduce CSI were independent and not postdictive. Trying to calculate CSI in v2.0 is cumbersome, and I don't even try anymore. As a matter of practicality, when discussing origin-of-life or biological evolution, ID-sympathetic arguments are framed in terms of improbability, not CSI v2.0. In contrast, calculating CSI v1.0 is a transparent transformation: one goes from improbability to information simply by taking the negative logarithm of the probability.

I = -log2(P)

In that respect, I think MathGrrl (whose real identity he revealed here) has scored a point with respect to questioning the ability to calculate CSI v2.0, especially when it would have been a piece of cake in CSI v1.0.

For example, take 500 coins, and suppose they are all heads. The CSI v1.0 score is 500 bits. The calculation is transparent and easy, and accords with how we calculate improbability. Try doing that with CSI v2.0 and justifying the calculation.
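As a minimal sketch of the v1.0 arithmetic (assuming 500 independent fair coins, so the chance probability of the specific all-heads configuration is 2^-500; the function name is mine, for illustration only):

```python
import math

def csi_v1_bits(p_chance):
    """CSI v1.0 score as used above: I = -log2(P)."""
    return -math.log2(p_chance)

n_coins = 500
p_all_heads = 0.5 ** n_coins      # probability of one specific configuration of 500 fair coins
print(csi_v1_bits(p_all_heads))   # 500.0 bits
```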

Similarly, with pre-specifications (specifications already known to humans, like the Champernowne sequence), if we found 500 coins in sequence that matched a Champernowne sequence, we could argue the CSI score is 500 bits as well. But try doing that calculation in CSI v2.0. For more complex situations, one might get different answers depending on whom you are talking to, because CSI v2.0 depends on the UPB and things like the number of possible primitive subjective concepts in a person's mind.

The motivation for CSI v2.0 was to try to account for the possibility of slapping on a pattern after the fact and calling something "designed". v2.0 was crafted to address the possibility that someone might see a sequence of physical objects (like coins) and argue that the patterns in evidence were designed because he sees some pattern in the coins that is familiar to him but to no one else. The problem is that everyone has different life experiences, and each observer will project his own subjective view of what constitutes a pattern. v2.0 tried to use some mathematics to create a threshold whereby one could infer, even if the recognized pattern was subjective and unique to the observer of a design, that chance would not be a likely explanation for this coincidence.

For example, if we saw a stream of bits which someone claims is generated by coin flips, but the bit stream corresponds to the Champernowne sequence, some will recognize the stream as designed and others will not. How then, given the subjective perceptions that each observer has, can the problem be resolved? There are methods suggested in v2.0 which in and of themselves would not be inherently objectionable, but then v2.0 tries to quantify how likely the subjective perception is to arise out of chance and convolves this calculation with the probability of the objects emerging by chance. Hence we mix the probability of an observer concocting a pattern in his head by chance with the probability that an event or object happens by chance, and after some gyrations out pops a CSI v2.0 score. v1.0 does not involve such heavy calculations regarding the random chance that an observer formulates a pattern in his head, and thus is more tractable. So why the move from v1.0 to v2.0? The v1.0 approach has limitations which v2.0 does not. However, I recommend that when v1.0 is available, use v1.0!

The question of postdiction is an important one, but if I may offer an opinion: many designs in biology don't require the exhaustive rigor attempted in v2.0 to determine whether our design inferences are postdictive (the result of our imagination) or whether the designed artifacts themselves are inherently evidence against a chance hypothesis. This can be done using simpler mathematical arguments.

For example, if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why not? We can make an alternative mathematical argument: if the coins are all heads, the configuration is sufficiently inconsistent with the binomial distribution for randomly tossed coins that we can reject the chance hypothesis. Since the physics of fair coins rules out physics as the cause of the configuration, we can then infer design. There is no need in this case to delve into the question of subjective human specification to make the design inference. CSI v2.0 is not needed to make the design inference, and CSI v1.0, which says we have 500 bits of CSI, is sufficient.
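To put a number on "sufficiently inconsistent with the binomial distribution", here is a minimal sketch (my own illustration, not part of either CSI formalism) of how far 500 heads sits from what fair, random tossing predicts:

```python
import math

n, p = 500, 0.5
mean = n * p                        # expected number of heads: 250
sd = math.sqrt(n * p * (1 - p))     # standard deviation: ~11.2

observed_heads = 500
z = (observed_heads - mean) / sd    # ~22.4 standard deviations above the mean
p_exact = 0.5 ** n                  # chance probability of the all-heads outcome

print(f"z-score: {z:.1f}")
print(f"P(all heads) = {p_exact:.2e} -> {-math.log2(p_exact):.0f} bits")
```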

Where this method (v1.0 plus pure statistics) fails is in recognizing design in a sequence of coin flips that follows something like the Champernowne sequence. Here the likelihood that humans would single out the Champernowne sequence as special in their minds becomes a serious question, and that probability is difficult to calculate. I suppose that is what motivated Jason Rosenhouse to argue that the sort of specifications used by ID proponents aren't useful for biology. But that is not completely true if the specifications used by ID proponents can be formulated without subjectivity (as I did in the example with the coins) 🙂

The downside of the alternative approach (using CSI v1.0 and pure statistics) is that it does not include the use of otherwise legitimate human subjective constructs (like the notion of a motor) in making design arguments. Some, like Michael Shermer or my friend Allen MacNeill, might argue that we are merely projecting our notions of design by saying something looks like a motor or a communication system or a computer, and that the perception of design owes more to our projection than to any inherent design. But the alternative approach I suggest is immune from this objection, even though it is far more limited in scope.

Of course I believe something is designed if it looks like a motor (the flagellum), a telescope (the eye), a microphone (the ear), a speaker (some species of bird can imitate an incredible range of sounds), a sonar system (bat and whale sonar), an electric field sensor (sharks), a magnetic field navigation system (monarch butterflies), etc. The alternative method I suggest will not detect design in these objects quite so easily, since pure statistics are hard-pressed to describe the improbability of such features in biology, even though it is so apparent these features are designed. CSI v2.0 was an ambitious attempt to cover these cases, but it came with substantial computational challenges in arriving at information estimates. I leave it to others to calculate CSI v2.0 for these cases.

Here is an example of using v1.0 in biology, regarding homochirality. Amino acids can be left- or right-handed. Physics and chemistry dictate that left-handed and right-handed amino acids arise mostly (not always) in equal amounts unless there is a specialized process (like living cells) that creates them. Stanley Miller's amino acid soup experiments created mixtures of left- and right-handed amino acids, a mixture we would call racemic, versus the homochiral variety (only left-handed) we find in biology.

Worse for the proponents of mindless origins of life, even homochiral amino acids will racemize spontaneously over time (some half-lives are on the order of hundreds of years), and they will deaminate. Further, when Sidney Fox tried to polymerize homochiral amino acids into protoproteins, they racemized due to the extreme heat and formed many non-chains, and the chains he did create had few if any alpha-peptide bonds. And in the unlikely event the amino acids do polymerize in a soup, they can undergo hydrolysis. These considerations are consistent with the familiar observation that when something is dead, it tends to remain dead and moves farther away from any chance of resuscitation over time.

I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to the homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues. One might try to calculate CSI v2.0 for this case, but, being lazy, I will stick to the CSI v1.0 calculation. Easier is sometimes better.
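Carrying the coin arithmetic over to chirality, a minimal sketch (assuming, for illustration only, that each residue's handedness is an independent 50/50 outcome under the chance hypothesis; the residue count below is a made-up example):

```python
import math

def homochirality_bits(n_residues):
    """CSI v1.0 for a fully homochiral polymer of n_residues stereocenters,
    assuming each residue's handedness is an independent 50/50 chance outcome."""
    return -math.log2(0.5 ** n_residues)

print(homochirality_bits(150))  # 150.0 bits for a hypothetical 150-residue homochiral chain
```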

So how can the alternative approach (CSI v1.0 and pure statistics) detect design in something like the flagellum or the DNA encoding and decoding system? It cannot do so as comprehensively as CSI v2.0, but v1.0 can argue for design in the components. As I argued qualitatively in the article Coordinated Complexity – the key to refuting postdiction and single target objections, one can formulate observer-independent specifications (such as I did with the 500 coins being all heads) by appeal to pure statistics. I gave the example of how the FBI convicted cheaters who used false shuffles even though no formal specifications for design were asserted. The investigators merely had to use common sense (which can be described mathematically as cross-correlation or autocorrelation) to detect the cheating; a sketch of that idea follows the quoted passage below.

Here is what I wrote:

The opponents of ID argue something along the lines: “take a deck of cards, randomly shuffle it, the probability of any given sequence occurring is 1 out of 52 factorial or about 8×10^67 — Improbable things happen all the time, it doesn’t imply intelligent design.”

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if cards dealt from one random shuffle are repeated by another shuffle, would you suspect Intelligent Design? A case involving this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order that they were previously dealt out (no easy shuffling feat!). They would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. Thus when the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters would know what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
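To make the "first shuffle specifies the second" idea concrete, here is a minimal sketch (an illustration added here, not anything from the FBI report) that compares two deal sequences position by position, a crude correlation at zero lag:

```python
import random

def match_count(deal_a, deal_b):
    """Number of positions where two deal sequences show the same card."""
    return sum(a == b for a, b in zip(deal_a, deal_b))

deck = list(range(52))

random.shuffle(deck)
first_deal = deck[:]       # the earlier deal, which becomes the specification

random.shuffle(deck)
honest_redeal = deck[:]    # an honest re-shuffle: expected matches are about 1

print("honest re-shuffle matches:", match_count(first_deal, honest_redeal))
print("rigged re-shuffle matches:", match_count(first_deal, first_deal))  # all 52; chance odds 1/52!
```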

Biology is rich with self-specifying systems like the auto-correlatable sequence of cards in the example above. The simplest example is life's ability to make copies of itself through a process akin to Quine Computing. Physics and chemistry make Quine systems possible, but simultaneously improbable. Computers, as a matter of principle, cannot exist if they have no degrees of freedom which permit high improbability in some of their constituent systems (like computer memory banks).
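As a toy illustration of "Quine Computing" in the narrow sense of a program whose output is its own source text, a minimal Python sketch:

```python
# A minimal quine: running this two-line program prints exactly its own source,
# a toy analogue of a system that carries the description it uses to copy itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```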

We can see that the correlation between a parent organism and its offspring is not the result of chance, and thus we can reject the chance hypothesis for that correlation. One might argue that though the offspring (copy) is not the product of chance, the process of copying is the product of a mindless copy machine. True, but we can then further estimate the probability of randomly implementing the particular Quine computing algorithms that make it possible for life to act like a computerized copy machine. The act of a system making copies is not in and of itself spectacular (salt crystals do that), but the act of making improbable copies via an improbable copying machine? That is what is spectacular.

I further pointed out that biology is rich with systems that can be likened to login/password or lock-and-key systems. That is, the architecture of the system is such that the components are constrained to obey a certain pattern or else the system will fail. In that sense, the targets for individual components can be shown to be specified without having to calculate the chances that the observer is randomly projecting subjective patterns onto the presumably designed object.

[Image: lock and key]

That is to say, even though there are infinitely many ways to make lock-and-key combinations, that does not imply that the emergence of a lock-and-key system is probable! Unfortunately, Darwinists will implicitly say, "there are an infinite number of ways to make life, therefore we can't use probability arguments", but they fail to see the error in their reasoning, as demonstrated with the lock-and-key analogy.

This simplified methodology using v1.0, though not capable of saying "the flagellum is a motor and therefore is designed", is capable of asserting "individual components (like the flagellum assembly instructions) are improbable, hence the flagellum is designed."

But I will admit, invoking the login/password or lock-and-key metaphor is a step outside of pure statistics, and making the design argument in the case of those metaphors more rigorous is a project for future study.

Acknowledgments:
Mathgrrl, though we’re opponents in this debate, he strikes me a decent guy

NOTES:
The fact that life makes copies motivated Nobel Laureate Eugene Wigner to hypothesize a biotonic law in physics. That was ultimately refuted. Life does not copy via a biotonic law but through computation (and the emergence of computation is not attributable to physical law in principle, just as software cannot be explained by hardware alone).

Comments
How to compare infinite sets of natural numbers, so that proper subsets are also strictly smaller than their supersets:
Are there really as many rational numbers as natural numbers? You might answer “Yes” but a better answer would be “It depends on the underlying order relation you use for comparing infinite sets”. In my opinion there really is no reason why we should consider Cantors characterization of cardinality as the only possible one and there is also a total order relation for countable sets where proper subsets are also strictly smaller than their supersets. In this article I want to present you one of them. (bold added)
HT Winston Ewert
Joe, May 23, 2013 at 11:13 AM PDT
To Chance, JWTruthInLove, and Sal: The roller coaster hypothesis: When dealing with certain infinite sets, make all elements = e, and then it's {e,e,e,e,e,e,e,e,e,e,e,e,e,e,e,e,e,...} all the way down, for all similarly classified sets. Just like going over the top and down a steep, never-ending roller coaster drop, in which you reach terminal velocity and just keep going, and going and going.
Joe, May 22, 2013 at 10:57 AM PDT
So much for uniformitarianism...
Joe, May 22, 2013 at 04:41 AM PDT
Joe @111, ahh, I see. Thanks. Yes, crafty. ;) Sal @112, point taken. (That was humorous the way you put it.) I try not to be dogmatic about it, but infinity looks to me like something which, by definition, can never, ever be traversed.
Chance Ratcliff, May 21, 2013 at 02:58 PM PDT
Sal, Again I thank you. My demon doesn't like the word "concoct" used in relation to the word "math". On New Year's Eve 2006 I posted:
With set theory in general anything can be a set. Just put whatever you want in {} and you have a set. Or if you can't find {} just declare what you want to be in a set. Then all subsets are just that set and/ or that set minus any number of items.
So yes, I understand the arbitrary nature of set theory. Thanks again, much to think about...
Joe, May 21, 2013 at 02:56 PM PDT
I doubt such a quantity could be concretely real, at least in this universe.
There are many mathematicians who find it offensive and denigrating that their idealized world could have any counterpart in reality. There was a mathematician by the name of Ito who created Ito's calculus. He was later mortified to find that people had found applications of his calculus in finance.
scordova, May 21, 2013 at 02:52 PM PDT
Chance, The bijection is formed ordinally, set A's first element with set B's first element, regardless of what the actual number is. And yes it appears to be nothing but craftiness.
Joe, May 21, 2013 at 02:49 PM PDT
It looks like the quality of being infinite makes lots of things possible that would be otherwise impossible for mere mortal finitistic systems.
In other words, Black Magic, Fenomenal Black Magic (FBM).
scordova, May 21, 2013 at 02:48 PM PDT
Sal, I wouldn't equate eternality with infinity. ;) But I'd say that infinity is only abstractly useful, such as when dealing with limits. I doubt such a quantity could be concretely real, at least in this universe.
Chance Ratcliff, May 21, 2013 at 02:47 PM PDT
If so, then they can’t have the same cardinality
Why? You're extrapolating finitistic reasoning to situations involving infinity. It looks like the quality of being infinite makes lots of things possible that would be otherwise impossible for mere mortal finitistic systems. :-)
scordova, May 21, 2013 at 02:42 PM PDT
This one-to-one mapping seems to make numbers arbitrary.
The mappings are arbitrary. The construction of what you want to put in a set is arbitrary, so in that sense, a set contains what you arbitrarily choose to put in it. At issue is, if you make certain arbitrary constructs, what will their properties be?

Perhaps disturbing is that it becomes apparent with set theory that the real numbers are not the only conceptual arbitrary entities one can concoct within set theory. Can other mathematical "number" systems be concocted? Yes, and surprisingly, they have utility. Like: http://booster911.hubpages.com/hub/Modulo-2-Arithmetic where 1 + 1 + 1 = 1. So yeah, members of sets can be arbitrary constructions with arbitrary properties. We can construct math systems that behave in the familiar way, and math systems that don't. The modulo-2 polynomials, strange as they are, are vital in Information Technology.

Whether we choose to call some members of a set "numbers" is maybe a matter of convenience; what they really are is based on the rules and properties we project on them via unprovable axioms, such as this set of unprovable axioms for real numbers: http://www-history.mcs.st-and.ac.uk/~john/analysis/Lectures/L5.html These axioms were a codification of the way we sort of expect numbers to behave based on how we experience reality. Some daring mathematicians said, "what if we assume different properties, what happens?" And you get other mathematical systems. Set theory helps to deduce those properties rigorously.

Are those strange math systems useful? Sometimes. One strange, previously taboo, math system was non-Euclidean geometry, which became the basis of much of modern physics. One might ask, is there an inherently true math? A lot of mathematicians might respond with, "what does it matter as long as it's beautiful?" :-)
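A minimal sketch of that modulo-2 behavior (addition becomes XOR, so 1 + 1 = 0 and 1 + 1 + 1 = 1; the function name is just for illustration):

```python
def gf2_add(*bits):
    """Addition in modulo-2 (GF(2)) arithmetic: equivalent to XOR-ing the bits."""
    total = 0
    for b in bits:
        total ^= b
    return total

print(gf2_add(1, 1))     # 0
print(gf2_add(1, 1, 1))  # 1
```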
scordova, May 21, 2013 at 02:37 PM PDT
Footnote to #104, We could define F:A→B as F(a) = a-1, and this would appear to allow for a bijection in the infinite case, but this seems little better than craftiness. Yet if A = {all positive integers} and B = {all nonnegative integers}, then isn't A ⊂ B true even for infinite sets? If so, then they can't have the same cardinality, at least as the definition would apply to discrete cases. I'm not inclined to think of infinity as a quantity, if only because for two functions, f(x) = 10^x and g(x) = log(x), both f and g approach infinity in the limit as x approaches infinity. Graphing these two functions makes it clear how absurd this is if infinity is treated like a quantity!
Chance Ratcliff, May 21, 2013 at 02:33 PM PDT
as to:
Applying finite minds to the problems of the infinite is bound to present difficulties.
Yet:
"The human mind infinitely surpasses any finite machine." Gödel's philosophical challenge (to Turing) - Wilfried Sieg - lecture video http://www.youtube.com/watch?v=je9ksvZ9Av4 "Either mathematics is too big for the human mind or the human mind is more than a machine" ~ Godel
Moreover, Gödel derived incompleteness, at least in part, by studying the infinite. As to 'presenting difficulties', the not-too-subtle hint of the following video is that 'studying the infinite' was 'dangerous knowledge':
BBC-Dangerous Knowledge - Part 1 https://vimeo.com/30482156 Part 2 https://vimeo.com/30641992
bornagain77, May 21, 2013 at 02:18 PM PDT
I'm wondering how we could form a bijection between sets A and B if A = {all positive integers} and B = {all nonnegative integers} when we define a mapping between sets as F:A→B, such that F(a) = a. (Set B would include zero, where set A would not). It seems that such a condition could never be satisfied between these sets. Is there a practical way to resolve this disparity? It seems like a logical contradiction to me. Considering any discrete case of A and B containing numbers less than N, we would always get a set containing zero when taking the complement of the intersection between sets A and B: (A ∩ B)' = {0}, indicating that the cardinality of A and B are different: |A| ≠ |B|. No bijection would exist for a mapping F:A→B where F(a) = a. Why should this not be so for the infinite case?
Chance Ratcliff, May 21, 2013 at 02:14 PM PDT
And thank you Sal and JWTruthInLove. It may go a little slow here because my last calculus class was in 1992- so be gentle....
Joe, May 21, 2013 at 01:50 PM PDT
For argument's sake only: Let's say that my methodology is correct and the set of all non-negative integers has a greater cardinality than the set of all positive integers. How would that affect anything? (other than meaning Sal's above proofs are wrong)
Joe, May 21, 2013 at 01:47 PM PDT
This one-to-one mapping seems to make numbers arbitrary. Also the strange thing about infinity is it makes small % in the finite world really, really close to 0. So a difference of ten numbers in an infinite world would be almost as close to 0% as one can get. Also "my" cardinality deals with the number of elements in a set. What does actual cardinality stand for?
Joe, May 21, 2013 at 01:43 PM PDT
@Joe: It's confusing if you use the same term "cardinality" for the actual cardinality and JOEC. A = {x : x in N} B = {f(x) : x in N}, f(x) = x + 1 Am I correct to assume that JOEC(A) > JOEC(B) is true?
As opposed to looking down infinity and saying “Gee, it goes on forever so they must be the same”
No one is saying that, except for you.
JWTruthInLove, May 21, 2013 at 11:51 AM PDT
Applying finite minds to the problems of the infinite is bound to present difficulties. One could make the argument that it's inappropriate to even try, but human nature is such that we'll try anyway. Here is a powerful example of what happens when we extrapolate out to infinity: you can find yourself concluding that 1 = 0 (see the Grandi series).

Consider the following two sets:

SET 1: integers greater than 0 (a member of this set is designated Y)
SET 2: integers greater than 10 (a member of this set is designated X)

Superficially it would seem SET 1 has 10 more members than SET 2. But then again, what happens when we're dealing with infinity gets strange. A math professor would say, "prove that the two have the same cardinality." If he did so on a homework assignment I'd say something like:

We must show every member of X can be mapped to every member of Y and vice versa. Let every member X of SET 2 map to (X - 10), and every member Y of SET 1 map to (Y + 10); thus we can construct a 1-to-1 mapping, and thus they have the same cardinality. A more terse proof is: Y = F(X) = X - 10, or alternatively X = F(Y) = Y + 10, demonstrating that a function with a 1-to-1 mapping is possible.

Is this proof valid? Good enough to make the grade. I'd probably get docked points for the less terse form of the proof. Questions of its ultimate validity I leave to others, but that is the accepted answer and it seems to work. But what did I say about applying finitistic reasoning to questions of infinity? The claims appear to be not so clear, or at least counterintuitive. The fact that the cardinality of all the reals from 0 to 1 is equal to the cardinality of all the reals from 0 to 2 seems really astonishing. The proof would be:

SET 1: all reals from 0 to 1 (symbolized by Y)
SET 2: all reals from 0 to 2 (symbolized by X)

We must show every member of X can be mapped to every member of Y and vice versa. Let every member X of SET 2 map to (X/2), and every member Y of SET 1 map to (Y * 2); thus we can construct a 1-to-1 mapping, and thus they have the same cardinality. A more terse proof: Y = F(X) = X/2, or alternatively X = F(Y) = Y * 2, demonstrating that a function with a 1-to-1 mapping is possible.
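A minimal sketch of the first pairing above (checking the mapping only on a finite window of each infinite set, which is exactly the finitistic caveat under discussion; the window size is arbitrary):

```python
def f(x):          # SET 2 -> SET 1: each integer greater than 10 mapped down by 10
    return x - 10

def f_inverse(y):  # SET 1 -> SET 2: each integer greater than 0 mapped up by 10
    return y + 10

# Check the 1-to-1 pairing on a finite window of each (infinite) set.
set2_window = range(11, 1011)   # integers greater than 10
set1_window = range(1, 1001)    # integers greater than 0

assert all(f(x) in set1_window for x in set2_window)
assert all(f_inverse(f(x)) == x for x in set2_window)
print("each element of the window pairs off exactly once")
```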
scordova, May 21, 2013 at 11:44 AM PDT
JWTruthInLove (Sal too): the following is what I have come up with wrt infinite sets and cardinality.

The Number Line Hypothesis

With respect to infinite sets (with a fixed starting point), it has been said that the set of all non-negative integers (set A) is the same size, i.e., has the same cardinality, as the set of all positive integers (set B). I have said that set A (the set of all non-negative integers) has a greater cardinality than set B (the set of all positive integers). My argument is that set A consists of and contains all the members of set B AND it has at least one element that set B does not. That is the set comparison method. Members that are the same cancel each other, and the remainders are inspected to see if there is any difference that can be discerned in them.

Numbers are not arbitrarily assigned positions along the number line. With set sizes, i.e., cardinality, the question should be "How many points along the number line does this set occupy?". If the answer is finite, then you just count. If it is infinite, then you take a look at the finite, because what happens in the finite can be extended into the infinite (that's what the ellipses mean, i.e., keep going, following the pattern put in place by the preceding members). With that in mind, that numbers are points along the number line and the finite sets the course for the infinite, with infinite sets you have to consider each set's starting point along the line and the interval of its count. Then you check a chunk (line segment) of each set to see how many points each set occupies (for the same chunk). The chunk should be big enough to make sure you have truly captured the pattern of each set being compared. The set with the most points along the number line segment has the greater cardinality.

For set A = {0.5, 1.5, 2.5, 3.5,…} and set B = {1,2,3,4,…}, set A's cardinality is greater than or equal to set B's. It all depends on where along the number line you look. As opposed to looking down infinity and saying "Gee, it goes on forever so they must be the same", I look back from infinity and say "Hey, look what came before this point", and ask whether we can use that to make any determinations about sets.
Joe, May 21, 2013 at 10:47 AM PDT
JW, Thank you. One never knows when something new and different will become handy. ;) I am still trying to figure out what logical inconsistencies my definition brings about...
Joe, May 20, 2013 at 12:03 PM PDT
Sal, Your input was very welcome and appreciated. On one hand I thought that I knew that all infinite sets were not equal and on the other I had my opponents yelling "infinity is infinity you IDIOT". And now I see the value in saying infinite sets are not equal- thanks to you.
Joe, May 20, 2013 at 12:01 PM PDT
@Joe: I like the definition. It may not be useful in algorithmic theory but someone somewhere might actually find a use for it. Let's call the function which returns "Joe's cardinality" of a set JOEC. :-)
JWTruthInLove, May 20, 2013 at 11:57 AM PDT
The problem is we are finite human beings trying to construct theorems about things that have an infinite number of members. Hofstadter called this "finitistic reasoning". One might argue on principle whether it is legitimate at all to use our finite reasoning skills to make statements about things that are infinite. But it is our nature to try to extrapolate ideas out to infinity, where things are not always so clear. I don't think I can add anything more to questions of set theory. Have at it, gentlemen, and thanks to all who offered comments on my thread, both on and off topic.
scordova, May 20, 2013 at 11:39 AM PDT
Normally Joe's cardinality is the actual number of elements. In the case of infinite sets, we can only do a greater than, less than, or don't know. If one infinite set obviously contains the same elements as another infinite set AND has elements the other does not, then it has a greater cardinality. Nice finishing quote, btw.
Joe, May 20, 2013 at 11:22 AM PDT
As for mapping rationals to irrationals, it is done the ordinal way :) :razz:
Joe, May 20, 2013 at 11:19 AM PDT
And using my cardinality the set of integers that includes 0 is greater than the set of integers that doesn’t.
That's an example of a relation of your cardinality. What's your definition of cardinality (aka "Joe's cardinality")?
And evolutionism entails crap in the guise of science.
"It warps the minds of our children and weakens the resolve of our allies".JWTruthInLove
May 20, 2013
May
05
May
20
20
2013
11:18 AM
11
11
18
AM
PDT
Looks like I'm getting brain Cantor :)
Joe, May 20, 2013 at 11:08 AM PDT
keiths is the guy who baldly asserted that unguided evolution is by far a better explanation for what we observe than ID. And using my cardinality, the set of integers that includes 0 is greater than the set of integers that doesn't. As I said, it has everything the other one has PLUS something else. Can anyone tell me what logical contradiction that dredges up? My opponents fear the worst yet cannot say what that is. And evolutionism entails crap in the guise of science.
Joe, May 20, 2013 at 11:07 AM PDT
Sal, It was his Continuum Hypothesis that people have been debating- my bad.
Joe, May 20, 2013 at 11:01 AM PDT