
Mathgrrl Auditions for Arthur Murray Dance Studio

In my last post I demonstrated that Leslie Orgel coined the phrase “specified complexity.” Then I demonstrated that William Dembski uses the phrase in an identical sense.

This placed Mathgrrl on the horns of a dilemma. She can stick with her assertion that the concept of “specified complexity” is meaningless, but if she does that she has to admit that materialist hero Orgel was employing a meaningless concept.

Or she can admit that Orgel’s concept of “specified complexity” is meaningful, but if she does that she has to admit that ID proponent Dembski’s use of the concept is legitimate.

What is a good materialist to do? Dance, evade and obfuscate of course!

Now Mathgrrl writes: “I have said nothing about whether or not Orgel’s concept is coherent or meaningful.”

OK Mathgrrl. I will put it to you: Was Orgel’s concept of specified complexity coherent or meaningful?

My prediction: More dancing, evasion and obfuscation.


26 Responses to Mathgrrl Auditions for Arthur Murray Dance Studio

  1. Match Game with Gene Rayburn:

    Dancing with the (blank)

  2. MathGrrl pointed out that Orgel used the term specified complexity in a descriptive and qualitative sense. Dembski uses it in a quantitative sense. Those seem to be at odds with one another.

    Also, Wicken said this:

    The information content of any organized system is difficult to quantify. Reductionist reasoning would suggest that the elements of information are ‘coded’ and ontologically self-standing within the system itself. This is not so. The generation and interpretation of information always requires larger contexts in which it can be understood, because they involve more information than the system itself can possibly contain. Strictly construed, these contexts contribute to that information content. The ways in which they so contribute are not quantifiable in the present understanding of information theory. This non-quantifiability applies from biological organizations to machines to linguistic structures. In this paper, I will discuss some of those inherent limitations to informational quantification.

  3. My apologies to Mr. Arrington: an error in my link. In fact, the earlier article is here.

    I retract the claim that the earlier discussion was deleted.

    See how easy that is?

  4. Muramasa,

    Read “No Free Lunch”- Dembski is expanding on previous uses of “specified complexity”- chapters 4 and 6…

  5. Joseph,

    Barry’s posts claim that Orgel and Wicken use “specified complexity” “in an identical sense” as Dembski. To the casual reader, this would suggest that Orgel and Wicken support Dembski’s use of the term. I do not believe this is correct. The cited abstract from Wicken supports MathGrrl’s assertion that Orgel and Wicken see specified complexity as a qualitative measure.

    As an example, I may say that a novel I read was interesting. My neighbor may make the claim that the novel contained “258 units of interesting”. He would be wrong to assert that I endorsed his claim.

  6. William J. Murray:

    Mathgrrl said:

    My conclusion is that, without a rigorous mathematical definition and examples of how to calculate [CSI], the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

  7. Muramasa,

    Dembski took Orgel et al’s concept and expanded on it – modernized it. That much is clear by a reading of NFL.

  8. mathgrrl, so far we have gotten dancing, evasion and obfuscation only from your friends. Are you going to join the ball?

  9. Muramasa at 5,

    You write, “Barry’s posts claim that Orgel and Wicken use ‘specified complexity’ ‘in an identical sense’ as Dembski. To the casual reader, this would suggest that Orgel and Wicken support Dembski’s use of the term.”

    Well, a casual reader can assume anything he wants, but to a careful reader, it does not suggest any such thing.

    Depending on what else is at stake, two parties can use a term to mean, for all practical purposes, the same thing yet – for various reasons – not endorse each other’s use.

    Mutual endorsement has no bearing on whether a third party might examine the two uses and come to the conclusion that the parties in dispute are using the term to mean the same thing. That matter must be addressed separately.

    Most likely, Dembski works with the term quantitatively because he is a mathematician and can only work with quantities, not qualities.

  10. Muramasa is no longer with us.

  11. Slightly off-topic-

    In “The Nature of Nature” Stephen C Meyer has his essay on the origin of DNA. In that essay he has specified information (and CSI) as being a (specified) subset of the Shannon Information superset:

    Within the set of combinatorially possible sequences [Shannon Information], only a very few will convey meaning [specified information]. This smaller set of meaningful sequences, therefore, delimits a domain or pattern within the larger set of the totality of possibilities- page 301

    IOW specified information is Shannon information with meaning/ functionality, just as I have been saying.

    Dance around that MathGrrl…

  12. So what IS the rigorous mathematical definition of CSI? And examples of application? I’m ID friendly as can be but I never saw this defined.

  13. kornbelt888-

    I am not sure that definition exists. Complexity is taken care of by the math in “No Free Lunch”. Specification’s math component is in Dembski’s paper on specification and Shannon did the math for information.

  14. KB:

    What keeps happening is that when the definitions and calculations are proffered, they are submerged under a tide of dismissive rhetoric.

    This is compounded by the explosion in the rate of posting at UD, that makes key posts and threads fade from view very quickly. (I have long argued for the need of in effect a reference base . . . )

    I suggest you first look at the Little Green Men Thread from over the weekend, which brought out much, both in the OP and in the comments.

    Next, the UD WACs, all along, have had important information that was being rhetorically brushed aside, especially 25 – 28, noting the metrics and examples described in the last two. (Maybe this will help you understand why I am a bit less than pleased with MG’s rhetorical performance and assertions here in recent weeks.)

    Observe that in WAC 28, the X-metric is described with a concrete example of a PC screen.

    This simplest, easiest to understand — and in fact commonly used [the file sizes you commonly see are in effect this in action] — metric is the one that allows us to look at the DNA complement of a cell, or at the DNA needed to make a protein, and pretty directly notice the function, the specificity on a code, and the search-space implications of the number of AA’s coded for. A typical 300 AA protein needs 900 bases of DNA, and thus 1,800 functional bits. This is well beyond the 1,000 bit threshold at which the config space goes to 10^301 possibilities, over ten times the square of the 10^150 Planck-time quantum states of the 10^80 or so atoms of our observed cosmos across its thermodynamic lifespan; about 50 million times the time said to have elapsed since the big bang. In short, cosmos-scale resources cannot begin to scratch the surface of the number of possibilities of just 1,000 bits. Islands of function in such a space are so deeply isolated, therefore, that chance plus necessity is not a feasible search mechanism. The only empirically and analytically credible, observed source for objects with FSCI beyond this threshold is intelligence. (For one instance among billions, this post.)
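    The arithmetic in the paragraph above can be checked in a few lines. This is a reader's sketch of the stated figures (300 AA, 3 bases per amino acid, 2 bits per base, the 1,000-bit threshold), not code from the comment's author:

    ```python
    # Reader's sketch verifying the back-of-envelope figures above.
    import math

    aa_count = 300                # typical protein length cited above
    bases = aa_count * 3          # 3 DNA bases code for one amino acid
    bits = bases * 2              # 4 possible bases = 2 bits per base
    print(bits)                   # 1800

    threshold = 1000
    log10_configs = threshold * math.log10(2)
    print(round(log10_configs))   # 301, i.e. 2^1000 ~ 1.07 x 10^301
    ```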

    That is what is being obscured by the materialist talking points.

    The Durston Functional Sequence Complexity metric was published with values for 35 protein families in 2007, and is based on an extension of Shannon’s H-measure of average information per symbol. MG has been busy trying to suggest that Durston’s metric is not related to the Dembski metric, but in fact it very plainly rests on the same principle of scaling islands of function relative to config spaces and identifying informational complexity with difficulty of search on chance plus necessity.

    Here is an excerpt:

    For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space.

    In short this is very much aligned with the key concept of the Dembski metric; the difference is that Durston et al are using the observed protein sequences in families to estimate the information content in functional bits.
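    As a rough sanity check, the figures in the Durston excerpt can be reproduced from its stated assumptions (20 amino acids per residue, about 10^49 functional 121-residue sequences). This is a reader's sketch, not code from Durston's paper; the variable names are illustrative:

    ```python
    # Reader's check: ~10^49 functional sequences out of 20^121 possibilities.
    import math

    log10_functional = 49                  # from the Fit value of 379
    log10_space = 121 * math.log10(20)     # log10 of 20^121
    print(round(log10_space, 1))           # ~157.4

    # Fraction of sequence space, expressed as a percentage (+2 in log10):
    log10_pct = log10_functional - log10_space + 2
    print(round(log10_pct))                # ~ -106, matching the excerpt
    ```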

    The Dembski metric is based on in effect estimating a probabilistic measure for the functional target (the way to do this is variable, and the Durston type general approach would be relevant) and comparing it to a threshold scope of config space that — if the value is beyond this — it is so maximally unlikely that the result is by blind chance and mechanical necessity, that we may comfortably assign the credible cause of a complex, specific piece of information to design.

    Boiling down, the Chi metric is equivalent to estimating a probability of hitting a target zone on a chance-driven hypothesis [this is how the Hartley-Shannon information metric works], then subtracting a threshold value that starts at 398 bits.

    In effect CHI = – log2(2^398 * D2 * p)

    So CHI = Ip – (398 + K2), in bits, where Ip = – log2(p) and K2 = log2(D2)

    I updated my always linked today to further bring out the various metrics, and so I suggest you start here and read on down through the parts that speak to the Durston and Dembski metrics.

    Hope this helps.

    GEM of TKI

  15. Barry:

    In my last post I demonstrated that Leslie Orgel coined the phrase “specified complexity.” Then I demonstrated that William Dembski uses the phrase in an identical sense.

    Barry, Dembski’s usage of the term “complexity” in the context of “specified complexity” is unconventional — he defines it to mean improbable. His definition of “complexity” has nothing to do with whether or not something is uniform, repetitive, etc. You can read everything he’s published on the subject and you’ll find no exceptions.

    Orgel, on the other hand, contrasts “complexity” with simplicity and uniformity, and says nothing about probability. What, then, makes you think that he is using Dembski’s definition of the term?

  16. RObb writes: “Dembski’s usage of the term ‘complexity’ in the context of ‘specified complexity’ . . . mean[s] improbable. His definition of ‘complexity’ has nothing to do with whether or not something is uniform, repetitive, etc. You can read everything he’s published on the subject and you’ll find no exceptions. Orgel, on the other hand, contrasts ‘complexity’ with simplicity and uniformity, and says nothing about probability.”

    Sorry RObb. That dog won’t hunt.

    In “Intelligent Design” Dembski describes the “complexity” criterion as follows: “Complexity ensures that the object is not so SIMPLE that it can readily be explained by chance.”

    Dembski: Complexity means “not simple.”
    Orgel: Complexity means “not simple.”

    The dance continues.

  17. Sorry RObb. That dog won’t hunt.

    In “Intelligent Design” Dembski describes the “complexity” criterion as follows: “Complexity ensures that the object is not so SIMPLE that it can readily be explained by chance.”

    Emphasis added – that seems to shift the meaning to be about probability.

    As far as I’m aware, all of Dr. Dembski’s maths are based around calculating the probabilities of events. Has he defined complexity in any other way?

    Dembski: Complexity means “not simple enough to be probable.”
    Orgel: Complexity means “not simple.”

    The dance continues.

    If the dog isn’t hunting, I guess it’s safe to have a foxtrot.

  18. R0bb:

    Barry, Dembski’s usage of the term “complexity” in the context of “specified complexity” is unconventional — he defines it to mean improbable.

    How is that unconventional?

    But anyway a reading of “No Free Lunch” demonstrates Dembski is expanding on what Orgel et al. have said.

  19. Barry:

    In “Intelligent Design” Dembski describes the “complexity” criterion as follows: “Complexity ensures that the object is not so SIMPLE that it can readily be explained by chance.”

    But what does Dembski mean by “simple” here? We know that his usage isn’t the same as Orgel’s, because Orgel associates the term with uniformity and repetition, which are characteristics that Dembski correlates positively with “specified complexity”. Consider that his examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and narrowband signals epitomized by pure sinusoids.

    Dembski explicitly and consistently defines the “complexity” half of “specified complexity” as improbability. If you doubt this, please let me know so we can discuss it, or you can ask ID proponents like vjtorley or CJYman. Orgel, on the other hand, says nothing about improbability. So I’ll ask again: What makes you think that Orgel is using Dembski’s definition of “complexity”?

  20. R0bb:

    Dembski explicitly and consistently defines the “complexity” half of “specified complexity” as improbability.

    Is there any other way to define complexity?

  21. RObb, do you really read what you write before you post it here? Listen to yourself. In your passion to defend mathgrrl’s indefensible position you have actually gone around the bend of linguistic sanity.

    You are now saying that Dembski believes simple and complex are the same. “his [i.e., Dembski's] examples of . . . complexity include simple . . . sequences”

    Do you really believe Dembski believes simple things are complex? Give me a break.

  22. R0bb:

    Dembski explicitly and consistently defines the “complexity” half of “specified complexity” as improbability.

    Is there any other way to define complexity?

    Number of parts, or number of moving parts? Minimum Description Length?

  23. Mr. Arrington,

    Yet another separate thread, rather than addressing my arguments directly where I made them? One might easily get the impression that you are deliberately attempting to distract attention from your unsubstantiated assertion about me.

    For your convenience, here is my pertinent comment from the previous thread:

    – begin copy –
    Barry Arrington,

    I will not retract an obviously true statement no matter how much you huff.

    You made the following claim in reference to me:

    QuiteID, she said the concept is meaningless (unless her friends are using it).

    That claim is untrue. You cannot produce any support for it. Intellectual honesty requires that you retract it.
    – end copy –

    It doesn’t require an enormous amount of courage to support one’s claims in an online venue, but it does require integrity to retract those that one can’t support. I await your response with interest.

  24. William J. Murray,

    Mathgrrl said:

    My conclusion is that, without a rigorous mathematical definition and examples of how to calculate [CSI], the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

    Not exactly. You are quoting what Barry Arrington said that I said. Here is what I actually said:

    My conclusion is that, without a rigorous mathematical defintion and examples of how to calculate it, the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

    http://www.uncommondescent.com.....ent-376790

    Barry Arrington replaced the word “it”, which in context clearly refers to the numeric metric being discussed, with “[CSI]”. Whether deliberate or not, the result of that replacement was to give the impression that I meant something other than what I clearly said.

    Incidentally, even with that additional confusion, there is no substance to Barry Arrington’s claim mentioned in my immediately previous comment in this thread.

  25. Barry:

    You are now saying that Dembski believes simple and complex is the same. “his [i.e., Dembski's] examples of . . . complexity include simple . . . sequences”

    What I said, without ellipses, was: “Consider that his examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and narrowband signals epitomized by pure sinusoids.”

    Are you denying that these are some of Dembski’s examples of specified complexity, or are you denying that they’re simple? You must be disagreeing with one or the other.

    Perhaps the confusion stems from the fact that I’m using “simple” in its normal sense, while Dembski uses “specified complexity” to mean “specified improbability”. To restore linguistic sanity, as you put it, I’ll try to spell this out more clearly:

    1) When Dembski is talking about order, uniformity, regularity, etc., he uses the term descriptive simplicity (or descriptive complexity to denote the lack thereof).

    2) When Dembski says specified complexity, the complexity part refers to improbability, not “descriptive complexity”. He defines it this way explicitly and very consistently.

    3) Orgel contrasts “complexity” with the regularity of a crystal, which has “identical molecules packed together in a uniform way”. He says nothing about probability or improbability. If anything, his characterization of complexity fits Dembski’s “descriptive complexity”, which is inversely proportional to Dembski’s specified complexity metric.

    Do you disagree with any of these three statements, and if so, which ones? If not, then how do you reconcile these facts with your claim that Orgel’s and Dembski’s “specified complexity” mean the same thing?

  26. Robb:

    There has been a longstanding three way contrast in design thought that actually predates Dembski’s involvement by a decade or so.

    Thaxton et al specifically contrast order, organisation and randomness on the Orgel-Wicken-Yockey distinction. Dembski et al follow in those footsteps. Order and organisation are two different ways that we have a difference from randomness, and understanding this difference is pivotal to understanding the crux of design theory. Routinely, as in crystals, we find repetitive order that boils down to: set up a unit cell, and repeat till resources run out. This is low-informational.

    By contrast, organisation is high informational, and in relevant contexts usually functionally specific. It tends to have redundancies in it that keep it from being as uncorrelated as a fully random arrangement (often stood in for by a random tar), and its specificity has to do with some purpose or role it fills, or some saliency that sets it apart as “special.” Text in coherent English is special like that, random text strings are distinct, and so are text repeats of a unit block. This example appears as far back as Thaxton et al, and I am amazed how consistently objectors to ID refuse to entertain the three way distinction being made. Remember, Bradley is a polymer expert. Here is Ch 8 in TMLO, 1985 or so:

    1. [Class 1:] An ordered (periodic) and therefore specified arrangement:
    THE END THE END THE END THE END

    Example: Nylon, or a crystal . . . .

    2. [Class 2:] A complex (aperiodic) unspecified arrangement:
    AGDCBFE GBCAFED ACEDFBG

    Example: Random polymers (polypeptides).

    3. [Class 3:] A complex (aperiodic) specified arrangement:
    THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

    Example: DNA, protein.

    And, it is quite plain that Dembski’s “improbability” is about searching config spaces and hitting cases E in specific target zones T by random walk searches, as he for instance elaborates in NFL, and has gone on to develop further with Marks on the subject of active info as the injection that explains search performance above such a random walk. Durston’s remarks are much along the same lines.

    Simple repetitive orderly sequences are NOT complex, not for Orgel, not for Wicken, not for Thaxton, not for Abel-Trevors, not for Durston, not for Dembski.

    And, let us observe the transformation of the Dembski metric I have been discussing in recent days, as is posted on here (and as appears in 14 above, this thread):

    χ = – log2[10^120 · φS(T) · P(T|H)]

    How about this:

    1 –> 10^120 ~ 2^398

    2 –> Following Hartley, we can define Information on a probability metric:

    I = – log(p)

    3 –> So, we can re-present the Chi-metric:

    Chi = – log2(2^398 * D2 * p)

    Chi = Ip – (398 + K2)

    4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

    5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. (In short VJT’s CSI-lite is an extension and simplification of the Chi-metric.)

    6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon- derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, it is so deeply isolated that a chance dominated process is maximally unlikely to find it, but of course intelligent agents routinely produce information beyond such a threshold . . .
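    The additive form in steps 1 to 6 can be sketched numerically. The function name chi_bits and the sample inputs below are my own illustrative assumptions, not from the original comment:

    ```python
    # Minimal numeric sketch of the additive Chi form: Chi = Ip - (398 + K2).

    def chi_bits(Ip, K2=0.0, threshold=398):
        """Chi in bits, where Ip = -log2(p) is the Hartley/Shannon
        information of the observed configuration in bits."""
        return Ip - (threshold + K2)

    # The 1,800-functional-bit protein example from comment 14, with no
    # extra specification increment (K2 = 0):
    print(chi_bits(1800))        # 1402.0, far beyond the threshold
    # A 200-bit event falls below the threshold:
    print(chi_bits(200))         # -198.0, not beyond the bound
    ```

    On this reading, only the sign of the result matters for the design inference: positive Chi means the information measure exceeds the threshold.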

    The transformation into an additive expression shows that the Dembski metric is about being beyond a threshold, measurable in bits. A threshold that is of course directly connected to the universal probability bound by giving a config space that is in excess of 10^150 possibilities (or, at 398 bits, 10^120). The comparison is that precisely because of the number of possibilities, something beyond the threshold is not reasonably searchable through a random walk being likely to land on a target zone that is sufficiently specific. Thus, we are right back at GP’s classically apt image of islands of interest (especially function) in the midst of vast seas of uninteresting (non-functional) possibilities. Complexity is directly related to search resources and ability to find things beyond the threshold of sufficient complexity.

    And, Robb, that view has been pretty clear in Dembski’s writings for a decade or more, as can easily be seen at his Design Inference site if you do not have, say, NFL to hand. Since you are a long-time commenter here, I am surprised that you don’t seem to be familiar with that way of cashing out the “probability” issue you spoke of.

    GEM of TKI
