
The Chronicle says of Gonzalez “a clear case of discrimination”

The Chronicle of Higher Education has a balanced article on Iowa State’s refusal to tenure Guillermo Gonzalez.

Advocate of Intelligent Design Who Was Denied Tenure Has Strong Publications Record
By RICHARD MONASTERSKY

At first glance, it seems like a clear-cut case of discrimination. As an assistant professor of physics and astronomy at Iowa State University, Guillermo Gonzalez has a better publication record than any other member of the astronomy faculty. He also happens to publicly support the concept of intelligent design. Last month he was denied tenure.



57 Responses to The Chronicle says of Gonzalez “a clear case of discrimination”

  1. Gonzalez’s publication record has trailed off? That is why he has 21 publications since 2002? Before 2002 he must have been a man on fire!

    Notice how the one astronomer who admitted it was discrimination remained anonymous for fear of backlash. That demonstrates exactly how poisonous the atmosphere is in that field.

  2. H’mm:

    Best as I recall, Einstein’s key productivity periods were circa 1905 and 1916, the former in that golden period when he was in his mid-to-late 20s, and the latter period, a decade later, built on and generalised his initial work.

    Similarly, Newton’s peak was in his mid-20s — about the age at which one does a postdoc, BTW.

    Indeed, there is a general pattern that the big breakthroughs in Physics come from men at about that stage, and it is felt that this is because they have completed their technical education, but are not sufficiently locked into the existing scheme of things to be blind to new paradigms.

    This is, so far as I know, now a commonplace of the history and philosophy of science.

    So, Jehu, you are right that Gonzalez was “on fire” in the post doc years.

    But his record since then has not exactly been stagnant either: 21 papers since 2002 or so is well above the AVERAGE performance expected of a tenure-winning physicist at ISU under their declared policy. Note also that he has co-authored a technical textbook on Observational Astronomy, published by Oxford, which is in use in his own department as well as in several others of some note. Add in the near fifty before that and we get to the 68 published peer-reviewed papers that his case rests on.

    So, the Chronicle’s story line does not add up, at least in the way they intend. For, when we look, we can see a vital subtext in the comments:

    At first glance, it seems like a clear-cut case of discrimination. As an assistant professor of physics and astronomy at Iowa State University, Guillermo Gonzalez has a better publication record than any other member of the astronomy faculty. He also happens to publicly support the concept of intelligent design. Last month he was denied tenure . . . . But a closer look at Mr. Gonzalez’s case raises some questions about his recent scholarship and whether he has lived up to his early promise . . . .

    Mr. Gonzalez has a normalized h-index of 13, the highest of the 10 astronomers in his department. The next closest was Lee Anne Willson, a university professor who had a normalized h-index of 9.

    Under normal circumstances, Mr. Gonzalez’s publication record would be stellar and would warrant his earning tenure at most universities, according to Mr. Hirsch. But Mr. Gonzalez completed the best scholarship, as judged by his peers, while doing postdoctoral work at the University of Texas at Austin and at the University of Washington, where he received his Ph.D. His record has trailed off since then.

    “It looks like it slowed down considerably,” said Mr. Hirsch, stressing that he has not studied Mr. Gonzalez’s work in detail and is not an expert on his tenure case. “It’s not clear that he started new things, or anything on his own, in the period he was an assistant professor at Iowa State” . . . . That pattern may have hurt his case. “Tenure review only deals with his work since he came to Iowa State,” said John McCarroll, a spokesman for the university.

    H’mm: he didn’t start any new thing in the period under question? Oh, I get it, the inference that the Goldilocks zone effect points to ours as a pretty privileged planet is not “science.”

    The rest is hack puffery: for instance, it would be highly unlikely that institutions strongly committed to an old paradigm would fund a researcher operating in a new one, and in any case — pace Mr McCarroll’s ISU admin advocacy claims — grant-making is NOT among the declared criteria for tenure. As to the “he has only one Doctoral candidate who has got through” claim, again, where did this come from, and what is the context?

    In short we see the pattern O’Leary pointed out of trying to find novel means/excuses to discredit what is on the face of it and by the declared standards of assessment a highly qualified candidate who would have been well above the average record of those who WERE awarded tenure in his cohort or in recent cohorts.

    That resort to hidden, covert criteria is a typical sign of discrimination.

    Chronicle should be ashamed of itself.

    GEM of TKI

  3. As for the “tailing off” comment: I’m not familiar with his position, but did he have teaching duties? Does he now advise students? Is he involved in service (or extension)?

    I’m sure as a post-doc, he did not advise students or teach, or have to attend faculty meetings, be on curriculum committees, faculty senate, etc. So, having 21 peer reviewed papers since 2002 is still quite good. Especially when you consider the other activities he must now be involved in as a faculty member. And, when you consider others in his Department, he seems to be blowing them away.

  4. I was being sarcastic. 21 publications since 2002 is stellar. I bet no other person in his department published at that rate in the same time frame.

  5. Jehu – things must be very different in Astronomy. I have 28 since 2002 (not including the Nature correspondence), and I don’t consider myself to be stellar.

    OK, I’m a statistician, so I guess I don’t consider myself to be an outlier either.

    Bob

  6. Bob O’H: Are you concerned that you will be denied tenure given your modest publication record since 2002, or are you already tenured, having somehow slipped through the cracks? Or is the magic tenure cutoff area somewhere between 21 and 28?

  7. russ – Finland doesn’t have a tenure system.

    Bob

  8. The Chronicle never implies Gonzalez’s tenure denial is a “clear case of discrimination”. The actual quote goes like this:

    “At first glance, it seems like a clear-cut case of discrimination… But a closer look at Mr. Gonzalez’s case raises some questions about his recent scholarship and whether he has lived up to his early promise”

    The article then goes on to explain the possible real reasons tenure was denied.

  9. Bob: Does that mean you don’t have academic freedom?

  10. eduran: No offense intended, but you don’t seem to have been following the situation at ISU, if you think there are “possible real reasons”. There has been an organized campaign among the faculty to oust GG. The leader of that campaign was granted full professorship at the same time GG was denied tenure.

    Ask former Harvard president Lawrence Summers who runs the show at American universities. He was basically hounded out by the faculty. I doubt that ISU president Geoffroy is interested in falling on his sword for GG.

  11. russ – what? Academic freedom isn’t enforced by the tenure system. There’s nothing in my contract stipulating what I can or cannot say.

    Bob

  12. Russ,

    I think the point that eduran was making (and the point I was going to make if no one else did) is that the article does not say what DaveScot’s title leads one to believe it says. Dave’s title suggests that the Chronicle is decrying Gonzalez’s tenure denial as a clear-cut case of discrimination when it is doing nothing of the sort. The merits of ISU’s denial are beside the point, as eduran’s comment deals with what the Chronicle article actually says vs. what DaveScot appears to be saying.

    The Scubaredneck

  13. Nobody has really explained to me what ISU found objectionable about giving Gonzalez tenure.

    Without going into conspiracy theories, can someone tell me exactly what ISU said they found about Gonzalez’s background, work, or anything else that made them deny him tenure?

    I’m sure they are not going to say it is because he is a Christian or an ID proponent. If not, then what is it?

    Thanks in advance.

  14. H’mm:

    Let’s do a bit more Math on GG’s productivity.

    He has 68 papers and it seems they date, in effect, from his postdoc years on. Okay, that is 1993 on.

    Take out the 21 since 2002, giving us 47 papers in about the span 1993 – 2001, or in ~ 9 years. That’s ~5.2 papers per year.

    From 2002 on, he has done 21 papers in 5 years, or ~ 4.2 papers per year. Meanwhile he has produced one technical book, a semipopular but seriously scientific book [cf. here the sort of audience of Darwin's Origin, or a lot of classic scientific works] and a film.

    In short, now that he has taken up the usual round of full responsibilities of a junior prof, and has taken up writing, his research productivity measured by peer-reviewed papers — and remember his citation rate is the best in his department, so quality of the papers on average is not the issue [despite the "he had a rejected paper" complaint the Chronicle duly reports on . . .] — has dropped off by one paper per year. (BTW, how does his productivity compare with the other applicants who were accepted? Career to date, and in the several years prior to application for tenure while at ISU as junior staff? Why is the evidence on this not being highlighted, if ISU is concerned and has the facts to show that he has become utterly unproductive relative to his peers in his cohort as well as established faculty?)

    Now, too, 4.2 papers per year projected across the 9 years from 2002 on would mean GG is on trend to produce about 38 peer-reviewed papers.

    The “falloff in productivity of research” claim simply does not wash!
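    As a sanity check, the arithmetic above can be reproduced in a few lines of Python; the figures are the ones quoted in this thread, not independently verified:

    ```python
    # Papers-per-year figures as quoted in this thread (not independently verified).
    total_papers = 68          # claimed career total of peer-reviewed papers
    papers_since_2002 = 21     # papers in the 2002-2006 window
    years_pre = 9              # roughly 1993-2001
    years_post = 5             # 2002-2006

    pre_rate = (total_papers - papers_since_2002) / years_pre   # 47 / 9
    post_rate = papers_since_2002 / years_post                  # 21 / 5
    projected = post_rate * 9                                   # post-2002 trend over a 9-year span

    print(round(pre_rate, 1), round(post_rate, 1), round(projected))  # 5.2 4.2 38
    ```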

    BTW also, if the textbook he has co-authored is being used in his dept and by some other significant schools, that suggests strongly that the claim that he is a good teacher is supported by objective evidence.

    This smells worse and worse and worse . . . of the classic “blame the victim” game.

    Time to wake up, wise up, rise up folks!

    GEM of TKI

  15. Bob O’H

    Jehu – things must be very different in Astronomy. I have 28 since 2002 (not including the Nature correspondence), and I don’t consider myself to be stellar.

    Astronomy is different, which is why Gonzalez has the second-highest publishing record in his department. Also, you are a research fellow, so all you do is research. Nonetheless, for the five years before 2002 you only managed to publish 9 times. From 1997 until 2002 you only managed 1 or 2 papers a year. You have never published a textbook or a popular book. You have never published in a journal of general interest such as Science or Nature, or been featured on the cover of Scientific American.

  16. In response to 14 (kairosfocus):

    I think his publication record, at least since 2002, is not quite as rosy as you make out. I’ve just spent a little time at NASA’s ADS (Astrophysics Data System), hunting down all the papers done by Gonzalez since he arrived at Iowa in 2002.

    I weeded out those papers that weren’t by this particular Gonzalez (there’s someone of the same last name doing work on gravity waves, and also a couple of folks in South America). I then weeded out those papers that were not refereed, or weren’t published in the first- and second-tier journals (these being MNRAS, ApJ, AJ, A&A and PASP).

    This left a total of 13 papers. Of these,

    *) Only 4 had Gonzalez as the first author

    *) Of these four, one was published in PASP, which is a second-tier journal.

    *) Of the remaining 3, which were published in MNRAS (a first-tier journal), *all* of them were short, 4-page letters, rather than substantial pieces of research.

    While letters do contain significant results, they are not comparable to longer, more in-depth papers. Given that the three letters were published in 2006, one suspects that Gonzalez was attempting to make up for the fact that prior to last year, he had not published a *single* first-author paper since arriving at Iowa.

    Moreover, if one counts his total paper output over the past 5 years (whether first author or not), it comes in at 2.6 papers a year, which is pretty poor for someone trying to get tenure.

    Finally, might I ask whether Gonzalez managed to secure any significant research grants during his 5 years as an Assistant Professor? If not, this would weigh heavily against him; Universities invariably want to save tenure posts for those with a proven track record of bringing in funds. At my own institution, the benchmark for tenure is c. $200,000 per year.

    Based on this analysis (which I’m happy to have challenged), I think that Iowa was perfectly reasonable in their decision to reject Gonzalez’s application for tenure.

  17. AARDPIG:

    Interesting survey.

    Only problem; you are not accounting for:

    1] We are looking at a system that looks at the career to date productivity

    2] the number of papers overall is 68, since 1993 or so one infers

    3] The record from 2002 to date accepted by the Chronicle etc. is 22 peer-reviewed papers, not 13 [including non-peer-reviewed ones], so something is wrong with the data sets being cited here.

    4] ISU keeps coming up with criteria for rejection that repeatedly do not line up with its declared policy, and

    5] several of the key judges in the case — starting with this department — show themselves to be biased, in a context where ISU to date has not shown that serious steps were taken to assure that bias did not decide the issue.

    I am of course not in a position to come up with the overall definitive answer, but the record looks a lot like a man with a very good record and introduction of novel concepts [Galactic Habitable Zones] is being hit because he is not conforming to orthodoxy, esp when we compare the way ISU has treated the man who led the charge against him. [Cf the other thread on this, and elsewhere.]

    So, let us wait for the playout, but call for justice to be done and be seen to be done; meanwhile the case does not look so good for ISU and the Chronicle etc. (Notice, I have pointed out that there is evidence that makes the behaviour of the institution look questionable, and there will have to be a very good explanation of why the decision is a just one, given its context. Cf ongoing discussion at Evo News and Views for more.)

    GEM of TKI

  18. This reminds me of what people do when a black person claims to have been discriminated against, often with very telling evidence: “Oh, but it wasn’t really discrimination, let me try to find some minor points that undermine the very obvious evidence that it was…”

    People just don’t see the discrimination unless they are affected by it. Take note.

    From an article I once read:

    So[...]in 1963–at a time when in retrospect all would agree racism was rampant in the United States, and before the passage of modern civil rights legislation–nearly two-thirds of whites, when polled, said they believed blacks were treated the same as whites in their communities–almost the same number as say this now, some forty-plus years later[...]

    [...]in mid-August 1969, forty-four percent of whites told a Newsweek/Gallup National Opinion Survey that blacks had a better chance than they did to get a good paying job–two times as many as said they would have a worse chance? Or that forty-two percent said blacks had a better chance for a good education than whites, while only seventeen percent said they would have a worse opportunity for a good education, and eighty percent saying blacks would have an equal or better chance? In that same survey, seventy percent said blacks could have improved conditions in the “slums” if they had wanted to, and were more than twice as likely to blame blacks themselves, as opposed to discrimination, for high unemployment in the black community (16).

    In other words, even when racism was, by virtually all accounts (looking backward in time), institutionalized, white folks were convinced there was no real problem. Indeed, even forty years ago, whites were more likely to think that blacks had better opportunities, than to believe the opposite (and obviously accurate) thing: namely, that whites were advantaged in every realm of American life.

    (Taken from “The Absurdity (and Consistency) of White Denial”, Tim Wise, http://www.counterpunch.org, April 24, 2006)

    I see the same pattern with denial of discrimination against IDers and sympathizers.

  19. Thanks Atom:

    I just happen to be of Afro-Caribbean descent, and can see what you are saying. (You can appreciate too, that I am sympathetic to my fellow Caribbean person — GG is a refugee from Castro’s dictatorship; and of course, fellow Christian.)

    Of course, further to all of this, I am a confirmed contrarian thinker, given the force of Plato’s parable of the cave. [Also, cf Eph 4:17 - 24!]

    I think on the substantive issue, John West has aptly summed up over at ENV:

    Key Developments in Gonzalez Tenure Denial Case, May 21-26

    John West

    Action Item: Help Guillermo Gonzalez in his fight for academic freedom. Contact ISU President Gregory L. Geoffroy at (515) 294-2042 or email him at [email protected] and let him know that you support academic freedom for Dr. Gonzalez to follow the evidence wherever it leads.

    Here is a recap of the major developments this week in the Guillermo Gonzalez tenure case:

    1. The Chronicle of Higher Education reported that Gonzalez ranks first among his astronomer colleagues at ISU according to the “h-index” statistic, which seeks to measure how widely a scientist’s articles are cited by other scientists. According to the Chronicle, “Mr. Gonzalez has a normalized h-index of 13, the highest of the 10 astronomers in his department. The next closest was Lee Anne Willson, a university professor who had a normalized h-index of 9.”

    2. It was revealed that at the same time ISU denied tenure to Gonzalez this past spring, the university promoted to full professor his chief academic persecutor, atheist professor Hector Avalos, who believes that the Bible is worse than Hitler’s Mein Kampf.

    3. The world’s preeminent science journal, Nature, featured the Gonzalez case in an article in its news section. In the article, Gonzalez’s former post-doctoral advisor at the University of Texas, Austin, is quoted as saying: “He is one of the best postdocs I have had” and “I would have said he was a serious tenure candidate.”

    4. U.S. Senator and presidential candidate Sam Brownback issued a statement defending Gonzalez’s right to academic freedom, while Darwinist academics vociferously advocated blacklisting pro-intelligent design scientists from academia.

    5. ISU spokesman John McCarroll continued to invent facts in his effort to defend the tenure denial, this week claiming that a professor’s publications prior to being hired by ISU aren’t considered during the tenure process. Asked to provide documentation for this latest claim, McCarroll declined to respond.

    If you have just heard about this story, you should check out the key developments from last week, which included the admission by two members of Gonzalez’s department that intelligent design played a role in his tenure denial, and the release of tenure statistics showing that ISU approved 91% of its tenure applications this year. In addition, tenure standards for ISU’s Department of Physics and Astronomy revealed that outside research funding was not a stated criterion for tenure decisions in the department.

    Posted by John West at 12:15 AM [I have added italics]

    The links are of course at the original site, and are well worth following up.

    Food for further thought, methinks

    GEM of TKI

  20. I just happen to be of Afro-Caribbean descent, and can see what you are saying. (You can appreciate too, that I am sympathetic to my fellow Caribbean person — GG is a refugee from Castro’s dictatorship; and of course, fellow Christian.)

    Me too. I’m Puerto Rican, African descended from my father’s side, which is one of the reasons I am interested in the African-American situation in the US. (But by far, not my only reason for interest.)

    I’m a fellow Messianic believer as well.

  21. Hi Atom:

    Interesting! (I’d love to communicate more; follow up my always linked to see how to contact me directly.)

    Now, from one of the later threads, the following from ASA’s Tim Davis, is highly interesting as further evidence that this one is a blatant case of academic discrimination:

    I’ve done some research about Guillermo Gonzalez’ publication and citation record, in order to draw my own conclusions about it. He isn’t a Nobel laureate (who is?), but his record is far better than many of his critics are maintaining. I’m getting tired of hearing that the data have been manipulated to inflate his ability. Thus I offer the following objective analysis and factual information.

    Dr. Gonzalez has an outstanding publication record for a junior scientist. He is co-author of an astronomy text for Cambridge University Press, the top scientific publisher in the world, and an author of dozens of articles in scientific journals, including several recent articles in the top journals in his field (Astronomical Journal, Astrophysical Journal, Publications of the Astronomical Society of the Pacific, and Monthly Notices of the Royal Astronomical Society).

    According to the ISI Web of Knowledge (the standard source for information about citations in science), Dr. Gonzalez has more than 1200 citations with an h-index of 20. This means that he has contributed to 20 papers that have been cited at least 20 times each. At least four of these were written at ISU, among them a paper in Reviews of Modern Physics of which he is the sole author and a paper in Astronomical Journal, of which he is second author, that has already been cited 49 times in four years. He was sole author or first author of all three of his most frequently cited papers. Furthermore, contrary to some things that have been said, interest in his work has not slackened in recent years; indeed, the five highest years for citing his work are 2002 through 2006, with 2006 having the second highest total number of citations. For comparison, his colleague Dr. Steven Kawaler, an excellent astronomer and full professor at ISU, has been cited about half as much (681 times, as of this week); his h-index is 16, and none of his papers has been cited as often as any of Dr. Gonzalez’ top four papers. Harvard astronomer Alyssa Goodman, director of The Initiative in Innovative Computing, has an h-index identical to that of Dr. Gonzalez: as interesting and important as her work is, the data reveal that Dr. Gonzalez’ work is no less interesting, at least in terms of citations in professional journals . . . . .

    So, whatever the reasons for his tenure denial, it can’t fairly be laid on his publication record. He’s more than met objective criteria for being a full professor of astronomy at either ISU or Harvard.

    If anyone wants to continue to claim that this is not an accurate assessment, they will need to be very specific about what is wrong with the ISI data given above, or the other facts. It’s always possible that the search brought up papers by other people with the same surname and initials (a few such known instances were removed from the data), or failed to bring up papers by Gonzalez or the others mentioned above. But I doubt that the data I do have is badly wrong. Failing such a failure, then it’s time to shut down arguments about him not having done enough high quality research.
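    As an aside, the h-index Davis describes (h papers cited at least h times each) is straightforward to compute; here is a minimal Python sketch (the citation counts are invented for illustration and are not Gonzalez’s actual record):

    ```python
    def h_index(citations):
        """Largest h such that at least h papers have h or more citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Invented example: 5 papers each have >= 5 citations, so h = 5.
    print(h_index([49, 30, 25, 22, 20, 5, 3]))  # 5
    ```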

    Blaming the victim is an old and vicious tactic that makes it easier to close our eyes willfully to blatant wrongdoing.

    I think we have some serious reasons to ask whether that is at work here.

    Time to wake up, wise up, and rise up folks!

    GEM of TKI

  22. Blaming the victim is an old and vicious tactic that makes it easier to close our eyes willfully to blatant wrongdoing.

    Agreed.

    As for your always linked, I’ve actually been spending my free time this weekend reading through your treatise. Very good reading; I am thoroughly enjoying it.

    I had been thinking of contacting you in regards to a question I had on probability theory, but figured I’d be respectful and see if you had already answered the issue in your writings. We’ll see.

    Atom

  23. Hi Atom

    Thanks for the kind words.

    I have been busy elsewhere overnight, on the el Faisal deportation case in Jamaica. (Cf my blog today, accessible through my site.)

    I suspect that Dr Dembski is the real expert on probability in this forum [and doubtless Dr Sewell too! Both being PhDs in Mathematics . . .], but I am very willing to respond best as I can on what you are thinking about.

    So, why not raise the point if it is at all relevant to the topic — after all the root of Dr Sewell’s point is a probabilistic one.

    GEM of TKI

  24. So, why not raise the point if it is at all relevant to the topic

    I don’t know how germane it is to the topic at hand, but I’ve raised a form of it before here without success. I’ve been thinking about it more and how I can phrase my question more clearly, so that people don’t focus on irrelevant details.

    Maybe I can contact Dr Sewell. It is just something to think about.

    Good luck with the deportation case.

  25. Atom

    Thanks.

    Okay, I see the problem of the distractions and distortions.

    But maybe, now this is off the main page, we can look at it without such interference?

    Would the Laplacian principle of indifference be of help: that unless we have reason to prefer particular outcomes among a set of outcomes, a priori, we expect each possible specific outcome to be equiprobable — here in effect saying we have no rational basis for expecting otherwise, so this is the default. E.g. that is why we say heads/tails is 50-50, or that any one face of a fair die is 1/6. This extends into the province of thermodynamics and is a key component of statistical mechanics, which has had a significant measure of success.

    In my paper I have a link to Collins who discusses probability in ways that may be helpful to you too.

    GEM of TKI

  26. Ok, since it looks like we’re the only two fish left on this thread, I guess it will be safe…

    Here is an email I wrote that sums up my question:

    ======Begin Email=========

    Quote from an article:

    The second premise is sometimes refuted by statements such as the following: “This very situation in this very moment is extremely improbable, since trillions of other possibilities could have been actualized; nevertheless it is happening right now”. Such statements expose a deep misunderstanding of how statistics works. What you need beforehand are categories. Take e.g. a lottery. To determine the mathematical probability of a certain combination of numbers, let’s say 6 out of 49, you find approximately 14 million possibilities of combinations. Every combination is equally (im-)probable: one out of 14 million. But is your lottery ticket worth something only because your personal combination is so special that it may not occur again in the other 14 million (minus one) cases? Of course not. Another thing has to take place: Your combination must be in the winning category! The drawing of lots separates the winners (first category) from the losers (second category). And the probability of being in the first category is what counts, because the second category is a kind of “black box” containing all the losers. Therefore the probability that your special number is in the category of the losers is very high. Probability is always connected with categories. And this is also how our common life-decisions work. You would decide not to walk across the street when the traffic light is red, because you (at least unconsciously) know that when doing otherwise the mathematical probability of fitting the “category of dead people” is very high. You would not consider walking across the street when the light is red as equivalent to the green light, just because both events are surely unique in the universe and therefore equally “improbable”.

    Then there are people who say that even if an event is most improbable, it nevertheless can happen. It can even occur in the next second, since mathematical improbability doesn’t say anything about when it happens. Look at the lottery above: Even if the chances to win are one out of 14 million (approx. 1:10^7), almost every week people do win. So the improbable does happen! But in terms of science this is not really a very improbable event. First of all, there are usually more than 14 million people who participate in the lottery; therefore it is highly probable that one person should win. And secondly, physicists agree that “really improbable” are events beyond a probability of one out of 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (this is 1:10^40). Although mathematically possible, it is absurd to believe that such an event could really happen in our universe (given the lifespan and the size of our universe).

    (from http://www.professorenforum.de.....oeller.pdf)

    Ok, so he touches on some valid points, and even brings probabilistic resources into the mix (“…there are usually more than 14 million people who participate in the lottery…”). Good for him.

    I like his line of reasoning, which is similar to what you and I were discussing in terms of prespecifying categories.

    Q: If you have your 500-bit string and you get a random sequence (call this sequence A), immediately we have two category sets: “Is sequence A” and “Not Is sequence A”. One set (“Not Is sequence A”) is much more probable to have been chosen by chance: there is only one member in “Is sequence A” but there are roughly 10^150 members in the set “Not Is sequence A”. How therefore can I be justified in believing you got a member of “Is sequence A” by chance, when it was so much more probable that you didn’t? If low probabilities can rule out us having to worry about certain outcomes, then this low-probability event shouldn’t have happened in our lifetime.

    =====End of Email=========
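    Incidentally, the “approximately 14 million” in the quoted article is just the count of 6-of-49 combinations, which is easy to verify with nothing beyond the standard library:

    ```python
    from math import comb

    # C(49, 6): ways to choose 6 numbers from 49, order irrelevant.
    print(comb(49, 6))  # 13983816 -- the "approximately 14 million" in the article
    ```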

    It may be a confused (confusing?) question, but it is something I’ve been wrestling with.

  27. Hi Atom

    You have raised an excellent excerpt and issue.

    Let’s take it on, step by step:

    1] Probabilities before and after the fact. After something has happened, it has happened, though of course we may make the error of perceiving an event incorrectly — this is an adjustment made for, say, measuring Shannon information.

    2] Your sequence A is, on a before-the-fact basis, a target, with odds of ~ 1 in 10^150, i.e. it is almost impossible to hit by chance relative to the available opportunities in the observed universe. To specify any such single given sequence, then toss the tray of coins and hit it “just so”, is sufficiently improbable that if I were to see such a claim, I would reject the claim that this happened simply by chance.

    3] Why is that? Because we know, independently, that agents are capable of setting up and hitting targets, using intelligence. (BTW, smuggled into the above setting up of a target lurks the point that we have in fact given functionality to a given sequence that does not hold for the others.)

4] Here we see a functional target that is highly improbable within the available configuration space, i.e. it is in principle highly informational. So, we are at functionally specified, complex information.

    5] We also see that we have defined thereby, two macrostates: [(a) the target/hit -- one single outcome fits it, and (b) the miss -- all the other ~ 10^150 outcomes fit it.].

    6] Now, this means that we are vastly more likely by chance to end up in the miss macro-state than the hit one. Indeed, so much so, that if the whole human population were to be employed to toss trays of 500 coins for their lifetimes [let's say 70 years by 7 bn people for 10,000 tosses per day ~ 1.8*10^18 tosses], it would make no material difference to the lack of probabilistic resources to access the state.
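As a cross-check, the resource arithmetic in this point can be verified with a short script (a sketch of mine; the 7 bn population, 70 years, and 10,000 tosses/day figures are the comment's own stipulations):

```python
# Check the probabilistic-resources arithmetic:
# 7 billion people tossing trays of 500 coins, 10,000 tosses/day, 70 years.
people = 7_000_000_000
tosses_per_day = 10_000
days = 70 * 365

total_tosses = people * tosses_per_day * days
print(f"total tosses ~ {total_tosses:.2e}")          # ~ 1.8e18, as stated

# Number of distinct 500-coin outcomes: 2**500 ~ 3.27e150.
# Expected hits on one pre-specified sequence over all those tosses:
print(f"expected hits ~ {total_tosses / 2**500:.2e}")  # vanishingly small
```

Even pooling every toss by every person for a lifetime, the expected number of hits on the target sequence remains some 130 orders of magnitude below one.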

8] So, it is very unlikely for an individual to achieve the relevant outcome by chance, while we do know that since there is a known target, it could be achieved by simply arranging the coins, which could of course be suitably disguised. [Dawkins-type programs that have the target written in, then do a sequential trial-and-error search rewarding relative success, are a case in point.]

9] That of course brings up the other side of the underlying issue: what if nature just so happens to be in a state where, in some still-warm pond or the equivalent, there is a much more probable sequence that is sufficiently functional to replicate itself, and then a family of such sequences occurs and “complexifies” itself until, presto, life results and evolves? The problem here is of course that the relevant molecules are so complex, and have to form in such a specific configuration, that this simply compounds the improbability. Such a still-warm pond will on average break down such complex molecules, not assemble them, for excellent, probabilistically linked and energy-driven thermodynamic reasons. [And, lurking beneath, if such a thing were possible, is this question: how is it that we just happen to be in a cosmos where such chains to "spontaneous life" exist? In short, the design issue at cosmological level emerges.]

    10] So we see that while such a spontaneous event is logically and physically possible in the abstract, the balance of probabilities is such that it is so maximally unlikely that of the competing explanations, chance or agency, agency is infinitely better. Unless, of course you are in a position to rule out such agency in advance. But doing so by begging metaphysical questions is bad philosophy, not good science.

    Trust this helps

    GEM of TKI

  28. Hey GEM,

    Thanks for the reply. Let me try to draw this out a bit further, we’ll see where it goes.

    Your sequence A is, on a before the fact basis, a target, with odds of ~ 1 in 10^150, i.e it is almost impossible to hit by chance relative to the available opportunities in the observed universe.

    The target is either close to impossible to hit (we probably will not see it in the lifetime of the universe, the low probability hinders it from occurring) or not (we will see it, regardless of the probabilities involved.)

Without specifying it, we saw that it occurred. So it was possible to hit, regardless of any probability involved. The specific probability of that sequence (sequence A) occurring was still 1 in 10^150, there were still two categories (“Is A”, “Not Is A”, with the second category swamping the number of compatible states in the first), and yet this low probability did not hinder that outcome from occurring. The probability was the same in both cases.

    Then in the second case, we specify it beforehand, and now we will not expect it to occur.

    Why not?

    The probabilities in both cases are the same, whether we were aware of them or not. In one case, the low probability event can occur (it did, we witnessed it) but in the second we say that it probably will not. Someone will say “Why not? Low-probability? Two-categories, with associated macro-states?” knowing that these were the same mathematically in both cases. The only difference was that we subjectively “selected” the sequence in the second case…but this shouldn’t affect the underlying mathematics or outcomes (unless we’re getting into QM type of observer effects…)

    Anyway, I don’t rule agency out (obviously). I just want to be able to say “This did not occur by chance due to low probability and category (microstate-macrostate) structure.” But I can’t as long as a counter-example matching those criteria is evident (as in my example.)

  29. Hi Atom:

I will be brief, being on my way out the door.

    We should distinguish two different things:

    1] Probability of occurrence of an event, given observation

    This has to do with the reliability of our observations, which generally speaking is not 100%.

2] The inference to the cause of an event, given its observation.

    Having factored in 1, we now look at the issue of where did something come from.

    Life is observed, pretty nearly certainly, indeed, with practical certainty. It has a certain nanotechnology.

    Where did it come from?

    Well, we have three potential suspects: chance and/or law-like natural forces and/or agency.

Law? Laws by themselves do not produce contingency, so they do not dominate, though of course they may be involved in the mechanisms.

    Chance?

When accessing a given macrostate is maximally unlikely, it is not credible — though not absolutely, logically, or physically impossible — for the event to come about by chance.

    Agency?

    We know agents exist and routinely generate events that exhibit FSCI.

So confident are we that such does not happen by chance (to nearly zero probability) that it is the essence of the 2nd LOT, i.e. we move from lower to higher thermodynamic probability macrostates as the overwhelming trend. This is backed up by massive observation.

    So, on inference to best explanation anchored by empirical observation, agency is the better explanation.

And, those who reject it in the relevant cases, as we just saw in the alleged reasons for denying tenure to Mr Gonzalez, do so by smuggling in inappropriate worldview considerations that lead them to selective hyperskepticism, and thus to begging the worldview-level question.

    Hope that brief note helps

    GEM of TKI

  30. Atom

    A third point, and a bit of a PS:

    3] Likelihoods . . .

Note that the issue of probabilities here is that of probabilities relative to a chance-dominated process.

That is, we are in effect looking at likelihoods relative to different causal factors.

    Relative to a chance-driven (or at least dominated) model, what is the likelihood of A?

    If it is below 1 in 10^150 or so, we have excellent reason to reject the chance-dominated model.

    But, now too, since we know that FSCI is routinely generated by agents — indeed is typically regarded as a signature of agency — then the likelihood of such FSCI on the model of agent action is much higher.

So, absent interfering worldview assumptions and assertions, the reasonable inference is that we don’t get THAT lucky.

    Thus, the decision to reject the null [chance driven or dominated] model.

    4] But Natural Selection is not a chance process . . .

    This is a notorious and fallacious objection. First, differential reproductive success does not apply to the prebiotic world.

    Second, here is Wiki– calling a hostile witness — on Natural Selection:

    Natural selection acts on the phenotype, or the observable characteristics of an organism, such that individuals with favorable phenotypes are more likely to survive and reproduce than those with less favorable phenotypes. If these phenotypes have a genetic basis, then the genotype associated with the favorable phenotype will increase in frequency in the next generation. Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in the emergence of new species.

    The “more likely to” reveals the chance-driven nature of the NS half of RM + NS, the first half being by definition chance based. Of course the “can” and “may” are intended by Wiki in a different sense than is properly empirically warranted, i.e destruction or elimination of antecedent information leads to specialisation and reproductive isolation.

    Information loss is warranted, information creation on the scale of say 250 – 500 base pairs [or at least 1 in 10^150 . . .] is not.

    GEM of TKI

  31. Yeah. I’m not arguing for RM+NS, this is one of those things I view as an irrelevant distraction for my question. (I can understand the motivation for wanting to clarify that, but I am not arguing for Darwinian assumptions, so I’d rather not bring them up if possible.)

I am simply asking a question about low probability. What can the low probability of an event, by itself, tell us about the chance occurrence of that event?

    Not much, you might say.

    Which is fine.

    But then we use a probability argument for 2LoT type of arguments and for CSI arguments: it is unlikely to randomly find a functional state given the scarceness of functional states vs. non-functional ones. To me, this appears to be one super probability argument.

    I just want a two or three sentence answer as to why low-probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen. It isn’t to be argumentative; it is to strengthen my own pro-ID argument.

  32. Hi again Atom:

    One of the healthy signs on the quality of UD is the fact that one is as likely to be exchanging with a person on the same side as one on the other side of the major dispute. In short we are looking at people who are not playing party-liner games.

    Okay, on the key points:

    1] RM + NS

    On long and sometimes painful experience, I have learned that it is wise to deal with likely side issues if one is to responsibly address a matter. [Sorry if that suggested that you were likely to make that argument -- I had lurkers in mind.]

    2] it is unlikely to randomly find a functional state given the scarceness of functional states vs. non-functional ones. To me, this appears to be one super probability argument.

I am actually adverting to the classic hypothesis-testing strategy, commonly used in inferential statistics:

    –> We have a result that is credibly contingent, so it is not the deterministic product of specified dynamics and initial conditions, with perhaps a bit of noise affecting the system trajectory across the space-time-energy etc domain.

–> Contingent outcomes are, on general observation, produced by chance and/or agency. [Think of the law of falling objects, then make the object a die: the uppermost face is a matter of chance, unless an agent has loaded or at least incompetently made the die.]

    –> The null hypothesis, is that it is chance produced, or at least chance dominated. Thence, we look at where the outcome comes out relative to the probabilities across the relevant space of possible outcomes and appropriate models for probabilities of particular outcomes. (That may be a flat distribution if we have maximal ignorance, or it may be any one of the bell or distorted bell shaped distributions, or a trailing-off “reverse J” distribution, or a rising S distribution [sigmoid] or a U-distribution, etc.)

    –> In the relevant cases, we are usually looking at clusters of microstates that form a set of observationally distinguishable macrostates. Typically, there is a predominant cluster, which defines the most likely — overwhelmingly so — probable outcome. [Look at the microjet thought experiment I made in my always linked: the diffused, scattered at random state has overwhelmingly more accessible microstates, than a clumped at random state, and that in turn than a functionally specified configured flyable micro-jet. Available random forces and energy are therefore maximally unlikely to undo the diffusion, much less configure a functional system.]

    –> Such is of course physically possible and logically possible, but so overwhelmingly improbable that unless one can rule out agency by separate means, on observing such an improbable outcome, the null hypothesis is rejected with a very high degree of confidence. [Here, we are using 1 in 10^150 as a reasonable threshold on the probabilistic resources of the observed universe.]

–> The alternative hypothesis is agent action. We know, even trivially, that agents routinely generate FSCI beyond the Dembski-type probabilistic bound [which BTW is far more stringent than the 1 in 10^40 or 50 etc. often used in lab-scale thermodynamics reasoning]. Indeed, there are no known exceptions to the observation that when we see directly the causal account for a case of FSCI, it is the product of agency. In short, it is not only possible but likely for agents to produce FSCI. (Observe, too, how probabilistic resources come into play: if we are able to generate sufficient numbers of runs in a sufficiently large contingency space, what is improbable on one run becomes more probable on a great many runs. As a trivial example, condoms are said to be about 90% likely to work. So, if we use condoms in high-risk environments 10 times, the chances of being protected “every time” fall at the rate 0.9^n, ~ 35% for ten tries. In short, exposures can overwhelm protection. But when the number of quantum states in the observed universe across its lifespan is not sufficient to lower the odds-against reasonably, that is a different ballgame entirely.)
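The probabilistic-resources aside above is just the standard independent-trials calculation; a quick sketch (the 90% and 1.8e18 figures are the comment thread's own illustrations):

```python
# Probability of success on every one of n independent trials, each with
# per-trial success probability p, is p**n.
p = 0.9
print(round(p ** 10, 3))   # 0.349 -- the "~35% for ten tries" above

# Conversely, the chance of at least one hit on a rare event (probability q)
# in n trials is 1 - (1 - q)**n, which is ~ n*q when n*q is tiny.  With
# q ~ 1e-150, even n ~ 1.8e18 tosses leaves the odds effectively nil.
q, n = 1e-150, 1.8e18
print(n * q)               # ~ 1.8e-132: the resources fall hopelessly short
```

The contrast is the whole point: repeated exposures quickly erode a 90% protection, but no physically available number of repetitions erodes odds of 1 in 10^150.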

    –> So, we infer that FSCI in the relevant cases beyond the Dembski bound is credibly the product of agency.

    –> So compelling is this case, that objection to it is by: [a] trying to speculate on a quasi-infinite, quasi-eternal wider unobserved universe, and/or [b] ruling that on methodological naturalistic grounds, only entities permissible in evolutionary materialist accounts of the cosmos may be permitted in “science.”

–> The first is an open resort to speculative metaphysics, often mislabelled as “science,” and refuses to entertain the point that once we are in metaphysics, all live options are permitted at the table of comparative difficulties. Once Agents capable of creating life and/or cosmoi such as we observe are admitted to the table, the explanation by inference to agency soon shows vast superiority.

    –> The second also fails: it is a question-begging, historically inaccurate attempted redefinition of Science. [The consequences, as with the sad case of Mr Gonzalez, are plain for all to see.]

    –> Now, of course, such reasoning is provisional: we could incorrectly reject or accept the null hyp, and must be open to change in light of new evidence. [So much for the idea that the inference to design is a "science stopper" . . .] That is a characteristic property of science, and indeed, knowledge claims anchored to the messy world of observed fact in general: moral, not demonstrative [mathematically provable], credibility of claims.

    3] I just want a two or three sentence answer as to why low-probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen.

The issue is, what are the relevant probabilistic resources. When an event falls sufficiently low relative to those resources and a “reasonable threshold,” it is rational to reject the null — chance — hypothesis. In cases where contingency dominates, the alternative to chance is agency, i.e. once we see contingency [outcomes could easily have been different], we are in a domain where one of two alternatives dominates, so to reasonably eliminate the one leads to a rational inference to the other as the best current explanation.

This inference is, of course, as just noted, provisional. But since when is that peculiar to the inference to design, as opposed to chance, as the cause of FSCI beyond the Dembski-type bound? [So, to make a special-pleading objection to this case is to be selectively hyperskeptical, relative to a lot of science and statistics!]

    GEM of TKI

  33. Thanks GEM, I agree that UD is nice for the fact that we can discuss matters with those who are also fellow IDers that happen to have questions in a calm forum.

    The first part of your post was a great overview of the argument for design. I appreciated your always linked essay as well, especially the micro-jet thought experiment. I thought you hit the nail on the head there.

    But my issue asks a more fundamental question, I think.

    First, I do agree that FSCI has never been shown to be the result of anything other than agency. I also agree that from the data, agency is the best explanation. (We can agree to those points and be finished with them.)

    Ok, now we get to the last part of your post:

The issue is, what are the relevant probabilistic resources. When an event falls sufficiently low relative to those resources and a “reasonable threshold,” it is rational to reject the null — chance — hypothesis.

    Take my initial example again: we flip a coin 500 times. It makes a random sequence (Sequence A), that we did not specify beforehand. There are two sets that exist: “Is Sequence A” and “Not Is Sequence A”, with the former having one member, the latter having (10^150) – 1 members.

Looking at the math, it is more likely that we would have hit a member of “Not Is Sequence A” by chance than hitting a member of “Is Sequence A”. But we still hit a member of “Is Sequence A”, regardless of that low probability. The probabilistic resources are the Universal Probability Bound resources, namely every chance in the universe, for all time. Even with those resources we wouldn’t expect to have hit our sequence. (If it isn’t unlikely enough for you, just flip the coin 1000 times instead, thereby clearing this hurdle by a good margin.)

    But we still got a member of the “Is Sequence A” set, regardless of the low probabilities. Therefore, low-probability events that are part of a relatively minute subset of all possible events can still happen by chance. (S1)

    What troubles me is that CSI arguments appear to boil down to the following basic form:

    “Low-probability events that are part of a relatively minute subset (specified/functional states) of all possible events (all states) will not be expected to occur by chance.” (S2)

    Do you see the problem?

Either we can rule chance out (as in S2), or we cannot. If chance is a viable option, regardless of low probabilities or probabilistic resources (as in S1), then we cannot rule out chance explanations.

    But we do rule them out, both in practice and in statistics. Furthermore, doing this WORKS. But it still seems like a problem to me to do so, without justification, since we can demonstrate S1 true.

  34. Atom:

    A few follow up points:

    1] we flip a coin 500 times. It makes a random sequence (Sequence A), that we did not specify beforehand. There are two sets that exist: “Is Sequence A” and “Not Is Sequence A”, with the former having one member, the latter having (10^150) – 1 members. Looking at the math, it is more likely that we would have hit a member of “Not Is Sequence A” by chance than hitting a member of “Is Sequence A”.

    What has happened here is that first, you are in effect painting the target around where you hit, ex post facto. That is, there is an internal dependence. The probability of an outcome given an observation of that outcome is a function of the reliability of the observation, not the process that may have generated that outcome. In this case, practically certain.

That is very different from the probability of getting to any sequence at random in the set of outcomes for 500 coins tossed. Again, if any sequence will do, then the target is ~ 10^150 outcomes wide and the probability is in effect 1. [The world could come to an end suddenly before the coins settle . . .]

    Before the toss, the odds of any one outcome are ~ 1 in 10^150. After the toss, the odds of an observed outcome being what it is are ~1.

    But now, if the outcome is INDEPENDENTLY specified and rare, i.e functional in some way independent of making an observation of whatever string of H’s and T’s comes up AND hard to get to by chance, then we have a very different target. And therein lieth all the significance of SPECIFIED and FUNCTIONALLY SPECIFIED.

    That is, the chance-null-hyp elimination filter [you have a real pair of alternatives: chance and agency] is based on TWO tests, not just one — the outcome is significant and specific, and rare enough in the probability space that it is hard to get to by chance relative to the available resources.
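The before/after asymmetry in point 1 can be simulated at a tractable scale — 20 coins instead of 500, a scale-down of mine so that the numbers are actually observable:

```python
import random

random.seed(1)
N, TRIALS = 20, 100_000   # 20 coins: 2**20 ~ 1e6 outcomes, small enough to test

def toss():
    """One toss of N coins, as a tuple of 0s and 1s."""
    return tuple(random.randrange(2) for _ in range(N))

# "Paint the target after the fact": whatever came up is, trivially, a hit.
observed = toss()
print(observed == observed)    # True -- P(outcome | we observed it) ~ 1

# Specify the target BEFORE tossing, then toss repeatedly and count hits.
target = toss()
hits = sum(toss() == target for _ in range(TRIALS))
print(hits, "hits in", TRIALS, "tosses")   # expected ~ TRIALS / 2**20 ~ 0.1
```

At 500 coins the before-the-fact hit rate drops by another ~144 orders of magnitude, while the after-the-fact "hit rate" stays at 1, which is exactly the two-test point above.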

2] we still hit a member of “Is Sequence A”, regardless of that low probability. The probabilistic resources are the Universal Probability Bound resources, namely every chance in the universe, for all time. Even with those resources we wouldn’t expect to have hit our sequence

    This underscores the problem just highlighted. You have a target made up after the fact, where the circumstances have changed and depending on how the circumstances have changed.

    Step 1, we have a set of coins. We toss.

    Step 2, we see what the outcome happens to be and say, aha, it is 1 in 10^150 or so that we get there to this specific sequence.

    Step 3, but that’s amazing, we have no reason to think we would get to this particular sequence!

Now, of course, what has happened is that we have a target set of ~ 10^150. Any outcome will do. We do the experiment of tossing, and a particular outcome occurs. But to do that, we have moved the universe along, here to a new state in which we have a particular outcome of a coin toss, whatever arbitrary member of the set we happen to observe.

What is the probability of getting that outcome in the after-the-toss state of the universe? Pretty nearly 1, if observation is reliable.

In short, you are not comparing like with like. If you had predicted that we would get a sequence of 500 H-T choices that spells out in ASCII the opening words of the US Declaration of Independence, tossed, and got them, that would be a very different ballgame; but that is not what you have done.

    3] we still got a member of the “Is Sequence A” set, regardless of the low probabilities

Sequence A as a specific string of H’s and T’s did not come into existence until after the toss.

If we go back in time to the point before the toss, and define Sequence A as “any one particular outcome of H’s and T’s, X1, X2, . . . X500,” then that has probability ~ 1. We toss, and since the universe did not wink out in the meantime, we see a particular sequence, lo and behold. Now we can fill in the blanks of 500 X’s, to get say httthhthttttthhhh . . . or whatever.

    4] What troubles me is that CSI arguments appear to boil down to the following basic form: “Low-probability events that are part of a relatively minute subset (specified/functional states) of all possible events (all states) will not be expected to occur by chance.” (S2)

What happens is that you seem to be missing the point that the set in the target zone of functionally specified outcomes is defined independently of the outcome of the coin tosses in prospect.

And the scope is such that with all the atoms of the observed universe serving as coins, and the succession of quantum states serving as tosses, not all the quantum states in the observed universe across its lifetime suffice to give enough tosses to make it likely, in aggregate, to access the functional, integrated, multiple-component states of interest by chance-driven processes.

But, by contrast, agents routinely produce such systems by intent and skill. That is, we have a known cause of such FSCI, vs. a case where chance trial-and-error searches fail to reach the functional states from an arbitrary start-point [for want of functional intermediates sufficiently close in the configuration space: the first functional state is isolated, and the others are too far apart to step from one to another by chance].

    5] Either we can rule chance out (as in S2), or we cannot. If chance is a viable option, regardless of low-probabilities or probabalistic resources (as in S1), then we cannot rule out chance explanations. But we do rule them out, both in practice and in statistics. Furthermore, doing this WORKS.

    Here, we see the point that there is a difference between adequacy of warrant and proof beyond rational dispute.

Of course, once we have a config space, it defines a set of possibilities that can in principle be accessed by chance-driven processes. But — and here Fisher et al. captured common sense and gave it structure — there is a point where [risk of error notwithstanding] it is reasonable to infer that an event was caused by intent, not chance, with a certain level of confidence that can often be quantified.

    This is as you note a common technique of scientific work and statistical inference testing.

To reject it in a case where it is not convenient for worldview or agenda to accept what the inferences say, while accepting it where it suits, is of course self-servingly irrational. You can get away with it if you have institutional power, but that does not make it a reasonable thing to do.

Hence, my remarks on selective hyperskepticism.

    GEM of TKI

35. PS: Here is my online discussion briefing note on selective hyperskepticism, a descriptive term for a fallacy of skepticism that Simon Greenleaf highlighted well over 100 years ago.

  36. I just want a two or three sentence answer as to why low-probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen.

Low probabilities matter when the outcome is algorithmically compressible. Algorithmically compressible (easily describable) outcomes with low probability on any known chance hypothesis for the production of the event have uniformly been the products of intelligent agency where the causal history has been fully known. Therefore, confronted with an example of a low-probability event which happens also to be algorithmically compressible, one has epistemic warrant for inferring that intelligent agency, rather than chance alone, produced the event.

    Or, if you want, read a much longer version of these three sentences here.
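A standard compressor can serve as a rough, practical proxy for the algorithmic compressibility jaredl describes (the proxy is my choice, not anything from the comment):

```python
import random
import zlib

random.seed(0)

# A 500-toss random string vs. two easily describable 500-character strings.
random_s   = "".join(random.choice("HT") for _ in range(500))
repeated_s = "HT" * 250        # describable as "repeat HT 250 times"
heads_s    = "H" * 500         # describable as "500 heads"

for name, s in [("random", random_s), ("repeated", repeated_s), ("all-H", heads_s)]:
    print(name, len(zlib.compress(s.encode())), "bytes compressed")

# The describable strings collapse to a handful of bytes; the random one
# stays far larger.  Highly compressible strings are a vanishing fraction
# of all 2**500 possibilities -- the "tiny subset" in this discussion.
```

Compression length only upper-bounds Kolmogorov complexity, but the qualitative gap between describable and random strings is exactly the feature the argument turns on.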

  37. Thanks jaredl and GEM. I’ll digest these today, and probably respond tomorrow.

  38. Atom & Jared:

    Thanks. The Fisher vs Bayes article and the other chapters are well worth the read.

    Maybe I should highlight my own three-sentencer from above:

    The issue is, what are the relevant probabilistic resources. When an event [which is independently and simply describable, i.e specified] falls sufficiently low relative to those resources and a “reasonable threshold,” [i.e. it is complex] it is rational to reject the null — chance — hypothesis. In cases where contingency dominates, the alternative to chance is agency, i.e once we see contingency [outcomes could easily have been different], we are in a domain where one of two alternatives dominates, so to reasonably eliminate the one leads to a rational inference to the other as the best current explanation.

    Okay

    GEM of TKI

  39. Ok, my only response would be this:

    1) Yes, I am painting the target after the fact. That was admitted up front, so I don’t see how it solves the issue (unless the act of my pre-specifying actually changes what can happen). But the sequence already exists, independently, prior to my flipping of the coins: if you arrange every 500 bit sequence from 000…000 to 111…111 you’ll find it in there, and you’ll also find the two categories “Is Sequence A” and “Not Is Sequence A” are already automatically defined as soon as Sequence A exists.

    2) Since Sequence A is just a 500 digit binary number, we know it exists independently of my event and has always been a member of a relatively small subset. So the independence criteria is met…unless we change it to mean “specified by an intelligence beforehand” (which we can never rule out, since we don’t know what every intelligence has specified.)

3) Algorithmic complexity does play a role, but I always assumed it was because algorithmically compressible strings form a tiny subset of all possible strings. I thought that being part of a relatively tiny set was what made them special.

    We could just say “Well, intelligences are the only causes for algorithmically compressible contingent complex events” and not seek a further justification for why this is so in probability theory, which would make CSI a merely empirical observation. But if we want to root it in an objective mathematical basis, it seems the independently existing, unlikely, relatively tiny subset member Sequence A (or any such sequence) would become a problem.

    True, I have not defined what Sequence A is, but that is because it can be any sequence, and the problem would still exist.

  40. Addendum: Above I wrote:

    unless we change it to mean “specified by an intelligence beforehand” (which we can never rule out, since we don’t know what every intelligence has specified.)

    I just re-read that and realize it is irrelevant to the discussion at hand. The filter allows for false negatives, so “not being able to rule it out” doesn’t matter; we want to know if we can rule it in.

  41. Algorithmic complexity does play a role, but I always assumed it was because algorithmically compressible strings form a tiny subset of all possible strings. I thought it was this being part of a relatively tiny set was what made them special.

    It is precisely that feature which enables us, on Fisher’s approach to statistical hypothesis testing, to reject known chance hypotheses as explanations for the phenomena at issue in the face of low probability. The fact that algorithmic compressibility combined with low probability is a reliable indicator, in our experience, of the action of intelligent agency is what lets us go from ruling out all known chance hypotheses to inferring design.

  42. It is precisely that feature which enables us

    Which feature?

  43. [B]eing part of a relatively tiny set….

  44. Hi Atom & Jaredl:

    I follow up:

    1] A: the sequence already exists, independently, prior to my flipping of the coins: if you arrange every 500 bit sequence from 000…000 to 111…111 you’ll find it in there

    Actually, you cannot do this sort of search by exhaustion within the bounds of the known universe: there are 2^500 arrangements, or ~ 3.27*10^150 arrangements, more than the number of quantum states in the observed universe across its lifetime. There simply are not the physical resources to do it.

This is a key part of the problem — you cannot exhaust the possibilities, so you have to target a subset of the abstract configuration space. A chance process by definition has no reason to prefer any one zone in the space over another, and so it is maximally unlikely to successfully find any state that is a predefined target.

–> This is the reason why you see so many speculative attempts to project a quasi-infinite, unobserved wider universe as a whole, to provide enough “credible” scope for the cumulative odds to shorten. (Of course, that is a resort to the speculative and metaphysical, not the scientific, and so it is inferior to the inference that we have an observed universe out there, and the odds relative to what we see and can reasonably infer are as we just outlined.)

    2] you’ll also find the two categories “Is Sequence A” and “Not Is Sequence A” are already automatically defined as soon as Sequence A exists

Of course, with the “paint the target around the sequence that happens to fall out” instance, sequence A EXISTS after the fact, not independent of tossing the coins. The odds of A given observation of A are just about 1, as pointed out already.

    Where things get interesting is when we observe that we cannot lay out the configuration space in the observed physical world, i.e. we cannot exhaust the arrangements. Then, we sample that conceptual (not physical) space, identifying ahead of time, say, an ASCII sequence of the opening words of the US DOI [or say Genesis 1:1 in KJV].

    Roll the coins, and bingo, that’s just what turns up, 1 in 10^150 or so. Not very likely at all! [Except by a conjurer's trick that we didn't know of; i.e. agency.]

    For the real world case of cells, the config spaces are much, much larger. E.g. a typical 300-monomer protein can be arranged in 20^300 ~ 2.04*10^390 ways, and a 500k DNA strand can be arranged in ~ 9.90*10^301,029 ways. To get to life, we face an utterly incredible search along multiple dimensions.
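
    Those counts can be verified directly; a quick sketch (assuming the standard 20 amino-acid alternatives per protein residue and 4 bases per DNA position):

```python
import math

# 300-residue protein, 20 possible amino acids at each position:
protein_configs = 20 ** 300
protein_exp = len(str(protein_configs)) - 1   # exact power-of-ten exponent
print(f"20^300 ~ 10^{protein_exp}")           # -> 20^300 ~ 10^390

# 500,000-base DNA strand, 4 possible bases at each position.
# 4^500000 is far too large to materialize, so work with log10 directly:
dna_log10 = 500_000 * math.log10(4)
print(f"4^500000 ~ 10^{dna_log10:.0f}")       # -> 4^500000 ~ 10^301030
```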

    3] Since Sequence A is just a 500 digit binary number, we know it exists independently of my event and has always been a member of a relatively small subset. So the independence criteri[on] is met

    Sequences from 000 . . to 1111 . . . abstractly exist independent of being actualised materially. That is not the problem.

    The problem is to get to a targeted functional sequence that is isolated in the abstract space, in the physical world, without exhausting probabilistic resources. As we just saw, that is just not reasonable in the gamut of the observed universe.

    Of course, having tossed the coins and converted X1, X2, . . . X500 into a particular instantiated sequence, one may ex post facto paint the target around it, but that is after the hard part was already done, selecting out of the abstract set of possibilities some particular case at random.

    Where the independent and functional specification becomes important is as just pointed out — a meaningful pattern, not an arbitrary one. (And we happily accept false negatives; we do not need a super decoding algorithm that automatically identifies any and all possible meaningful sequences, just cases where we know the functionality/meaningfulness already. Actually, we have such an “algorithm” in hand, once one accepts that God is, but that is irrelevant to our case!)

    4] algorithmically compressible strings form a tiny subset of all possible strings

    Yes, and in a context where tiny is relative to a config space that cannot be wholly instantiated in the gamut of the observed universe.

    So, we must always abstract out a small subset of it within our reach, and lo and behold we hit the mark! [Now we have a choice: chance or agency. If I see coins laid out in a sequence spelling out Gen 1:1 in part in KJV, in ASCII -- note the compression/ simple describability here! -- I will infer to agency with high confidence!]

    5] if we want to root it in an objective mathematical basis, it seems the independently existing, unlikely, relatively tiny subset member Sequence A (or any such sequence) would become a problem.

    The problem is inherently about the intersection of the ideal mathematical world with the empirical one. That is always the issue with inference testing, and a part of the price you pay for that is that you have less than 100% confidence in your conclusions.

    Nor is that provisionality a novelty in the world of science. Scientists, in short, live by faith. So do mathematicians, post-Goedel, and so do philosophers. So does everyone else.

    The issue is which faith, why — and selective hyperskepticism does not make the cut.

    GEM of TKI

  45. I have another issue.

    I have stated elsewhere that cosmological arguments for design are necessarily vacuous, using Dembski’s formulation of CSI as the sole legitimate criterion for detecting design.

    One cannot perform the required probability calculation – it cannot be shown that things could be otherwise. As far as we know, the set of possible natural laws contains only one element.

    Why, therefore, do we infer design anyway?

    I’m going to suggest something here. Dembski, in crafting his explanatory filter, attempted to capture in philosophical and mathematical symbolism the instinctive mental processes each of us actually executes in inferring design. Clearly, the filter is missing something, for cosmological arguments seem compelling, even lacking the necessary probabilistic analysis.

    Could it be that mere algorithmic compressibility is, in fact, a reliable hallmark of design?

  46. “instinctive mental processes…”

  47. GEM,

    I guess one could draw the line at: your specification was physically instantiated at least once before the event, thus making it a true pre-specification. I’d agree with this.

    But then it doesn’t work for novel proteins and cell types. They were never physically instantiated (that we’re aware of) before they were actually made.

    Given a set of physical/chemical laws, the specifications for all proteins exist implicitly, in a mathematical sense.

    But again, we could not explicitly list out all possible AA sequences, even beginning at a certain finite length. And again, with these we are also noticing the events after the fact.

    So it seems your demarcation criterion would cut both ways.

    Jaredl, interesting thought. Kairosfocus did go through your objection in his always-linked article; you may read it and see if it satisfies you.

  48. Thanks for pointing that out. It doesn’t satisfy. Here’s the relevant portion(s), with some emphasis added:

    You can’t objectively assign “probabilities”: First, the argument strictly speaking turns on sensitivities, not probabilities– we have dozens of parameters, which are locally quite sensitive in aggregate, i.e. slight [or modest in some cases] changes relative to the current values will trigger radical shifts away from the sort of life-habitable cosmos we observe. Further, as Leslie has noted, in some cases the Goldilocks zone values are such as meet converging constraints. That gives rise to the intuitions that we are looking at complex, co-adapted components of a harmonious, functional, information-rich whole. So we see Robin Collins observing, in the just linked:”Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist . . . Would we draw the conclusion that it just happened to form by chance? Certainly not . . . . The universe is analogous to such a “biosphere,” according to recent findings in physics. Almost everything about the basic structure of the universe–for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy–is balanced on a razor’s edge for life to occur. As the eminent Princeton physicist Freeman Dyson notes, “There are many . . . lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules” (p. 251)–in short, life as we know it would be impossible.” So, independent of whether or not we accept the probability estimates that are often made, the fine-tuning argument in the main has telling force.

    The bolded simply repeats my point: we seem to find the cosmological argument intuitively compelling, even though it is necessarily vacuous, using Dembski’s work as a norm. Again, it cannot be shown that things could have been otherwise.

    Can one assign reasonable probabilities? Yes. Where the value of a variable is not otherwise constrained across a relevant range, one may use the Laplace criterion of indifference to assign probabilities. In effect, since a die may take any one of six values, in the absence of other constraints, the credible probability of each outcome is 1/6.

    The crucial point of disanalogy is that we cannot know whether there is more than one face to the universal die. Hence, this answer fails to provide empirical grounds for a probability assessment. What is required is evidence that the laws of nature are contingent.

    Similarly, where we have no reason to assume otherwise, the fact that relevant cosmological parameters may, for all we know, vary across a given range may be converted into a reasonable (though of course provisional — as with many things in science!) probability estimate.

    Here, Kairosfocus is assuming the very point at issue.

    So, for instance, for the Cosmological Constant [considered to be a metric of the energy density of empty space, which triggers corresponding rates of expansion of space itself], there are good physical science reasons [i.e. inter alia Einsteinian General Relativity as applied to cosmology] to estimate that the credible possible range is 10^53 times the range that is life-accommodating, and there is no known constraint otherwise on the value. Thus, it is reasonable to apply indifference to the provisionally known possible range to infer a probability of being in the Goldilocks zone of 1 in 10^53. Relative to basic principles of probability reasoning and to the general provisionality of science, it is therefore reasonable to infer that this is an identifiable, reasonably definable value. (Cf Collins’ discussion, for more details.)
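
    The reasoning in this paragraph is the indifference rule applied twice; a minimal sketch (the 10^53 range figure is taken from the comment above, via Collins, not computed here):

```python
from fractions import Fraction

def indifference_probability(target_width, total_width):
    """Laplace indifference: with nothing constraining the value within
    the range, P(target band) = band width / total range width."""
    return Fraction(target_width, total_width)

# A fair die: six faces, one target face.
assert indifference_probability(1, 6) == Fraction(1, 6)

# Cosmological constant: life-permitting band taken as 1 part in 10^53
# of the credible range (figure asserted in the thread, not derived here).
p = indifference_probability(1, 10 ** 53)
print(f"P ~ 10^-{len(str(p.denominator)) - 1}")   # -> P ~ 10^-53
```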

    Neither of these claims suffices to produce the demonstration of low probability necessary to infer design utilizing Dembski’s criterion of CSI as the norm.

    I apologize for the lengthy citations, but I feel it necessary to show why I find Kairosfocus’s arguments unpersuasive.

  49. Hi Atom and Jaredl:

    Following up.

    (BTW, please note I am not primarily seeking to “persuade” but to point out what is sound or at least credible relative to factual adequacy, coherence and explanatory power. Dialectic, not rhetoric — I am all too aware (having lived through the damaging result of multiple economic fallacies in action here in the Caribbean) that a sound and well-established argument is often the least persuasive of all arguments.)

    On some points:

    1] Jaredl: I have stated elsewhere that cosmological arguments for design are necessarily vacuous, using Dembski’s formulation of CSI as the sole legitimate criterion for detecting design.

    Why should we accept that claim at all? After all, people were inferring accurately to design long before Dembski came along. And, indeed, the very term “complex specified information” came up out of OOL research at the turn of the ’80s. Dembski has provided one mathematical model of how CSI serves as a design detection filter, not the whole definition.

    As to the issue of cosmological ID, this is at heart a sensitivity argument. I.e., as Leslie put it, we have a looong wall, and here in a 100 yd stretch there is just this one fly [other portions elsewhere may be carpeted with flies for all we know or care]. Then, bang-splat, he is hit by a bullet. Do we ascribe that to random chance or good aim, and why – on a COMPARATIVE DIFFICULTIES basis? The answer is obvious – and it breaks through multiverse-type arguments.

    The issue is not proof beyond rational dispute to a determined skeptic, but which underlying explanation is better, given issues over factual adequacy, coherence, and explanatory power: ad hoc vs simple vs simplistic. And BTW, that comparative difficulties approach, relative to alternate live-option start-points for explanation, is how we get away from vicious circularity in a world in which all arguments in the end embed core faith commitments.

    2] Atom: it doesn’t work for novel proteins and cell types. They were never physically instantiated (that we’re aware of) before they were actually made. Given a set of physical/chemical laws, the specifications for all proteins exist implicitly, in a mathematical sense.

    Of course, where did those handy laws come from?

    But more on point, the issue on the specification of proteins etc. is that we have mutually interacting, complex, functionally specified entities forming a coherent integrated system that is sensitive to random perturbation – i.e. it is isolated in the resulting truly vast config space. That co-adapted functionality is independent of the chemical forces that would have had to form the first such molecules in whatever prebiotic soup, under evo mat scenarios. So, we see the problem of hitting a complex, fine-tuned, co-adapted design by chance, relative to doing so by agency. The former is vastly unlikely; the latter, on experience, not at all so.

    Of course, in this case, we can assign probability numbers using the known laws of the physics and chemistry involved, and the digital nature of the macro-molecules.

    3] we seem to find the cosmological argument intuitively compelling, even though it is necessarily vacuous, using Dembski’s work as a norm. Again, it cannot be shown that things could [not] have been otherwise.

    I have bolded the problem.

    There is a logical category error at work: arguments by inference to best explanation on a comparative difficulties basis are the precise reverse of proofs relative to generally agreed facts and assumptions: EXPLANATION –> OBSERVATIONS, vs FACTS ETC –> IMPLICATIONS. Science in general works by the former, as does philosophy; that is why conclusions are provisional and subject to further investigation or analysis.

    As one result, Dembski is happy to accept the point that the inference to design by elimination is possibly wrong – as are all Fisher-style statistical inferences. But if you use a selectively hyperskeptical criterion to reject design inferences you don’t like, while accepting a science that is in fact riddled with such defeatable inferences in general, you are being inconsistent.

    4] we cannot know whether there is more than one face to the universal die. Hence, this answer fails to provide empirical grounds for a probability assessment.

    Same problem again. The reasoning is already known to be defeatable, but please provide empirical evidence before inferring that “defeatability in principle” implies “defeated in fact.” (The Laplacian principle of indifference is a generally used tool for probability assignments, and the possibility that the universe may be radically different in the abstract is irrelevant to the provisional probabilities derived from the world of credibly observed fact.)

    5] Neither of these claims suffices to produce the demonstration of low probability necessary to infer design utilizing Dembski’s criterion of CSI as the norm.

    Again, we are looking at provisional inferences to best explanation [what science is about], not attempted demonstrative proofs.

    Within the context of such, we have produced a probability number that is relevant to what we credibly know – as opposed to whatever we may wish to speculate. So, to overturn the reasoning, one should provide reason to doubt the probability assignment relative to the empirical data, not the abstract possibility that things may be other than observed. [For, as Lord Russell pointed out, it is abstractly possible that the world as we experience and remember it was created in a flash five minutes ago, rather than being whatever we think we know about it; the two worlds are empirically indistinguishable.]

    GEM of TKI

  50. PS: Okay, here is a link on the issues of comparative difficulties and inference to best explanation etc. Here, on the worldview roots of proof, as well.

  51. I’m sorry; I don’t know how much more plainly I can put things, and it hasn’t helped, so I’m bowing out.

  52. Of course, where did those handy laws come from.

    The Intelligent Designer, IMO.

    But more on point, the issue on the specification of proteins etc is that we have mutually interacting complex, functionally specified entities forming a coherent integrated system that is sensitive to random perturbation

    Granted, all the properties and interactions isolate that subset and make it vastly smaller than the set “Not Is FSCI”. This was not at issue; I already granted that the subset “Is FSCI” is vastly smaller than the set “Not Is FSCI”.

    i.e is isolated in the resulting truly vast config space.

    It is dwarfed by “Not Is FSCI”, but so is “Is Sequence A” by “Not Is Sequence A”. Both are implicitly rare and isolated, the second example more so.

    That co-adapted functionality is independent of the chemical forces that would have had to form the first such molecules in whatever prebiotic soup, under evo mat scenarios.

    Again granted, and not at issue. (I’ll assume this is for the benefit of the lurkers.)

    So, we see the problem of hitting a complex fine-tuned, coadapted design by chance, relative to doing so by agency. The former is vastly unlikely, the latter on experience not at all so.

    This is only to say that when we hit an isolated, already defined subset (even if implicitly defined, as in the case of functional configurations or “Is Sequence A” vs. “Not Is Sequence A”) agency is the best explanation. True, but it is not always the correct one, giving us an inconsistent method (in some cases comes to correct conclusion, but using the same mathematical reasoning, comes to the wrong conclusion with Sequence A.)

    Of course, in this case, we can assign probability numbers using the known laws of the physics and chemistry involved, and the digital nature of the macro-molecules.

    Yes, we can also assign probabilities in the case of Sequence A, my N-bit binary number we hit by chance.
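
    That assignment is straightforward to make explicit; a minimal sketch (the 10^-150 comparison uses the universal-probability-bound figure cited earlier in the thread):

```python
from fractions import Fraction

def prob_specific_sequence(n_bits):
    # Under independent fair coin flips, every specific n-bit outcome
    # is equally likely: probability 1 / 2^n.
    return Fraction(1, 2 ** n_bits)

assert prob_specific_sequence(1) == Fraction(1, 2)

# Sequence A at N = 500 bits falls below the ~10^-150 bound discussed
# above, even though it is just one arbitrary outcome among 2^500.
p = prob_specific_sequence(500)
assert p < Fraction(1, 10 ** 150)
```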

  53. Now that I feel that everyone involved understands the issue, I can lay out what I think is a clean solution to the difficulty.

    (If I am mistaken, or my solution fails to answer some difficulties, please don’t hesitate to speak up.)

    As I pointed out before, we have our counter-example of Sequence A, which can be any N-bit binary number.

    It is implicitly specified, being part of a rare (one member) subset. More rare than many FSCI configurations, in fact.

    Sequence A is also a member of ever more exclusive subsets, beginning with all the sequences that share the same first digit (either begin with a 1 or 0), to those that share the first two digits, etc, which forms a powerset of all possible digit matches that this sequence could be a part of. (This may be confusingly worded, but I trust you understand the idea behind it…it is late.)

    Now, even though it is implicitly specified and the only isolated member of a subset (i.e. “Is Sequence A”), so are all other N-bit sequences. (This may have been alluded to earlier by GEM, but not focused on; I think this is what solves the problem, after much thought.)

    Every sequence on N bits has a powerset of specifications that it matches. So we can call these naive specifications since they hold across all sequences.

    In this way, we are dealing with a meta-level of contingency. We can ask “What are the chances that we’ll hit a sequence that is part of an extremely isolated subset?” The answer is, of course, one. There is always a naive specification that applies across every sequence equally, thus telling us nothing about the probability of hitting that sequence.

    But other than the naive specification set, does a sequence belong to any additional specification set? This is where the additional level of contingency comes into play. These are the specifications that are important, if for no other reason, because they are contingent; a sequence does not have to be part of any additional specification set. So if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length.

    I’ll leave it at this. If there are any questions or concerns, or if an example is needed to understand what I’m trying to say, let me know.
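
    The naive-versus-non-naive distinction can be exercised with a toy experiment. The sketch below uses zlib compression of bit-packed strings as a crude, hypothetical stand-in for an independent, pattern-based specification (a proxy only, not Dembski's measure): every random draw satisfies its own singleton “naive” specification, while essentially none satisfy the compressibility one.

```python
import random
import zlib

def compressed_size(bits):
    """Pack a 0/1 string into raw bytes, then zlib-compress it.
    zlib here is only a rough proxy for algorithmic compressibility."""
    raw = int(bits, 2).to_bytes((len(bits) + 7) // 8, 'big')
    return len(zlib.compress(raw, 9))

def fits_compressibility_spec(bits):
    # Non-naive specification: the packed string compresses below its
    # raw packed size (64 bytes for a 512-bit string).
    return compressed_size(bits) < len(bits) // 8

random.seed(1)
draws = [''.join(random.choice('01') for _ in range(512)) for _ in range(200)]

# Naive specification "is exactly this sequence": every draw satisfies
# its own singleton specification, so the hit rate is trivially 1.
assert all(s in {s} for s in draws)

# Non-naive specification: a patterned string satisfies it, while random
# draws essentially never do (zlib's fixed overhead alone exceeds any
# chance savings on incompressible data).
assert fits_compressibility_spec('01' * 256)
hits = sum(fits_compressibility_spec(s) for s in draws)
print(hits, "of", len(draws), "random draws fit the compressibility spec")
```

The asymmetry Atom describes shows up directly: membership in the naive set is certain for every draw, so it carries no probabilistic information, while membership in the compressibility set is rare and therefore informative.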

  54. …There is always a naive specification that applies across every sequence equally…

    That should read:

    There is always a naive specification that can be applied to any sequence; thus all sequences equally belong to at least one isolated subset, making inclusion in such a naive implicit set meaningless and irrelevant to probability calculations. (Namely, because of the equiprobable nature of such inclusion across all sequences.)

  55. Hi again Atom & Jaredl (& onlookers):

    First, I see Jaredl bows out — fare thee well, friend.

    (And, I note that we are here dealing with the messy world of fact-anchored, defeatable reasoning to best explanation, not the ideal world of perfect proofs that cannot be disputed by rational agents. Indeed, in a post-Godel world, not even Mathematics rises to that exalted state of grace. Much less, science. Indeed, I think that Paul put it aptly 2,000 years ago: “we walk by faith and not by sight” the question being, which faith-point we accept and why . . . Thus, my linked discussions above. Philosophy is a meta issue in all of these discussions; that is a mark of paradigm shifts and scientific revolutions. Indeed, Lakatos reminds us that there is a worldview core in any paradigm, and that it is surrounded by a belt of scientific theories/ models. So, we need to be philosophically literate in order to be scientifically literate in a world in which sci revos are coming at us fast and furious.)

    Now, on further points:

    1] Atom: when we hit an isolated, already defined subset (even if implicitly defined, as in the case of functional configurations or “Is Sequence A” vs. “Not Is Sequence A”) agency is the best explanation. True, but it is not always the correct one, giving us an inconsistent method (in some cases comes to correct conclusion, but using the same mathematical reasoning, comes to the wrong conclusion with Sequence A.)

    Precisely — as with all empirically anchored science and all statistical reasoning on hypotheses that is open to errors of inference.

    That is, we are here up against the limits of this class of reasoning in general, not a defect of the particular hypothesis/explanation/model/theory. Thus, the point that scientific reasoning is provisional, and open to correction in light of further observation and/or analysis. Thence, the issue that we ought not to apply an inconsistent standard of evaluation, i.e. we must avoid selective hyperskepticism because we don’t like the particular explanation that by reasonable criteria is “best.” [But also note that this means that the inference to design is just the opposite of a "science stopper"!]

    2] We can ask “What are the chances that we’ll hit a sequence that is part of an extremely isolated subset?” The answer is, of course, one. There is always a naive specification that applies across every sequence equally, thus telling us nothing about the probability of hitting that sequence.

    In short, if we throw the figurative many-sided dice a certain number of times, we will get an outcome of some sort, which will be a unique sequence within the set. This leads to the point I made previously that, on condition of reliable observation, the probability of observing a given outcome accurately, having rolled the dice so to speak, is pretty nearly 1.

    3] other than the naive specification set, does a sequence belong to any additional specification set? This is where the additional level of contingency comes into play. These are the specifications that are important, if for no other reason, because they are contingent; a sequence does not have to be part of any additional specification set. So if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length.

    That is, where there is an independent and relevant specification (other than this is a particular member of the configuration space), then it is possibly hard to hit by chance. Such specifications inter alia include: being biofunctional as a macromolecule within the DNA-controlled, information-based cellular machinery and algorithms, being a flyable configuration of aircraft parts, being a constellation of laws and parameters leading to a life-habitable cosmos relative to the observed biofunctions of cell-based life, etc.

    When it is actually hard to hit by chance, sufficiently so [e.g. beyond the Dembski-type bound] that relevant probabilistic resources are exhausted, then that “actually hard” becomes so inferior an explanation relative to the known behaviour of agents that agent action is now the best explanation.

    But of course, the reasoning is defeatable.

    4] if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length.

    In effect we are back to the same conclusion, but by a bit of a roundabout. However, multiple pathways or ways of expressing an argument often help to provoke understanding and acceptance.

    I do not see anything to object to of any consequence in your solution to this point. And, as just noted, multiple substantially equivalent pathways of expression are often helpful.

    GEM of TKI

    Thanks for the prolonged discussion, GEM. As I mentioned, you did allude to all the pieces of the puzzle, but I guess I needed it put together in the way I did for it to “get to the point,” if you will. I think having the concept of naive specifications helps answer the question quickly and elegantly. I especially needed to see the contingency/asymmetry of non-naive specifications (FSCI, etc.), versus the equiprobable symmetry of all naive specifications.

    Anyway, thank you for your patient input! And jaredl, thanks as well.

  57. Atom:

    From my view, it is more a question of what helps you see it better.

    But that’s important, too.

    GEM of TKI
