
Doing Science Backwards?

I just read an Insights article in the July 2008 edition of Scientific American. In a nutshell, biochemist Jeremy Nicholson, billed as one of the world’s foremost experts on the metabolome (the collection of chemicals in the body that are byproducts of metabolic processes), is screening thousands of individuals to establish baseline amounts of different metabolites and then comparing differences between individuals, looking for consistent correlations between those differences and various kinds of diseases.
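To make the approach concrete, here is a minimal sketch (not Nicholson’s actual pipeline; the metabolite names and threshold are invented for illustration) of the screen-then-correlate idea: given per-individual metabolite levels and a binary disease label, flag the metabolites whose levels track the disease.

```python
# Illustrative sketch only: flag metabolites whose measured levels
# correlate with a binary disease label across individuals.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def flag_metabolites(profiles, disease, threshold=0.5):
    """Return names of metabolites whose |correlation| with the disease
    label exceeds the threshold.

    profiles: dict mapping metabolite name -> list of per-individual levels
    disease:  list of 0/1 labels, in the same individual order
    """
    return [name for name, levels in profiles.items()
            if abs(pearson(levels, disease)) > threshold]

# Toy data: "met_a" tracks disease status, "met_b" does not.
profiles = {
    "met_a": [1.0, 1.1, 0.9, 2.0, 2.1, 1.9],
    "met_b": [1.0, 2.0, 1.5, 1.2, 1.8, 1.4],
}
disease = [0, 0, 0, 1, 1, 1]
print(flag_metabolites(profiles, disease))  # → ['met_a']
```

The point of the sketch is the order of operations: no hypothesis about any particular metabolite is made up front; candidate associations fall out of the data, and each flagged correlation then becomes a hypothesis to be tested by follow-up experiments.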

Seems like a good research plan to me. The noteworthy part that prompted the subject line of my article here is in the last paragraph of the first page of the SciAm article:

It is kind of like doing science backward: instead of making hypotheses and then devising experiments to test them, he performs experiments first and tries to decipher his results later. He must sift through the range of chemicals produced by the genes people have, the food they eat, the drugs they take, the diseases they suffer from and the intestinal bacteria they harbor.

It struck me that this is in fact the same methodology that ID researchers use – look at the raw data and try to find patterns in it. Raw data, especially in fields like comparative genomics, is being amassed at an incredible rate and little of it is explained at this point in time.

While some may call this “doing science backward,” I call it straightforward “reverse engineering”: you have a black box (you don’t know what’s inside it or how it works), you begin by amassing all the external information you can about it, and then you form hypotheses that explain the data. Comparative genomics at this point is an exercise in reverse engineering. It is not doing science in the dogmatic short form of hypothesis first and evidence second. It’s evidence first and hypothesis second.

7 Responses to Doing Science Backwards?

  1. No important hypothesis was ever formed without reference to data. Data always drives hypotheses. Just because we are taught in elementary school that hypotheses come first doesn’t make it so. That’s just a convenient way to teach the scientific process.

Even if Nicholson forms a hypothesis from the data, he will need further experimentation to demonstrate that the hypothesis is correct. And so will ID.

2. Some may call it doing science backwards – I would not. Science is never done in a vacuum – the scientist is, a priori, NECESSARILY influenced by the sum total of the data that he has been exposed to over the course of his life.

    It’s not as if the scientist’s brain were a tabula rasa that allowed for uninformed hypothesis making subject to subsequent experimentation. I would argue that ALL scientific experimentation is the result of previously assimilated data informing current and future hypotheses and experimentation.

  3. One can make a case that one part of ID is what Dave is referring to, namely the comparison of genomes. Behe’s Edge of Evolution says that certain things are beyond the ability of natural processes to produce. A finding from the comparison of genomes will in the near future either continue to support or counter Behe’s claims.

So far no one has countered Behe’s claims, or else we would never have heard the end of it. So a hypothesis is that all the genomes studied should be consistent with micro-evolution processes operating on some past gene pool to produce the variation we see. Any series of major findings that contradicts this will be a real blow to ID. My guess is that it will not be forthcoming. And as I said, so far there is not even a hint of it.

    As the evidence pours in, it will continue to strengthen the ID hypothesis or weaken it. And the funny thing is that all this ID research is being done by those who support the Darwinian paradigm for all of evolution. So they are doing ID research and do not know it.

  4. The official scientific method is certainly:

    Data -> Hypotheses (or theories) -> New data

Anyway, I am with Feyerabend in believing that strictly formalized methods are more a hindrance to science than a true scientific tool. In a sense, I believe that the only scientific “method” is more something like an attitude, the condition where we are looking at all that is known, and asking ourselves, with utter sincerity, “how could we best explain or understand that?” And then follow any hint which can come from our intuition, reason, feeling, or any other resource we have, and by any provisional method we can devise, with the only aim to pursue the best explanation. Always remembering that no best explanation is final, and that science is not the place of absolute truth.

    What is really “new” today, at least in biology, is the extraordinary rate of accumulating new knowledge (in the sense of facts) which cannot be explained or understood by the current knowledge (in the sense of theories). The sequencing of genomes from different species is just an example. Another good example is the enormous quantity of data for the analysis of transcriptomes, by the very powerful technique of microarrays, a really astonishing treasure of information which, at present, we cannot even begin to analyze, least of all understand, beyond a very gross level. Still another example is the ever growing understanding of sophisticated molecular mechanisms, especially the regulation networks, either at the nuclear level (transcription factors), or at the level of the multiple pathways of communication between the cell surface and the nucleus, or finally at the intercellular level (see the growing data about the thousands of cytokines effecting cellular communication in multicellular beings).

The fact is that, while many of those data can be partially understood at a local level (that is, we can more or less know which molecules a single transcription factor interacts with), on the other hand almost nothing is understood of the general meaning of the whole networks, least of all of their implementation, procedures and regulation.

So, it makes perfect sense, today, to forget for a moment all the existing “local” theories, which explain much but fail to explain much more, and all the existing “general” theories, such as darwinism, which give false explanations and can only be an obstacle to scientific understanding, and to frankly look at the bulk of data with an open mind, and really ask ourselves: “How could we best explain or understand that?”

    And, by the way, I absolutely agree with jerry: “all this ID research is being done by those who support the Darwinian paradigm for all of evolution. So they are doing ID research and do not know it.”

    That’s perfectly true: the discovery of new facts is made by all for all, it does not belong to anybody, and it belongs to everybody. The real war between ID and darwinism is a war between general paradigms of scientific knowledge, and not a competition in acquiring new facts.

5. (Offtopic) Dave: you might like to post on an interesting news article in the October issue of “Physics Today” – “Study tracks the changes in a vision protein as fish evolved” – about an article published in Proceedings of the National Academy of Sciences USA 105 (2008) by Yokoyama et al.

It purports to show the power of Darwinian evolution at the molecular level in the opsin molecule used for vision in fishes, but it is an excellent example of how little that evolution can accomplish, even after 400 million years. Telling is this quote:

    Mutations aside, the functional core has remained more or less constant.

    What mutations have occurred have shifted the wavelength of light at which the peak sensitivity is found up and down, a useful change depending on the water depth and conditions at which particular species live. Sounds like a clever design to me.

  6. Note: The second paragraph in the quote above is NOT part of the quote…

  7. SCheesman:

    Interesting. The following statements in the article look very intriguing too:

    “A bigger surprise occurred when Yokoyama ran the opsins’ genetic sequences through programs designed to identify naturally selected mutations.

Yokoyama had determined which mutations really do change λmax. Could the statistical methods find them too? The answer turned out to be no. Why that should be the case isn’t clear.

    Those methods, he points out, may lack the sensitivity to detect adaptive mutations that become fixed within a short period of time, as may be the case in vertebrate opsins.

Statistical methods don’t take into account the many-body nature of proteins. “Shozo shows quite compellingly that just because you change one site doesn’t mean you’re going to get a functional change,” he says. “It depends, rather, on the context of the other amino acids.”"
