
Do humans influence temperature records?

Can the methods of Intelligent Design be brought to bear to detect anthropogenic influence in temperature records? Core to the climate debate is the danger of catastrophic anthropogenic global warming. We hear of “tipping points” promising coastlands drowning in glacial melt. Defining “very likely” as > 90%, the IPCC’s Climate Change 2007: Synthesis Report holds that:

Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations.

In The Smoking Gun At Darwin Zero, Willis Eschenbach examines temperature records at Darwin, North Australia. He looks

at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, . . . The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.
To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, with the adjustment first going stepwise, then climbing roughly to stop at 2.4C. . . .

Figure 8 Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming. . . .

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers. . . .
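Eschenbach’s before-and-after comparison is, at bottom, simple arithmetic: a stepwise adjustment added to a series changes the fitted linear trend. The sketch below uses synthetic, illustrative numbers only (not the actual GHCN Darwin record) to show how a staircase of adjustments can flip a cooling trend into a warming one:

```python
import numpy as np

# Illustrative sketch only: synthetic annual anomalies standing in for a
# station record, NOT the actual GHCN Darwin data.
years = np.arange(1941, 2001)  # 60 years of record
rng = np.random.default_rng(0)

# Raw series: roughly -0.7 C/century cooling plus noise (assumed values).
raw = -0.007 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

# Hypothetical stepwise "homogeneity" adjustment: +0.4 C added each decade,
# in the spirit of the staircase Eschenbach describes.
adjustment = 0.4 * np.floor((years - years[0]) / 10)
adjusted = raw + adjustment

def century_trend(series):
    """Least-squares linear trend, expressed in degrees C per century."""
    return np.polyfit(years, series, 1)[0] * 100.0

raw_trend = century_trend(raw)       # negative (cooling)
adj_trend = century_trend(adjusted)  # positive (warming)
print(f"raw trend: {raw_trend:+.2f} C/century")
print(f"adjusted trend: {adj_trend:+.2f} C/century")
```

The point is not that any particular adjustment is wrong, only that the sign and size of the reported trend can be dominated by the adjustment series rather than by the raw measurements.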

Do you agree with Eschenbach in attributing these differences in reported temperatures to human intervention? Can such “adjustments” be reliably distinguished from natural variations, such as those shown in Figure 3 (glacial fluctuations, temperature, and the PDO)?

See Easterbrook’s presentations on global warming, including his predictions of global cooling and warming. See also Matt Vooro on AMO and PDO - The Real Climate Makers In United States?

So what say you? Can anthropogenic influence be detected in temperature records or can these variations be considered as natural? Can such data be depended on to make public policy decisions for trillion dollar investments?

See Willis Eschenbach’s full article: The Smoking Gun At Darwin Zero


4 Responses to “Do humans influence temperature records?”

  1. As an Australian, I must say that I am absolutely outraged at the duplicity shown by the Global Historical Climate Network (GHCN) in its clumsy adjustment of temperature records from my country. Willis Eschenbach is quite right to describe it as “blatantly bogus.” The GHCN has been caught red-handed, and its “experts” can’t talk their way out of this one. A lie is a lie.

    A few weeks ago, the notion that temperature data from around the world for the past 100 years were being “cooked” would have sounded like the ravings of a conspiracy theorist. Willis Eschenbach wisely refrains from drawing that conclusion: we don’t yet know the full extent of these duplicitous data adjustments. But I for one think that the “fixing” of the data from Australia is just the tip of the iceberg.

    How many data “fixers” were there? I’d say probably no more than a few dozen at the outside. The thousands of scientists contributing to the last IPCC report were, for the most part, acting in good faith, I’m sure. Could the relatively small number of scientists who compiled the raw temperature records from around the planet for the past 100 years have succumbed to their own “groupthink” and engaged in highly questionable statistical fudges which they justified to themselves, on the grounds that they were “saving the world”? And did they then dupe the larger community of scientists who didn’t have access to the raw data, or who may have had access but didn’t think of questioning the scientific integrity of the data “compilers”?

    Getting back to your questions:

    Can anthropogenic influence be detected in temperature records or can these variations be considered as natural? Can such data be depended on to make public policy decisions for trillion dollar investments?

    My answer at the present time would be: no. Climatologist Roy Spencer has written some excellent articles explaining why the hypothesis of man-made global warming can neither be proven nor disproven, based on the data.

    In Hotspots and Fingerprints, Spencer writes:

    [T]he hotspot is not a unique signature of manmade greenhouse gases. It simply reflects anomalous heating of the troposphere — no matter what its source. Anomalous heating gets spread throughout the depth of the troposphere by convection, and greater temperature rise in the upper troposphere than in the lower troposphere is because of latent heat release (rainfall formation) there.

    For instance, a natural decrease in cloud cover would have had the same effect. It would lead to increased solar warming of the ocean, followed by warming and humidifying of the global atmosphere and an acceleration of the hydrologic cycle.

    Thus, while possibly significant from the standpoint of indicating problems with feedbacks in climate models, the lack of a hotspot no more disproves manmade global warming than the existence of the hotspot would have proved manmade global warming. At most, it would be evidence that the warming influence of increasing GHGs in the models has been exaggerated, probably due to exaggerated positive feedback from water vapor.

    In a recent post entitled Can Global Warming Predictions be Tested with Observations of the Real Climate System?, Spencer argues that uncertainty over cloud feedbacks makes global warming predictions impossible:

    [T]o measure cloud feedbacks, we need to determine how much clouds change in response to a temperature change. But most researchers do not realize that this is not possible without accounting for causation in the opposite direction, i.e., the extent to which temperature changes are a response to cloud changes.

    As I will demonstrate in my AGU talk on December 16, for all practical purposes it is not possible (at least not yet) to measure cloud feedbacks because the two directions of causation are intermingled in nature. As a result, it is not possible with current methods to measure feedbacks in response to a radiative forcing event such as a change in cloud cover, or even a major volcanic eruption, such as that from the 1991 eruption of Mt. Pinatubo.

    The reason is that the size of the radiative forcing of a temperature change overwhelms the size of the radiative feedback upon that temperature change, and our satellite measurements can not tell the difference. (Emphases mine – VJT.)
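Spencer’s point about intermingled causation can be illustrated with a toy energy-balance simulation (my own assumed model, not his code or data): when internally generated cloud forcing both drives temperature and shows up in the measured radiative flux, the usual flux-versus-temperature regression no longer recovers the true feedback parameter.

```python
import numpy as np

# Toy energy-balance sketch (assumed parameters throughout): random
# "cloud" forcing F drives temperature T, and the measured net flux N
# contains both F (forcing) and -lam*T (feedback), so regressing N on T
# cannot cleanly separate the two.
rng = np.random.default_rng(1)
lam_true = 3.0   # true feedback parameter (W m^-2 K^-1), assumed
C = 10.0         # effective heat capacity (W yr m^-2 K^-1), assumed
n = 5000         # yearly time steps

F = np.zeros(n)  # internal (cloud) radiative forcing anomaly
T = np.zeros(n)  # temperature anomaly
for t in range(1, n):
    F[t] = 0.9 * F[t - 1] + rng.normal(0.0, 1.0)            # red-noise clouds
    T[t] = T[t - 1] + (F[t - 1] - lam_true * T[t - 1]) / C  # energy balance

N = F - lam_true * T               # measured net flux anomaly (forcing + feedback)
lam_est = -np.polyfit(T, N, 1)[0]  # feedback diagnosed by simple regression
print(f"true feedback: {lam_true:.1f} W/m^2/K, diagnosed: {lam_est:.2f} W/m^2/K")
```

In this sketch the diagnosed feedback comes out far below the true value, which would be misread as a much more sensitive climate; that bias is exactly the forcing/feedback entanglement Spencer describes.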

    Roy Spencer and William Braswell’s 2008 presentation, Feedback vs. Chaotic Radiative Forcing: Smoking Gun Evidence for an Insensitive Climate System?, is also well worth a look.

    Spencer’s concluding paragraphs in the post cited above, Can Global Warming Predictions be Tested with Observations of the Real Climate System?, are well worth pondering:

    I suspect that the climate modeling groups have only publicized models that produce the amount of warming they believe “looks about right”, or “looks reasonable”. Through group-think (or maybe the political leanings of, and pressure from, the IPCC leadership?), they might well have tossed out any model experiments which produced very little warming.

    In any event, I believe that the scientific community’s confidence that climate change is now mostly human-caused is seriously misplaced. It is time for an independent review of climate modeling, with experts from other physical (and even engineering) disciplines where computer models are widely used. The importance of the issue demands nothing less.

    Furthermore, the computer codes for the climate models now being used by the IPCC should be made available to other researchers for independent testing and experimentation. The Data Quality Act for U.S.-supported models already requires this, but this law is being largely ignored. (Emphases mine – VJT.)

    Sounds like the lawyers need to get busy on this one.

  2. vjtorley
    Excellent response on the challenges of robustly detecting a statistically significant anthropogenic signature in the overall climate with validated models.

    My intended double entendre worked.

    What do you think of the differences between the final and raw data shown above by Willis Eschenbach?

  3. Also look at:
    How correcting the data heats the earth.


    More analyses like Eschenbach’s need to be done!
