Archive for August, 2012

Spurious Warmth in NOAA’s USHCN from Comparison to USCRN

Wednesday, August 22nd, 2012

It looks like the Gold Standard (USHCN) for U.S. temperature monitoring is spuriously warmer than the Platinum Standard (USCRN).

After Anthony Watts pointed out that the record warm July announced by NOAA based upon the “gold standard” USHCN station network was about 2 deg. F warmer than a straight average of the 114 core US Climate Reference Network (USCRN) stations, I thought I’d take a look at these newer USCRN stations (which I will call the “Platinum Standard” in temperature measurement) and see how they compare to nearest-neighbor USHCN stations.

USCRN Stations in Google Earth

First, I examined the siting of the core set of 114 stations in Google Earth. Most of them are actually visible in GE imagery, as seen in this example from Kentucky (click for full-size image):

The most identifiable features of the USCRN sites are the three white solar radiation shields over the three temperature sensors, and the circular wind shield placed around the precipitation gauge.

While most of the CRN sites are indeed rural, some of them are what I would call “nearly rural”, and a few will probably have limited urban heat island (UHI) effects due to their proximity to buildings and pavement, such as this one next to a 300 meter-diameter paved surface near Palestine, TX which NASA uses as a research balloon launch site:

The larger image (from October) suggests that the ground cover surrounding the paved area is kept free of vegetation, probably by spraying, except right around the weather sensors themselves.

A few locations have two USCRN sites placed relatively close to each other, presumably to check calibration. A particularly interesting pair of sites is near Stillwater, OK, where one site is a few hundred meters from residential Stillwater, while the paired site is about 2.4 km farther out of town:

Whether by design or not, this pair of sites should allow evaluation of UHI effects from small towns. Since the temperature sensors (Platinum Resistance Thermometers, or PRTs) are so accurate and stable, they can be used to establish fairly tiny temperature differences between the few CRN neighboring station pairs which have been installed in the U.S.

From my visual examination of these 114 USCRN sites in Google Earth, the “most visited” site is one in rural South Dakota, which apparently is quite popular with cattle, probably looking for food:

Just 1 km to the southwest is this even more popular spot with the locals:

Hopefully, this USCRN site will not experience any BHI (Bovine Heat Island) effects from localized methane emissions, which we are told are a powerful source of greenhouse warming.

Elevation Effects
One important thing I noticed in my visual survey of the 114 USCRN sites is the tendency for them to be placed at higher elevations compared to the nearby USHCN sites. This is a little unfortunate since temperature decreases with height by roughly 5 deg. C per km, which is 0.5 deg. C per 100 meters, an effect which cannot be ignored when comparing the USCRN and USHCN sites. Since I could not find a good source of elevation data for the USCRN sites, I used elevations from Google Earth.
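(For those who want to do the arithmetic themselves, the lapse-rate correction is just the rate times the elevation difference. Below is a minimal Python sketch using the ~5 deg. C per km figure above; the elevations in the example are made up, and the analysis in the next section simply restricts comparisons to small elevation differences rather than applying a correction like this.)

LAPSE_RATE_C_PER_M = 5.0 / 1000.0  # ~5 deg. C per km, i.e. 0.5 deg. C per 100 m

def adjust_to_reference_elevation(temp_c, station_elev_m, reference_elev_m):
    """Adjust a station temperature to a reference elevation using a fixed lapse rate."""
    # Temperature falls with height, so adjusting a reading upward cools it.
    return temp_c - LAPSE_RATE_C_PER_M * (reference_elev_m - station_elev_m)

# Example: a station 80 m below its USCRN neighbor is adjusted down by ~0.4 deg. C.
print(adjust_to_reference_elevation(25.0, station_elev_m=300.0, reference_elev_m=380.0))  # 24.6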

USCRN and USHCN Station Comparisons

As a first cut at the analysis, I compared all available monthly average temperatures for HCN-CRN station pairs where the stations were no more than 30 km apart and within 100 m of each other in elevation. This greatly reduces the number of USCRN stations from nominally 114 to only 42, which were matched up with a total of 46 USHCN stations.
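(For anyone who wants to reproduce this kind of matchup, here is a minimal Python sketch of the pairing step. The 30 km and 100 m thresholds are the ones used above; the station lists and field names are hypothetical.)

import math

MAX_DIST_KM = 30.0   # maximum horizontal separation
MAX_ELEV_M = 100.0   # maximum elevation difference

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2)**2
    return 2.0 * R * math.asin(math.sqrt(a))

def match_pairs(crn_stations, hcn_stations):
    """Return (CRN id, HCN id) pairs within the distance and elevation limits.
    Each station is a dict with 'id', 'lat', 'lon', and 'elev_m'."""
    pairs = []
    for crn in crn_stations:
        for hcn in hcn_stations:
            close = haversine_km(crn['lat'], crn['lon'], hcn['lat'], hcn['lon']) <= MAX_DIST_KM
            level = abs(crn['elev_m'] - hcn['elev_m']) <= MAX_ELEV_M
            if close and level:
                pairs.append((crn['id'], hcn['id']))
    return pairs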

The results for all seasons combined show that the USHCN stations are definitely warmer than their “platinum standard” counterparts:

The discrepancy is somewhat greater during the warm season, as indicated by the results for just June-July-August:

Regarding that Stillwater, OK USCRN station pair, the site closest to the Stillwater residential area averaged 0.6 deg. C warmer year-round (0.5 deg. C warmer in summer) than the more rural site 2 km farther out of town. This supports the view that substantial UHI effects can arise even from small towns.
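(A paired difference like this is straightforward to compute; here is a minimal pandas sketch, assuming two hypothetical monthly-mean series indexed by date. It illustrates the calculation only, not my exact processing.)

import pandas as pd

def paired_difference(town_site: pd.Series, rural_site: pd.Series):
    """Mean monthly temperature difference (town minus rural), for all months
    and for June-July-August only, using months present in both series."""
    both = pd.concat({'town': town_site, 'rural': rural_site}, axis=1).dropna()
    diff = both['town'] - both['rural']
    summer = diff[diff.index.month.isin([6, 7, 8])]
    return diff.mean(), summer.mean()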

The largest UHI effects in the above plots are from USHCN Santa Barbara, CA, with close to 4 deg. C warming compared to the nearby USCRN station. Both stations are located about the same distance (a few hundred meters) from the Pacific Ocean.

What Does this Mean for U.S Temperature Records?

I would say these preliminary results, if they pan out, indicate we should be increasingly distrustful of using the current NOAA USHCN data for long-term trends as supporting evidence for global warming, or for the reporting of new high temperature records. As the last 2 plots above suggest:

1) even at “zero” population density (rural siting), the USHCN temperatures are on average warmer than their Climate Reference Network counterparts, by close to 0.5 deg. C in summer.

2) across all USHCN stations, from rural to urban, they average 0.9 deg. C warmer than USCRN (which approaches Anthony Watts’ 2 deg. F estimate for July 2012).

This evidence suggests that much of the reported U.S. warming in the last 100+ years could be spurious, assuming that thermometer measurements made around 1880-1900 were largely free of spurious warming effects. This is a serious issue that NOAA needs to address in an open and transparent manner.

The good news is that the NOAA U.S. Climate Reference Network is a valuable new tool which will greatly help to better understand, and possibly correct for, UHI effects in the U.S. temperature record. It is to their credit that the program, now providing up to 10 years of data, was created.

Fun with summer statistics. Part 2: The Northern Hemisphere Land

Wednesday, August 15th, 2012

Guest post by John Christy, UAHuntsville, Alabama State Climatologist
(NOTE: Fig. 2.2 has now been extended in time.)

I was finishing up my U.S. Senate testimony for 1 Aug when a reporter sent me a PNAS paper by Hansen et al. (2012) embargoed until after the Hearing. Because of the embargo, I couldn’t comment about Hansen et al. at the Hearing. This paper claimed, among other things, that the proportion of the Northern Hemisphere land area (with weather stations) that exceeded extreme summer hot temperatures was now 10 percent or more for the 2006 to 2011 period.

For extremes at that level (three standard deviations or 3-sigma) this was remarkable evidence for “human-made global warming.” Statistically speaking, the area covered by that extreme in any given hotter-than-average year should only be in the lowest single digits … that is, if the Hansen et al. assumptions are true – i.e., (a) if TMean accurately represents only the effect of extra greenhouse gases, (b) if the climate acts like a bell-shaped curve, (c) if the bell-shaped curve determined by a single 30-year period (1951-1980) represents all of natural climate variability, and (d) if the GISS interpolated and extrapolated dataset preserves accurate anomaly values. (I hope you are raising a suspicious eyebrow by now.)

The conclusion, to which the authors jumped, was that such a relatively large area of recent extremes could only be caused by the enhanced greenhouse effect. But, the authors went further by making an attempt at advocacy, not science, as they say they were motivated by “the need for the public to appreciate the significance of human-made global warming.”

Permit me to digress into an opinionated comment. In 2006, President George W. Bush was wrong when he said we were addicted to oil. The real truth is, oil, and other carbon-based fuels, are merely the affordable means by which we can satisfy our true addictions – long life, good health, prosperity, technological progress, adequate food supplies, internet services, freedom of movement, protection from environmental threats, and so on. As I’ve said numerous times after living in Africa: without energy, life is brutal and short.

Folks with Hansen’s view are quick to condemn carbon fuels while overlooking the obvious reasons for their use and the astounding benefits they provide (and in which they participate). The lead author referred to coal trains as “death trains – no less gruesome than if they were boxcars headed to the crematoria.” The truth, in my opinion, is the exact opposite – carbon has provided accessible energy that has been indisputably responsible for enhancing security, longevity, and the overall welfare of human life. In other words, carbon-based energy has lifted billions out of an impoverished, brutal existence.

In my view, that is “good,” and I hope Hansen and co-authors would agree. I can’t scientifically demonstrate that improving the human condition is “good” because that is a value judgment about human life. This “good” is simply something I believe to be of inestimable value, and which at this point in history is made possible by carbon.

Back to science. After reading Part 1, everyone should have some serious concerns about the methodology of Hansen et al. as published in PNAS. [By the way, I went through the same peer-review process for this post as for a PNAS publication: I selected my colleague Roy Spencer, a highly qualified, award-winning climate scientist, as the reviewer.]

With regard to (a) above, I’ve already provided evidence in Part 1 that TMean misrepresents the response of the climate system to extra greenhouse gases. So, I decided to look only at TMax. For this I downloaded the station data from the Berkeley BEST dataset (quality-controlled version). This dataset has more stations than GISS, and can be gridded so as to avoid extrapolated and interpolated values where strange statistical features can arise. This gridding addresses assumption (d) above. I binned the data into 1° Lat x 2° Lon grids, and de-biased the individual station time series relative to one another within each grid, merging them into a single time series per grid. The results below are for NH summer only, to match the results that Hansen et al. used to formulate their main assertions.
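(For readers who like to see the mechanics, here is a simplified Python sketch of the gridding and de-biasing step, assuming each station has already been reduced to a monthly series. Removing each station’s offset relative to a provisional grid mean is one common approach; the sketch omits details of my actual processing.)

import numpy as np
import pandas as pd

def grid_cell(lat, lon):
    """Assign a station to a 1-deg latitude x 2-deg longitude cell."""
    return (int(np.floor(lat)), int(np.floor(lon / 2.0)) * 2)

def merge_grid_series(station_series):
    """Merge the monthly series of all stations in one grid cell.
    Each station's mean offset from the provisional cell average is removed
    before the series are averaged into a single cell time series."""
    df = pd.concat(station_series, axis=1)
    provisional = df.mean(axis=1)                 # simple multi-station average
    offsets = df.sub(provisional, axis=0).mean()  # each station's mean bias
    return df.sub(offsets, axis=1).mean(axis=1)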

In Fig. 2.1 I show the percentage of the NH land areas that Hansen et al. calculated to be above the TMean 3-sigma threshold for 2006 to 2011 (black-filled circles). The next curve (gray-filled circles) is the same calculation, using the same base period (1951-1980), but using TMax from my construction from the BEST station data. The correlation between the two is high, so broad spatial and temporal features are the same. However, the areal coverage drops off by over half, from Hansen’s 6-year average of 12 percent to this analysis at 5 percent (click for full-size version):
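(The areal-coverage calculation behind curves like these is simple in principle: standardize each grid cell’s summer mean against the base period, then compute the cosine-latitude-weighted fraction of cells exceeding the threshold. A minimal sketch, with hypothetical array names:)

import numpy as np

def pct_area_above_threshold(summer_t, base_mask, lats, sigma_thresh=3.0):
    """Area-weighted percent of grid cells whose standardized summer anomaly
    exceeds sigma_thresh, for each year.
    summer_t  : (ncell, nyear) array of summer-mean temperatures
    base_mask : boolean array over years marking the reference period
    lats      : (ncell,) grid-cell center latitudes"""
    base_mean = summer_t[:, base_mask].mean(axis=1, keepdims=True)
    base_std = summer_t[:, base_mask].std(axis=1, ddof=1, keepdims=True)
    z = (summer_t - base_mean) / base_std
    w = np.cos(np.radians(lats))                  # simple area weighting
    exceed = (z > sigma_thresh).astype(float)
    return 100.0 * (exceed * w[:, None]).sum(axis=0) / w.sum()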

Now, I believe assumption (c), that the particular climate of 1951-1980 can provide the complete and ideal distribution for calculating the impact of greenhouse gas increases, displays a remarkably biased view of the statistics of a non-linear dynamical system. Hansen et al. claim this short period faithfully represents the natural climate variability of not just the present, but the past 10,000 years – and that 1981-2011 is outside of that range. Hansen assuming any 30-year period represents all of Holocene climate is simply astounding to me.

A quick look at the time series of the US record of high TMax’s (Fig.1.1 in Part 1) indicates that the period 1951-1980 was one of especially low variability in the relatively brief 110-year climate record. Thus, it is an unrepresentative sample of the climate’s natural variability. So, for a major portion of the observed NH land area, the selection of 1951-80 as the reference-base immediately convicts the anomalies for those decades outside of that period as criminal outliers.

This brings up an important question. How many decades of accurate climate observations are required to establish a climatology from which departures from that climatology may be declared as outside the realm of natural variability? Since the climate is a non-linear, dynamical system, the answer is unknown, but certainly the ideal base-period would be much longer than 30 years thanks to the natural variability of the background climate on all time scales.

We can test the choice of 1951-1980 as capable of defining an accurate pre-greenhouse warming climatology. I shall simply add 20 years to the beginning of the reference period. Certainly Hansen et al. would consider 1931-1950 as “pre-greenhouse” since they considered their own later reference period of 1951-1980 as such. Will this change the outcome?

The result is the third curve from the top (open circles) in Fig. 2.1 above, showing values mostly in the low single digits (6-year average of 2.9 percent) being generally a quarter of Hansen et al.’s results. In other words, the results change quite a bit simply by widening the window back into a period with even less greenhouse forcing for an acceptable base-climate. (Please note that the only grids used to calculate the percentage of area were those with at least 90 percent of the data during the reference period – I couldn’t tell from Hansen et al. whether they had applied such a consistency test.)
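(The completeness test mentioned in the parenthetical note is a one-liner; a sketch, assuming the same hypothetical gridded array as above with missing values stored as NaN. Cells failing the test are simply dropped before the areal percentages are computed.)

import numpy as np

def complete_enough(summer_t, base_mask, min_frac=0.90):
    """Boolean mask of grid cells with at least min_frac of their
    reference-period values present (non-NaN)."""
    base = summer_t[:, base_mask]
    return np.isfinite(base).mean(axis=1) >= min_frac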

The lowest curve in Fig. 2.1 (squares) uses a base reference period of 80 years (1931-2010) in which a lot of variability occurred. The recent decade doesn’t show much at all with a 1.3 percent average. Now, one may legitimately complain that since I included the most recent 30 years of greenhouse warming in the statistics, that the reference period is not pure enough for testing the effect. I understand fully. My response is, can anyone prove that decades with even higher temperatures and variations have not occurred in the last 1,000 or even 10,000 pre-greenhouse, post-glacial years?

That question takes us back to our nemesis. What is an accurate expression of the statistics of the interglacial, non-greenhouse-enhanced climate? Or, what is the extent of anomalies that Mother Nature can achieve on her own for the “natural” climate system from one 30-year period to the next? I’ll bet the variations are much greater than depicted by 1951-1980 alone, so this choice by Hansen as the base climate is not broad enough. In the least, there should be no objection to using 1931-1980 as a reference-base for a non-enhanced-greenhouse climate.

In press reports for this paper (e.g., here), Hansen indicated that “he had underestimated how bad things could get” regarding his 1988 predictions of future climate. According to the global temperature chart below (Fig. 2.2), one could make the case that his comment apparently means he hadn’t anticipated how bad his 1988 predictions would be when compared with satellite observations from UAH and RSS:

By the way, a climate model simulation is a hypothesis, and Fig. 2.2 is called “testing a hypothesis.” The simulations fail the test. (Note that although scenario A allowed for growing emissions, the real world emitted even more greenhouse gases, so the results here understate the actual model errors.)

The bottom line of this little exercise is that I believe the analysis of Hansen et al. is based on assumptions designed to confirm a specific bias about climate change and then, like a legal brief, advocates for public acceptance of that bias to motivate the adoption of certain policies (see Hansen’s Washington Post Op-Ed 3 Aug 2012).

Using the different assumptions above, which I believe are more scientifically defensible, I don’t see alarming changes. Further, the discussion in and around Hansen et al. of the danger of carbon-based energy is simply an advocacy-based opinion on an immensely complex issue, one which ignores the ubiquitous and undeniable benefits that carbon-based energy provides for human life.

Finally, I thought I just saw the proverbial “horse” I presumed was dead twitch a little (see Part 1). So, I want to beat it one more time. In Fig. 2.3 is the 1900-2011 analysis of areal coverage of positive anomalies (2.05-sigma or 2.5 percent significance level) over USA48 from the BEST TMax and TMin gridded data. The reference period is 1951-1980:

Does anyone still think TMax and TMin (and thus TMean) have consistently measured the same physical property of the climate through the years?

It’s August and the dewpoint just dipped below 70°F here in Alabama, so I’m headed out for a run.

REFERENCE:
Hansen, J., M. Sato, and R. Ruedy, 2012: Perception of climate change. Proc. Natl. Acad. Sci., doi:10.1073/pnas.1205276109.

Fun with summer statistics. Part I: USA

Monday, August 13th, 2012

Guest post by John Christy, UAHuntsville, Alabama State Climatologist

Let me say two things up front. 1. The first 10 weeks of the summer of 2012 were brutally hot in some parts of the US. For these areas it was hotter than seen in many decades. 2. Extra greenhouse gases should warm the climate. We really don’t know how much, but the magnitude is more than zero, and likely well below the average climate model estimate.

Now to the issue at hand. The recent claims that July 2012 and Jan-Jul 2012 were the hottest ever in the conterminous US (USA48) are based on one specific way to look at the US temperature data. NOAA, who made the announcement, utilized the mean temperature or TMean (i.e. (TMax + TMin)/2) taken from station records after adjustments for a variety of discontinuities were applied. In other words, the average of the daily high and daily low temperatures is the metric of choice for these kinds of announcements.

Unfortunately, TMean is akin to averaging apples and oranges to come up with a rather uninformative fruit. TMax represents the temperature of a well-mixed lower tropospheric layer, especially in summer. TMin, on the other hand, is mostly a measurement in a shallow layer that is easily subjected to deceptive warming as humans develop the surface around the stations.

The problem here is that TMin can warm over time due to an increase in turbulent mixing (related to increasing local human development) which creates a vertical redistribution of atmospheric heat. This warming is not primarily due to the accumulation of heat which is the signature of the enhanced greenhouse effect. Since TMax represents a deeper layer of the troposphere, it serves as a better proxy (not perfect, but better) for measuring the accumulation of tropospheric heat, and thus the greenhouse effect. This is demonstrated theoretically and observationally in McNider et al. 2012. I think TMax is a much better way to depict the long-term temperature character of the climate.

With that as an introduction, the chart of TMax generated by Roy in this post, using the same USHCNv2 stations as NOAA, indicates July 2012 was very hot, coming in at third place behind the scorching summers of 1936 and 1934. This is an indication that the deeper atmosphere, where the greenhouse effect is more directly detected, was probably warmer in those two years than in 2012 over the US.

Another way to look at the now diminishing heat wave is to analyze stations with long records for the occurrence of daily extremes. For USA48 there are 970 USHCN stations with records at least 80 years long. In Fig. 1.1 is the number of record hot days set in each year by these 970 stations (gray). The 1930s dominate the establishment of daily TMax record highs (click for full-size):

But for climatologists, the more interesting result is the average of the total number of records in ten-year periods to see the longer-term character. The smooth curve shows that 10-year periods in the 1930s generated about twice as many hot-day records as the most recent decades. Note too, that if you want to find a recent, unrepresentative, “quiet” period for extremes, the 1950s to 1970s will do (see Part 2 to be posted later).
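(Counting records this way is easy to reproduce. Here is a sketch for a single station, assuming a daily TMax series indexed by date; summing the result over the 970 stations and averaging over ten-year windows gives curves like those in Fig. 1.1. Note that the sketch counts a station’s first value for each calendar day as its initial record, a detail that can be handled differently.)

import numpy as np
import pandas as pd

def record_highs_per_year(tmax: pd.Series) -> pd.Series:
    """Count new daily TMax records per year for one station.
    For each calendar day (month, day), walk through the years in order and
    count a record whenever the value exceeds every earlier value for that day."""
    counts = {}
    df = tmax.dropna().to_frame('tmax')
    df['year'] = df.index.year
    for _, grp in df.groupby([df.index.month, df.index.day]):
        grp = grp.sort_values('year')
        running_max = -np.inf
        for year, value in zip(grp['year'], grp['tmax']):
            if value > running_max:           # new record for this calendar day
                counts[year] = counts.get(year, 0) + 1
                running_max = value
    return pd.Series(counts).sort_index()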

Figure 1.2 below compares the ten-year averages between high TMax and high TMin records:

There has been a relatively steady rise in high TMin records (i.e. hot nights) which does not concur with TMax, and is further evidence that TMax and TMin are not measuring the same thing. They really are apples and oranges. As indicated above, TMin is a poor proxy for atmospheric heat content, and it inflicts this problem on the popular TMean temperature record which is then a poor proxy for greenhouse warming too.

Before I leave this plot, someone may ask, “But what about those thousands of daily records that we were told were broken this year?” Unfortunately, there is a lot of confusion about that. Records are announced by NOAA for stations with as few as 30 years of data, i.e. starting as late as 1981. As a result, any moderately hot day now will generate a lot of “record highs.” But most of those records were produced by stations which were not operating during the heat waves of the teens, twenties, thirties and fifties. That is why the plots I’ve provided here tell a more complete climate story. As you can imagine, the results aren’t nearly so dramatic, and no reporter wants to write a story that says the current heat wave was exceeded in the past by a lot. Readers and viewers would rather be told they are enduring a special time in history, I think.

Because the central US was the focus of the recent heat, I generated the number of Jan-Jul record high daily TMaxs for eight states (AR, IL, IN, IA, KS, MO, NE, and OK), including 2012 (Fig. 1.3):

(Because a few stations were late in reporting, I multiplied the number in 2012 by 1.15 to assure their representation.) For these states, there is no doubt that this many record hot days in the first seven months of a year have not been seen since the 1930s. In other words, for the vast majority of residents of the Central US, there were more days this year that were the “hottest ever” over their lifetimes. (Notice, too, that the ten-year averages of TMax and TMin records mimic the national results – high TMin records are becoming more frequent while TMax records have been flat since the 1930s.)

The same plot for the west coast states of CA, OR and WA (Fig. 1.4) shows that the last three years (Jan-Jul only) have seen a dearth of high temperature records:

However, even with these two very different climates, one feature is consistent – the continuously rising number of record hot nights relative to record hot days. This increase in hot nights is found everywhere we’ve looked. Unfortunately, because many scientists and agencies use TMean (i.e. influenced by TMin) as a proxy for greenhouse-gas induced climate change, their results will be misleading in my view.

I keep mentioning that the deep atmospheric temperature is a better proxy for detecting the greenhouse effect than surface temperature. Taking the temperature of such a huge mass of air is a more direct and robust measurement of heat content. Our UAHuntsville tropospheric data for the USA48 show July 2012 was very hot (+0.90°C above the 1981-2010 average), behind 2006 (+0.98 °C) and 2002 (+1.00 °C) and just ahead of 2011 (+0.89 °C). The differences (i.e. all can be represented by +0.95 ±0.06) really can’t be considered definitive because of inherent error in the dataset. So, in just the last 34 Julys, there are 3 others very close to 2012, and at least one or two likely warmer.

Then, as is often the case, the weather pattern that produces a sweltering central US also causes colder temperatures elsewhere. In Alaska, for example, the last 12 months (-0.82 °C) have been near the coldest departures for any 12-month period of the 34 years of satellite data.

In the satellite data, the NH Land anomaly for July 2012 was +0.59 °C. Other hot Julys were 2010 +0.69, and 1998 at +0.67 °C. Globally (land and ocean), July 2012 was warm at +0.28 °C, being 5th warmest of the past 34 Julys. The warmest was July 1998 at +0.44 °C. (In Part 2, I’ll look at recent claims about Northern Hemisphere temperatures.)

So, what are we to make of all the claims about record US TMean temperatures? First, they do not represent the deep atmosphere where the enhanced greenhouse effect should be detected, so making claims about causes is unwise. Secondly, the number of hot-day extremes we’ve seen in the conterminous US has been exceeded in the past by quite a bit. Thirdly, the first 10 weeks of 2012’s summer were the hottest such period in many parts of the central US for residents born after the 1930s. So, they are completely justified when they moan, “This is the hottest year I’ve ever seen.”

By the way, for any particular period, the hottest record has to occur sometime.

REFERENCE
McNider, R.T., G.J. Steeneveld, A.A.M. Holtslag, R.A. Pielke Sr., S. Mackaro, A. Pour-Biazar, J. Walters, U. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., 117, D14106, doi:10.1029/2012JD017578.

July 2012 Hottest Ever in the U.S.? Hmmm….I Doubt It

Wednesday, August 8th, 2012

Using NCDC’s own data (USHCN, Version 2), and computing area averages for the last 100 years of Julys over the 48 contiguous states, here’s what I get for the daily High temps, Low temps, and daily Averages (click for large version):

As far as daily HIGH temperatures go, 1936 was the clear winner. But because daily LOW temperatures have risen so much, the daily AVERAGE July temperature in 2012 barely edged out 1936.
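(For anyone who wants to try a similar calculation, here is a simplified Python sketch of a July area average: stations are first averaged into grid cells so densely sampled regions don’t dominate, then the cells are latitude-weighted. Column names are hypothetical, and this is not necessarily the exact gridding behind the plot above.)

import numpy as np
import pandas as pd

def july_area_average(stations: pd.DataFrame) -> pd.DataFrame:
    """Area-averaged July TMax, TMin, and TMean by year over the lower 48.
    'stations' has one row per station-July with columns
    ['year', 'lat', 'lon', 'tmax', 'tmin']."""
    df = stations.copy()
    df['tmean'] = (df['tmax'] + df['tmin']) / 2.0
    df['cell_lat'] = np.floor(df['lat'])          # 1-deg grid cells
    df['cell_lon'] = np.floor(df['lon'])
    cells = (df.groupby(['year', 'cell_lat', 'cell_lon'])[['tmax', 'tmin', 'tmean']]
               .mean().reset_index())
    cells['w'] = np.cos(np.radians(cells['cell_lat'] + 0.5))   # cell-center latitude weight
    for col in ['tmax', 'tmin', 'tmean']:
        cells[col] = cells[col] * cells['w']
    sums = cells.groupby('year')[['tmax', 'tmin', 'tmean', 'w']].sum()
    return sums[['tmax', 'tmin', 'tmean']].div(sums['w'], axis=0)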

Now, of course, we have that nagging issue of just how much urban heat island (UHI) effect remains in the data. The NCDC “homogenization” procedures are not really meant to handle long-term UHI warming, which has probably occurred at most of the 1218 stations used in the above plot.

Also, minimum temperatures are much more influenced by wind conditions and other factors near the surface…Max temperatures give a much better idea of how warm an air mass is over a deep layer.

Also, I thought one month doesn’t make a climate trend? If we look at the 5-year running mean of the daily averages for Julys over the last 100 years, we see that while recent Julys have indeed been warm, it is questionable whether they rival the 1930s:

And if we do the same 5-year averaging on July maximum temperatures, the 1930s were obviously warmer:
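(The smoothing in the last two plots is just a centered 5-year average; a one-function pandas sketch, with a hypothetical annual July series:)

import pandas as pd

def five_year_running_mean(july_by_year: pd.Series) -> pd.Series:
    """Centered 5-year running mean of an annual July temperature series."""
    return july_by_year.rolling(window=5, center=True).mean()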

So, all things considered (including unresolved issues about urban heat island effects and other large corrections made to the USHCN data), I would say July was unusually warm. But the long-term integrity of the USHCN dataset depends upon so many uncertain factors, I would say it’s a stretch to call July 2012 a “record”.

A New Analysis of U.S. Temperature Trends Since 1943

Monday, August 6th, 2012

With all of the hoopla over recent temperatures, I decided to see how far back in time I could extend my U.S. surface temperature analysis based upon the NOAA archive of Integrated Surface Hourly (ISH) data.

The main difference between this dataset and the others you hear about is that those other datasets are usually based upon daily maximum and minimum temperatures (Tmax and Tmin), which have the longest record of observation. Unfortunately, one major issue with those datasets is that the time of day at which the maximum or minimum temperature is recorded makes a difference, due to a double-counting effect. Since the time of observation of Tmax and Tmin has varied over the years, this potentially large effect must be adjusted for, however imperfectly.

Here I will show U.S. temperature trends since 1943 based upon 4x per day observations, always made at the same synoptic times: 00, 06, 12, and 18 UTC. This ends up including only about 50 stations, roughly evenly distributed throughout the U.S., but I thought it would be a worthwhile exercise nonetheless. Years before 1943 simply did not have enough stations reporting, and it wasn’t until World War II that routine weather observations started being made on a more regular and widespread basis.
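(For reference, here is a minimal sketch of how daily means can be formed from the 4x/day synoptic reports. Keeping only days with all four reports present is one simple way to avoid biasing the mean; it is not necessarily how the actual dataset handles missing observations, and the names are hypothetical.)

import pandas as pd

SYNOPTIC_HOURS = [0, 6, 12, 18]   # UTC observation times

def daily_means_from_synoptic(obs: pd.Series) -> pd.Series:
    """Daily mean temperature from observations at the four synoptic hours.
    'obs' is a temperature series indexed by UTC timestamps; a day is kept
    only if all four reports are present."""
    synoptic = obs[obs.index.hour.isin(SYNOPTIC_HOURS)]
    daily = synoptic.groupby(synoptic.index.date).agg(['mean', 'count'])
    return daily.loc[daily['count'] == 4, 'mean']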

The following plot shows monthly temperature departures from the 70-year (1943-2012) average, along with a 4th order polynomial fit to the data, and it supports the view that the 1960s and 1970s were unusually cool, with warmer conditions existing in the 1940s and 1950s (click for large version):
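(The smooth curve is nothing more than a least-squares smoother; a minimal numpy sketch, assuming a monthly anomaly series relative to the 1943-2012 average:)

import numpy as np
import pandas as pd

def poly4_fit(anomalies: pd.Series) -> pd.Series:
    """Fit a 4th-order polynomial to a monthly anomaly series and return the
    fitted curve. The polynomial is a display smoother, not a physical model."""
    t = np.arange(len(anomalies), dtype=float)
    coeffs = np.polyfit(t, anomalies.values, deg=4)
    return pd.Series(np.polyval(coeffs, t), index=anomalies.index)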

It’s too bad that only a handful of the stations extend back into the 1930s, which nearly everyone agrees were warmer in the U.S. than the ’40s and ’50s.

What About Urban Heat Island Effects?

Now, the above results have no adjustments made for possible Urban Heat Island (UHI) effects, something Anthony Watts has been spearheading a re-investigation of. But what we can do is plot the individual station temperature trends for these ~50 stations against the population density at the station location as of the year 2000, along with a simple linear regression line fit to the data:

It is fairly obvious that there is an Urban Heat Island effect in the data which went into the first plot above, with stations in the most populous locations generally showing the most warming, and those in the lowest population locations showing the least warming (or even cooling) since 1943. For those statisticians out there, the standard error of the calculated regression slope is 29% of the slope value.
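(The regression and its slope uncertainty are a one-liner with scipy; a minimal sketch, with hypothetical arrays of station trends and year-2000 population densities:)

from scipy.stats import linregress

def trend_vs_population(trends_c_per_decade, pop_density):
    """Regress station temperature trends on population density.
    Returns the slope, its standard error, and the standard error as a
    fraction of the slope (the 29% figure quoted above)."""
    result = linregress(pop_density, trends_c_per_decade)
    return result.slope, result.stderr, result.stderr / result.slope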

So, returning to the first plot above, it is entirely possible that the early part of the record was just as warm as recent years, if UHI adjustments were made.

Unfortunately, it is not obvious how to make such adjustments accurately. It must be remembered that the 2nd plot above only shows the relative UHI warming of higher population stations compared to the lower population stations, and previous studies have suggested that even the lower population stations experience warming as well. In fact, published studies have shown that most of the spurious UHI warming is observed early in population growth, with less warming as population grows even larger.

Again, what is different about the above dataset is it is based upon temperature observations made 4x/day, always at the same time, so there is no issue with changing time-of-observation, as there is with the use of Tmax and Tmin data.

Of course, all of this is preliminary, and not ready for peer review. But it is interesting.

U.S. Surface Temperature Update for July, 2012: +1.11 deg. C

Monday, August 6th, 2012

The U.S. lower-48 surface temperature anomaly from my population density-adjusted (PDAT) dataset was 1.11 deg. C above the 1973-2012 average for July 2012, with a 1973-2012 linear warming trend of +0.145 deg. C/decade (click for full-size version):

I could not compute the corresponding USHCN anomaly this month because it appears the last 4 years of data in the file are missing (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/9641C_201208_F52.avg.gz). Someone please correct me if I am mistaken.

Note that the 12-month period ending in July 2012 is also the warmest 12-month period in the 40-year record. I cannot compare these statistics to the (possibly warmer) 1930s because for the most part only max and min temperatures were reported back then, and my analysis depends upon 4x/day observations at specific synoptic reporting times.

There is also no guarantee that my method for UHI adjustment since 1973 has done a sufficient job of removing UHI effects. A short description of the final procedure I settled on for population density adjustment of the surface temperatures can be found here.

Post-Normal Science: Deadlines, or Conflicting Values?

Sunday, August 5th, 2012

“Never have so many scientists forecast so far into the future such fearful weather with so little risk of consequence for being wrong.” – I just made that up.

There is an excellent essay over at Judith Curry’s Climate Etc. blog by Steven Mosher entitled Post Normal Science: Deadlines, dealing with the factors involved in so-called post-normal science, which, as Steve summarized, is science where:

1. Facts are uncertain
2. Values are in conflict
3. Stakes are high
4. Immediate action is required

Not all scientific problems are created equal. Some physical processes are understood well enough to allow their routine use to make predictions which invariably turn out correct. We can launch a mission to Mars based upon our knowledge of the gravitational force exerted by the planets, or predict the future position of the planets many years in advance.

But many other scientific problems are not understood with enough certainty to make accurate predictions. If those problems also have huge societal impacts where policy decisions must also be made, we enter the realm of post normal science (Funtowicz and Ravetz, 1991).

Why the Urgency?

I must admit, I have a problem with the need for such a distinction as “post-normal science”, other than as an excuse for one set of values to attempt to beat another set of values into submission. All science involves uncertainty. That is nothing new. Also, all policy decisions involve uncertainty, and even without uncertainty there will be winners and losers when policies are changed.

After all, who is to decide whether decisions are urgent? I know that politicians might urgently desire to make new policies in a certain direction which typically favor a certain constituency, but there are abundant examples where decisions are made by government which later turn out to be bad.

The first one that comes to mind is the ethanol mandate. I don’t care if it was well intended. Millions of people throughout history have been killed through the good intentions of a few misguided individuals. Too often, policy decisions have been knee-jerk reactions to some perceived problem which was either exaggerated, or where the unintended consequences of the decisions were ignored — or both.

And it’s not just the politicians who want to change the world. I have related before my experience in talking with “mainstream” climate scientists that they typically believe that no matter what the state of global warming science, we still need to get away from our use of fossil fuels, and the sooner the better.

To the extent that fossil fuels are a finite resource, I would agree with them we will eventually need a large-scale replacement. But in the near-term, what exactly are our policy options? You cannot simply legislate new, abundant, and inexpensive energy sources into existence. We are stuck with fossil fuels as our primary energy source for decades to come simply because the physics have not yet provided us with a clear alternative.

And since poverty is the leading killer of humans, and everything humans do requires energy, any policy push toward more expensive energy should be viewed with suspicion. I could argue from an economic perspective that we should be burning the cheapest fuel as fast as possible to help spur economic growth, which will maximize the availability of R&D funding, so that we might develop new energy technologies sooner rather than later.

Why the need for either “normal” or “post-normal” categories?

Post-normal science follows on Thomas Kuhn’s 1962 concept of “normal science” in which he claimed science makes the greatest advances through occasional paradigm shifts in the scientific community.

Now, a paradigm shift in science is something which I would argue should not occur, because it implies the scientists were a little too confident (arrogant?) in their beliefs to begin with. If the majority of scientists in some field finally realize they were wrong about something major, what does that say about their objectivity?

Scientists should always be open to the possibility they are wrong — as they frequently are — and it should come as little surprise when they finally discover they were wrong. But scientists are human: they gravitate toward popular theories which enjoy favored status in funding and toward persuasive, even charismatic leader-scientists, and they routinely participate in “confirmation bias”, seeking evidence which supports a favored theory while disregarding evidence which is contrary to it.

Anthropogenic global warming

Which brings us to global warming theory. I currently believe that, based upon theory, adding carbon dioxide to the atmosphere should cause some level of warming, but the state of the science is too immature to say with any level of confidence how much warming that will be. If even 50% of the warming we have seen in the last 50 years is part of a natural climate cycle, it would drastically alter our projections of future warming downward.

Or, it is even theoretically possible that adding carbon dioxide to the atmosphere will have no measurable impact on global temperatures or weather, that basically for a given amount of sunlight the climate system maintains a relatively constant greenhouse effect. I’m not currently of this opinion, but I cannot rule it out, either.

So, we are faced with making policy decisions in the face of considerable uncertainty. As such, global warming theory would seem to be the best modern example of post-normal science. Funtowicz and Ravetz argued we must then rely upon other sources of knowledge in order to make decisions. We must look beyond science and include all stakeholders in the process of formulating policy. I have no problem with this. In fact I would say it always occurs, no matter how certain the science is. Scientific knowledge does not determine policy.

The trouble arises when “stakeholders” ends up being a vocal minority with some ideological interest which does not adequately appreciate economic realities.

Deadlines…or Conflicting Values?

In Mosher’s essay he eloquently argues that it is the deadlines which largely lead to not-so-scientific behavior of climate scientists.

But I would instead argue that the deadlines were only imposed because of competing values. Some political point of view decided to misuse science to get its way, and those supporting the opposing point of view were then dragged into a fight they did not ask for.

Regarding deadlines (the need for “immediate action”), there is no reason why the objective and truthful scientist cannot just say, “we don’t know enough to make an informed decision at this time”, no matter what the deadline is. It’s not the scientist’s job to make a policy decision.

Instead what we have with the IPCC is governmental funding heavily skewed toward the support of research which will (1) perpetuate and expand the role of government in the economy, and (2) perpetuate and expand the need for climate scientists.

To the extent that skeptics such as myself or John Christy speak out on the subject, it is (in my view anyway) an attempt to reveal the evidence, and physical interpretations of the evidence, which do not support putative global warming theory.

Sure, we might have to shout louder than a “normal scientist” would, but that is because we are constantly being drowned out, or even silenced through the pal- …er… peer-review process.

Our involvement in this would not have been necessary if some politicians and elites had not decided over 20 years ago that it was time to go after Big Energy through an unholy alliance between government and scientific institutions. We did not ask for this fight, but to help save the integrity of science as a discipline we are compelled to get involved.

UAH Global Temperature Update for July, 2012: +0.28 deg. C

Thursday, August 2nd, 2012

The global average lower tropospheric temperature anomaly for July (+0.28 °C) was down from June 2012 (+0.37 °C). Click on the image for the full-size version:

Here are the monthly stats:

YR MON GLOBAL NH SH TROPICS
2011 01 -0.01 -0.06 +0.04 -0.37
2011 02 -0.02 -0.04 +0.00 -0.35
2011 03 -0.10 -0.07 -0.13 -0.34
2011 04 +0.12 +0.20 +0.04 -0.23
2011 05 +0.13 +0.15 +0.12 -0.04
2011 06 +0.32 +0.38 +0.25 +0.23
2011 07 +0.37 +0.34 +0.40 +0.20
2011 08 +0.33 +0.32 +0.33 +0.16
2011 09 +0.29 +0.30 +0.27 +0.18
2011 10 +0.12 +0.17 +0.06 -0.05
2011 11 +0.12 +0.08 +0.17 +0.02
2011 12 +0.13 +0.20 +0.06 +0.04
2012 01 -0.09 -0.06 -0.12 -0.14
2012 02 -0.11 -0.01 -0.21 -0.28
2012 03 +0.11 +0.13 +0.09 -0.11
2012 04 +0.30 +0.41 +0.19 -0.12
2012 05 +0.29 +0.44 +0.14 +0.03
2012 06 +0.37 +0.54 +0.20 +0.14
2012 07 +0.28 +0.44 +0.11 +0.33

As a reminder, the most common reason for large month-to-month swings in global average temperature is small fluctuations in the rate of convective overturning of the troposphere, discussed here.