New U.S. Population-Adjusted Temperature Dataset (PDAT), 1973-2012

April 5th, 2012 by Roy W. Spencer, Ph.D.

This is the first of what I hope will be monthly surface temperature updates for the contiguous U.S., based upon 280 International Surface Hourly (ISH) stations which have reasonably complete temperature records since 1973.

Following up on my previous post showing that ISH station warming trends during 1973-2011 were a function of population density, I have quantified the average increase in temperature trend with population density (2000 population data) over the U.S., then applied a linear trend correction to each of the stations based upon that relationship.

A few of the findings:

1) Essentially all of the +0.20 deg. C/decade average warming trend over the U.S. in the last 40 years computed from the CRUTem3 dataset (which the IPCC relies upon for its official global warming pronouncements) evaporates after population adjustment (no claim is made for countries other than the U.S.).

2) Even without any adjustments, the ISH data have a 20% lower warming trend than the CRUTem3 data, a curious result since the CRUTem3 dataset is supposedly adjusted for urban heat island effects.

3) The only calendar month with obvious long-term warming is January, due to unusually cold U.S. winters during the 1970s.

4) Last month (March, 2012) is the second warmest monthly temperature anomaly in the 40 year record, and easily the warmest March, even after population adjustment.

For the time being, I’ve decided to post the results for comment rather than attempt to get the work published, which would be a much bigger effort. My hope is that the new dataset will stimulate debate in the climate research community over the existence of residual urban heat island (UHI) effects causing a spurious warming component in commonly used temperature datasets.

Unadjusted ISH Temperature Data vs. CRUTem3 Over the U.S.

As discussed in my previous post, the raw data come from the International Surface Hourly (ISH) database which is continuously updated at NCDC. I average the 4 synoptic reporting times (00, 06, 12, and 18 UTC) together to get a daily average temperature for each station. These are the most often reported times of day in the record, and using them alone maximizes the number of stations available for analysis while at the same time providing (what I believe to be) a more physically meaningful “daily” average than maximum and minimum temperatures do.
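The averaging step just described can be sketched in Python as follows. This is an illustration only, not the actual processing code; the `reports` layout and the function name are hypothetical stand-ins for whatever the real ISH-parsing code uses:

```python
# Illustration of the daily-averaging step: a "daily" temperature is the
# mean of the four main synoptic reports (00, 06, 12, 18 UTC).
# The `reports` layout and function name are hypothetical, not the ISH format.

SYNOPTIC_HOURS = (0, 6, 12, 18)

def daily_mean(reports):
    """reports: dict mapping UTC hour -> temperature (deg C) for one
    station-day. Returns the 4-time average, or None if any is missing."""
    temps = [reports.get(h) for h in SYNOPTIC_HOURS]
    if any(t is None for t in temps):
        return None
    return sum(temps) / len(temps)
```

Requiring all four reports (rather than averaging whatever is present) keeps the daily mean from being biased toward whichever times of day happen to survive.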

At least 80% of the daily data must be present to compute an average for a month from each station, and at least 90% of the months during 1973-present must also be available, as well as ALL calendar months in both 1973 and 2011. Nominally 280 stations in the U.S. meet this requirement, a number which does not change substantially throughout the 40-year record.
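The screening thresholds can be sketched like this (again a hypothetical illustration of the stated rules, with made-up function names, not the actual code):

```python
# Illustration of the stated screening thresholds: >= 80% of days for a
# monthly mean, >= 90% of months overall, and every calendar month
# present in both the first (1973) and last (2011) full years.

def month_ok(daily_values, days_in_month):
    """True if at least 80% of the month's days have a daily mean."""
    present = sum(1 for v in daily_values if v is not None)
    return present >= 0.8 * days_in_month

def station_ok(monthly_present, first_year_present, last_year_present):
    """monthly_present: booleans, one per month of the whole record.
    first/last_year_present: 12 booleans each, for 1973 and 2011."""
    frac = sum(monthly_present) / len(monthly_present)
    return (frac >= 0.9
            and all(first_year_present)
            and all(last_year_present))
```

Anchoring both endpoint years keeps the station's trend from being dominated by missing data at either end of the record.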

When monthly anomalies (relative to 1973-2011) are computed in 5 deg. lat/lon grids covering the contiguous U.S. from both the CRUTem3 dataset and from the unadjusted data from 280 stations, here are the resulting monthly variations during 1973 through March, 2012 for ISH and through January 2012 for CRUTem3 (all images can be clicked to see the large, detailed versions):

The monthly correlation between the two time series in the above plot is 0.994. Curiously, even without any adjustments to the ISH data, the resulting ISH linear warming trend (+0.157 deg. C/decade) is about 20% lower than the CRUTem3 trend (+0.198 deg. C/decade).
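The anomaly and trend numbers quoted above come from standard calculations, which can be sketched in pure Python (hypothetical data layout; the 5-deg gridding and any area weighting are omitted):

```python
# Sketch of the two quantities quoted above: monthly anomalies (each
# month minus its calendar-month mean over the record) and a
# least-squares trend in units per decade.

def anomalies(monthly):
    """monthly: temps ordered Jan 1973, Feb 1973, ..."""
    sums = [0.0] * 12
    counts = [0] * 12
    for i, t in enumerate(monthly):
        sums[i % 12] += t
        counts[i % 12] += 1
    clim = [s / c for s, c in zip(sums, counts)]
    return [t - clim[i % 12] for i, t in enumerate(monthly)]

def trend_per_decade(series):
    """OLS slope of a monthly series, in units per decade."""
    n = len(series)
    xs = [i / 120.0 for i in range(n)]          # months -> decades
    xbar = sum(xs) / n
    ybar = sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, series))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```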

If we difference the two time series, we get this (click for full res version):

There are a couple of things to note. First, we see that the excess warming in CRUTem3 versus the unadjusted ISH data grows with time.

Secondly, there is some evidence of artifacts which are likely from the CRUTem3 dataset, such as a sudden downward adjustment starting about November, 1996. My understanding is that the CRUTem3 dataset has a station distribution which changes over time. Also, I believe there are adjustments made to the data from individual stations. In the ISH dataset, however, we have 280 stations with essentially complete data from beginning to end of the record, with no adjustments; it is difficult to see how such a jump could have arisen from the ISH data.

The Population Density Adjustment

Linear temperature trends computed from each of the 280 stations reveal a dependence on population density. Just as has been found in previous studies based upon spatial temperature patterns (cities being warmer than the surrounding countryside), we find that the warming trend with time increases rapidly with population at low population densities, then levels off at high population densities.

This nonlinear relationship is found here to go as population density (PD) raised to the 0.2 power (warming ~ PD^0.2). The following plot shows the results for all stations individually, as well as for averages in 4 population subgroups (click for full res version):

As seen in the above pair of plots, essentially the same regression coefficient is computed whether I use all stations individually, or average them into 4 population subgroups. The standard error of the regression coefficient is +/- 20%, which should give some idea of the statistical uncertainty in the population-based adjustments to the temperature data shown below.
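The regression itself is ordinary least squares of station trend against PD^0.2. A minimal sketch (the function name is hypothetical, and the actual fit, e.g. any weighting of the 4 subgroup averages, may differ in detail):

```python
# Sketch of the fit described above: OLS of station trend against
# population density raised to the 0.2 power, with an intercept.

def fit_pd_relation(trends, densities, exponent=0.2):
    """Fit trend = a + b * PD**exponent; returns (a, b)."""
    xs = [d ** exponent for d in densities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(trends) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, trends))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b
```

The intercept `a` is the trend the fit predicts for a zero-population station, and `b * PD**0.2` is the population-dependent part that gets removed below.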

Importantly, I will assume that this average relationship between temperature trend and population density is entirely due to the urban heat island effect, and remove it from each station. Of course, some stations will have little or no UHI effect, while others will have a strong one. The above plots’ regression lines show the average relationship across all stations, and it is that average which I simply remove from every station. This avoids qualitative decisions about individual stations’ histories, which would be difficult for other investigators to reproduce, and keeps the methodology simple.

Since (as we will see) this adjustment removes most of the warming trend in the U.S. since 1973, it will be the most criticized. It will be claimed that the warming trends are indeed real, and that it must be by coincidence that the most populated regions of the country have also warmed the most.

But that claim has no independent evidence, other than the thermometer data. It has no more support than my claim that the warming dependence on population is spurious, due to the UHI effect.

In fact, I think it has less support. We know based upon many published studies that the UHI effect is real, at least in spatial terms (cities average warmer than the surrounding countryside). The above plots show a similar effect on temperature trends, with a nonlinear functional dependence approximately like that seen in the spatial dependence found by other investigators. That this effect would be fortuitous seems to stretch credulity.

Results with Population Density Adjustment

The regression coefficient from the above plot was used to make a linear temperature trend adjustment to the ISH temperatures, starting with zero adjustment in January 1973. The resulting plot, analogous to the very first one above, for U.S. temperature variations since 1973 is shown next (click for full res. version):
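The adjustment amounts to subtracting, from each station, a linear ramp that is zero in January 1973 and grows at the station's predicted UHI trend. A sketch (the UHI trend would come from the PD^0.2 regression; here it is just a parameter):

```python
# Sketch of the adjustment step: remove a linear ramp, anchored at zero
# for the first month of the record (January 1973), growing at the
# station's predicted UHI trend.

def population_adjust(monthly_anoms, uhi_trend_per_decade):
    """Remove a ramp of uhi_trend_per_decade (deg C/decade) from a
    monthly series that starts in January 1973."""
    return [t - uhi_trend_per_decade * (i / 120.0)
            for i, t in enumerate(monthly_anoms)]
```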

Significantly, the population adjustment erases essentially all of the U.S. warming over the last 40 years. Nevertheless, last month (March, 2012) is seen to be the 2nd warmest month in the 40 year record, and (as we will see) easily the warmest March.

The corresponding difference plot between the two datasets shows what I am interpreting to be considerable spurious warming in the CRUTem3 dataset:

U.S. Temperature Variations, 1973-2012, by Calendar Month

When we examine the seasonal dependence of U.S. temperature changes over the last 40 years, we find that the only month with significant warming is January, and even that is only because there were so many cold Januaries in the late 1970s and early 1980s. The other months are essentially flat. Plots for individual months are shown next, and note that the January, February, and March plots end in 2012, while the others end in 2011 (click for full res. versions):











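The by-calendar-month plots above amount to fitting a separate trend to each calendar month's sub-series; a minimal sketch (hypothetical function name):

```python
# Fit a separate OLS trend to each calendar month's sub-series of a
# monthly anomaly record that starts in January.

def monthly_trends(anoms):
    """anoms ordered Jan 1973, Feb 1973, ...; returns 12 slopes in
    units per decade, one per calendar month."""
    out = []
    for m in range(12):
        ys = anoms[m::12]                        # one value per year
        xs = [i / 10.0 for i in range(len(ys))]  # years -> decades
        xbar = sum(xs) / len(xs)
        ybar = sum(ys) / len(ys)
        b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
        out.append(b)
    return out
```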
Conclusions

I am quite surprised that, even without any adjustments, the ISH data show 20% less U.S. warming than the CRUTem3 data over the 1973-2011 period. Since the CRUTem3 data are supposedly adjusted for urban heat island (UHI) effects, this seems quite curious, to say the least.

When the ISH temperature data are corrected for the average warming bias — shown here to be a function of population density — it essentially erases 40 years of U.S. warming: from +0.20 deg C/decade in the IPCC-blessed CRUTem3 dataset, to +0.01 deg. C/decade. For those interested in statistical uncertainties, the standard error in the regression coefficient I used would amount to about +/-20% uncertainty in the reduction in the warming trend.

The warmth of March, 2012 is indeed anomalous, at least in the context of the last 40 years. But as the plot of all the Marches (above) shows, one month does not a warming trend make. :)

UPDATE #1: The Recent Warm Winter
Regarding the recent winter, if we plot trailing 3-month averages of the population-adjusted temperatures, we see that January through March of this year (2012) was the warmest 3 month period of the 40-year record. Of course, the warmest 3 months must occur at some point in the record, and since there is no long-term trend in the data, I would wager that it is a temporary blip, rather than a sudden shift into a new climate regime:
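The trailing 3-month average used here is a simple backward-looking moving mean; a sketch:

```python
# Backward-looking moving mean: entry i averages the current month and
# the (window - 1) months before it.

def trailing_mean(series, window=3):
    """Entry i is the mean of series[i-window+1 : i+1]; the first
    window-1 entries are None (not enough history)."""
    out = [None] * (window - 1)
    for i in range(window - 1, len(series)):
        out.append(sum(series[i - window + 1 : i + 1]) / window)
    return out
```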

UPDATE #2: Why the Discrepancy with UAH LT Temperatures?
It has been pointed out that our UAH LT (tropospheric temperature) product has a warming trend for 1979-2011 of about +0.20 deg. C/decade, so why the difference with my near-zero surface temperature trend (which is near-zero whether you start in 1973, or 1979)?

The monthly correlation between the two datasets is 0.87, so there is reasonably good agreement on that time scale, but a time series plot of their difference suggests some sort of step jump in 1995:
The direction of the change would be either spurious warming in UAH LT or spurious cooling in the ISH PDAT surface temperature data. The plot really doesn’t look like the CRUTem3 -minus- ISH PDAT plot (reproduced below), so I don’t have a ready explanation for it:

Now, 1995 happens to be when the NOAA-11 satellite was replaced by NOAA-14, and those two satellites had to be intercalibrated with NOAA-12, which was going through its own diurnal drift. So, there might be a diurnal drift issue here that has not been sufficiently accounted for. Maybe our new (but unfinished) diurnal adjustment strategy for Version 6 of the UAH dataset will shed light on this.

Of course, it is always possible that a weather regime change around that time led to a change in the tropospheric temperature lapse rate, but that is pure speculation on my part.


40 Responses to “New U.S. Population-Adjusted Temperature Dataset (PDAT), 1973-2012”


  1. Paul S says:

    Roy,

    If there has been essentially zero surface warming in the US, where has the +0.21 °C/decade trend in the UAH TLT data come from?

  2. If that is indeed the 1979-present trend for the U.S. (I haven’t looked), then 4 possibilities come to mind:

    1) the difference is real, and surface warming has been compensated by extra convective heat transport from the surface to the troposphere;

    2) my analysis is wrong;

    3) the UAH LT data over the U.S. are wrong;

    4) some combination of the above.

    • Also, see Update #2, above…there seems to be a step change in LT -minus- Tsfc.

    • Ian Smith says:

      I suspect the answer is that you have incorrectly adjusted the data. The correct approach is to look for a relationship between population density and temperature, and then use this to adjust the temperatures.
      By adjusting TEMPERATURE TRENDS, as you have done, you are simply DETRENDING the data (by removing the average TREND).
      As a test, try repeating your analysis, but replace population density with any other random variable (for example, the number of characters in each station’s name). I predict you will get the same result.
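      The check proposed above can be sketched as follows. This is only an illustration with simulated data: it shows that if a regression fit (intercept included) is removed in full, the mean residual trend is zero by construction for ANY covariate, relevant or not. Whether the actual PDAT adjustment removes the intercept term is a separate question.

```python
# Sketch of the proposed null test: adjust station trends using an
# irrelevant covariate (e.g. station-name length) and see whether the
# mean "adjusted" trend still collapses to zero.
import random

def mean_residual_trend(trends, covariate):
    """OLS fit trend = a + b*covariate, subtract the fit, and return
    the mean of the residuals (the 'adjusted' national trend)."""
    n = len(trends)
    xbar = sum(covariate) / n
    ybar = sum(trends) / n
    b = (sum((x - xbar) * (t - ybar) for x, t in zip(covariate, trends))
         / sum((x - xbar) ** 2 for x in covariate))
    a = ybar - b * xbar
    return sum(t - (a + b * x) for t, x in zip(trends, covariate)) / n

random.seed(1)
name_lengths = [random.randint(4, 20) for _ in range(280)]     # irrelevant covariate
station_trends = [random.gauss(0.15, 0.1) for _ in range(280)]  # simulated trends
# Residuals of an OLS fit with an intercept always average to zero.
```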

  3. BillC says:

    @Roy, re point 1): is the US a large enough area to have a consistent correlation between surface temp and TLT, even without errors? E.g., what’s the ratio of horizontal to vertical convective transport?

  4. D.C. says:

    Sorry, this doesn’t pass the smell test. Population and temperature are completely unrelated measurements — correlation does not imply causation. You can’t “correct” the data absent a showing that there has in fact been a substantial change in the urban character at a particular temperature recording site. Just because the population density of a region has increased doesn’t mean the location in which the temperature is being recorded has become more urbanized. 70 years ago we were recording temperatures in downtown Baltimore on the rooftop of the Commons House. Today, they’re taken at BWI International — and yet the records are still being broken. If they were still taken downtown, they would be getting shattered. Look at the temperatures at the Inner Harbor site at the Science Museum. It’s routinely several degrees warmer than BWI — the NWS-appointed “official” temperature for the city of Baltimore. And that’s not even taken on a rooftop like temperatures used to be. Watts et al. demonstrated rooftop temperatures can add a bias of up to 5 to 10 degrees in addition to any UHI concerns. Which reading do you think would have a greater bias — BWI today or the Commons House roof in 1945?

    • Then how do you explain that the higher the population density, the more warming there has been? And with a functional relationship similar to what is seen for spatial warming as a function of population density?

      The IPCC makes cause-and-effect arguments with weaker evidence than this.

  5. D.C. says:

    Hmm… Kind of strange that there’s no temperature increase given that water temperatures have been increasing and ice cover has been decreasing on the Great Lakes and other waters in the midwest and northeast. I wonder what mechanism of the urban heat island effect could cause that?

  6. Olavi says:

    Paul S says:

    April 5, 2012 at 10:26 AM

    Roy,

    If there has been essentially zero surface warming in the US, where has the +0.21 °C/decade trend in the UAH TLT data come from?
    ———————————————-

    UAH has no UHI correction.

    There are two explanations for the warming: UHI and the Sun. So what is the role of CO2? ZERO, as I have said for many years.

  7. Brian D says:

    The 0.21 trend is posted on your site here for the USA. Just looking at precip for the US since the early '70s, we have seen more average to above-average years than in the previous 40 yrs (1930-1969), based on the century average. 1970-2011 averaged 30.18 inches, while 1930-1969 averaged 28.44 inches. Indicating an increase in convective transport?

  8. Olavi says:

    D.C. says:

    April 5, 2012 at 1:12 PM

    Hmm… Kind of strange that there’s no temperature increase given that water temperatures have been increasing and ice cover has been decreasing on the Great Lakes and other waters in the midwest and northeast. I wonder what mechanism of the urban heat island effect could cause that?
    ——————————————————

    Billions of gallons of warm waste water from cities, energy plants and factories.

  9. Brian D says:

    NCDC records for 1973-2011 in the US show a 0.44 deg F/decade trend in annual mean temps, and no trend in precip. But the precip average is higher than at any time in the earlier record, as I posted earlier. Don’t know if this info is helpful, but it’s there.

    As one who has lived on the Big Lake all his life, I can say with certainty that the ice conditions in winter changed dramatically after the 1998 El Nino. Winter 2008 was a higher-ice year of the kind that was more common prior to 1998. But otherwise ice conditions have struggled to reach even average at any point in winter, which has allowed Lake temps to be on the warmer side later in the summer.

  10. phi says:

    About the CRUTem3 - UAH TLT comparison. I just finished a small program that I tested by comparing UAH TLT NH against CRUTem3 NH land (annual values). This does not directly concern the U.S. but is still interesting. The idea is to assume that a linear bias (e.g. UHI) affects the first series. The second (a proxy) is calibrated on the first based solely on high frequencies. It is assumed that the proxy is not affected by a systematic bias (its trend is taken as correct).
    http://img713.imageshack.us/img713/883/crutem3uah.png
    In this particular case, one can see a possible bias of 0.093 °C per decade in CRUTem3.

    Another interesting item is to compare the standard deviations of the high frequencies: we see that TLT amplifies surface temperatures by a factor of 0.1891/0.1782 = 1.06. This value is not very reliable given the short calibration period, but it is consistent with what one might assume (a lower tropospheric amplification over land than over oceans).

  11. Kasuha says:

    Why is CRUTem3 analysed here if CRUTem4 is already available? Or are they the same for US?

  12. phi says:

    Surprising. The same treatment applied to the Briffa 1998 dendro (MXD) series and the Jones 1999 temperatures (both series are derived from Climategate mail).
    http://img221.imageshack.us/img221/3179/briffa1998p.png
    The bias is the same (0.093 °C per decade)!

  13. Kasuha says:

    “Significantly, the population adjustment erases essentially all of the U.S. warming over the last 40 years.”
    This in particular is no surprise to me, as the method amounts to adjusting every trend toward the “zero-population station trend”, which comes out as 0.0128 C/decade from your regression. I just think it’s not 100% correct, as there are no zero-population stations in the dataset.
    In my opinion, if we assume the low-population stations are the right ones and don’t have any problems such as spurious cooling, then these should be taken as the baseline. The average trend of stations in the 1-32 people/km2 range seems to be a little less than 0.1 C/decade, and the rest should be adjusted to match these rather than zero.
    I also think the scatter plot of trend vs density would deserve a bit more attention, running just a single line through it is probably not enough.

  14. Doug Cotton says:

     
     
    Why be surprised at erasing 40 years of US warming, Roy? We are on the downside of the 60 year cycle.

    In my view we need to focus on the assumed problem, namely carbon dioxide and, to a lesser extent, methane perhaps. If I refer to trace gases take it to mean these, because I refuse to call them greenhouse gases.

    We have what we have in the Earth’s total system. Somehow, in some way we may never fully understand, a long-term near equilibrium situation has developed. We have some energy being generated in the core, mantle and crust, most likely by fission I think, but I won’t go into that. But it does set up a temperature gradient from the core to the surface which is very stable below the outer kilometre or so of the crust. However, it may vary in long-term natural cycles that have something to do with planetary orbits. Likewise, the intensity of solar radiation getting through the atmosphere to the surface may also vary in natural cycles which may have something to do with planetary influences on the Sun, and on the eccentricity of Earth’s orbit and on cosmic ray intensity and on cloud cover, ENSO cycles etc.

    There is much to be learned about such natural cycles, and we have seen papers by Nicola Scafetta for example which appear to provide compelling evidence of the natural cycles. I believe that in fact such natural cycles are quite sufficient to explain all observed climate change, including what has happened in the last half century or so, right up to the present. The world has just been alarmed because the 1000 year cycle and the 60 year cycle were both rising around 1970 to 1998, just as they did by about the same amount 60 years earlier, and 60 years before that and no doubt further back. We cannot escape the obvious fact that there is a ~1000 year cycle which is due for another maximum within 50 to 200 years. Then there will be 500 years of falling temperatures.

    But the central issue is whether or not trace gases are really having any effect at all on climate.

    In my paper I have explained the physics of heat transfer and demonstrated why trace gases cannot have any effect whatsoever on what we call climate.

    Climate may be thought of as the mean of temperature measurements, usually made in the air between 1.5 and 2 metres above the ground. Thermometers are affected by the thermal energy in that air near the surface. As you can read here, thermal energy is distinct from heat. It is transferred by molecular collision processes (conduction and diffusion), by physical movement (convection) and by radiation. The energy in radiation is not thermal energy. Thermal energy is first converted to electromagnetic (radiated) energy and then that EM energy has to be converted back to thermal energy in a target. Hence, in a sense, thermal energy only appears to be transferred by radiation.

    The Second Law of Thermodynamics (SLoT) tells us that in any (one way, independent) spontaneous process, entropy cannot decrease unless external energy is added. There are no two ways about it. If spontaneous radiation emanates from a cooler object (or atmosphere) its EM energy cannot be converted back to thermal energy in a warmer target, such as Earth’s surface. This point is not debatable. A violation of the SLoT cannot be excused on the grounds that there will be some subsequent independent process (maybe not even radiation) which will transfer more thermal energy back to the atmosphere. If you disagree, you are mistaken.

    However, the radiation from a cooler body can affect the radiative component of the cooling of a warmer body. Although such radiation undergoes what I call “resonant scattering” this does involve the “resonators” in the warmer body and uses up some of its radiating capacity. Because the incident radiation supplies the energy, the warmer body does not need to convert an equivalent amount of its own thermal energy. Hence it cools more slowly.

    But, the resonating process involves all the (potential) different frequencies in the incident radiation. There will be far less effect when there are limited frequencies as is the case for radiation from a trace gas in the atmosphere. Furthermore, the effect depends on the temperature of that gas and is less when it is cooler. It is far less from space (equivalent to about 2.7K) and so there is no slowing of cooling for that portion of radiation which gets through the atmospheric window.

    The remaining radiation (when we look at net figures, not all that backradiation) represents less than a third of all the cooling processes from the surface to the atmosphere. The other non-radiative processes can, and will, simply speed up in order to compensate, because they do so if the temperature gap increases. There are further reasons discussed in Q.3 in the previous post.

    So there is no overall effect at all due to trace gases on the rate of cooling of the surface. Thus there can be no effect upon climate.

    Discussion on this continues on this thread.

     
     

  15. Christopher Game says:

    Responding to the post of Doug Cotton of April 5, 2012 at 7:41 PM.

    Doug Cotton writes: “As you can read here thermal energy is distinct from heat.”

    Here Doug Cotton is asking the reader to refer to the Wikipedia as support for his argument. This was an unreliable move.

    The term “thermal energy” is not one admitted as strictly defined in thermodynamics. This is part of the import of the first law of thermodynamics. That he does not seem to understand this, and that he cites the Wikipedia in support, seems like evidence that Doug Cotton does not understand the full import of the first law of thermodynamics.

    Moreover, Doug Cotton writes: “The energy in radiation is not thermal energy.” He is up against the opinion of Max Planck here. Most of us accept Planck’s view, which is based on a long history of physical study of heat radiation.

  16. David Reeve says:

    Roy you conclude “I am quite surprised that, even without any adjustments, the ISH data show 20% less U.S. warming than the CRUTem3 data over the 1973-2011 period. Since the CRUTem3 data are supposedly adjusted for urban heat island (UHI) effects, this seems quite curious, to say the least.”

    However, I believe I am correct in saying the CRUTem3 data set would be recording the metaparameter (Tmax + Tmin)/2, whilst you have averaged 4 temporally equally spaced point samples that align to the max and min points rather obliquely, through the relationship of solar time to UTC. I continue to have grave difficulty in giving either of these metaparameters physical meaning, but, be that as it may, as far as the “tea leaf reading” of global temperature anomalies goes, I would have said it is quite likely these two different metaparameters won’t trend equally. For instance, if the UHI effect is to restrict the depth of the minimum temperature, reducing night cooling, then you would see pretty much what you are seeing between the two data sets, wouldn’t you?

  17. Gordon Robertson says:

    Christopher Game “The term “thermal energy” is not one admitted as strictly defined in thermodynamics”.

    Thermal energy is a more accurate way of saying heat. Carnot struggled with what it is, but Clausius cut to the chase. He said it was the excitation of atoms in a substance, and he based the definition of entropy on the disgregation, or separation of molecules in a substance, which became more extreme as it warmed.

    The basis of heat is atomic lattice vibration and the change in energy levels of electrons. To distinguish heat from infrared energy, which can raise the level of excitation in atoms, it is convenient to use the term thermal energy.

    In their paper on the falsification of the greenhouse effect, Gerlich and Tscheuschner commented on a claim by uber-alarmist, Stefan Rahmstorf, that the 2nd law is not contravened because a net ENERGY balance between a warmer and a cooler body is positive (i.e. flows from the warmer to the cooler body).

    G&T replied that energy is not heat, and that the 2nd law is related to heat not energy, but I think they meant to say IR energy is not heat, something I attribute to the fact that their primary language is German. IR is electromagnetic energy and carries no more heat than light carries colour. Colour is a property of the human eye, not a property of light. UV, light and IR are all part of a continuum of vibrating energies.

    I feel this is a huge mistake in climate science, confusing IR with heat. Rahmstorf would have been correct in summing IR between a warm and hot body, but he had no right inferring that IR is heat, therefore the 2nd law is contravened by the AGW theory.

    Heat can only be transferred from a warmer body to a cooler body, and although I cannot yet explain the atomic relationship between IR from a cooler energy source and atoms vibrating at a higher energy level, I am quite sure the IR from the cooler body cannot make the atoms at a higher energy level vibrate harder. If they can’t raise the energy level to make them vibrate harder, they can’t increase the heat content.

    Clausius was clear about that. He claimed IR flowed both ways between a warmer and a cooler body but that heat could only be transferred from the warmer body to the cooler.

    It may be a moot point anyway. Robert Wood made an excellent point in 1909, that surface radiation likely does not act any more than a few feet above the ground. If you hold your hand a foot away from a 1500 watt electric stove ring, you can endure the IR. If you move it to within 1/4 inch, it will cook your flesh by raising the excitation in the molecules of your flesh. That’s how microwaves operate as well, and radar and intense radio frequency energy. And that’s how quickly IR drops off in intensity.

    One more thing. Wood, G&T and more recently, Nahle, got the same results from an experiment in which boxes were covered with plain glass and a sheet of rock salt respectively. Rock salt apparently allows IR to pass through it. When the boxes were allowed to sit in the sun for a while, they both showed similar temperatures.

    That proves real greenhouses do not warm due to trapped IR but due to a lack of convection. In the Nahle experiment, he allowed convection in one box and it cooled. In that case, GHGs cannot duplicate a real greenhouse because absorbing IR, or slowing it down as they claim, is not going to make a difference. Also, the atmosphere is loaded with convection.

  18. Gordon Robertson says:

    Doug Cotton “We have some energy being generated in the core, mantle and crust….”

    ‘SOME’ energy in the core??? You’re a bit cheeky, aren’t you? The surface of the core is at roughly the same temperature as the Sun’s surface, about 5000 C. The Sun is much hotter internally. I would venture that having 5000 C temps on one end of a gradient would have a significant effect. Many geophysicists seem in denial about that.

    John Christy of UAH has offered that the 30 year satellite record can be viewed in two stages. There is a period prior to the 1998 El Nino extreme which features average temps below the 1981 – 2010 average, which he puts down to the cooling of volcanic aerosols. Post 1998 warming features El Nino warming.

    I am very curious as to how the global average jumped 0.2 C between 2001 and 2002 and stayed there for a few years. I’ve got this thing in my head about the effects of impulses in an electronic circuit, causing ringing after the impulse. I wonder if the atmosphere can be affected in the same way, sort of like a resonance.

    If you look at the graph on Roy’s site, it is plain that the post-1998 period is cyclical, between El Nino highs and La Nina lows. The question arises as to what has caused the heightened ENSO activity. One article I read links it to a phase change in the Pacific Decadal Oscillation.

  19. jim karlock says:

    No warming since 1973 – pretty close to when the proxies (of hide the decline fame) quit showing warming. Oops.

    Thanks
    JK

  20. D.C. says:

    Interesting that the step change between UAH and PDAT appears in the mid 1990s. I also noticed you limited your analysis to ISH stations, rather than using the full complement of co-op sites from GHCN. In any case, the mid 90s was a time of widespread deployment of ASOS at those sites. I know there were studies at the time that showed ASOS imparted a bit of a cool bias compared to conventional means of temperature recording. I wonder if the switch to ASOS was not adequately accounted for in the temperature records.

  21. Mike Blackadder says:

    Is the discrepancy shown here explained by the TOB (time of observation) adjustment? If so, then I’m thinking the TOB adjustment basically accounts for the entire trend in the US. That doesn’t mean it is necessarily an unjustified adjustment, but I think it is important to keep in perspective how subtle a trend they attempt to extract from a very noisy record; obviously it isn’t the data itself that indicates significant warming, but rather the validity of the methods used in adjusting the records that has to be examined.

  22. Hector M. says:

    Dr Spencer’s argument would be much strengthened if he used at least two population dates, say 1970 and 2010, or 1970-80-90-00-10, and not only one (2000). In its present version, the observed correlation is cross-sectional (temperature trend during a period vs. population at one point in time), whereas what needs to be shown is a longitudinal correlation (temperature change vs. population change). For instance, a large city where population has remained roughly constant should show no particular UHI effect on its trend, while another city with a smaller but rapidly growing population should show a perceptible one.
    On the other hand, a station may show a localized heat effect, even if it is “rural”, insofar as the station’s immediate surroundings have undergone some relevant transformation (e.g. when the green field in which it was located was paved with concrete or asphalt to convert it into a parking lot). As shown in the station survey carried out by volunteers coordinated by Anthony Watts, many supposedly “rural” stations are now in not-so-green locations.
    Third, pop density is not the only factor: industrial areas may have scant population but a lot of local heating sources such as concrete, engines, manufacturing plants, etc. The same goes for airports, or for downtown areas with relatively little resident population, mostly occupied by offices and with very intense vehicle movement (think lower Mannhattan).
    This is not meant to detract of Dr Spencer’s contribution, but as a suggestion to improve its power.
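
    [Ed. note] Hector’s cross-sectional vs. longitudinal distinction is easy to make concrete with a synthetic sketch. All numbers below are invented for illustration; none of this is Dr. Spencer’s actual station data. The setup assumes the spurious warming component tracks population *growth* rather than the population *level*, in which case a regression against a single-year snapshot badly understates the relationship:

```python
import numpy as np

# Synthetic illustration: every number here is invented.
rng = np.random.default_rng(2)
n = 280  # same station count as the ISH analysis, for flavor

pop_1970 = rng.lognormal(mean=4.0, sigma=1.5, size=n)  # persons/sq km, 1970
growth = rng.lognormal(mean=0.2, sigma=0.3, size=n)    # 1970-2010 growth factor
pop_2010 = pop_1970 * growth

# Suppose the spurious warming component tracks population GROWTH, not level
trend = 0.10 + 0.15 * np.log(growth) + 0.05 * rng.standard_normal(n)

# Cross-sectional view: trend vs. a single population snapshot
r_level = np.corrcoef(np.log(pop_2010), trend)[0, 1]

# Longitudinal view: trend vs. population change over the period
r_change = np.corrcoef(np.log(pop_2010 / pop_1970), trend)[0, 1]

print(f"corr(trend, log pop 2010) = {r_level:.2f}")
print(f"corr(trend, log growth)   = {r_change:.2f}")
```

    Under these invented assumptions the snapshot correlation is weak while the growth correlation is strong, which is exactly the distinction the comment draws.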

  23. Noblesse Oblige says:

    In this context you may find this Gallup poll amusing: http://www.greencarcongress.com/2012/03/gallup-20120322.html. 79% of Americans thought this past winter was warmer than usual. In fact it was, in the East and Midwest, where population density is highest, and indeed 90+% of respondents in those regions thought so. But what is more interesting is that 55% of those in the western part of the country also thought so, even though it was not. How should we interpret this? 1. The power of suggestion contained in the poll itself, or what people heard about other regions of the country; 2. Preconditioning of the population by decades of drumbeat, leading them to expect it will get warmer even when it doesn’t; 3. Randomness: half of the population is always going to think it is warmer, while the other half think it is cooler.

  24. Just some guy says:

    Here’s a study that would be more convincing to me: if there were a way to delete all the ISH data for locations with population density above 32 per sq km (for the entire time period), and use only data which have never been exposed to UHI effects, then look at the plot.

  25. harrywr2 says:

    Noblesse Oblige says

    “But what is more interesting is that 55% of those in the western part of the country also thought so, even though it was not. How should we interpret this?”

    US national news is broadcast from New York, so there is always going to be a ‘local’ bias in the national reporting. Unusually warm weather on the east coast is going to be a bigger story than simultaneous unusually cold weather on the west coast.

  26. Doug Cotton says:

    The following good question was asked on another site: “What is the motivation for a new hypothesis, when Statistical Thermodynamics and Quantum Mechanics, as they have been understood for 100 years, completely explain the observed data and come to the same conclusions about CO2 vs WV, the atmospheric window, etc.?” To which I responded …

    There is a huge difference when you understand the mechanism by which the rate of radiative cooling is slowed during the process of resonant scattering.

    This results in the degree of the slowing effect being related not only to the temperature of the source, but also to the number and position of the spectral lines in the emission. Lines at frequencies significantly different from the peak have lower intensity, because their intensity is restricted by the Planck curve.

    AGW proponents make out that all the CO2 has nearly the same total effect as all the water vapour, whereas I say it probably has less than 1% of the effect of all water vapour, since each CO2 molecule has less effect owing to the few frequencies (spectral lines) in its emission.

    If you would like me to respond to anything you have to say on this, please post on the dedicated thread on tallbloke’s talkshop.

  27. barry says:

    This is a new argument to me: humanity is causing the temperature increases, but not because of CO2. It is our urbanisation, and the associated waste disposal into the seas and around glacier sites, that is causing these indicators to give the impression that warming is widespread. TLT data also show warming, but that may be due to problems with the data. It would be quite a stunning revelation if all these indicators (including species migration and the shift of seasons and climate zones), which are well explained by system-wide warming, were instead the result of local effects.

  28. Scott Supak says:

    “I would wager that it is a temporary blip, rather than a sudden shift into a new climate regime.”

    Well, then, you should go place those wagers:

    https://www.intrade.com/v4/markets/?eventId=91252

  29. Christopher Game says:

    Response to the post of Gordon Robertson of April 6, 2012 at 12:43 AM.

    Gordon Robertson writes: “Thermal energy is a more accurate way of saying heat.”

    Gordon Robertson’s post offers argument for his statement, and then makes other comments.

    Gordon does not directly try to refute my statement that he cites, “that the term “thermal energy” is not one admitted as strictly defined in thermodynamics.” The reason for my statement lies in the full meaning of the first law of thermodynamics, which accurately distinguishes transfer of energy as heat and as work. The term “thermal energy” does not do that, and that is why it is not admitted as a strictly defined term in thermodynamics.

    Gordon is mistaken to say that thermal energy is a more accurate way of saying heat, at least as far as thermodynamics is concerned. I accept that “thermal energy” is more polysyllabic than “heat”, but that doesn’t make it more quantitatively informative. If Gordon really thinks the term “thermal energy” is more quantitatively informative than the thermodynamic concept of heat, he would do well to go back to his thermodynamics textbooks to find out why he is mistaken. He could look particularly at accounts of the Joule-Thomson porous-plug experiment.

  30. Ian Smith says:

    Dr Spencer,

    Correct me if I am wrong.
    It seems that you have simply detrended the data using linear regression, and are then surprised that the trends disappear.

    You would get the same result by performing a linear regression between the trends and any other random variable you care to choose.

    For example, I could plot temperature trends versus the price of fish, find the linear relationship (no matter how insignificant), then go back and use this relationship to adjust the data. Voilà! The trends disappear.
    Therefore any conclusions drawn from this exercise are meaningless.

    Ian
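
    [Ed. note] Ian’s claim can be checked numerically. A minimal sketch, using invented numbers and a deliberately meaningless covariate (“fish_price”): when an intercept is fitted, ordinary-least-squares residuals average to exactly zero, so adjusting every station by its full fitted value removes the mean trend no matter what the covariate is.

```python
import numpy as np

# Invented numbers; "fish_price" is a deliberately meaningless covariate.
rng = np.random.default_rng(0)
n = 280

trend = 0.20 + 0.05 * rng.standard_normal(n)  # station trends, ~0.20 C/decade
fish_price = rng.uniform(1.0, 10.0, size=n)   # unrelated covariate

# OLS fit of trend on the covariate (np.polyfit returns slope, intercept)
slope, intercept = np.polyfit(fish_price, trend, 1)

# "Adjust" each station by subtracting its full fitted value
adjusted = trend - (intercept + slope * fish_price)

print(f"mean trend before: {trend.mean():.3f}")
print(f"mean trend after:  {adjusted.mean():.3e}")  # ~0 by construction
```

    Whether this criticism applies depends on the form of the adjustment: subtracting the full fit (intercept included) zeroes the mean by construction, whereas extrapolating the fitted line to zero population density subtracts only slope × density, which leaves a nonzero mean trend unless the slope is genuinely explanatory.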

  31. Alan S. Blue says:

    Realizing in advance that a measurement of ‘lower troposphere’ temperature is, at best, a proxy for measuring surface temperature: what does a site-specific direct comparison look like?

    That is, instead of comparing the satellite average to the ground-station average, compare a specific location.

    This should also help diagnose satellite-specific issues such as orbital drift and the cross-calibration required when switching satellites.

    So long as you can get a competent set of ground data for at least one location.

  32. lgl says:

    That difference looks a bit like an inverted Niño 3.4: http://virakkraft.com/UAH-ISH-ENSO.png

  33. KR says:

    Why not group populations into only two clusters? Then your R^2 would be 1.0, a perfect prediction match!

    Because, of course, averaging into data clusters (as you did) throws away the variance information. The proper R^2 for population versus temperature trend is indeed the 0.0795 you first calculated: an insignificant correlation.

    The UHI does _not_ significantly affect any of the major temperature anomaly records.
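
    [Ed. note] KR’s binning point is easy to demonstrate with synthetic numbers (invented for illustration, not the actual station data): averaging data into bins discards the within-bin variance, so the R^2 of the binned means is inflated relative to the raw data, and with only two bins it is exactly 1.

```python
import numpy as np

# Synthetic demonstration; all numbers are invented.
rng = np.random.default_rng(1)
n = 280

x = rng.uniform(0.0, 1000.0, size=n)           # stand-in for population density
y = 0.0003 * x + 0.3 * rng.standard_normal(n)  # weak, noisy relationship

def r_squared(xv, yv):
    slope, intercept = np.polyfit(xv, yv, 1)
    resid = yv - (intercept + slope * xv)
    return 1.0 - resid.var() / yv.var()

r2_raw = r_squared(x, y)  # small: the honest number

# Bin into 5 equal-count groups sorted by x, then regress the group means
order = np.argsort(x)
bins5 = np.array_split(order, 5)
r2_binned = r_squared(np.array([x[g].mean() for g in bins5]),
                      np.array([y[g].mean() for g in bins5]))

# With only two bins, a straight line passes through both means exactly
bins2 = np.array_split(order, 2)
r2_two = r_squared(np.array([x[g].mean() for g in bins2]),
                   np.array([y[g].mean() for g in bins2]))

print(f"raw R^2:   {r2_raw:.3f}")
print(f"5-bin R^2: {r2_binned:.3f}")
print(f"2-bin R^2: {r2_two:.3f}")
```

    The raw and binned fits describe the same underlying relationship; only the apparent explanatory power changes, which is why significance should be judged on the unbinned data.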

  34. Brian D says:

    Sure wouldn’t be surprised to see 0.30 anomaly for April.
