Archive for December, 2011

Why Atmospheric Pressure Cannot Explain the Elevated Surface Temperature of the Earth

Friday, December 30th, 2011

Ned Nikolov’s alternative theory that compression of the lower atmosphere can account for the Earth’s surface temperature being about 33 deg. C higher than calculations suggest it should be (based upon the rate at which sunlight is absorbed) is an admittedly attractive one.

The role of pressure on the average surface temperature of the Earth has been a point of confusion even among atmospheric science and meteorology students (it was for me). We were taught much more about the various processes which cause temperature to *change*, but not so much about the processes which determine what the average temperature *is*.

Background: The Dry Adiabatic Lapse Rate
The dry adiabatic lapse rate of temperature is the rate at which the temperature of a parcel of air decreases with altitude (9.8 deg. C per km) if no energy is gained or lost by that parcel to its surroundings (thus the term “a-diabatic”), and no latent heat is released through condensation of water vapor (thus “dry”).

It is important to understand that the adiabatic lapse rate deals with temperature *changes* as a result of pressure changes, but it says nothing about what the average temperature will be at any given altitude. It starts with a parcel of air of known temperature, but does not explain why the parcel had that temperature to begin with.
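
The value of the dry adiabatic lapse rate itself falls out of two basic constants: for a parcel rising without exchanging heat, the lapse rate is g/cp. A quick check, using standard textbook values (my illustration, not numbers from the post):

```python
# Dry adiabatic lapse rate: Gamma = g / cp
g = 9.81       # gravitational acceleration, m/s^2
cp = 1004.0    # specific heat of dry air at constant pressure, J/(kg K)

gamma = g / cp * 1000.0        # convert K/m to K/km
print(f"{gamma:.1f} K per km") # ~9.8 K per km, as quoted above
```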

Conceptualizing the Processes Controlling Atmospheric Temperature
The average air temperature at any altitude (including the surface) is an energy budget issue, not an air pressure issue. In fact, energy budget considerations explain the average temperature of just about everything we experience on a daily basis: the inside of buildings, car engines, a pot on the stove, etc.

One useful way to conceptualize the processes determining the time-average surface temperature (neglecting heat transport below the surface) is through this simple thought experiment:

1) start with an atmosphere at absolute zero temperature

2) turn the sunlight on

3) the surface warms as it absorbs solar radiation

4) the warmer the surface gets, the greater the rate at which it loses energy by IR radiation and convection

5) the temperature will eventually stabilize (albeit with a rather large day-night cycle) when the average rate of energy loss equals the average rate of energy gain from the sun.

Note I did not need to mention atmospheric pressure.
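
Those five steps can be put into a toy zero-dimensional energy balance: a surface with some heat capacity absorbs a fixed solar flux and loses energy as sigma*T^4, stepped forward in time until it stabilizes. The specific numbers (240 W/m^2 absorbed sunlight, a heat capacity equivalent to a couple of meters of water) are illustrative assumptions of mine, not values from the post, and note that nothing in the calculation references pressure:

```python
# Toy energy-balance model: a surface starting at absolute zero warms
# until its sigma*T^4 emission balances the absorbed sunlight.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S_ABS = 240.0     # absorbed solar flux, W/m^2 (illustrative)
C = 1.0e7         # surface heat capacity, J/(m^2 K) (illustrative)
DT = 86400.0      # time step: one day, in seconds

T = 0.0                          # step 1: start at absolute zero
for _ in range(5000):            # step 2: turn the sunlight on
    net = S_ABS - SIGMA * T**4   # steps 3-4: gain minus T-dependent loss
    T += net * DT / C
print(round(T))                  # step 5: stabilizes near (S/sigma)^(1/4), ~255 K
```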

While the above steps sound simple, what complicates things in the real world is that these energy gain and loss processes are also occurring at all altitudes, and in different proportions, all of which influence the surface energy budget. This makes it very difficult to conceptualize how they all combine to produce the average temperature profile of the atmosphere observed today.

We (Danny Braswell and I) have found that physical intuition can be built if you construct a “simple” computer program to model the processes in one dimension (vertical). While computer modeling has a bad connotation among many global warming skeptics, it is just putting actual numbers behind hand-waving concepts. If you can’t do that, then all you have left is hand waving.

Many years ago Danny put together such a model so we could examine global warming claims, especially the claim that increasing CO2 will cause warming. The model was indeed able to explain the average vertical temperature structure of the atmosphere. We could initialize the model with an atmosphere at absolute zero, or at an absurdly high temperature, and it would still settle out to about the same temperature profile as is observed in the global average. (I continue to challenge those with alternative theories to do the same).

One of the first things you discover when putting numbers to the problem is the overriding importance of infrared radiative absorption and emission to explaining the atmospheric temperature profile. These IR flows would not occur without the presence of “greenhouse gases”, which simply means gases which absorb and emit IR radiation. Without those gases, there would be no way for the atmosphere to cool to outer space in the presence of continuous convective heat transport from the surface.

Indeed, it is the “greenhouse effect” which destabilizes the atmosphere, leading to convective overturning. Without it, there would not be weather as we know it. The net effect of greenhouse gases is to warm the lowest layers, and to cool the upper layers.

The greenhouse effect thus continuously “tries” to produce a lapse rate much steeper than the adiabatic lapse rate, but convective overturning occurs before that can happen, cooling the lower troposphere and warming the upper troposphere through a net convective transport of heat from lower layers to upper layers.
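
This interplay is the idea behind “convective adjustment” in simple one-dimensional radiative models: wherever radiation tries to steepen the lapse rate beyond the adiabatic value, convection mixes the layers back to it. A minimal sketch (my illustration, not the authors' model), assuming equal-mass layers and ignoring pressure weighting and moisture:

```python
# Crude convective adjustment: relax any super-adiabatic lapse rate
# back to the dry adiabatic value, conserving the layer-pair mean.
GAMMA = 9.8  # critical (dry adiabatic) lapse rate, K/km

def convective_adjust(temps, dz=1.0):
    """temps: layer temperatures (K), bottom layer first; dz: spacing, km."""
    t = list(temps)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            if (t[i] - t[i + 1]) / dz > GAMMA + 1e-9:
                mean = 0.5 * (t[i] + t[i + 1])
                t[i] = mean + 0.5 * GAMMA * dz      # pair now exactly adiabatic
                t[i + 1] = mean - 0.5 * GAMMA * dz
                changed = True
    return t

# A radiatively destabilized profile (20 K/km near the surface):
print(convective_adjust([300.0, 280.0, 272.0, 266.0]))
```

The adjustment cools the lowest layer and warms the layers above it, exactly the net convective heat transport described above.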

Now, it’s the downward component of IR radiative flow that many skeptics seem to have a problem with. They ask: how can IR radiation flow from colder temperatures at higher altitudes to warmer temperatures at lower altitudes? Wouldn’t that contradict the 2nd Law of Thermodynamics?

Of course, it’s the *net* (upward plus downward) IR flow that must be from higher temperature to lower temperature, and so greenhouse theory does not contradict the 2nd Law of Thermodynamics. If you don’t like the idea of a downward flowing component to the ‘net’, then just conceptualize the effect of greenhouse gases as reducing the rate at which IR energy flows from higher temperature to lower temperature. There, 2nd Law problem solved.

But then, through energy budget considerations, if you reduce the ability of the surface and lower atmosphere to cool in the face of solar heating, the temperature must rise until the rate of energy loss equals the rate of energy gain. This is how greenhouse gases warm the lower atmosphere.
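
That budget argument can be put into one line of algebra: at equilibrium, S = eps*sigma*T^4, where eps is a crude “effective emissivity” standing in for the whole greenhouse effect. Reducing eps (a stronger greenhouse effect, i.e. a reduced ability to cool) forces T up. The numbers here are my illustrative assumptions, not values from the post:

```python
# Equilibrium surface temperature from S = eps * sigma * T^4:
# reducing the ability to cool (smaller eps) raises the temperature.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S_ABS = 240.0     # absorbed solar flux, W/m^2 (illustrative)

def t_equilibrium(eps):
    return (S_ABS / (eps * SIGMA)) ** 0.25

print(round(t_equilibrium(1.0)))   # no greenhouse effect: ~255 K
print(round(t_equilibrium(0.61)))  # crude effective emissivity: ~289 K
```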

In any event, it is the processes which control the rates of energy gain and loss (not pressure) which determine what the average temperature will be, whether at the surface or any other altitude in the atmosphere.

Thought Experiment #1 on The Pressure Effect
If it is atmospheric pressure which causes the relative warmth of the lower troposphere versus the upper troposphere, then why is the average temperature of the stratosphere virtually constant with height, despite the air pressure at the base of the stratosphere (200 millibars) being about 100x that at the top of the stratosphere (2 millibars)?

If you say it’s due to sunlight absorption by ozone warming the middle and upper stratosphere, you would be correct. But how does the stratosphere then lose all of that extra energy it gains by solar absorption? Well, that occurs through IR emission, primarily from carbon dioxide. The temperature of the ‘ozone layer’ increases until the IR loss (primarily by CO2) equals the rate of solar absorption by ozone. Again, it’s an energy budget issue, not an air pressure issue.

The point I’m making with the stratosphere example is that greenhouse gases are necessary to explain the temperature profile of the stratosphere, which is not what the “pressure enhancement” theory of climate would predict.

And if greenhouse gases influence the stratosphere, then they must also be operating in the troposphere.

Thought Experiment #2 on the Pressure Effect
Imagine we start with the atmosphere we have today, and then magically dump in an equal amount of atmospheric mass having the same heat content. Let’s assume the extra air was all nitrogen, which is not a greenhouse gas. What would happen to the surface temperature?

Ned Nikolov would probably say that the surface temperature would increase greatly, due to a doubling of the surface pressure causing compressional heating. And he would be correct….initially.

But what would happen next? The rate of solar energy absorption by the surface (the energy input) would still be the same, but now the rate of IR loss by the surface would be much greater, because of the much higher surface temperature brought about through compressional heating.

The resulting energy imbalance would then cause the surface (and overlying atmosphere) to cool to outer space until the rate of IR energy loss once again equaled the rate of solar energy gained. The average temperature would finally end up being about the same as before the atmospheric pressure was doubled.
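
This thought experiment can be checked with the same kind of toy energy balance: start the surface hot (compressionally heated) or near absolute zero, and with the same absorbed solar flux it relaxes to the same equilibrium either way. The specific numbers are my illustrative assumptions:

```python
# The equilibrium temperature depends on the energy budget, not on
# the starting temperature: a compressionally heated start just decays.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S_ABS = 240.0     # absorbed solar flux, W/m^2 (illustrative)
C = 1.0e7         # surface heat capacity, J/(m^2 K) (illustrative)
DT = 86400.0      # one-day time step, s

def settle(t0, steps=20000):
    t = t0
    for _ in range(steps):
        t += (S_ABS - SIGMA * t**4) * DT / C
    return t

print(round(settle(400.0)))  # hot (compressed) start   -> ~255 K
print(round(settle(2.0)))    # near-absolute-zero start -> ~255 K
```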

While I applaud Ned Nikolov’s willingness to advance a controversial alternative, at this point I still must side with the greenhouse effect (despite its terrible name) as an explanation for the average surface temperature of the Earth being considerably higher than that calculated based upon the rate of solar heating of the surface alone.

One of the more significant aspects of the above discussion, which was demonstrated theoretically back in the mid-1960s by Manabe and Strickler, is that the cooling effects of weather short-circuit at least 50% of the greenhouse effect’s warming of the surface. In other words, without surface evaporation and convective heat loss, the Earth’s surface would be about 70 deg. C warmer, rather than 33 deg. C warmer, than simple solar absorption by the surface would suggest.

Thus, weather cools the surface in the face of radiative heating.

And, yes, this effect is included in the climate models used by the IPCC. It would have to be, otherwise the average temperature distributions in those models would be wildly wrong: much too warm in the lower troposphere, and much too cold in the upper troposphere.

I continue to maintain that the major source of error in global warming predictions based upon the IPCC models is not in the physics of the greenhouse effect, but in the realm of feedbacks: especially, how clouds respond to a warming tendency. All of the 20+ models predict clouds will enhance warming; I believe they will reduce warming.

Unfortunately, determining cloud feedbacks from our observations of the climate system is an exceedingly difficult problem. Even more difficult is publishing any evidence of negative cloud feedback in the peer reviewed literature.

Finally, I want to address 3 stumbling blocks which people encounter in all of this.

FIRST, if you are still confused about whether greenhouse gases warm or cool the climate system, let me make the following 2 points:

1) For the atmosphere as a whole, greenhouse gases COOL the atmosphere, through IR radiation to outer space, in the face of heating of the atmosphere by the solar-heated surface.

2) In the process, however, greenhouse gases drastically change the vertical temperature structure of the atmosphere, warming the lower layers, and cooling the upper layers. Think of greenhouse gases as a “radiative blanket”…when you add a blanket over a heat source, it warms the air between the blanket and the heat source, but it cools the air away from the heat source.

Greenhouse gases change the energy budget of all layers of the atmosphere, and it is the energy budget (balance between energy gain and energy loss) which determines what the average temperatures of those layers will be.

SECONDLY, some people claim that IR emission and absorption cannot affect the atmospheric temperature profile because the rate of IR emission and absorption by each layer must be the same.


But this is not true. The rate of absorption of IR by a layer is mostly independent of the layer’s temperature, while the rate of emission increases rapidly with temperature. In general, then, the rates of IR absorption and emission by atmospheric layers are quite different, with the difference made up by convective heat transport and (especially in the stratosphere) solar absorption.
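
The asymmetry is easy to quantify: thermal emission scales as T^4 (Stefan-Boltzmann), so even a modest temperature difference between layers implies very different emission rates. A quick illustration with layer temperatures I chose arbitrarily:

```python
# Emission scales as T^4, so a warm lower layer emits far more
# strongly than a cold upper layer with the same absorptivity.
warm, cold = 280.0, 220.0   # illustrative layer temperatures, K
ratio = (warm / cold) ** 4
print(round(ratio, 2))      # the warm layer emits ~2.6x as much
```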

THIRDLY, if you are wondering, “If temperature change is an energy budget issue, then why does the temperature of an air parcel change when you change its altitude? Doesn’t the temperature change necessarily imply an energy budget change?”

The answer is no.

When an air parcel is raised adiabatically, its loss of thermal energy is balanced by an equal gain in potential energy due to its increased altitude. The ‘dry static energy’ of the parcel thus remains the same: cpT + gZ, where cp is the specific heat capacity of air at constant pressure, T is temperature in Kelvin, g is the gravitational acceleration, and Z is height in meters.
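
The conservation of dry static energy can be checked directly: lift a parcel adiabatically and the drop in cpT exactly pays for the gain in gZ. Standard constants; the 290 K starting temperature and 1 km lift are arbitrary examples of mine:

```python
# Dry static energy s = cp*T + g*z is conserved in adiabatic ascent:
# lifting a parcel converts thermal energy into potential energy.
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
g = 9.81      # gravitational acceleration, m/s^2

t0, z0 = 290.0, 0.0            # parcel starts at 290 K at the surface
z1 = 1000.0                    # lift it 1 km
t1 = t0 - g * (z1 - z0) / cp   # temperature keeping cp*T + g*z fixed

s0 = cp * t0 + g * z0
s1 = cp * t1 + g * z1          # same dry static energy as s0
print(round(t0 - t1, 1))       # cools by ~9.8 K: the dry adiabatic lapse rate
```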

Of course, averaged over the whole Earth, there can be no net change in altitude; all air parcels rising (and cooling) at any given pressure altitude must be matched by an equivalent mass of air parcels sinking (and warming) at that same pressure altitude.


Friday, December 23rd, 2011

I see this morning a news report of a metal ball falling out of the sky and landing in Namibia:

While the find seems to have baffled local authorities, it didn’t take me long to identify it as a satellite hydrazine propellant tank, made of titanium:

The size (14 inches in diameter) and weight (about 8 kg) match.

Lotsa stuff flying around in orbit these days, and eventually it all must come back down. Fortunately, most of it burns up before reaching the ground.

Addressing Criticisms of the UAH Temperature Dataset at 1/3 Century

Wednesday, December 21st, 2011

The UAH satellite-based global temperature dataset has reached 1/3 of a century in length, a milestone we marked with a press release this past week (e.g. covered here).

As a result of that press release, a Capital Weather Gang blog post by Andrew Freedman was dutifully dispatched as damage control, since we had inconveniently noted the continuing disagreement between climate models used to predict global warming and the satellite observations.

What follows is a response by John Christy, who has been producing these datasets with me for the last 20 years:

Many of you are aware that as a matter of preference I do not use the blogosphere to report information about climate or to correct the considerable amount of misinformation that appears out there related to our work. My general rule is never to get in a fight with someone who owns an obnoxious website, because you are simply a tool of the gatekeeper at that point.

However, I thought I would do so here because a number of folks have requested an explanation about a blog post connected to the Washington Post that appeared on 20 Dec. Unfortunately, some of the issues are complicated, so the comments here will probably not satisfy those who want the details and I don’t have time to address all of its errors.

Earlier this week we reported on the latest monthly global temperature update, as we do every month, which is distributed to dozens of news outlets. With 33 years of satellite data now in the hopper (essentially a third of a century) we decided to comment on the long-term character, noting that the overall temperature trend of the bulk troposphere is less than that of the IPCC AR4 climate model projections for the same period. This has been noted in several publications, and to us is not a new or unusual statement.

Suggesting that the actual climate is at odds with model projections does not sit well with those who desire that climate model output be granted high credibility. I was alerted to this blog post within which are, what I can only call, “myths” about the UAH lower tropospheric dataset and model simulations. I’m unfamiliar with the author (Andrew Freedman) but the piece was clearly designed to present a series of assertions about the UAH data and model evaluation, to which we were not asked to respond. Without such a knowledgeable response from the expert creators of the UAH dataset, the mythology of the post may be preserved.

The first issue I want to address deals with the relationship between temperature trends of observations versus model output. I often see such posts refer to an old CCSP document (2006) which, as I’ve reported in congressional testimony, was not very accurate to begin with, but which has been superseded and contradicted by several more recent publications.

These publications specifically document the fact that bulk atmospheric temperatures in the climate system are warming at only 1/2 to 1/4 the rate of the IPCC AR4 model trends. Indeed actual upper air temperatures are warming the same or less than the observed surface temperatures (most obvious in the tropics) which is in clear and significant contradiction to model projections, which suggest warming should be amplified with altitude.

The blog post even indicates one of its quoted scientists, Ben Santer, agrees that the upper air is warming less than the surface – a result with which no model agrees. So, the model vs. observational issue was not presented accurately in the post. This has been addressed in the peer reviewed literature by us and others (Christy et al. 2007, 2010, 2011, McKitrick et al. 2010, Klotzbach et al. 2009, 2010.)

Then, some people find comfort in simply denigrating the uncooperative UAH data (about which there have been many validation studies.) We were the first to develop a microwave-based global temperature product. We have sought to produce the most accurate representation of the real world possible with these data – there is no premium in generating problematic data. When problems with various instruments or processes are discovered, we characterize, fix and publish the information. That adjustments are required through time is obvious as no one can predict when an instrument might run into problems, and the development of such a dataset from satellites was uncharted territory before we developed the first methods.

The Freedman blog post is completely wrong when it states that “when the problems are fixed, the trend always goes up.” Indeed, there have been a number of corrections that adjusted for spurious warming, leading to a reduction in the warming trend. That the scientists quoted in the post didn’t mention this says something about their bias.

The most significant of these problems we discovered in the late 1990’s in which the calibration of the radiometer was found to be influenced by the temperature of the instrument itself (due to variable solar shadowing effects on a drifting polar orbiting spacecraft.) Both positive and negative adjustments were listed in the CCSP report mentioned above.

We are always working to provide the best products, and we may soon have another adjustment to account for an apparent spurious warming in the last few years of the aging Aqua AMSU (see operational notes here). We know the data are not perfect (no data are), but we have documented the relatively small error bounds of the reported trends using internal and external evidence (Christy et al. 2011.)

A further misunderstanding in the blog post is promoted by the embedded figure (below, with credit given to a John Abraham, no affiliation). The figure is not, as claimed in the caption, a listing of “corrections”:

The major result of this diagram is simply how the trend of the data, which started in 1979, changed as time progressed (with minor satellite adjustments included.) The largest effect one sees here is due to the spike in warming from the super El Nino of 1998 that tilted the trend to be much more positive after that date. (Note that the diamonds are incorrectly placed on the publication dates, rather than the date of the last year in the trend reported in the corresponding paper – so the diamonds should be shifted to the left by about a year. The 33 year trend through 2011 is +0.14 °C/decade.)

The notion in the blog post that surface temperature datasets are somehow robust and pristine is remarkable. I encourage readers to check out papers such as my examination of the Central California and East African temperature records. Here I show, by using 10 times as many stations as are utilized in the popular surface temperature datasets, that recent surface temperature trends are highly overstated in these regions (Christy et al. 2006; 2009). We also document how surface development disrupts the formation of the nocturnal boundary layer in many ways, leading to warming nighttime temperatures.

That’s enough for now. The Washington Post blogger, in my view, is writing as a convinced advocate, not as a curious scientist or impartial journalist. But, you already knew that.

In addition to the above, I (Roy) would like to address comments made by Ben Santer in the Washington Post blog:

A second misleading claim the (UAH) press release makes is that it’s simply not possible to identify the human contribution to global warming, despite the publication of studies that have done just that. “While many scientists believe it [warming] is almost entirely due to humans, that view cannot be proved scientifically,” Spencer states.

Ben Santer, a climate researcher at Lawrence Livermore National Laboratory in California, said Spencer and Christy are mistaken. “People who claim (like Roy Spencer did) that it is “impossible” to separate human from natural influences on climate are seriously misinformed,” he wrote via email. “They are ignoring several decades of relevant research and literature. They are embracing ignorance.” “Many dozens of scientific studies have identified a human “fingerprint” in observations of surface and lower tropospheric temperature change,” Santer stated.

In my opinion, the supposed “fingerprint” evidence of human-caused warming continues to be one of the great pseudo-scientific frauds of the global warming debate. There is no way to distinguish warming caused by increasing carbon dioxide from warming caused by a more humid atmosphere responding to (say) naturally warming oceans responding to a slight decrease in maritime cloud cover (see, for example, “Oceanic Influences on Recent Continental Warming“).

Many papers indeed have claimed to find a human “fingerprint”, but upon close examination the evidence is simply consistent with human caused warming — while conveniently neglecting to point out that the evidence would also be consistent with naturally caused warming. This disingenuous sleight-of-hand is just one more example of why the public is increasingly distrustful of the climate scientists they support with their tax dollars.

UAH Global Temperature Update for Nov. 2011: +0.12 deg. C

Thursday, December 15th, 2011

The global average lower tropospheric temperature anomaly for November, 2011 remained about the same as last month, at +0.12 deg. C (click on the image for the full-size version):

The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.

Here are this year’s monthly stats:

YR MO GLOBE NH SH TROPICS
2011 1 -0.010 -0.055 0.036 -0.372
2011 2 -0.020 -0.042 0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.315 +0.379 +0.250 +0.233
2011 7 +0.374 +0.344 +0.404 +0.204
2011 8 +0.327 +0.321 +0.332 +0.155
2011 9 +0.289 +0.304 +0.274 +0.178
2011 10 +0.116 +0.169 +0.062 -0.054
2011 11 +0.123 +0.075 +0.170 +0.024
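
For readers who want to reproduce a trend number from figures like these, here is a least-squares slope through the eleven global anomalies listed above. A caveat in keeping with the caution about the polynomial fit: eleven months of one year are dominated by short-term variability, so this slope is not a climate trend:

```python
# Least-squares slope through the Jan-Nov 2011 global anomalies above.
globe = [-0.010, -0.020, -0.101, 0.117, 0.133, 0.315,
         0.374, 0.327, 0.289, 0.116, 0.123]   # deg C
months = list(range(1, 12))

n = len(globe)
mx = sum(months) / n
my = sum(globe) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(months, globe))
         / sum((x - mx) ** 2 for x in months))
print(round(slope, 3))  # deg C per month; positive, but meaningless as a climate trend
```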

Since I predicted last month that temperatures would fall again in November, which did not occur, I will admit that I should have followed my own advice: don’t try predicting the future based upon the daily temperature updates posted at the Discover website.

FYI, I’m making progress on Version 6 of the global temperature dataset, and it looks like the new diurnal drift correction method is working.

[Reminder: Since AMSR-E failed in early October, there will be no more sea surface temperature updates from that instrument.]

November Global Temperature Update Delayed

Friday, December 9th, 2011

There has been a delay in our monthly processing of global temperature data from AMSU.

An undersea telecommunications cable used to transmit about half of the huge volume of data coming from the Aqua satellite was cut in late November off the coast of the Netherlands, delaying receipt of that data. While there were redundant data transmission capabilities, apparently both failed.

Also, John Christy and I have been on separate travels quite a bit lately (I spent 2 weeks in Miami after my daughter had an emergency C-section — I’m a grandpa!) and now I’m at the AGU in San Francisco, with a trip to DC early next week, so monitoring of the situation has been difficult.

Version 6 of the UAH Dataset is in the Works

I have been working on a new diurnal drift correction for the UAH global temperature dataset, which will be released as Version 6 when it is finished.

The orbital drift of most of the satellites carrying the AMSUs (and earlier MSUs) has been the largest source of uncertainty in getting long-term satellite temperature trends, and the correction for this drift has been a research topic for us off-and-on for many years.

Fortunately, there has always been at least one satellite operating without significant drift, and so we have used those satellites as a “backbone”, or anchor, for the others. The Aqua satellite is the only one which has its orbit maintained with on-board propulsion, but channel 5 on the Aqua AMSU instrument has become increasingly noisy in recent years, so we anticipate at some point we will no longer be able to rely on it, thus the need for a new diurnal drift adjustment.

The new procedure I’ve developed is rather novel and is mostly insensitive to instrument calibration (see if you can figure out how that would work, wink-wink), and I’m hopeful it will work well. The ultimate test will be the removal of long-term drift between simultaneously operating satellites, which also depends on season. It should allow us to get better regional temperature trends and land-vs-ocean trends, and remove some spurious season-dependent differences in temperature trends.

The earliest that Version 6 of the UAH dataset would become available is with the early January update of the December temperature data.