Archive for May, 2009

A Layman’s Explanation of Why Global Warming Predictions by Climate Models are Wrong

Friday, May 29th, 2009

(last edited 8:30 a.m. 1 June 2009 for clarity)

[Image: 23-ipcc-climate-models1]
I occasionally hear the complaint that some of what I write is too technical to understand, which I’m sure is true. The climate system is complex, and discussing the scientific issues associated with global warming (aka “climate change”) can get pretty technical pretty fast.

Fortunately, the most serious problem the climate models have (in my view) is one which is easily understood by the public. So, I’m going to make yet another attempt at explaining why the computerized climate models tracked by the U.N.’s Intergovernmental Panel on Climate Change (IPCC) – all 23 of them – predict too much warming for our future. The basic problem I am talking about has been peer reviewed and published by us, and so cannot be dismissed lightly.

But this time I will use no graphs (!), and I will use only a single number (!!) which I promise will be a small one. ;)

I will do this in three steps. First, I will use the example of a pot of water on the stove to demonstrate why the temperature of things (like the Earth) rises or falls.

Second, I will describe why so many climate model “experts” believe that adding CO2 to the atmosphere will cause the climate system to warm by a large, possibly catastrophic amount.

Finally, I will show how Mother Nature has fooled those climate experts into programming climate models to behave incorrectly.

Some of this material can be found scattered through other web pages of mine, but here I have tried to create a logical progression of the most important concepts, and minimized the technical details. It might be edited over time as questions arise and I find better ways of phrasing things.

The Earth’s Climate System Compared to a Pot of Water on the Stove

Before we discuss what can alter the global-average temperature, let’s start with the simple example of a pot of water placed on a stove. Imagine it’s a gas stove, and the flame is set on its lowest setting, so the water will become warm but will not boil. To begin with, the pot does not have a lid.
[Image: 2-pots-on-stove]
Obviously, the heat from the flame will warm the water and the pot, but after about 10 minutes the temperature will stop rising. The pot stops warming when it reaches a point of equilibrium where the rate of heat loss by the pot to its cooler surroundings equals the rate of heat gained from the stove. The pot warmed as long as an imbalance in those two flows of energy existed, but once the magnitude of heat loss from the hot pot reached the same magnitude as the heat gain from the stove, the temperature stopped changing.

Now let’s imagine we turn the flame up slightly. This will result in a temporary imbalance once again between the rate of energy gain and energy loss, which will then cause the pot to warm still further. As the pot warms, it loses energy even more rapidly to its surroundings. Finally, a new, higher temperature is reached where the rate of energy loss and energy gain are once again in balance.

But there’s another way to cause the pot to warm other than to add more heat: We can reduce its ability to cool. If next we place a lid on the pot, the pot will warm still more because the rate of heat loss is then reduced below the rate of heat gain from the stove. In this case, loosely speaking, the increased temperature of the pot is not because more heat is added, but because less heat is being allowed to escape.
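The behavior of the pot can be captured in a few lines of code. This is only a toy sketch: the heat-input and heat-loss numbers below are made up for illustration, and the point is simply that temperature stops changing once energy in equals energy out, and that either turning up the flame or adding a lid raises the equilibrium temperature.

```python
# Toy model of the pot on the stove: temperature changes only while the
# energy gained and the energy lost are out of balance. All numbers are
# illustrative; only the qualitative behavior matters.

def pot_temperature(heat_in, loss_coeff, t_ambient=20.0, steps=500, dt=1.0):
    """Integrate dT/dt = heat_in - loss_coeff*(T - T_ambient) to equilibrium."""
    t = t_ambient
    for _ in range(steps):
        imbalance = heat_in - loss_coeff * (t - t_ambient)  # energy in minus energy out
        t += imbalance * dt
    return t

print(round(pot_temperature(2.0, 0.05), 1))  # low flame, no lid -> 60.0
print(round(pot_temperature(2.5, 0.05), 1))  # turn the flame up -> 70.0 (more heat in)
print(round(pot_temperature(2.0, 0.04), 1))  # same flame, add a lid -> 70.0 (less heat out)
```

Note that the last two runs reach the same warmer equilibrium by different routes: one adds energy, the other restricts its escape, which is exactly the distinction drawn above.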

Global Warming

The example of what causes a pot of water on a stove to warm is the same fundamental situation that exists with climate change in general, and global warming theory in particular. A change in the energy flows in or out of the climate system will, in general, cause a temperature change. The average temperature of the climate system (atmosphere, ocean, and land) will remain about the same only as long as the rate of energy gain from sunlight equals the rate of heat loss by infrared radiation to outer space. This is illustrated in the following cartoon:

[Image: global-energy-balance]
Again, the average temperature of the Earth (like a pot of water on the stove) will only change when there is an imbalance between the rates of energy gained and energy lost.

What this means is that anything that can change the rates of energy flow illustrated above — in or out of the climate system — can cause global warming or global cooling.

In the case of manmade global warming, the extra carbon dioxide in the atmosphere is believed to be reducing the rate at which the Earth cools to outer space. This already occurs naturally through the so-called “greenhouse effect” of the atmosphere, a process in which water vapor, clouds, carbon dioxide and methane act as a ‘radiative blanket’, insulating the lower atmosphere and the surface, and raising the Earth’s average surface temperature by about 33 deg. C (close to 60 deg. F).

The Earth’s natural greenhouse effect is like the lid on our pot of water on the stove. The lid reduces the pot’s ability to cool and so makes the pot of water, on average, warmer than it would be without the lid. (I don’t think you will find the greenhouse effect described elsewhere in terms of an insulator — like a blanket — but I believe that is the most accurate analogy.) Similarly, the Earth’s natural greenhouse effect keeps the lower atmosphere and surface warmer than if there was no greenhouse effect. So, more CO2 in the atmosphere slightly enhances that effect.

And also like the pot of water, the other basic way to cause warming is to increase the rate of energy input — in the case of the Earth, sunlight. Note that this does not necessarily require an increase in the output of the sun. A change in any of the myriad processes that control the Earth’s average cloud cover can also do this. For instance, the IPCC talks about manmade particulate pollution (“aerosols”) causing a change in global cloudiness…but they never mention the possibility that the climate system can change its own cloud cover!

If the amount of cloud cover reflecting sunlight back to space decreases from, say, a change in oceanic and atmospheric circulation patterns, then more sunlight will be absorbed by the ocean. As a result, there will then be an imbalance between the infrared energy lost and solar energy gained by the Earth. The ocean will warm as a result of this imbalance, causing warmer and more humid air masses to form and flow over the continents, which would then cause the land to warm, too.

The $64 Trillion Question: By How Much Will the Earth Warm from More CO2?

Now for a magic number that we will be referring to later, which is how much more energy is lost to outer space as the Earth warms. It can be calculated theoretically that for every 1 deg C the Earth warms, it gives off an average of about 3.3 Watts per square meter more infrared energy to space. Just as you feel more infrared (heat) radiation coming from a hot stove than from a warm stove, the Earth gives off more infrared energy to space the warmer it gets.

This is part of the climate system’s natural cooling mechanism, and all climate scientists agree with this basic fact. What we don’t agree on is how the climate system responds to warming by either enhancing, or reducing, this natural cooling mechanism. The magic number — 3.3 Watts per sq. meter — represents how much extra energy the Earth loses if ONLY the temperature is increased, by 1 deg. C, and nothing else is changed. In the real world, however, we can expect that the rest of the climate system will NOT remain the same in response to a warming tendency.
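As a rough check on that magic number, you can differentiate the Stefan-Boltzmann law at the Earth's effective radiating temperature (about 255 K is the standard textbook value). This back-of-envelope version gives roughly 3.8 Watts per sq. meter per degree; the somewhat lower 3.3 figure used in this post comes from more detailed radiative transfer calculations, but the two are in the same ballpark:

```python
# Back-of-envelope check on the "magic number": differentiate the
# Stefan-Boltzmann law F = sigma*T^4 at the Earth's effective radiating
# temperature (~255 K). Detailed radiative calculations give ~3.3 W/m^2
# per deg C; this crude estimate lands near 3.8.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W per sq. meter per K^4

def extra_emission_per_degree(t_kelvin=255.0):
    """dF/dT = 4*sigma*T^3: extra infrared emitted per 1 deg C of warming."""
    return 4.0 * SIGMA * t_kelvin ** 3

print(round(extra_emission_per_degree(), 2))  # prints 3.76
```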

Thus, the most important debate in global warming research today is the same as it was 20 years ago: How will clouds (and, to a lesser extent, other elements of the climate system) respond to warming, thereby enhancing or reducing it? These indirect changes that further influence temperature are called feedbacks, and they determine whether manmade global warming will be catastrophic, or just lost in the noise of natural climate variability.

Returning to our example of the whole Earth warming by 1 deg. C, if that warming causes an increase in cloud cover, then the 3.3 Watts of extra infrared loss to outer space is augmented by a reduction in the solar heating of the Earth. The result is a smaller temperature rise. This is called negative feedback, and can be illustrated conceptually like this:

[Image: cloud-feedback-negative1]

If negative feedback exists in the real climate system, then manmade global warming will become, for most practical purposes, a non-issue.

But this is not how the IPCC thinks nature works. They believe that cloud cover of the Earth decreases with warming, which would let in more sunlight and cause the Earth to warm to an even higher temperature. (The same is true if the water vapor content of the atmosphere increases with warming, since water vapor is our main greenhouse gas.) This is called positive feedback, and all 23 climate models tracked by the IPCC now exhibit positive cloud and water vapor feedback. The following illustration shows conceptually how positive feedback works:

[Image: cloud-feedback-positive]

In fact, the main difference between models that predict only moderate warming versus those that predict strong warming has been traced to the strength of their positive cloud feedbacks.
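The tug-of-war between the 3.3 Watts per sq. meter cooling response and the feedbacks can be reduced to one line of arithmetic: equilibrium warming equals the imposed forcing divided by (3.3 minus the feedback strength). The feedback values tried below are purely illustrative, chosen only to show how the sign of the feedback swings the answer:

```python
# Equilibrium warming for a given forcing and feedback, using the 3.3
# W/m^2 per deg C "magic number" as the no-feedback cooling response.
# Positive feedback shrinks the denominator (more warming); negative
# feedback grows it (less warming). Feedback values are illustrative.

LAMBDA0 = 3.3   # W per sq. meter per deg C: no-feedback cooling response
F_2XCO2 = 3.8   # W per sq. meter: commonly cited forcing for doubled CO2

def equilibrium_warming(forcing, feedback):
    return forcing / (LAMBDA0 - feedback)

print(round(equilibrium_warming(F_2XCO2, 0.0), 2))   # no feedback: ~1.15 deg C
print(round(equilibrium_warming(F_2XCO2, 2.0), 2))   # positive feedback: ~2.92 deg C
print(round(equilibrium_warming(F_2XCO2, -3.0), 2))  # strong negative feedback: ~0.6 deg C
```

Note how a net feedback near -3 Watts per sq. meter per deg. C would hold doubled-CO2 warming to roughly 0.6 deg. C, while a comparable positive feedback nearly triples the no-feedback answer.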

How Mother Nature Fooled the World’s Top Climate Scientists

Obviously, the question of how clouds in the REAL climate system respond to a warming tendency is of paramount importance, because that guides the development and testing of the climate models. Ultimately, the models must be based upon the observed behavior of the atmosphere.

So, what IS observed when the Earth warms? Do clouds increase or decrease? While the results vary with which years are analyzed, it has often been found that warmer years have less cloud cover, not more.

And this has led to the ‘scientific consensus’ that cloud feedbacks in the real climate system are probably positive, although by an uncertain amount. And if cloud feedbacks end up being too strongly positive, then we are in big trouble from manmade global warming.

But at this point an important question needs to be asked that no one seems to ask: When the climate system experiences a warm year, what caused the warming? By definition, cloud feedback cannot occur unless the temperature changes…but what if that temperature change was caused by clouds in the first place?

This is important because if decreasing cloud cover caused warming, and this has been mistakenly interpreted as warming causing a decrease in cloud cover, then positive feedback will have been inferred even if the true feedback in the climate system is negative.

As far as I know, this potential mix-up between cause and effect — and the resulting positive bias in diagnosed feedbacks — had never been studied until we demonstrated it in a peer-reviewed paper in the Journal of Climate. Unfortunately, because climate research covers such a wide range of specialties, most climate experts are probably not even aware that our paper exists.

So how do we get around this cause-versus-effect problem when observing natural climate variations in our attempt to identify feedback? Our very latest research, now in peer review for possible publication in the Journal of Geophysical Research, shows that one can separate, at least partially, the effects of clouds-causing-temperature-change (which “looks like” positive feedback) versus temperature-causing-clouds to change (true feedback).

We analyzed 7.5 years of our latest and best NASA satellite data and discovered that, when the effect of clouds-causing-temperature-change is accounted for, cloud feedbacks in the real climate system are strongly negative. The negative feedback was so strong that it more than cancelled out the positive water vapor feedback we also found. It was also consistent with evidence of negative feedback we found in the tropics and published in 2007.

In fact, the resulting net negative feedback was so strong that, if it exists on the long time scales associated with global warming, it would result in only 0.6 deg. C of warming by late in this century.

Natural Cloud Variations: The Missing Piece of the Puzzle?

In this critical issue of cloud feedbacks – one which even the IPCC has admitted is their largest source of uncertainty — it is clear that the effect of natural cloud variations on temperature has been ignored. In the simplest of terms, cause and effect have been mixed up. (Even the modelers will have to concede that clouds-causing-temperature-change exists, because we found clear evidence of it in every one of the IPCC climate models we studied.)

But this brings up another important question: What if global warming itself has been caused by a small, long-term, natural change in global cloud cover? Our observations of global cloud cover have not been long enough or accurate enough to document whether any such cloud changes have happened or not. Some indirect evidence that this has indeed happened is discussed here.

Even though they never say so, the IPCC has simply assumed that the average cloud cover of the Earth does not change, century after century. This is a totally arbitrary assumption, and given the chaotic variations that the ocean and atmosphere circulations are capable of, it is probably wrong. Little more than a 1% change in cloud cover up or down, and sustained over many decades, could cause events such as the Medieval Warm Period or the Little Ice Age.

As far as I know, the IPCC has never discussed their assumption that global average cloud cover always stays the same. The climate change issue is so complex that most experts have probably not even thought about it. But those of us trained as meteorologists have a gut feeling that things like this do indeed happen. In my experience, a majority of meteorologists do not believe that mankind is mostly to blame for global warming. Meteorologists appreciate how complex cloud behavior is, and most tend to believe that climate change is largely natural.

Our research has taken this gut feeling and demonstrated with both satellite data and a simple climate model, in the language that climate modelers speak, how potentially serious this issue is for global warming theory.

And this cause-versus-effect issue is not limited to just clouds. For instance, there are processes that can cause the water vapor content of the atmosphere to change, mainly complex precipitation processes, which will then change global temperatures. Precipitation is what limits how much of our main greenhouse gas, water vapor, is allowed to accumulate in the atmosphere, thus preventing a runaway greenhouse effect. For example, a small change in wind shear associated with a change in atmospheric circulation patterns could slightly change the efficiency with which precipitation systems remove water vapor, leading to global warming or global cooling. This has long been known, but again, climate change research covers such a wide range of disciplines that very few of the experts have recognized the importance of obscure published studies like this one.

While there are a number of other potentially serious problems with climate model predictions, the mix-up between cause and effect when studying cloud behavior, by itself, has the potential to mostly deflate all predictions of substantial global warming. It is only a matter of time before others in the climate research community realize this, too.


White Roofs and Global Warming: A More Realistic Perspective

Thursday, May 28th, 2009

There has been quite a lot of buzz in the last day or so about Energy Secretary Steven Chu claiming that making the roofs of buildings white (and brightening paved surfaces such as roads and parking lots) would “offset” a large part of the warming effect of humanity’s carbon dioxide emissions. The basis for this idea – which is not new — is that brighter surfaces reflect more of the sunlight reaching the Earth’s surface back to outer space. I found a presentation of the study he is basing his comments on by Hashem Akbari of Lawrence Berkeley National Lab here. A less technical summary of those results fit for public consumption is here.

I found some of the Akbari presentation calculations to be needlessly complicated, maybe because he was computing how much of humanity’s CO2 emissions (expressed in billions of tons of CO2, or in millions of cars) could be offset. I dislike such numbers since they give no indication of what they mean relative to the whole atmosphere or its warming effect. Such big numbers lead people to buy hybrid cars while thinking they are helping to save the planet. As I have pointed out before, it takes 5 years of CO2 emissions by humanity to add just 1 molecule of CO2 to each 100,000 molecules of atmosphere. This obviously gives a very different impression than “billions of tons of CO2”.

So, we need to dispense with numbers that mislead people about what is really of interest: What fraction of the impact of more CO2 on the climate system can be alleviated through certain actions?

So, I thought I would make some calculations of my own to determine just what fraction of the ‘extra CO2 warming effect’ can be canceled out with the cooling effect of brighter roofs and pavement, on a global basis. Here are the pertinent numbers, along with some parenthetical comments.

The Goal: Offset the Radiative Forcing from More CO2 in the Atmosphere
Doubling of the CO2 content of the atmosphere has been estimated to cause 3.8 Watts per sq. meter of radiative forcing of the climate system, a value which will presumably be reached sometime late in this century. The current CO2 content of the atmosphere is about 40% higher than that estimated in pre-industrial times. Note this does NOT say anything about how much temperature change the radiative forcing would cause; that is a function of climate sensitivity – feedbacks – which I believe has been greatly overestimated anyway. For if climate sensitivity is low, then the extra CO2 in the atmosphere will have little impact on global temperatures, as will any of our ‘geoengineering’ attempts to offset it.

But let’s use that number as a basis for our goal: How much of the extra 3.8 Watts per sq. meter of radiative forcing by late in this century (or about 40% of that number today) can be offset by brightening these manmade surfaces?

This calculation is somewhat simpler than what Akbari presented, but assumes the intermediate calculations he made are basically correct. I will be multiplying together the following five numbers to estimate the Watts per sq. meter of radiative cooling (more sunlight reflected to outer space) that we might be able to achieve. Again, these numbers are the same or consistent with the numbers Akbari uses.

(1) The radiative forcing from a change in the Earth’s albedo (reflectivity): Akbari uses 1.27 Watts per sq. meter for a 0.01 change in albedo, which is the same as 127 Watts per sq. meter per unit change (1.0) in albedo. I think this might actually be an underestimate, but I will use the same value he uses.

(2) The fraction of global land area covered by urban areas: Akbari claims 1% (0.01) of land is now urbanized, a statistic I have seen elsewhere. I think this is an overestimate, but I will use it anyway.

(3) The fraction of the Earth covered by land: I’ll assume 30% (0.3).

(4) The fraction of urban areas that are modifiable: Akbari assumes 60% (25% roofs + 35% pavement). I think this number is high because Akbari is implicitly assuming the sun is always directly overhead during the daytime, whereas throughout the day the sides of buildings are illuminated by the sun more, and the roofs (and shadowed pavement) illuminated less, but I will use 60% (0.6) anyway.

(5) The assumed increase in albedo of the modified (brightened) surfaces: Akbari assumes an albedo increase of 0.4 for roofs and 0.15 for pavement, which when combined with his assumed urban coverage of 25% roofs and 35% pavement, results in a weighted average albedo increase of about 0.25.

When you multiply these 5 numbers together (127 x 0.01 x 0.3 x 0.6 x 0.25), you get about 0.06 Watts per sq. meter of increased sunlight reflected off the Earth from modifying 60% of all urban surfaces in the world. Comparing this to our goal of offsetting the 3.8 Watts per sq. meter from a doubling of CO2, we find that only about 1.6% of that radiative forcing has been offset.

Or, if today we could magically transform all of these surfaces instantly, we would have offset (1.6% ÷ 0.4 =) only 4% of the radiative forcing that has accumulated from the last 100+ years of carbon dioxide emissions.
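For readers who want to vary the assumptions themselves, the five-factor estimate is a straight multiplication. The sketch below is a direct transcription of the numbers above (the unrounded product comes out slightly below the rounded figures quoted in the text):

```python
# Multiplying the five factors from the roof-brightening estimate.
albedo_forcing  = 127.0  # W per sq. meter per unit change in albedo
urban_fraction  = 0.01   # fraction of land area that is urbanized
land_fraction   = 0.3    # fraction of the Earth covered by land
modifiable      = 0.6    # fraction of urban surface (roofs + pavement) brightened
albedo_increase = 0.25   # weighted average albedo increase of modified surfaces

cooling = albedo_forcing * urban_fraction * land_fraction * modifiable * albedo_increase
F_2XCO2 = 3.8  # W per sq. meter, radiative forcing from doubled CO2

print(round(cooling, 3))                            # ~0.057 W per sq. meter
print(round(100.0 * cooling / F_2XCO2, 1))          # ~1.5% of doubled-CO2 forcing
print(round(100.0 * cooling / (0.4 * F_2XCO2), 1))  # ~3.8% of today's (40%) forcing
```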

Of course, getting the whole world to ‘permanently’ recoat 60% of the surface area of their urban areas would be no small task, even to achieve such a small offset (1.6% by late in this century, or 4% today) of the warming effect of more CO2 in the atmosphere.

But this discussion does not address the positive energy savings from white roofs reducing the cooling load on buildings, something Akbari mentions as another benefit. From poking around on the internet I find that for single-story buildings, about 20% of the cooling load comes from the roof — not all of which is recoverable from a brighter roof. Energy savings would diminish for multi-story buildings, such as those in urban (as opposed to suburban) areas. [This source of energy savings can be estimated more easily. From other sources I find that about 5% of U.S. energy use is for air conditioning. If we could reduce that by, say, ten percent through brighter roofs, then 0.5% (one half of one percent) of our energy demand could be alleviated.]

And as I mentioned at the outset, if climate sensitivity is low, then the warming effect of more CO2 (as well as the cooling effect of any geoengineering ‘fixes’ to the problem) will have little effect on global temperatures, anyway.

The lesson here is that when put in the context of the total CO2 ‘problem’ — rather than just throwing big numbers around — the brightening of roofs and paved surfaces would offset only a tiny fraction of the warming effect of our CO2 emissions. The remaining question is just how much of this ambitious goal could be achieved, and at what cost.


The MIT Global Warming Gamble

Saturday, May 23rd, 2009

[Image: the-mit-greenhouse-gamble-small31]

Climate science took another step backward last week as a new study from the Massachusetts Institute of Technology was announced which claims global warming by 2100 will probably be twice as bad as the United Nations Intergovernmental Panel on Climate Change (IPCC) has predicted.

The research team examined a range of possible climate scenarios which combined various estimates of the sensitivity of the climate system with a range of possible policy decisions to reduce greenhouse gas emissions which (presumably) cause global warming. Without policy action, the group’s model runs “indicate a median probability of surface warming of 5.2 degrees Celsius by 2100, with a 90% probability range of 3.5 to 7.4 degrees”.

Since that average rate of warming (about 0.5 deg. C per decade) is at least 2 times the observed rate of global-average surface temperature rise over the last 30 years, this would require our current spate of no warming to change into very dramatic and sustained warming in the near future.

And the longer Mother Nature waits to comply with the MIT group’s demands, the more severe the warming will have to be to meet their projections.

Of course, as readers of this web site will know, the MIT results are totally dependent upon the climate sensitivity that was assumed in the climate model runs that formed the basis for their calculations. And climate modelers can get just about any level of warming they want by simply making a small change in the processes controlling climate sensitivity – especially cloud feedbacks — in those models.

So, since the sensitivity of the climate system is uncertain, these researchers followed the IPCC’s lead of using ‘statistical probability’ as a way of treating that uncertainty.

But as I have mentioned before, the use of statistical probabilities in this context is inappropriate. There is a certain climate sensitivity that exists in the real climate system, and it is true that we do not know exactly what that sensitivity is. But this does not mean that our uncertainty over its sensitivity can be translated into some sort of statistical probability.

The use of statistical probabilities by the IPCC and the MIT group does two misleading things: (1) it implies scientific precision where none exists, and (2) it implies the climate system’s response to any change is a “roll of the dice”.

We know what the probability of rolling a pair of sixes with dice is, since it is a random event which, when repeated a sufficient number of times, will reveal that probability (1 in 36). But in contrast to this simple example, there is instead a particular climate sensitivity that exists out there in the real climate system. The endless fascination with playing computer games to figure out that climate sensitivity, in my opinion, ends up wasting a lot of time and money.

True, there are many scientists who really do think our tinkering with the climate system through our greenhouse gas emissions is like playing Russian roulette. But the climate system tinkers with itself all the time, and the climate has managed to remain stable. There are indeed internal, chaotic fluctuations in the climate system that might appear to be random, but their effect on the whole climate system is constrained to operate within a certain range. If the climate system really were that sensitive, it would have forced itself into oblivion long ago.

The MIT research group pays lip service to relying on “peer-reviewed science”, but it looks like they treat peer-reviewed scientific publications as random events, too. If 99 papers have been published which claim the climate system is VERY sensitive, but only 1 paper has been published that says the climate system is NOT very sensitive, is there then a 99-in-100 (99%) chance that the climate system is very sensitive? NO. As has happened repeatedly in all scientific disciplines, it is often a single research paper that ends up overturning what scientists thought they knew about something.

In climate research, those 99 papers typically will all make the same assumptions, which then pretty much guarantees they will end up arriving at the same conclusions. So, those 99 papers do not constitute independent pieces of evidence. Instead, they might be better described as evidence that ‘group think’ still exists.

It turns out that the belief in a sensitive climate is not because of the observational evidence, but in spite of it. You can start to learn more about the evidence for low climate sensitivity (negative feedbacks) here.

As the slightly-retouched photo of the MIT research group shown above suggests, I predict that it is only a matter of time before the climate community placing all its bets on the climate models is revealed to be a very bad gamble.


Global Warming Causing Carbon Dioxide Increases: A Simple Model

Monday, May 11th, 2009

Edited May 15, 2009 to correct percentage of anthropogenic contribution to model, and minor edits for clarity.

Global warming theory assumes that the increasing carbon dioxide concentration in the atmosphere comes entirely from anthropogenic sources, and it is that CO2 increase which is causing global warming.

But it is indisputable that the amount of extra CO2 showing up at the monitoring station at Mauna Loa, Hawaii each year (first graph below) is strongly affected by sea surface temperature (SST) variations (second graph below), which are in turn mostly a function of El Nino and La Nina conditions (third graph below):
[Image: simple-co2-model-fig01]
[Image: simple-co2-model-fig02]
[Image: simple-co2-model-fig03]

During a warm El Nino year, more CO2 is released by the ocean into the atmosphere (and less is taken up by the ocean from the atmosphere), while during cool La Nina years just the opposite happens. (A graph similar to the first graph also appeared in the IPCC report, so this is not new.) Just how much of the Mauna Loa variations in the first graph are due to the “Coke-fizz” effect is not clear, because there is now strong evidence that biological activity also plays a major (possibly dominant) role (Behrenfeld et al., 2006). Cooler SST conditions during La Nina are associated with more upwelling of nutrient-rich waters, which stimulates plankton growth.

The direction of causation between carbon dioxide and SST is obvious since the CO2 variations lag the sea surface temperature variations by an average of six months, as shown in the following graph:
[Image: simple-co2-model-fig04]
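The lag itself can be diagnosed with a simple cross-correlation. The sketch below uses purely synthetic series (a sine wave standing in for the SST anomaly, and a CO2 growth rate that copies it six months later) rather than the actual SST or Mauna Loa data, just to show how the lead-lag relationship is identified:

```python
import math

# Synthetic illustration (not real data): build an SST sine wave and a
# CO2 growth-rate series that lags it by six months, then confirm the
# correlation peaks when SST leads by six months.

months = range(120)
sst = [math.sin(2 * math.pi * m / 40.0) for m in months]    # hypothetical SST anomaly
LAG = 6
dco2 = [sst[m - LAG] if m >= LAG else 0.0 for m in months]  # CO2 growth rate lags SST

def corr(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Try SST leading dCO2/dt by k = 0..12 months; pick the best-correlated lag
best = max(range(13), key=lambda k: corr(sst[:len(sst) - k], dco2[k:]))
print(best)  # prints 6: dCO2/dt tracks SST with a six-month delay
```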

So, I keep coming back to the question: If warming of the oceans causes an increase in atmospheric CO2 on a year-to-year basis, is it possible that long-term warming of the oceans (say, due to a natural change in cloud cover) might be causing some portion of the long-term increase in atmospheric CO2?

I decided to run a simple model in which the change in atmospheric CO2 with time is a function of sea surface temperature anomaly. The model equation looks like this:

delta[CO2]/delta[t] = a*SST + b*Anthro

Which simply says that the change in atmospheric CO2 with time is proportional to some combination of the SST anomaly and the anthropogenic (manmade) CO2 source. I then ran the model in an Excel spreadsheet and adjusted the “a” and “b” coefficients until the model response looked like the observed record of yearly CO2 accumulation rate at Mauna Loa.
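The spreadsheet integration can be sketched in a few lines of code. The inputs below are made up (a slowly warming ocean and steadily rising emissions), not the real SST or emissions records; the "a" and "b" coefficients are the fitted values reported in this post, with the -0.2 deg. C sink/source threshold described below folded into the SST term:

```python
# Sketch of the two-term CO2 budget model: dCO2/dt = a*(SST - threshold) + b*Anthro,
# integrated year by year. Coefficients are the post's fitted values;
# the input series are hypothetical, for illustration only.

A = 4.6               # ppm/yr per deg C (fitted "a")
B = 0.1               # retained fraction of annual anthropogenic emissions (fitted "b")
SST_THRESHOLD = -0.2  # deg C anomaly where the ocean flips between sink and source

def run_model(sst_anomalies, anthro_emissions, co2_start=315.0):
    """Return the CO2 concentration series, starting from co2_start ppm."""
    co2 = co2_start
    series = [co2]
    for sst, anthro in zip(sst_anomalies, anthro_emissions):
        co2 += A * (sst - SST_THRESHOLD) + B * anthro
        series.append(co2)
    return series

# Hypothetical 50-year inputs: ocean warming 0.01 deg C/yr, emissions rising
sst = [-0.2 + 0.01 * y for y in range(50)]      # deg C anomaly
anthro = [2.0 + 0.05 * y for y in range(50)]    # ppm-equivalent per year
print(round(run_model(sst, anthro)[-1], 1))     # ends near 387 ppm in this toy run
```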

It didn’t take long to find a model that did a pretty good job (a = 4.6 ppm/yr per deg. C; b=0.1), as the following graph shows:
[Image: simple-co2-model-fig05]

Since the long-term rise in atmospheric CO2 has been averaging about 50% of human emissions, the 0.1 value for “b” means that 20% of the long-term rise is anthropogenic, while the other 80% is natural, for that particular model fit.

The peak correlation between the modeled and observed CO2 fluctuation is now at zero month time lag, supporting the model’s realism. The model explained 50% of the variance of the Mauna Loa observations.

The best model fit assumes that the temperature anomaly at which the ocean switches between a sink and a source of CO2 for the atmosphere is -0.2 deg. C, indicated by the bold line in the SST graph (the second graph in this article). In the context of longer-term changes, it would mean that the ocean became a net source of more atmospheric CO2 around 1930.

A graph of the resulting model versus observed CO2 concentration as a function of time is shown next:
[Image: simple-co2-model-fig06]

If I increase the value of b from 0.1 to 0.2 (40% of the long term CO2 rise being anthropogenic), the following graph shows a somewhat different model fit that works better in the middle of the 50-year record, but then over-estimates the atmospheric CO2 concentration late in the record:
[Image: simple-co2-model-fig07]

There will, of course, be vehement objections to this admittedly simple model. One will be that “we know the atmospheric CO2 increase is manmade because the C13 carbon isotope concentration in the atmosphere is decreasing, which is consistent with a fossil fuel source.” But as has been discussed elsewhere, a change in ocean biological activity (or in vegetation on land) has a similar signature…so the C13 change is not a unique signature of a fossil fuel source.

My primary purpose in presenting all of this is simply to stimulate debate. Are we really sure that ALL of the atmospheric increase in CO2 is from humanity’s emissions? After all, the natural sources and sinks of CO2 are about 20 times the anthropogenic source, so all it would take is a small imbalance in the natural flows to rival the anthropogenic source. And it is clear that there are natural imbalances of that magnitude on a year-to-year basis, as shown in the first graph.

What could be causing long-term warming of the oceans? My first choice for a mechanism would be a slight decrease in oceanic cloud cover. There is no way to rule this out observationally because our measurements of global cloud cover over the last 50 to 100 years are nowhere near good enough.

And just how strenuous and vehement the resulting objections are to what I have presented above will be a good indication of how politicized the science of global warming has become.


REFERENCES
Michael J. Behrenfeld et al., “Climate-Driven Trends in Contemporary Ocean Productivity,” Nature 444 (2006): 752-755.

Why America Does Not Care About Global Warming

Wednesday, May 6th, 2009

It is clear that concern in the United States over global warming is diminishing. In a Washington Whispers editorial, Paul Bedard quotes a Gallup Poll editor as saying Al Gore’s campaign to raise awareness of the “climate crisis” has failed. In one recent Pew Research Center survey, global warming rated at the bottom of a list of 20 domestic issues that Americans are concerned about.

Why is there so much apathy in the U.S. over something that threatens to transform the world by killing off thousands of species, flooding coastal areas, and making the world as much as 10 degrees F hotter? Do we just not care about the environment? Has the global warming message been oversold? Is the public experiencing ‘global warming fatigue’? Does the global warming problem seem so insurmountable to people that they just want to ignore it and hope that it goes away?

From my travels around the country and talking to people, the largest source of apathy is none of these. In my experience, people simply do not believe the ‘scientific consensus’ is correct. Most people do believe the Earth has warmed, but they think that warming has been largely natural. This was recently supported by a Rasmussen Reports poll which showed only 1/3 of American voters now believe global warming is caused by humans.

How can non-experts question the opinion of scientific experts? I believe it is because the public seems to have a better appreciation than the scientists do of a fundamental truth: There are some problems that science does not yet understand. There have been predictions of environmental doom before, and those have all failed. This has made people suspicious of spectacular scientific claims. As I have mentioned before, even Mark Twain over 100 years ago made fun of the predilection scientists have for making grand extrapolations and pronouncements:

There is something fascinating about science. One gets such wholesale returns of conjecture from such a trifling investment of fact.
-Mark Twain, Life on the Mississippi (1883)

The scientific community has brought this problem upon themselves by not following their own rules and procedures. Scientists have ‘cried wolf’ too many times, and some day that might end up hurting all of us when some unusual and dangerous scientific concern does arise. I think that people intuitively understand that spectacular scientific claims require spectacular evidence. Just saying something ‘might’ happen ‘if current trends continue’ does not impress the public. They have heard it all before.

The analogy I sometimes think about is our understanding of the human brain. What if there were a group of researchers who built a computer model of how the brain works, and they claimed that they could take some measurements of your brain and tell you what you would be thinking 24 hours from now? Would you believe them?

No, even though you are not an expert regarding the operation of the human mind, you probably would not believe them. From your daily experience you would suspect that those experts were probably overreaching, and claiming they knew more than they really did.

Of course, if the experts had performed such experiments before and succeeded, then you might be more inclined to believe them. But in the climate business, we have no previous forecast successes that are relevant to the theory of manmade global warming. We can’t even forecast natural climate variations, because we do not understand them. Simply forecasting long-term warming probably has a 50/50 chance of being correct just by accident, since it seems to be more common for warming or cooling to occur than for the temperature to remain constant, year after year, decade after decade.

And coming up with possible explanations for what has happened in the past (‘hindcasts’) does not really count, either. It is too easy to happen upon a wrong explanation that can be made to fit the data, a technique scientists call (rather pejoratively) “curve-fitting”.

In weather forecasting, MANY forecasts are required before one can confidently determine, based upon the number of successes and failures, whether those forecasts had any real skill beyond what might be expected just based upon chance. Climate forecasting is nowhere near being able to demonstrate forecast skill to the level of confidence that is routinely discussed in weather forecasting.
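The statistical point here can be made concrete with a small sketch (my own illustration, using the standard binomial formula, not anything from the weather forecasting literature): one correct “warming vs. cooling” call proves nothing, because chance alone succeeds half the time, and only a long record of successes becomes hard to attribute to luck.

```python
# Probability of a forecast success record arising purely by chance,
# treating each forecast as an independent coin flip (p = 0.5).
from math import comb

def p_by_chance(n, k, p=0.5):
    """Probability of k or more correct out of n binary forecasts by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_by_chance(1, 1), 3))    # 0.5   -> one correct forecast means nothing
print(round(p_by_chance(20, 15), 3))  # 0.021 -> 15 of 20 correct is hard to get by luck
```

A single long-term warming forecast sits in the first case; demonstrating real skill requires something much closer to the second.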

So, for those of you in the environmental community who think the global warming message needs to be repackaged, or rephrased, or have a change in terminology …well…I think you are wasting your time. The people have gotten the message, loud and clear: Global warming is manmade, and it is only going to get worse.

The trouble is that the people simply don’t think you know what you are talking about. And if global warming is largely a natural process, cutting down on our greenhouse gas emissions is going to have no measurable effect on future global temperatures.

Now, if you think you might succeed through a different kind of deception of the public…well, that indeed might work.


April 2009 Global Temperature Update: +0.09 deg. C

Tuesday, May 5th, 2009

 YR   MON  GLOBE     NH     SH   TROPICS
2009    1  0.304  0.443  0.165   -0.036
2009    2  0.347  0.678  0.016    0.051
2009    3  0.206  0.310  0.103   -0.149
2009    4  0.091  0.126  0.055   -0.010

1979-2009 Graph

Once again there is a rather large discrepancy between our monthly anomaly (+0.09 deg. C) and that produced by Remote Sensing Systems (RSS, +0.20 deg. C). We (John Christy and I) believe the difference is due to some combination of three factors:

1) We calculate the anomalies from a wider latitude band, 84S to 84N, whereas RSS stops at 70S, and Antarctica was cooler than average in April (so UAH picks that up).

2) The monthly anomaly is relative to the 1979-1998 base period, which for RSS was colder relative to April 2009 (i.e., their Aprils during the 1979-1998 period were colder than ours).

3) RSS is still using a NOAA satellite whose orbit continues to decay, leading to a sizeable diurnal drift adjustment. We are using AMSU data from only NASA’s Aqua satellite, whose orbit is maintained, and so no diurnal drift adjustment is needed. The largest diurnal effects occur during Northern Hemisphere spring, and I personally believe this is the largest contributor to the discrepancy between UAH and RSS.
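The first factor, the latitude-band difference, is easy to illustrate numerically. The sketch below uses a made-up zonal anomaly profile (a cold Antarctic band, mild warmth elsewhere), not real data, just to show how a cosine-of-latitude area weighting shifts the global mean when the polar band is included or excluded:

```python
# Toy illustration of factor 1): an area-weighted global mean changes when a
# cold polar band is included (84S-84N coverage) versus excluded (70S cutoff).
# The anomaly profile is invented purely for demonstration.
from math import cos, radians

def band_mean(anomalies, lats):
    """Area-weighted mean of zonal anomalies centered at the given latitudes."""
    weights = [cos(radians(lat)) for lat in lats]
    return sum(a * w for a, w in zip(anomalies, weights)) / sum(weights)

lats = list(range(-80, 85, 10))                            # zonal band centers
anoms = [(-0.5 if lat <= -70 else 0.15) for lat in lats]   # cold Antarctic band (assumed)

with_antarctic = band_mean(anoms, lats)
without = band_mean([a for a, l in zip(anoms, lats) if l > -70],
                    [l for l in lats if l > -70])
print(with_antarctic < without)  # True: including cool Antarctica lowers the mean
```

Even though the polar bands carry small cosine weights, a strongly anomalous Antarctica still nudges the global number, in the direction that would pull UAH below RSS in a month like this one.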


Climate Model Predictions: It’s Time for a Reality Check

Saturday, May 2nd, 2009

The fear of anthropogenic global warming is based almost entirely upon computerized climate model simulations of how the global atmosphere will respond to slowly increasing carbon dioxide concentrations. There are now over 20 models being tracked by the IPCC, and they project levels of warming ranging from pretty significant to catastrophic by late in this century. The following graph shows an example of those models’ forecasts based upon assumed increases in atmospheric carbon dioxide this century.

[Figure: ipcc-ar4-model-projections]

While there is considerable spread among the models, it can be seen that all of them now produce levels of global warming that cannot be ignored.

But what is the basis for such large amounts of warming? Is it because we know CO2 is a greenhouse gas, and so increasing levels of atmospheric CO2 will cause warming? NO!…virtually everyone now agrees that the direct warming effect from extra CO2 is relatively small – too small to be of much practical concern.

No, the main reason the models produce so much warming depends upon uncertain assumptions regarding how clouds will respond to warming. Low and middle-level clouds provide a ‘sun shade’ for the Earth, and the climate models predict that those clouds will dissipate with warming, thereby letting more sunlight in and making the warming worse.

[High-altitude (cirrus) clouds have the opposite effect, and so a dissipation of those clouds would instead counteract the CO2 warming with cooling, which is the basis for Richard Lindzen's 'Infrared Iris' theory. The warming in the models, however, is now known to be mostly controlled by the low and middle level clouds – the “sun shade” clouds.]

But is this the way nature works? Our latest evidence from satellite measurements says “no”. One would think that understanding how the real world works would be a primary concern of climate researchers, but it is not. Rather than trying to understand how nature works, climate modelers spend most of their time trying to get the models to better mimic average weather patterns on the Earth and how those patterns change with the seasons. The unstated assumption is that if the models do a better job of mimicking average weather and the seasons, then they will do a better job of forecasting global warming.

But this assumption cannot be rigorously supported. To forecast global warming, we need to know how the average climate state — and especially clouds — will change in response to the little bit of warming from the extra CO2. Indeed, the model that best replicates the average climate of the Earth might be the worst one at predicting future warming.
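The role clouds play in the model spread can be seen in a standard back-of-envelope feedback calculation. This is the textbook amplification formula, not anything from this article, and the numbers plugged in are illustrative: a small direct warming dT0 from CO2 alone is scaled by 1/(1 - f), where f is the net feedback factor.

```python
# Textbook feedback amplification: warming after feedbacks = dT0 / (1 - f).
# Positive net feedback (f > 0, e.g. dissipating "sun shade" clouds) amplifies
# the small direct CO2 warming; negative net feedback (f < 0) damps it.

def equilibrium_warming(dT0, f):
    """Warming after feedbacks, given direct (no-feedback) warming dT0 (deg C)."""
    assert f < 1.0, "f >= 1 would imply a runaway climate"
    return dT0 / (1.0 - f)

direct = 1.1                                        # approx. deg C for doubled CO2
print(round(equilibrium_warming(direct, 0.6), 2))   # 2.75: strong positive feedback
print(round(equilibrium_warming(direct, -0.5), 2))  # 0.73: net negative feedback
```

The same modest direct warming thus becomes either alarming or negligible depending almost entirely on the assumed sign and strength of f — which is exactly why the cloud behavior built into the models matters so much.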

This fact gets glossed over – or totally ignored – as the IPCC dazzles us with the level of effort that has been invested in computer modeling of the climate system over the last 20 years. The IPCC can show how many people they have working on improving the models, how many years and how much money has been invested, how big and fast their computers are, and how many peer-reviewed scientific publications have resulted.

But unless we know how clouds change with warming, it is all a waste of time from the standpoint of knowing how serious manmade global warming will be. Even the IPCC admits this is their biggest uncertainty…so why is so little work being done trying to answer that question?

AN APPEAL TO THE DECISION MAKERS

We now have billions of dollars in satellite assets orbiting the Earth, continuously collecting high-quality data on natural, year-to-year changes in climate. I believe that these satellite measurements contain the key to understanding whether manmade global warming will be catastrophic, or merely lost in the noise of natural climate variability.

That is why I spend as much time as I can spare trying to understand those satellite measurements. But we need many more people working on this effort. Despite its importance, I have yet to meet anyone who is trying to do what I am doing.

To be fair, the modelers do indeed compare their models to satellite measurements. But those comparisons have not been detailed enough to answer the most important questions…like how clouds respond to warming.

The comparisons they have done have been confusing and inconclusive, which is part of the reason why they don’t rely on the satellite measurements very much. The modelers claim that the satellite measurements have been too ambiguous, and so they increasingly rely only upon the models.

But I will continue to assert (until I am blue in the face or die, whichever comes first) that their confusion stems from a very simple issue they have overlooked: mixing up cause and effect. Previous satellite observations showing that clouds tend to decrease with warming do not mean that the warming caused the clouds to decrease!

We have recently submitted to the Journal of Geophysical Research a research paper that shows how one can tell the difference between cause and effect — between clouds causing a temperature change, and temperature causing a cloud change. And when this is done during the analysis of satellite data, it is clear that warming causes an increase in the sunshade effect of clouds. (While the data did suggest strong positive water vapor feedback, which enhances warming, that was far exceeded by the cooling effect of negative feedback from cloud changes.)
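One simple way to get intuition for separating cause from effect in two time series (my own illustration of the general idea, not the method used in the paper) is lead-lag correlation: if variable A drives variable B with some delay, the correlation between them peaks with A leading.

```python
# Lead-lag correlation sketch: find the time offset at which two series
# correlate most strongly. Which series leads at the peak hints at the
# direction of causation. Data below are synthetic, for illustration only.

def lagged_corr(x, y, lag):
    """Correlation of x against y shifted by `lag` steps (y lags x if lag > 0)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic example: y is a delayed copy of x, so x "causes" y.
x = [0, 1, 2, 3, 2, 1, 0, -1, -2, -1, 0, 1, 2, 1, 0, -1]
y = [0, 0] + x[:-2]                       # y lags x by 2 steps
best = max(range(-4, 5), key=lambda L: lagged_corr(x, y, L))
print(best)  # 2: the peak correlation correctly finds x leading y by 2 steps
```

A simultaneous correlation between clouds and temperature, by contrast, says nothing by itself about which one is doing the forcing — which is the ambiguity at issue here.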

These results suggest that the climate system has a strong thermostatic control mechanism – exactly opposite to the way the IPCC models have been programmed to behave — and that the widespread concern over manmade global warming might well be a false alarm.

The potential importance of this result to the global warming debate demands a reexamination of all of the satellite data that have been collected over the last 25 years, with the best minds the science community can spare. Simply asserting that ‘Dr. Spencer does not know what he is talking about’ will not cut it any more.

We now have two papers in the peer-reviewed scientific literature that paved the way for this work (here and here), and so one cannot simply dismiss the issue based upon some claim that we ‘skeptics’ do not publish our work.

I just presented our latest results at the NASA CERES Team meeting to about 100 attendees, and there were no major objections voiced to my analysis of the results. (CERES is the instrument that monitors how global cloud changes affect the energy balance of the Earth.) I was pleased to see that there are still some scientists who are interested in the science.

Rather than simply asserting that I am wrong, why not take a fresh look at the data that have been collected over the years? Given the importance of the issue, it would seem to be the prudent thing to do. A red team-blue team approach is needed here, with the red team specifically looking for evidence that the IPCC has been wrong in their previous evaluation of the satellite data.

I suggested this years ago in congressional testimony, but one thing I’ve learned is that most congressional hearings are not designed to uncover the truth.

Maybe those in control of the research dollars are afraid of what might be found if the research community looked too closely at the satellite measurements. There are now billions — if not trillions — of dollars in future taxes, economic growth, and transfers of wealth between countries that are riding on the climate models being correct.

Scientific debate has all but been shut down. The science of climate change was long ago taken over by political interests, and I am not hopeful that the situation will improve anytime soon. But I will continue to try to change that.