Brrr…the Troposphere Is Ignoring Your SUV

October 30th, 2011

For those tracking the daily global temperature updates at the Discover website, you might have noticed the continuing drop this month in global temperatures. The mid-tropospheric AMSU channels are showing even cooler temperatures than we had at this date with the last (2008) La Nina. The following screen shot is for AMSU channel 6 (click for large version).

A check of the lower stratospheric channels (9, 10) suggests this is not a stratospheric effect bleeding over into the tropospheric channels.

With the current (and forecast to continue) stormy pattern over the U.S., I have to wonder whether the atmosphere is currently in a destabilized state. I doubt that surface temperature anomalies are running as anomalously low as the mid-tropospheric temperatures, and relatively warm surface air sitting beneath anomalously cold mid- and upper-tropospheric air means there is extra energy available for storms. (Since AMSR-E failed in early October, our sea surface temperature plot is no longer showing current data, so I have no easy way to check surface temperatures.)

Of course, this too shall pass. I just thought it was an interesting curiosity during a time when some pundits are claiming global warming is “accelerating”. Apparently, they are still stuck in the last millennium.

Our GRL Response to Dessler Takes Shape, and the Evidence Keeps Mounting

October 12th, 2011

I will be revealing some of the evidence we will be submitting to Geophysical Research Letters (GRL) in response to Dessler’s paper claiming to refute our view of the forcing role of clouds in the climate system.

To whet your appetite, here is a draft version of one of the illustrations (click for the large version). It clearly shows the large discrepancy between the IPCC climate models and satellite observations in how the Earth sheds excess radiant energy in response to warming. This is central to the question of how much warming can be expected from anthropogenic greenhouse gas emissions, because the less radiant energy the models shed per degree of warming, the more they continue to warm.

The figure above represents 700 years of data (50 years each from all 14 models we have analyzed), and all 20 years of global Earth radiant energy budget data that exist from 2 satellite periods. Each point plotted represents an estimate of how much energy is lost (gained) by the Earth per degree of warming (cooling) during year-to-year climate variations in the individual decades.

Results for various averaging times are shown: monthly (used by Dessler), 3-monthly and 12-monthly (used by Forster & Gregory, 2006 J. Climate, in their analysis of ERBE data, results of which are plotted as blue squares above), and 18-monthly, used only by us in our analysis of the CERES data. We decided that showing results for multiple averaging times is better than arguing with our critics over which averaging time is best. (If there are two options, A and B, and we chose A, our critics would claim there was an Exxon-funded conspiracy to exclude B.)
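For readers who want to see the mechanics, here is a minimal sketch of the kind of calculation each plotted point summarizes. It is my own illustration, not the code behind the figure: the input series are hypothetical monthly anomalies, and I use simple non-overlapping block averages, whereas running means or other windowing choices could serve just as well.

```python
import numpy as np

def diagnosed_slope(flux_anom, temp_anom, avg_months):
    """Regression slope (W m-2 per deg C) of radiative flux vs. temperature
    after block-averaging both monthly anomaly series over avg_months."""
    n = (len(flux_anom) // avg_months) * avg_months        # trim to whole blocks
    f = np.asarray(flux_anom[:n]).reshape(-1, avg_months).mean(axis=1)
    t = np.asarray(temp_anom[:n]).reshape(-1, avg_months).mean(axis=1)
    return np.polyfit(t, f, 1)[0]

# e.g., repeat for the averaging times shown in the figure:
# slopes = {m: diagnosed_slope(flux, temp, m) for m in (1, 3, 12, 18)}
```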

Of course, this evidence also supports one of the main conclusions of our Remote Sensing paper published earlier this year: there is a large discrepancy between the IPCC climate models and observations. That’s the paper which led to the resignation of the journal’s Chief Editor, and an apology from that journal to Kevin Trenberth for even publishing our paper (never mind it was peer reviewed by researchers who also publish on the subject).

The Effect of Volcanoes in Models versus Observations

One new twist that emerges from the above figure comes from the blue triangles, representing the model decades involving large episodic radiative forcing by volcanic aerosols, compared to decades without volcanic forcing (yellow triangles). These blue triangles clearly show that a low bias in the regression-diagnosed feedback parameter tends to occur when time-varying radiative forcing is present. (The volcanoes were Mt. Agung in the 1960s, El Chichon in the 1980s, and Mt. Pinatubo in the 1990s. Seven of the 14 models included strong, episodic volcanic forcing, as independently determined from data presented by Forster & Taylor, 2006 J. Climate.)
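To see why time-varying radiative forcing produces that low bias, here is a toy simulation of a simple mixed-layer forcing-feedback model. It is my own construction with made-up parameter values (true feedback parameter, layer depth, forcing persistence), not the models or data behind the figure; it only illustrates the direction of the effect.

```python
import numpy as np

rng = np.random.default_rng(42)

N_MONTHS = 120                    # one "decade" of monthly data
C        = 4.1e8 / 2.63e6         # ~100 m mixed-layer heat capacity, W-month m-2 K-1
LAM_TRUE = 3.0                    # assumed true feedback parameter, W m-2 K-1
PHI      = 0.9                    # month-to-month persistence of the forcing

def diagnosed_lambda(sigma_rad, sigma_nonrad):
    """Regression-diagnosed feedback parameter from the simple forcing-feedback model."""
    T, N, S = 0.0, 0.0, 0.0
    temps, fluxes = [], []
    for _ in range(N_MONTHS):
        N = PHI * N + rng.normal(0.0, sigma_rad) * np.sqrt(1 - PHI**2)     # radiative (cloud/aerosol) forcing
        S = PHI * S + rng.normal(0.0, sigma_nonrad) * np.sqrt(1 - PHI**2)  # non-radiative forcing
        flux = N - LAM_TRUE * T          # net radiative anomaly a satellite would measure
        temps.append(T)
        fluxes.append(flux)
        T += (flux + S) / C              # mixed-layer energy budget, 1-month time step
    return -np.polyfit(temps, fluxes, 1)[0]

print("feedback only         :", round(diagnosed_lambda(0.0, 1.0), 2))  # recovers ~3.0
print("with radiative forcing:", round(diagnosed_lambda(1.0, 1.0), 2))  # typically well below 3.0
```

In the no-radiative-forcing case the regression recovers the specified feedback parameter; once radiative forcing is added, the diagnosed value is dragged toward zero, which is the low bias described above.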

Furthermore, comparison of those blue triangles to the Pinatubo-influenced ERBE satellite data (blue squares, separately computed and previously published by IPCC-affiliated researchers) shows an even larger discrepancy than the yellow (non-volcanic) triangles do compared to the (orange) CERES data, which experienced no major volcanic events. While one might argue that the CERES satellite measurements (orange circles) are not totally inconsistent with the yellow model triangles, the same cannot be said about the ERBE Pinatubo-influenced observations (blue squares) versus the blue model triangles. This has become a common IPCC defense of the climate models (“…well, the observations aren’t totally inconsistent with all of the models…”), as if this somehow constitutes validation of the climate models.

How Do the Results Jibe with Dessler (2010)?

Dessler (2010) in effect made a calculation representing the single orange circle on the far left. He interpreted it as evidence of positive cloud feedback (all of the IPCC models now exhibit positive cloud feedback), and indeed, if I were to take that single circle, with its diagnosed net feedback parameter of only 1.2 W m-2 K-1, I might be inclined to agree that it does suggest positive cloud feedback.

But note how that single orange circle compares to the models (the triangles) when the exact same calculation is made from them. There is a significant discrepancy, which is seen to grow at the longer averaging times where the feedback signal is expected to more clearly emerge.

And the discrepancy appears to be the greatest in decades that experienced major volcanic eruptions.

Conclusion

The evidence keeps mounting that the Earth is more resistant to radiative forcing than are the climate models used by the IPCC to project future climate change. While it doesn’t actually prove the models are wrong in their projections of global warming, I don’t see how discrepancies this large can continue to be ignored.

If not for the public policy implications (which Dessler admits was the impetus for his 2011 paper criticizing our work), evidence as strong as that contained in the above illustration would be easily embraced by the climate research community. Maybe some day.

It will be interesting to see whether GRL rejects our paper out of hand. Maybe it would help if I joined the Union of Concerned Scientists. Hmmmm.

P.S….another tidbit for those following Dessler’s claim that clouds can’t cause climate change…
Dessler claims that changes in ocean temperature are way too large to be caused by clouds. Well, the year-to-year changes in Levitus global ocean heat content of the 0-700 m layer during the 2000-2010 satellite period of record yield a yearly standard deviation of 0.5 Watts per sq. meter for the energy required. In comparison, the yearly standard deviation of the global oceanic CERES satellite radiative fluxes is 0.3 Watts per sq. meter, which represents 60% of the energy required to cause those ocean temperature changes. And using any reasonable feedback parameter combined with the observed sea surface temperature variations yields a feedback component of only 0.1 Watts per sq. meter.
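For those who want to check the arithmetic, here is the unit conversion involved, as a rough sketch of my own. The 0.6e22 J input below is a hypothetical year-to-year heat content change chosen only to illustrate the magnitude, and I spread the flux over the ocean area; spreading it over the full globe would give somewhat different numbers.

```python
SECONDS_PER_YEAR = 3.156e7
OCEAN_AREA_M2    = 3.6e14        # approximate global ocean surface area, m^2

def implied_flux_w_m2(delta_ohc_joules):
    """Average flux over the ocean (W m-2) implied by a one-year change
    in 0-700 m ocean heat content."""
    return delta_ohc_joules / (OCEAN_AREA_M2 * SECONDS_PER_YEAR)

# a ~0.6e22 J year-to-year change works out to roughly 0.5 W m-2:
print(round(implied_flux_w_m2(0.6e22), 2))
```

Against a 0.5 W m-2 requirement, a 0.3 W m-2 standard deviation in the CERES fluxes is the 60% quoted above.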

Thus, cloud variations (or maybe even natural water vapor variations?) can constitute an important natural forcing component of climate variability. And since it is our physical interpretation of observed climate variability that impacts our estimates of climate sensitivity, it also impacts our estimates of future global warming (aka climate change).

At this point, I suspect Dessler’s conclusions to the contrary are partly the result of a large amount of noise in temperature changes with time computed from short-term Levitus ocean heat content data.

I’ve Looked at Clouds from Both Sides Now - and Before

October 8th, 2011

…sometimes, the most powerful evidence is right in front of your face…..

I never dreamed that anyone would dispute the claim that cloud changes can cause “cloud radiative forcing” of the climate system, in addition to their role as responding to surface temperature changes (“cloud radiative feedback”). (NOTE: “Cloud radiative forcing” traditionally has multiple meanings. Caveat emptor.)

But that’s exactly what has happened. Andy Dessler’s 2010 and 2011 papers have claimed, both implicitly and explicitly, that in the context of climate, with very few exceptions, cloud changes must be the result of temperature change only.

Shortly after we became aware of Andy’s latest paper, which finally appeared in GRL on October 1, I realized the most obvious and most powerful evidence of the existence of cloud radiative forcing was staring us in the face. We had actually alluded to this in our previous papers, but there are so many ways to approach the issue that it’s easy to get sidetracked by details, and forget about the Big Picture.

Well, the following graph is the Big Picture. It shows the 3-month variations in CERES-measured global radiative energy balance (which Dessler agrees is made up of forcing and feedback), and it also shows an estimate of the radiative feedback alone using HadCRUT3 global temperature anomalies, assuming a feedback parameter (λ) of 2 Watts per sq. meter per deg (click for full-size version):

What this graph shows is very simple, but also very powerful: The radiative variations CERES measures look nothing like what the radiative feedback should look like. You can put in any feedback parameter you want (the IPCC models range from 0.91 to 1.87 Watts per sq. meter per deg…I think it could be more like 3 to 6 in the real climate system), and you will come to the same conclusion.

And if CERES is measuring something very different from radiative feedback, it must — by definition — be radiative forcing (for the detail-oriented folks, forcing = Net + feedback…where Net is very close to the negative of [LW+SW]).
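In code form, the bookkeeping is a one-liner. This is a sketch of my own, with hypothetical anomaly arrays standing in for the CERES and HadCRUT3 series, using the sign convention stated above (forcing = Net + feedback, with the feedback term taken as λ times the temperature anomaly):

```python
import numpy as np

def estimate_forcing(net_anom_w_m2, temp_anom_k, lam=2.0):
    """Estimated radiative forcing = measured Net radiative anomaly plus the
    assumed feedback term (lam * dT), per the sign convention in the post."""
    net = np.asarray(net_anom_w_m2, dtype=float)
    dT  = np.asarray(temp_anom_k, dtype=float)
    return net + lam * dT

# If the standard deviation of the Net anomalies is much larger than that of
# lam * dT, then most of what the satellite measures is forcing, not feedback.
```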

The above chart makes it clear that radiative feedback is only a small portion of what CERES measures. There is no way around this conclusion.

Now, our 3 previous papers on this subject have dealt with trying to understand the extent to which this large radiative forcing signal (or whatever you want to call it) corrupts the diagnosis of feedback. That such radiative forcing exists seemed to me to be beyond dispute. Apparently, it wasn't. Dessler (2011) tries to make the case that the radiative variations measured by CERES do not involve enough energy to change the temperature of the ocean mixed layer…but that is a separate issue; the issue addressed by our previous 3 papers is the extent to which radiative forcing masks radiative feedback. [For those interested, over the same period of record (April 2000 through June 2010) the standard deviation of the Levitus-observed 3-month temperature changes of the upper 200 meters of the global oceans corresponds to 2.5 Watts per sq. meter.]

I just wanted to put this evidence out there for people to see and understand in advance. It will indeed be part of our response to Dessler 2011, but Danny Braswell and I have so many things to say about that paper, it's going to take time to address all of the ways in which (we think) Dessler is wrong, misused our model, and misrepresented our position.

UAH Global Temperature Update for September 2011: +0.29 deg. C

October 4th, 2011

The global average lower tropospheric temperature anomaly for September, 2011 retreated a little again, to +0.29 deg. C (click on the image for the full-size version):

The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.

Here are this year’s monthly stats:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.315 +0.379 +0.250 +0.233
2011 7 +0.374 +0.344 +0.404 +0.204
2011 8 +0.327 +0.321 +0.332 +0.155
2011 9 +0.289 +0.309 +0.270 +0.175

The global sea surface temperatures from AMSR-E through the end of AMSR-E’s useful life (October 3, 2011) are shown next. The trend line is, again, for entertainment purposes only:

On the subject of the drop-off in temperatures seen in the AMSR-E data in the last week, I have been getting questions about the daily AMSU tracking data at the Discover website, which show that Aqua AMSU channel 5 (the channel our monthly updates are computed from) is now entering record-low territory (for the date, anyway, and only since the Aqua record began in 2002). While I have always cautioned people against reading too much into week-to-week changes in global average temperature, this could portend a more significant drop in the next (October) temperature update as the new La Nina approaches.

AMSR-E Ends 9+ Years of Global Observations

October 4th, 2011


UPDATE #1: See update at end.

The Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) was automatically spun down to its designed 4 rpm safe condition last night after recent increases in the amount of power required to keep it spinning at its nominal 40 rpm were beginning to cause noticeable jitter in NASA’s Aqua satellite.

The instrument has over 480 pounds of spinning mass, and the lubricant in the bearing assembly gradually deteriorates over time. This deterioration has been monitored, and procedures have been in place for years to shut the instrument down automatically if the amount of torque required to keep AMSR-E spinning exceeded a certain threshold.

Starting about October 1, AMSR-E was causing yaw vibrations in the Aqua satellite's attitude which increasingly exceeded the +/- 25 arcsecond limits required by the other instruments on the spacecraft. Last night, the 4.5 Newton-meter torque limit was apparently exceeded, and the instrument was automatically spun down to 4 rpm.

At this point it appears that this event likely ends the useful life of AMSR-E, which had been continuously gathering global data on a variety of parameters, from sea ice to precipitation to sea surface temperature. Its 9+ year lifetime exceeded its 6-year design life.

AMSR-E was provided to NASA by the Japan Aerospace Exploration Agency (JAXA), and was built by Mitsubishi Electric Corporation (MELCO). It was launched aboard the Aqua satellite from Vandenberg AFB on May 4, 2002. It has been an extremely successful experiment, and it has gathered a huge quantity of data that will continue to reveal secrets of weather and climate as scientific research with the archived data goes on in the coming years.

As the U.S. Science Team Leader for AMSR-E, I would like to congratulate and thank all of those who made AMSR-E such a success: JAXA, MELCO, NASA, the University of Alabama in Huntsville, the National Snow and Ice Data Center (NSIDC) in Boulder, and the U.S. and Japanese Science Teams who developed the algorithms that turned the raw data collected by AMSR-E into so many useful products.

The good news is that AMSR2, a slightly modified and improved version of AMSR-E, will be launched early next year on Japan’s GCOM-W satellite, and will join Aqua and the other satellites in NASA’s A-Train constellation of Earth observation satellites in their twice-daily, 1:30 a.m./p.m. sun-synchronous polar orbit. It is my understanding that those data will be shared in near-real time with U.S. agencies.

We had hoped that AMSR-E would provide at least one year of data overlap with the new AMSR2 instrument. It remains to be determined – and is only speculation on my part – whether there might be an attempt to gather some additional data from AMSR-E later to help fulfill this cross-calibration activity with AMSR2. [The Aqua satellite can easily accommodate the extra torque imparted to the spacecraft, and last night's spin-down of AMSR-E was mostly to eliminate the very slight chance of a sudden failure of the AMSR-E bearing assembly, which could have caused the Aqua satellite to go into an uncontrolled and unrecoverable tumble.]

Again, I want to thank and congratulate all of those who made AMSR-E such a huge success!

UPDATE #1: As of early this morning, the torque required to keep AMSR-E spinning at 4 rpm was too large for its own momentum compensation mechanism to handle, with excessive amounts of momentum being dumped to the spacecraft. As a result, the instrument has now been spun down to 0 rpm. The satellite has shed the excessive momentum, and is operating normally, as are the other instruments aboard the spacecraft (MODIS, CERES, and AIRS).

The Rest of the Cherries: 140 Decades of Climate Models vs. Observations

September 22nd, 2011

Since one of the criticisms of our recent Remote Sensing paper was that we cherry-picked the climate models we chose to compare the satellite observations of climate variations to, here are all 140 10-year periods from all 14 climate models’ 20th Century runs we analyzed (click to see the full res. version):

As you can see, the observations of the Earth (in blue, CERES radiative energy budget versus HadCRUT3 surface temperature variations) are outside the range of climate model behavior, at least over the span of time lags we believe are most related to feedbacks, which in turn determine the sensitivity of the climate system to increasing greenhouse gas concentrations. (See Lindzen & Choi, 2011 for more about time lags).

Now, at ZERO time lag, there are a few decades from a few models (less than 10% of them) which exceed the satellite measurements. So, would you then say that the satellite measurements are “not inconsistent” with the models? I wouldn’t.

Especially since the IPCC’s best estimate of future warming (about 3 deg. C) from a doubling of atmospheric CO2 is almost exactly the AVERAGE response of ALL of the climate models. Note that the average of all 140 model decades (dashed black line in the above graph) is pretty darn far from the satellite data.

So, even with all 140 cherries picked, we still see evidence there is something wrong with the IPCC models in general. And I believe the problem is they are too sensitive, and thus are predicting too much future global warming.

An Open Letter of Encouragement to Dr. Dessler

September 14th, 2011

Since I keep getting asked about the “latest” on the ongoing debate over clouds and feedback diagnosis between myself and Andy Dessler, I decided that this would be the best way to handle it under the current circumstances:


An Open Letter of Encouragement to Dr. Dessler

Dear Andy:

Thank you for the issues you have raised in your new paper, which I was only recently made aware of after it had already been peer reviewed and accepted for publication in Geophysical Research Letters.

Even though we disagree on the subject, I am pleased you have chosen to vigorously dispute the potential role of clouds, both in confounding the diagnosis of the sensitivity of the climate system and in contributing to climate variability and climate change.

I just wanted to encourage you to publish that paper as soon as you can, with or without the changes I suggested on my blog.

I am very sincere in my encouragement. I am anxious for the science to progress on this important issue, and so I eagerly await the official publication.

All the best,

-Roy W. Spencer

The Good, The Bad, and The Ugly: My Initial Comments on the New Dessler 2011 Study

September 7th, 2011

UPDATE: I have been contacted by Andy Dessler, who is now examining my calculations, and we are working to resolve a remaining difference there. Also, apparently his paper has not been officially published, and so he says he will change the galley proofs as a result of my blog post; here is his message:

“I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”

Update #2 (Sept. 8, 2011): I have made several updates as a result of correspondence with Dessler, which will appear underlined, below. I will leave it to the reader to decide whether it was our Remote Sensing paper that should not have passed peer review (as Trenberth has alleged), or Dessler’s paper meant to refute our paper.

NOTE: This post is important, so I’m going to sticky it at the top for quite a while.

While we have had only one day to examine Andy Dessler’s new paper in GRL, I do have some initial reaction and calculations to share. At this point, it looks quite likely we will be responding to it with our own journal submission… although I doubt we will get the fast-track, red carpet treatment he got.

There are a few positive things in this new paper which make me feel like we are at least beginning to talk the same language in this debate (part of The Good). But, I believe I can already demonstrate some of The Bad, for example, showing Dessler is off by about a factor of 10 in one of his central calculations.

Finally, Dessler must be called out on The Ugly things he put in the paper (which he has now agreed to change).

1. THE GOOD

Estimating the Errors in Climate Feedback Diagnosis from Satellite Data

We are pleased that Dessler now accepts that there is at least the *potential* of a problem in diagnosing radiative feedbacks in the climate system *if* non-feedback cloud variations were to cause temperature variations. It looks like he understands the simple-forcing-feedback equation we used to address the issue (some quibbles over the equation terms aside), as well as the ratio we introduced to estimate the level of contamination of feedback estimates. This is indeed progress.

He adds a new way to estimate that ratio, and gets a number which — if accurate — would indeed suggest little contamination of feedback estimates from satellite data. This is very useful, because we can now talk about numbers and how good various estimates are, rather than responding to hand waving arguments over whether “clouds cause El Nino” or other red herrings.

I believe, though, that I have good evidence his calculation is off by a factor of 10 or so. More on that under THE BAD, below.

Comparisons of Satellite Measurements to Climate Models

Figure 2 in his paper, we believe, helps make our point for us: there is a substantial difference between the satellite measurements and the climate models. He tries to minimize the discrepancy by putting 2-sigma error bounds on the plots and claiming the satellite data are not necessarily inconsistent with the models.

But this is NOT the same as saying the satellite data SUPPORT the models. After all, the IPCC's best estimate of future warming from a doubling of CO2 (3 deg. C) is almost exactly the average of all of the models' sensitivities! So, when the satellite observations do depart substantially from the average behavior of the models, this raises an obvious red flag.

Massive changes in the global economy based upon energy policy are not going to happen, if the best the modelers can do is claim that our observations of the climate system are not necessarily inconsistent with the models.

(BTW, a plot of all of the models, which so many people have been clamoring for, will be provided in The Ugly, below.)

2. THE BAD

The Energy Budget Estimate of How Much Clouds Cause Temperature Change

While I believe he gets a “bad” number, this is the most interesting and most useful part of Dessler's paper. He basically uses the terms in the forcing-feedback equation we use (which is based upon basic energy budget considerations) to claim that the energy required to cause the observed changes in the global-average ocean mixed layer temperature is far too large to be caused by the satellite-observed variations in the radiative input into the ocean brought about by cloud variations (my wording).

He gets a ratio of about 20:1 for non-radiatively forced (i.e. non-cloud) temperature changes versus radiatively (mostly cloud) forced variations. If that 20:1 number is indeed good, then we would have to agree this is strong evidence against our view that a significant part of temperature variations is radiatively forced. (It looks like Andy will be revising this downward, although it's not clear by how much, because his paper is ambiguous about how he computed and then combined the radiative terms in the equation below.)

But the numbers he uses to do this are quite suspect. Dessler uses NONE of the 3 most direct estimates that most researchers would use for the various terms. (A clarification on this appears below.) Why? I know we won't be so crass as to claim in our next peer-reviewed publication (as he did in his, see The Ugly, below) that he picked certain datasets because they best supported his hypothesis.

The following graphic shows the relevant equation, and the numbers he should have used since they are the best and most direct observational estimates we have of the pertinent quantities. I invite the more technically inclined to examine this. For those geeks with calculators following along at home, you can run the numbers yourself:

Here I went ahead and used Dessler’s assumed 100 meter depth for the ocean mixed layer, rather than the 25 meter depth we used in our last paper. (It now appears that Dessler will be using a 700 m depth, a number which was not mentioned in his preprint. I invite you to read his preprint and decide whether he is now changing from 100 m to 700 m as a result of issues I have raised here. It really is not obvious from his paper what he used).
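Since the assumed layer depth enters the calculation only through the layer's heat capacity, it is worth seeing just how much that assumption matters. A quick sketch of my own, using approximate seawater constants:

```python
RHO_SEAWATER = 1030.0    # density of seawater, kg m-3 (approximate)
CP_SEAWATER  = 4000.0    # specific heat of seawater, J kg-1 K-1 (approximate)

def heat_capacity_per_area(depth_m):
    """Heat capacity per unit area (J m-2 K-1) of an ocean layer of the given depth."""
    return RHO_SEAWATER * CP_SEAWATER * depth_m

for depth in (25, 100, 700):
    print(f"{depth:4d} m layer: C = {heat_capacity_per_area(depth):.1e} J m-2 K-1")
```

The energy required to produce a given temperature change scales linearly with that heat capacity, so moving from a 100 m layer to a 700 m layer raises the "energy required" side of the ratio by a factor of 7.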

Using the above equation, if I assumed a feedback parameter λ=3 Watts per sq. meter per degree, that 20:1 ratio Dessler gets becomes 2.2:1. If I use a feedback parameter of λ=6, then the ratio becomes 1.7:1. This is basically an order of magnitude difference from his calculation.

Again I ask: why did Dessler choose NOT to use the 3 most obvious and best sources of data to evaluate the terms in the above equation?

(1) Levitus for observed changes in the ocean mixed layer temperature (it now appears he will be using a number consistent with the Levitus 0-700 m layer);

(2) CERES Net radiative flux for the total of the 2 radiative terms in the above equation (this looks like it could be a minor source of difference, except it appears he put all of his Rcld variability in the radiative forcing term, which he claims helps our position; running the numbers reveals the opposite is true, since his Rcld actually contains both forcing and feedback components which partially offset each other);

(3) HadSST for sea surface temperature variations (this will likely be the smallest source of difference).

The Use of AMIP Models to Claim our Lag Correlations Were Spurious

I will admit, this was pretty clever…but at this early stage I believe it is a red herring.

Dessler’s Fig. 1 shows lag correlation coefficients that, I admit, do look kind of like the ones we got from satellite (and CMIP climate model) data. The claim is that since the AMIP model runs do not allow clouds to cause surface temperature changes, this means the lag correlation structures we published are not evidence of clouds causing temperature change.

Following are the first two objections which immediately come to my mind:

1) Imagine (I’m again talking mostly to you geeks out there) a time series of temperature represented by a sine wave, and then a lagged feedback response represented by another sine wave. If you then calculate regression coefficients between those 2 time series at different time leads and lags (try this in Excel if you want), you will indeed get the kind of lag correlation structure we see in the satellite data. (A sketch of that calculation appears at the end of these objections.)

But look at what Dessler has done: he has used models which DO NOT ALLOW cloud changes to affect temperature, in order to support his case that cloud changes do not affect temperature! While I will have to think about this some more, it smacks of circular reasoning. He could have more easily demonstrated it with my 2 sine waves example.

Assuming there is causation in only one direction to produce evidence there is causation in only one direction seems, at best, a little weak.

2) In the process, though, what does his Fig. 1 show that is significant to feedback diagnosis, if we accept that all of the radiative variations are, as Dessler claims, feedback-induced? Exactly what the new paper by Lindzen and Choi (2011) explores: that there is some evidence of a lagged response of radiative feedback to a temperature change.

And, if this is the case, then why isn’t Dr. Dessler doing his regression-based estimates of feedback at the time lag of maximum response, as Lindzen now advocates?

Steve McIntyre, to whom I have provided the data, is also examining this as one of several statistical issues. So, Dessler's Fig. 1 actually raises a critical issue in feedback diagnosis that he has yet to address.
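Here is a sketch of the two-sine-wave example mentioned under objection 1. It is my own toy construction (arbitrary period, lag, and feedback parameter), not anyone's published calculation:

```python
import numpy as np

months = np.arange(240)              # 20 years of monthly time steps
period = 48.0                        # arbitrary 4-year oscillation, months
lam    = 2.0                         # assumed feedback parameter, W m-2 K-1
lag0   = 3                           # assumed lag of the feedback response, months

temp = np.sin(2 * np.pi * months / period)   # "temperature" anomaly (K)
flux = lam * np.roll(temp, lag0)             # "radiative feedback" responding with a lag (W m-2)

for lag in range(-9, 10):                    # positive lag = flux lags temperature
    slope = np.polyfit(temp, np.roll(flux, -lag), 1)[0]
    print(f"lag {lag:+3d} months: regression slope = {slope:+.2f} W m-2 K-1")
```

The regression slope peaks at the lag of the feedback response and falls off on either side, which is the kind of lead-lag structure under discussion.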

3. THE UGLY

(MOST, IF NOT ALL, OF THESE OBJECTIONS WILL BE ADDRESSED IN DESSLER’S UPDATE OF HIS PAPER BEFORE PUBLICATION)

The new paper contains a few statements which the reviewers should not have allowed to be published because they either completely misrepresent our position, or accuse us of cherry picking (which is easy to disprove).

Misrepresentation of Our Position

Quoting Dessler’s paper, from the Introduction:

“Introduction
The usual way to think about clouds in the climate system is that they are a feedback… …In recent papers, Lindzen and Choi [2011] and Spencer and Braswell [2011] have argued that reality is reversed: clouds are the cause of, and not a feedback on, changes in surface temperature. If this claim is correct, then significant revisions to climate science may be required.”

But we have never claimed anything like “clouds are the cause of, and not a feedback on, changes in surface temperature”! We claim causation works in BOTH directions, not just one direction (feedback) as he claims. Dr. Dessler knows this very well, and I would like to know:

1) what he was trying to accomplish by such a blatant misrepresentation of our position, and

2) how did all of the peer reviewers of the paper, who (if they are competent) should be familiar with our work, allow such a statement to stand?

Cherry picking of the Climate Models We Used for Comparison

This claim has been floating around the blogosphere ever since our paper was published. To quote Dessler:

“SB11 analyzed 14 models, but they plotted only six models and the particular observational data set that provided maximum support for their hypothesis. “

How is picking the 3 most sensitive models AND the 3 least sensitive models going to “provide maximum support for (our) hypothesis”? If I had picked ONLY the 3 most sensitive, or ONLY the 3 least sensitive, that might be cherry picking…depending upon what was being demonstrated.

And where is the evidence those 6 models produce the best support for our hypothesis? I would have had to run hundreds of combinations of the 14 models to accomplish that. Is that what Dr. Dessler is accusing us of?

Instead, the point of using the 3 most sensitive and 3 least sensitive models was to emphasize that not only are the most sensitive climate models inconsistent with the observations, so are the least sensitive models.

Remember, the IPCC’s best estimate of 3 deg. C warming is almost exactly the warming produced by averaging the full range of its models’ sensitivities together. The satellite data depart substantially from that. I think inspection of Dessler’s Fig. 2 supports my point.

But, since so many people are wondering about the 8 models I left out, here are all 14 of the models’ separate results, in their full, individual glory:

I STILL claim there is a large discrepancy between the satellite observations and the behavior of the models.

CONCLUSION

These are my comments and views after having had the new paper for only one day. It will take weeks, at a minimum, to further explore all of the issues raised by Dessler (2011).

Based upon the evidence above, I would say we are indeed going to respond with a journal submission to answer Dessler’s claims. I hope that GRL will offer us as rapid a turnaround as Dessler got in the peer review process. Feel free to take bets on that. 🙂

And, to end on a little lighter note, we were quite surprised to see this statement in Dessler’s paper in the Conclusions (italics are mine):

“These calculations show that clouds did not cause significant climate change over the last decade (over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming).”

Long term climate change can be caused by clouds??! Well, maybe Andy is finally seeing the light! 😉 (Nope. It turns out he meant ” *RADIATIVE FEEDBACK DUE TO* clouds can indeed cause significant warming”. An obvious, minor typo. My bad.)

Dessler vs. Rick Perry: Is the 2011 Texas Drought Evidence of Human-Caused Climate Change?

September 5th, 2011

One of the most annoying things about the climate change debate is that any regional weather event is blamed on humans, if even only partly. Such unscientific claims cannot be supported by data — they are little more than ambiguous statements of faith.

The current “exceptional” Texas drought is no exception. People seem to have short memories…especially if they were born after most of the major climate events of the past occurred.

Andy Dessler recently made what I’m sure he thought was a safe claim when faulting Texas Gov. Rick Perry for being “cavalier about climate change” (as if we could stop climate from changing by being concerned about it).

Dessler said, “..warming has almost certainly made the (Texas) heat wave and drought more extreme than it would otherwise have been.”

This clever tactic of claiming near-certainty of at least SOME effect of humans on weather events was originally invented by NASA’s James Hansen in his 1988 Senate testimony for Al Gore, an event that became the turning point for raising public awareness of “global warming” (oops, I’m sorry, I mean climate change).

The trouble is that climate change theory predicts changes, up and down, in just about anything you can imagine. So, anything unusual that happens anywhere, anytime, is deemed “consistent” with global warming.

But this tactic can work both ways — a specific drought might have instead been made LESS severe by the general tendency toward MORE rainfall, which is a much more robust prediction of the climate models with warming.

For example, let’s look at June-July total rainfall over the whole contiguous U.S. — which is only 1.6% of the Earth’s surface — over the last 100+ years (August data are not yet posted at NCDC):

What we see are some major drought events, and 2011 is not one of the big ones. The Big Kahuna was the Dust Bowl of the 1930s. The 1950s also experienced record droughts (see the animation here). These were before increasing CO2 in the atmosphere could be reasonably blamed for anything, except maybe enhancing plant growth a little.

Even NOAA’s Tom Karl back in 1981, before global warming politics took over his job description at NCDC, authored a paper on how the 1980 drought (which was pretty darn bad, long-lived, and widespread) was less severe than those in the 1930s and 1950s:

Note the price tag of the 1980 drought: $43 Billion. They are saying the current TX-OK drought will run somewhere north of $5 Billion.

But what do we ALSO see in the long term in the above U.S. rainfall plot? If anything, an UPWARD trend in rainfall. This is for the whole U.S., not just Texas….

Yes, I Know Texas is Like a Whole Other Country…

…but it is only 0.14% of the surface of the Earth. It is much easier for naturally-occurring stagnant weather patterns to cause drought (or flood) conditions over one or even several states, because the descending (or ascending) portions of weather systems cover these smaller regional scales. It's rare for them to cover the whole U.S.

So, now let’s look at what the rainfall record looks like for Texas:

Even though the August data are not yet available at this writing, I'm quite sure this year's Texas drought will indeed be a record one….at least in the rather short (in climate parlance) period of just over 100 years for which we have enough rainfall data to analyze.

But what else do we see in the record? How about that big rainfall PEAK in 2007? I’ll bet someone can dig up an “expert” back in 2007 saying the Texas floods of 2007 were also caused by global warming.

And note that the long-term rainfall trend in Texas is not downward.

Surely, Dr. Dessler knows that a single data point (2011) does not constitute a “trend”.

The fact is that record dry (and wet) years in relatively small regions are actually quite common…because they usually happen in different places each year. Weather records are location-specific. This year is Texas’ super-drought year. Last year it was in part of Ukraine. Next year it will be somewhere else, maybe multiple places.

And even if droughts (or floods) do end up becoming more frequent, the question of just how much of that change can be blamed on humans versus Mother Nature still remains unanswered…unless you accept the pseudo-scientific faith-based statements put out by the IPCC leadership.

More Thoughts on the War Being Waged Against Us

September 5th, 2011

After having a day or so to digest some of what others have said about this whole mess, I’ve been trying to find better ways of expressing the science which is being disputed here. I’ve also gone back and tried to figure out exactly which part of our analysis was (supposedly) in error.

A Re-Examination of our Paper

So, first I went back and re-read our paper to find out what we did that was so seriously in error that it caused the journal's Editor-in-Chief to resign (but not retract the paper?).

My conclusion is that it is still one damn fine and convincing paper. The evidence verges on being indisputable.

Not only did our paper not ignore previous work on the subject (as Kevin Trenberth has accused us of doing); our main purpose was to show why the data analysis methods commonly used in previous work were wrong. To accuse us of ignoring previous work reveals either total ignorance or deception on the part of our critics. (Publishing a paper that “ignored previous work” was a central reason given by the Editor-in-Chief for his resignation.)

The key figures in our paper are Fig. 3 & Fig. 4. We reveal the large discrepancy between climate models and observations in how the Earth gains & loses energy to space during warming and cooling, and we show, based upon basic forcing-feedback theory, why most previous estimates of feedbacks from observational data are (1) virtually worthless, and (2) likely to have given the illusion of higher climate sensitivity than really exists in nature.

It is something we have shown before using phase space analysis.

We are told our paper will indeed be disputed this week, as Andy Dessler has hurriedly written and gotten favorable peer review on a paper in Geophysical Research Letters. (Gee, I wonder if the peer reviewers were also associated with the IPCC, whose models they are trying to protect from scrutiny?)

We Need Scientific Analysis, Not Opinion Polls of Scientists

What is particularly frustrating in all this is the lack of people who are willing to actually read our papers and examine the evidence. Most, if not all, of our critics could not even explain what we have shown with the evidence. They simply assume we must be wrong.

They instead resort to nearly libelous ad hominem attacks, and hand-waving objections which are either straw men, red herrings, or just plain false.

They claim the model we used was “bad” (even though it is commonly used in many previous studies, and recommended to us by one of the leading climate modelers in the world), and that it was “tuned” to match the data. The last claim is absolutely hilarious, since the more complex climate models they use are constantly being re-tuned by small armies of scientists in efforts to get them to better agree with the observed behavior of the climate system.

Our critics then repeat each others’ talking points to the press and in blogs, and since few outsiders are willing to actually read our papers, the public resorts to simply accepting opinions they hear through the various media outlets.

Where Have All the Real Scientists Gone?

The basic issue we research is not that difficult to understand. And unless a few of you physicist-types out there get involved and provide some truly independent analysis of all this, the few of us out here who are revealing why the IPCC climate models being used to predict global warming are nowhere close to having been “validated” are going to lose this battle.

We simply cannot compete with a good-ole-boy, group think, circle-the-wagons peer review process which has been rewarded with billions of research dollars to support certain policy outcomes.

It is obvious to many people what is going on behind the scenes. The next IPCC report (AR5) is now in preparation, and there is a gut-busting effort going on to make sure that either (1) no scientific papers get published which could get in the way of the IPCC's politically-motivated goals, or (2) any critical papers that DO get published are discredited with any and all means available.

We are constantly expected to meet a higher standard than our critics hold themselves to when it comes to getting research proposals funded, or getting research results published. This war was going on many years before the ClimateGate e-mails were leaked and revealed the central players' active interference in the peer review process. We seldom complained about this professional bias against us because it ends up sounding like sour grapes.

But when we are actively being accused of what the other side is guilty of, I will not stay silent.

And (BTW) we get no funding from Big Oil or other private energy interests. Another urban legend.

I hate to say it, but we need some sharper tools in our shed than we have right now. And the fresh eyes we need cannot have the threat of a loss of government funding hanging over their heads if what they find happens to disagree with Al Gore, James Hansen, et al.