Too Much Cotton Leads to Meltdown

August 27th, 2015

I try to be patient with folks. Give them the benefit of the doubt. I have allowed just about any kind of comments to be posted (and remain posted) on this blog in the name of free and open debate.

But I am beginning to fear for the emotional health of myself and others.

One of our avid commenters from down under, Doug Cotton, has overstayed his welcome. Doug has a maddening adeptness at pushing his own view of the physical universe, erecting strawman arguments and challenges faster than a random number generator on a supercomputer. He belittles others who do not agree with him.

He is on continual output mode, impervious to reason, shedding physical laws like water off a platypus’s back.

Doug has made up many email addresses and names, and has posted from many IP addresses as he circumvents bloggers’ attempts to restrict him. He has already been banned from most climate blogs. A humorous post by Anthony Watts over a year ago (A Critical Mass of Cotton) will give you some idea of what we have had to put up with over the years.

I have tried to restrict him, but he keeps returning. I have automated restrictions on dozens of screen names, email addresses, and IP addresses.

Yes, Doug, I realize that we are a bunch of dolts who have not been ordained with the secrets of the universe the way that you have been. Maybe you should put your efforts into your own blog, and let your followers congregate there for your sermons on how gravity explains temperature.

Now, lest some people fear this will be the end of Doug on Dr. Roy’s site, fear not: Doug will not be going away…

What WILL be happening is I will be a little more proactive about deleting every comment I see from Doug. Since he sometimes uses fake names, I won’t always be successful. So, you should stay alert for his nuggets of wisdom before they disappear.

After all, I have nothing better to do with my time.

New Evidence Regarding Tropical Water Vapor Feedback, Lindzen’s Iris Effect, and the Missing Hotspot

August 17th, 2015

As part of a DOE grant we are testing climate models against satellite observations, particularly regarding the missing “hotspot” in the tropics, that is, the expected region of enhanced warming in the tropical mid- and upper troposphere as the surface warms. Since 1979 (the satellite period of record), it appears that warming in those upper layers has been almost non-existent, despite some surface warming and increases in low-level humidity.
For years I have claimed that the missing hotspot could be evidence of neutral or even negative water vapor feedback, which would also help explain weaker than expected surface warming.

Climate modelers are all but certain that water vapor feedback is positive. I have discussed elsewhere (e.g. here) how that might not be the case, even as lower atmospheric water vapor increases, and it’s related to how precipitation efficiency might change with warming leading to drying of the troposphere above the boundary layer. This is also part of Lindzen’s “Iris Effect”. While water vapor at the lowest altitudes over the ocean is strongly tied to surface temperature, free-tropospheric humidity is controlled by precipitation microphysics, and we have little information on how that changes with warming.

So, I’ll get right to the subject of this post. We have analyzed 11 years of water vapor channel (183.3 GHz) data from the AMSU-B instrument on the NOAA-18 satellite, and compared it to the mid-tropospheric temperature data from AMSU channel 5 (the “MT” channel). Specifically, we computed monthly gridpoint anomalies in all channels over the 11 year period, and regressed the 183.3 GHz brightness temperature (Tb) anomalies against the channel 5 Tb anomalies. This should give information on how much the free troposphere moistens or dries when it changes temperature.
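
(For readers who want to reproduce the basic step, here is a rough Python sketch of the gridpoint anomaly-and-regression calculation. It is only a sketch under simplifying assumptions: the gridded brightness temperature arrays and their shapes are hypothetical stand-ins for our actual AMSU processing.)

```python
import numpy as np

def monthly_anomalies(tb):
    """Remove the mean annual cycle at each gridpoint.
    tb: (n_months, n_lat, n_lon) monthly brightness temperatures,
    with n_months a whole number of years."""
    n_years = tb.shape[0] // 12
    clim = tb.reshape(n_years, 12, *tb.shape[1:]).mean(axis=0)  # 12-month climatology
    return tb - np.tile(clim, (n_years, 1, 1))

def gridpoint_regression(tb_humidity, tb_ch5):
    """Slope of humidity-channel (183.3 GHz) Tb anomalies regressed on
    AMSU ch. 5 Tb anomalies, computed independently at every gridpoint."""
    x = monthly_anomalies(tb_ch5)
    y = monthly_anomalies(tb_humidity)
    cov = (x * y).mean(axis=0) - x.mean(axis=0) * y.mean(axis=0)
    return cov / x.var(axis=0)   # ordinary least-squares slope, deg/deg

# Hypothetical usage: tb_ch18 and tb_ch5 are gridded monthly means
# covering the analysis period (one map per month):
# slopes = gridpoint_regression(tb_ch18, tb_ch5)
```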

The following image shows the gridpoint regression coefficients for the monthly anomalies during June 2005 through May 2015:

Fig. 1. Gridpoint regression coefficients between the NOAA-18 AMSU-B 183.3 GHz channels Tb and AMSU-A channel 5 Tb during June 2005 through May 2015. Ch. 18 is 183.3+/-1 GHz, generally peaking in the upper troposphere; ch. 19 is 183.3+/-3 GHz, peaking in the upper-mid troposphere; and ch. 20 is 183.3+/-7 GHz, peaking in the lower-mid troposphere.

Yellow to red colors are where absolute humidity decreases with warming; green is humidity increasing to roughly maintain constant RH, and blue is where humidity increases even more than constant RH. The signal of El Nino/La Nina is clear over the Pacific Ocean, where the features represent a regional rearrangement of deep convection (upward motion) and subsidence (sinking motion) patterns.

But what really matters for water vapor feedback is the net effect of these patterns…how they average together. The following graph (left panel) shows latitude-band averages of the gridpoint regression coefficients in the above imagery, while the right panel shows the same computations for 15 years (2006-2020) of output from the GFDL ESM2M climate model:

Fig. 2. Zonal averages of the patterns seen in Fig. 1 (left panel), and similar computations made from the GFDL ESM2M climate model (right panel).

The vertical dashed lines in Fig. 2 are based upon computations made from the AFGL tropical and mid-latitude radiosonde profile data; values of about 0.2 correspond to constant relative humidity (RH) with warming, while values of ~1.2 correspond to constant specific humidity, q (no water vapor increase). Values over 1.2 would be water vapor (q) actually decreasing with warming, and potentially indicative of negative water vapor feedback.
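
(As a rough reading guide for Figs. 1 and 2, those two reference values can be turned into a simple classification. The cutoffs below are just the approximate radiosonde-based numbers quoted above, not precise boundaries.)

```python
def interpret_slope(slope, const_rh=0.2, const_q=1.2):
    """Rough interpretation of a 183.3 GHz vs. ch. 5 regression coefficient,
    using the approximate AFGL-based reference values quoted in the text."""
    if slope < const_rh:
        return "humidity increasing faster than constant RH (blue)"
    elif slope < const_q:
        return "humidity increasing, but RH roughly constant or falling (green)"
    else:
        return "absolute humidity (q) decreasing with warming (yellow to red)"
```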

Note that in the tropical observations portion of the left panel in Fig. 2, all three 183.3 GHz channels (corresponding to different free-tropospheric layers) suggest decreasing water vapor with warming. (I don’t know how cirrus clouds might also be affected, but Lindzen has argued as part of his Iris Effect hypothesis that vapor and cirrus cloud cover should change together, and the 183.3 GHz data are affected somewhat by thick cirrus).
The mid-latitudes seem to be mostly in the realm of positive water vapor feedback, although maybe not constant RH (which is what the models tend to do). It would take more work to determine just what these extratropical humidity channel changes really mean in terms of broadband infrared radiative feedback.

Comparison of these same metrics to CMIP5 climate model data has been slow, since the necessary humidity and temperature profile data have been unavailable from the CMIP5 archive for months. Nevertheless, we were able to download data for two GFDL models (from the GFDL website), and I’m showing one of those in the right panel above, where we used a radiative transfer model to compute the same satellite microwave channels from the model temperature and humidity profiles. Note that in the tropics (say, 25N to 25S) the model tends to keep approximately constant RH when all those latitude bands are taken together.

This is pretty typical behavior for climate models, which are tuned to act this way. The models don’t actually contain the necessary precipitation microphysics, something even their convective parameterizations can’t fix because we really don’t know how detrainment from convection changes with warming anyway. In other words, you can’t parameterize something that you don’t even understand and can’t measure.

One curious clue from the above plots of models versus observations is how the three 183.3 GHz channel curves separate in the tropical observations, but not in the model. This would occur if convection detrains at higher altitudes with warming, with the mid-tropospheric humidity getting depressed even more as that very dry air descends from aloft, while mid-tropospheric detrainment and mixing from convection into the surrounding environmental air decreases.

Presumably, the primary source of variability in the observations is El Nino/La Nina (ENSO), which many climate models do not mimic very well. But the GFDL model we chose to compare to in Fig. 2 also produces very strong ENSO activity, so we think this is a pretty valid comparison between a model and observations.

This is all very preliminary, and we await the CMIP5 archive coming back online again late this month so that we can analyze more models. But if this discrepancy between models and observations holds across most or all models, we might have some important insight into how the models might not be accounting for increasing precipitation efficiency during warming, and in turn why the hotspot hasn’t developed… and why global warming in general is weaker than programmed into the climate models.

Perseid Meteor Shower was Slow, but Colorful

August 13th, 2015

Last night I took several hundred photos between 11:30 pm and 4 am trying to catch some Perseid meteors. I only got about 15 bright ones, so I’d say this was not a good year for the Perseids.

But almost every one started out with a blue-green trail as they burned up, for example this one near the Andromeda galaxy (the fuzzy area to the lower right, click image for full-size):

Perseid meteor near the Andromeda galaxy, August 13, 2015. Canon 6D with 16-35 f/4 lens wide open at 16mm, 30 sec exp, ISO800.

I’ve seen the blue-green color explained as copper, magnesium, iron, or nickel, so I have no idea what to believe. Expert opinion welcome.

Spencer on Varney & Co Talking Obama’s Clean Power Plan

August 11th, 2015

Stuart Varney interviewed me live this morning during Varney & Co on Fox Business…I did not know the specific questions he would ask, so I was kind of winging it:


Ice Amazingly Persists in Hudson and James Bay

August 9th, 2015

While the world frets over global warming, sea ice amazingly persists as far south as James Bay in Canada–not much farther north than Maine–as seen in this NASA color satellite image of swirling ice patterns from yesterday, August 8, 2015 (click for full size):

NASA MODIS image of sea ice persisting as far south as James Bay (Canada) on 8 August 2015.

Two weeks ago it was reported that the worst mid-summer ice conditions in 20 years were preventing the routine delivery of supplies by ship in eastern Hudson Bay, and a Canadian icebreaker had to be called in to help.

Spencer on Stossel’s “Science Wars”

August 5th, 2015

My most recent appearance on Stossel is now available on YouTube, where I had the opportunity to share my opinions of those great global warming experts Bill Nye (The Science Guy™), Neil deGrasse Tyson (The Anti-Pluto Guy), and Al Gore (The Politician-Turned-Alarmist Guy).

I always wanted to be bleeped on national TV. :-)

Color Satellite Shows CA Wildfire Smoke Spreading Over Pacific

August 4th, 2015

The wildfires north of San Francisco are far from contained today, with over 90 sq. miles torched, 9,000 firefighters battling the blazes, and 13,000 people ordered evacuated from their homes.

Yesterday afternoon this NASA satellite color image showed the locations of satellite-observed hotspots (red dots) and smoke spreading westward out over the Pacific Ocean (click for full-size):

NASA color imagery from the Aqua satellite showing widespread wildfires over Northern California (remapped into Google Earth).

UAH V6.0 Global Temperature Update for July 2015: +0.18 C

August 3rd, 2015

NOTE: This is the fourth monthly update with our new Version 6.0 dataset. Differences versus the old Version 5.6 dataset are discussed here.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for July, 2015 is +0.18 deg. C, down considerably from the June, 2015 value of +0.33 deg. C (click for full size version):
UAH V6.0 global average lower tropospheric temperature anomalies, 1979 through July 2015.

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 7 months are:

YR MO GLOBE NH SH TROPICS
2015 1 +0.28 +0.40 +0.16 +0.13
2015 2 +0.18 +0.30 +0.05 -0.06
2015 3 +0.17 +0.26 +0.07 +0.05
2015 4 +0.09 +0.18 -0.01 +0.10
2015 5 +0.29 +0.36 +0.21 +0.28
2015 6 +0.33 +0.41 +0.25 +0.46
2015 7 +0.18 +0.33 +0.03 +0.48

Strong July cooling occurred in the Southern Hemisphere extratropics, with a weak drop in the Northern Hemisphere extratropics. The tropics continue to warm with El Nino conditions there.

The global image for July, 2015 should be available in the next several days here.

The new Version 6 files (use the ones labeled “beta2”) should be updated soon, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tmt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/ttp
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tls

15 Years of CERES Versus Surface Temperature: Climate Sensitivity = 1.3 deg. C

July 20th, 2015

The NASA CERES project has updated their EBAF-TOA Edition 2.8 radiative flux dataset through March of 2015, which now extends the global CERES record to just over 15 years (since March 2000, starting with NASA’s Terra satellite). This allows us to get an update of how the radiative budget of the Earth responds to surface temperature variations, which is what determines climate sensitivity and thus how much warming (and associated climate change) we can expect from a given amount of radiative forcing (assuming the forcing-feedback paradigm is sufficiently valid for the climate system).

For those who are familiar with my work, I have a strong (and published) opinion on estimating feedback from observed variations in global radiative flux and surface temperature. Dick Lindzen and his co-authors have published on the same issue, and agree with me:

Specifically,

Time-varying radiative forcing in the climate system (e.g. due to increasing CO2, volcanic eruptions, and natural cloud variations) corrupts the determination of radiative feedback.

This is the “cause-versus-effect” issue I have been harping on for years, and discussed extensively in my book, The Great Global Warming Blunder. It is almost trivially simple to demonstrate (e.g. published here, despite the resignation of that journal’s editor [forced by Kevin Trenberth?] for allowing such a sacrilegious thing to be published).

It is also the reason why the diagnosis of feedbacks from the CMIP5 climate models is done using one of two methods that lie outside the normal running of those models: either (1) running with an instantaneous and constant large radiative forcing (4XCO2), so that the resulting radiative changes are almost all feedback in response to the substantial temperature change caused by the (constant) radiative forcing; or (2) running a model with a fixed and elevated surface temperature to measure how much the radiative budget of the modeled climate system changes (less optimal, because it is not a radiative forcing like global warming, and the resulting model changes are not allowed to alter the surface temperature).
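
(As an aside for those unfamiliar with method (1): it is usually implemented as a regression of the model’s annual-mean TOA imbalance against its warming in the abrupt-4xCO2 run, the so-called Gregory regression. A minimal sketch, with hypothetical variable names, assuming you already have the two annual global-mean series:)

```python
import numpy as np

def gregory_ecs(toa_imbalance, temp_change):
    """Gregory-style diagnosis from an abrupt-4xCO2 run: regress annual-mean
    TOA net flux imbalance (W/m2, downward positive) on surface warming (deg. C).
    The slope is -lambda and the x-intercept is the equilibrium 4xCO2 warming;
    divide by two for a CO2 doubling (logarithmic forcing)."""
    slope, intercept = np.polyfit(temp_change, toa_imbalance, 1)
    lam = -slope                      # feedback parameter, W/m2 per deg. C
    warming_4x = intercept / lam      # temperature where the imbalance reaches zero
    return lam, warming_4x / 2.0      # (lambda, ECS for 2xCO2)

# Hypothetical usage with annual global means from an abrupt4xCO2 experiment:
# lam, ecs = gregory_ecs(net_flux_anom, tas_anom)
```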

If you try to do it with any climate model in its normal operating mode (which has time-varying radiative forcing), you will almost always get an underestimate of the real feedback operating in the model (and thus an over-estimate of climate sensitivity). We showed this in our Remote Sensing paper. So why would anyone expect anything different using data from the real climate system, as (for example) Andy Dessler has done for cloud feedbacks?

(It is possible *IF* you know the time history of the radiative forcing imposed upon the model, and subtract it out from the model radiative fluxes. That information was not archived for CMIP3, and I don’t know whether it is archived for the CMIP5 model runs).

But what we have in the real climate system is some unknown mixture of radiative forcing(s) and feedback — with the non-feedback radiative variations de-correlating the relationship between radiative feedback and temperature. Thus, diagnosing feedback by comparing observed radiative flux variations to observed surface temperature variations is error-prone…and usually in the direction of high climate sensitivity. (This is because “radiative forcing noise” in the data pushes the regression slope toward zero, which would erroneously indicate a borderline unstable climate system.)
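
(This is easy to demonstrate with synthetic data. The toy model below has a known feedback parameter, but because part of its temperature variability is driven by radiative forcing noise, a simple regression of measured net flux on temperature recovers too small a feedback, i.e. too high a sensitivity. All numbers are illustrative only, not a reproduction of our published runs.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 1200
dt = 30.4375 * 86400.0      # one month, seconds
heat_cap = 4.2e6 * 50.0     # 50 m mixed layer, J/m2 per deg. C
lam_true = 3.0              # true feedback parameter, W/m2 per deg. C

def red_noise(n, amp, r=0.9):
    """Simple AR(1) series to mimic slowly varying forcing."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = r * x[i - 1] + amp * rng.standard_normal()
    return x

rad_forcing = red_noise(n_months, 1.0)      # radiative forcing noise (e.g. clouds), W/m2
nonrad_forcing = red_noise(n_months, 1.0)   # non-radiative forcing (ENSO-like), W/m2

temp = np.zeros(n_months)
for i in range(1, n_months):
    net = nonrad_forcing[i] + rad_forcing[i] - lam_true * temp[i - 1]
    temp[i] = temp[i - 1] + net * dt / heat_cap

measured_flux = rad_forcing - lam_true * temp   # what a satellite would see (downward positive)
slope = np.polyfit(temp, measured_flux, 1)[0]
print("true lambda:", lam_true, "| regression estimate:", -slope)
# The recovered lambda is typically well below 3.0, i.e. diagnosed sensitivity too high.
```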

What is necessary is to have non-radiative forced variations in global-average surface temperature sufficiently large that they partly overcome the noise in the data. The largest single source of this non-radiative forcing is El Nino/La Nina, which correspond to a global-average weakening/strengthening of the overturning of the ocean.

It turns out that beating down noise (both measurement and geophysical) can be accomplished somewhat with time-averaging, so 3-monthly to annual averages can be used….whatever leads to the highest correlations.

Also, a time lag of 1 to 4 months is usually necessary because most of the net radiative feedback comes from the atmospheric response to a surface temperature change, which takes time to develop. Again, the optimum time lag is that which provides the highest correlation, and seems to be the longest (up to 4 months) with El Nino and La Nina events.
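
(The diagnostic itself is just a lagged regression of time-averaged anomalies. Here is a minimal sketch, assuming you have already assembled aligned monthly global-mean anomaly series for CERES net flux and HadCRUT4 temperature; note that the sign of the resulting slope depends on whether the net flux is defined as energy gained or lost by the climate system.)

```python
import numpy as np

def lagged_feedback_slope(net_flux, temperature, lag_months=4, avg_months=12):
    """Regress time-averaged net radiative flux anomalies on surface temperature
    anomalies, with the flux lagged *after* temperature by lag_months.
    Inputs: aligned monthly global-mean anomaly series (W/m2 and deg. C)."""
    flux = np.asarray(net_flux)[lag_months:]       # shift flux later in time
    temp = np.asarray(temperature)[:len(flux)]

    # block-average into avg_months chunks (12 = annual averages)
    n = (len(flux) // avg_months) * avg_months
    f = flux[:n].reshape(-1, avg_months).mean(axis=1)
    t = temp[:n].reshape(-1, avg_months).mean(axis=1)

    slope, _ = np.polyfit(t, f, 1)
    r = np.corrcoef(t, f)[0, 1]
    return slope, r     # slope in W/m2 per deg. C, and the correlation

# Hypothetical usage with monthly series covering March 2000 - March 2015:
# slope, r = lagged_feedback_slope(ceres_net_anom, hadcrut4_anom)
```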

Anyway, here is the result for 15 years of annual CERES net radiative flux variations and HadCRUT4 surface temperature variations, with the radiative flux lagged 4 months after temperature:

Fig. 1. Global, annual area averages of CERES-measured Net radiative flux variations against surface temperature variations from HadCRUT4, with a 4 month time lag to maximize correlation (flux after temperature).

Coincidentally, the 1.3 deg. C best estimate for the climate sensitivity from this graph is the same as we got with our 1D forcing-feedback-mixing climate model, and as I recently got with a simplified model that stores energy in the deep ocean at the observed rate (0.2 W/m2 average since the 1950s).

Again, the remaining radiative forcing in the 15 years of data causes decorrelation and (almost always) an underestimate of the feedback parameter (and overestimate of climate sensitivity). So, the real sensitivity might be well below 1.3 deg. C, as Lindzen believes. The inherent problem in diagnosing feedbacks from observational data is one which I am absolutely sure exists — and it is one which is largely ignored. Most of the “experts” who are part of the scientific consensus aren’t even aware of it, which shows how a small obscure issue can change our perception of how sensitive the climate system is.

This is also just one example of why hundreds (or even thousands) of “experts” agreeing on something as complex as climate change really doesn’t mean anything. It’s just group think in an echo chamber riding on a bandwagon.

Now, one can legitimately argue that the relationship in the above graph is still noisy, and so remains uncertain. But this is the most important piece of information we have to observationally determine how the real climate system responds radiatively to surface temperature changes, which then determines how big a problem global warming might be.

It’s clear that the climate models can be programmed to get just about any climate sensitivity one wants…currently covering a range of about a factor of 3! So, at some point we need to listen to what Mother Nature is telling us. And the above graph tells us that the climate system appears to be more stable than the experts believe.

New Pause-Busting Temperature Dataset Implies Only 1.5 C Climate Sensitivity

July 14th, 2015

Amid all of the debate over whether the global warming pause/hiatus exists or not, I’d like to bring people back to a central issue:

Even if it has warmed in the last 15 years, the rate of surface warming (and deep-ocean warming) we have seen in the last 50 years still implies low climate sensitivity.

I will demonstrate this with a simplified version of our 1D time-dependent energy balance model (Spencer & Braswell, 2014).

The reason why you can model global average climate system temperature variations with a simple energy balance model is that, given a certain amount of total energy accumulation or loss (in Joules) over the surface of the Earth in a certain amount of time, there will be a certain amount of warming or cooling of the depth-averaged ocean temperature. This is just a statement of energy conservation, and is non-controversial.

The rate of heat accumulation is the net of “forcing” and “feedback”, the latter of which stabilizes the climate system against runaway temperature change (yes, even on Venus). On multi-decadal time scales, we can assume without great error that the ocean is the dominant “heat accumulator” in climate change, with the land tagging along for the ride (albeit with a somewhat larger change in temperature, due to its lower effective heat capacity). The rate at which extra energy is being stored in the deep ocean has a large impact on the resulting surface temperature response.

The model feedback parameter lambda (which determines equilibrium climate sensitivity, ECS=3.8/lambda) is adjustable, and encompasses every atmospheric and surface process that changes in response to warming to affect the net loss of solar and infrared energy to outer space. (Every IPCC climate model will also have an effective lambda value, but it is the result of the myriad processes operating in the model, rather than simply specified).

Conceptually, the model looks like this:

Fig. 1. Simple time-dependent one-layer model of global oceanic average mixed layer temperature.

I have simplified the model so that, rather than having many ocean layers over which heat is diffused (as in Spencer & Braswell, 2014), there is just a top (mixed) layer that “pumps” heat downward at a rate that matches the observed increase in deep-ocean heat content over the last 50 years. This has been estimated to be 0.2 W/m2 since the 1950s increasing to maybe 0.5 W/m2 in the last 10 years.
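
(The core of the simplified model is only a few lines. Here is a rough Python sketch of the monthly time step, with a specified forcing history, feedback parameter lambda, and a constant deep-ocean pumping term; the parameter values are illustrative placeholders, not the exact ones used in the spreadsheet linked below.)

```python
import numpy as np

def run_simple_model(forcing, lam, mix_depth_m=50.0, deep_pump_wm2=0.2, t0=0.25):
    """One-layer mixed-layer energy balance model with a monthly time step.

    forcing       : monthly radiative forcing anomalies, W/m2
    lam           : net feedback parameter, W/m2 per deg. C (ECS = 3.8 / lam)
    mix_depth_m   : effective mixed-layer depth, m
    deep_pump_wm2 : constant rate at which heat is pumped into the deep ocean
    t0            : assumed initial departure from the 'balanced' state, deg. C
    """
    dt = 30.4375 * 86400.0              # average month, seconds
    heat_cap = 4.2e6 * mix_depth_m      # J/m2 per deg. C
    temp = np.empty(len(forcing))
    temp[0] = t0
    for i in range(1, len(forcing)):
        net_wm2 = forcing[i] - lam * temp[i - 1] - deep_pump_wm2
        temp[i] = temp[i - 1] + net_wm2 * dt / heat_cap
    return temp

# Illustrative usage: a linear forcing ramp reaching 2 W/m2 over 1950-2015,
# with an assumed sensitivity of 1.5 deg. C (lam = 3.8 / 1.5):
forcing = np.linspace(0.0, 2.0, 65 * 12)
temps = run_simple_model(forcing, lam=3.8 / 1.5)
print(round(temps[-1] - temps[0], 2))   # mixed-layer warming over the period, deg. C
```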

I don’t want to argue whether this deep ocean warming might not even be occurring. Nor do I want to argue whether the IPCC-assumed climate forcings are largely correct. Instead, I want to demonstrate that, even if we assume these things AND assume the new pause-busting Karlized ocean surface temperature dataset is correct, it still implies low climate sensitivity.

Testing the Model Against CMIP5

If I run the model (available in spreadsheet form here) with the same radiative forcings used by the big fancy CMIP5 models (RCP 6.0 radiative forcing scenario), I get a temperature increase that roughly matches the average of all of the CMIP5 models, for the 60N-60S ocean areas (average CMIP5 results for the global oceans from the KNMI Climate Explorer):

Fig. 2. Simple model run to match the average of all CMIP5 models under the RCP 6.0 radiative forcing scenario.

The climate sensitivity I used to get this result was just over 2.5 C for a doubling of atmospheric CO2, which is consistent with published numbers for the typical climate sensitivity of many of these models. To be consistent with the CMIP5 models, I assume in 1950 that the climate system is 0.25 C warmer than the “normal balanced climate” state. This affects the model feedback calculation (the warmer the climate is assumed to be from “normal” the greater the loss of radiant energy to space). Of course, we really don’t know what the “normal balanced” state of the real climate system is…or even if there is one.

Running the Model to Match the New Pause-Busting Temperature Dataset

Now, let’s see how we have to change the climate sensitivity to match the new Karlized ERSST v4 dataset, which reportedly did away with the global warming pause:

Fig. 3. Simple model match to new pause-busting ERSST (v4) dataset.

In this case, we see that a climate sensitivity of only 1.5 C was required, a 40% reduction in climate sensitivity. Notably, this is at the 1.5C lower limit for ECS that the IPCC claims. Thus, even in the new pause-busting dataset the warming is so weak that it implies a climate sensitivity on the verge of what the IPCC considers “very unlikely”.

Running the Model to Match the New Pause-Busting Temperature Dataset (with ENSO internal forcing)

Finally, let’s look at what happens when we put in the observed history of El Nino and La Nina events as a small radiative forcing (more incoming radiation during El Nino, more outgoing during La Nina, evidence for which was presented by Spencer & Braswell, 2014) and temporary internal energy exchanges between the mixed layer and deeper layers:

Fig. 4. As in Fig. 3, but with El Nino and La Nina variations included in the model (0.3 W/m2 per MEI unit radiative forcing, 0.4 W/m2 per MEI unit non-radiative forcing [heat exchange between mixed layer and deeper layers]).

Now we have reduced the required climate sensitivity necessary to explain the observations to only 1.3 C, which is nearly a 50% ECS reduction below the 2.5C necessary to match the CMIP5 models. This result is similar to the one achieved by Spencer & Braswell (2014).

Comments

The simplicity of the model is not a weakness, as is sometimes alleged by our detractors — it’s actually a strength. Since the simple model time step is monthly, it avoids the potential for “energy leakage” in the numerical finite difference schemes used in big models during long integrations. Great model complexity does not necessarily get you closer to the truth.

In fact, we’ve had 30 years and billions of dollars invested in a marching army of climate modelers, and yet we are no closer to tying down climate sensitivity and thus estimates of future global warming and associated climate change. The latest IPCC report (AR5) gives a range from 1.5 to 4.5 C for a doubling of CO2, not much different from what it was 30 years ago.

There should be other simple climate model investigations like what I have presented above, where basic energy balance considerations combined with specific assumptions (like the deep oceans storing heat at an average rate of 0.2 W/m2 over the last 50 years) are used to diagnose climate sensitivity by matching the model to observations.

The IPCC apparently doesn’t do this, and I consider it a travesty that they don’t. ;-)

I’ll leave it up to the reader to wonder why they don’t.