UAH Global Temperature Update for February, 2018: +0.20 deg. C

March 1st, 2018

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for February, 2018 was +0.20 deg. C, down a little from the January value of +0.26 deg. C:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed, and so has the distinction between calendar months.
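For readers who want to reproduce the smoother, a 13-month centered average is straightforward to compute. Here is a minimal sketch in Python (illustrative only, not the actual UAH processing code):

```python
import numpy as np

def centered_13mo(anomalies):
    """13-month centered moving average of a monthly anomaly series.

    Returns an array the same length as the input, with NaN in the
    first and last 6 months, where a full window is unavailable.
    """
    x = np.asarray(anomalies, dtype=float)
    out = np.full(len(x), np.nan)
    half = 6                      # 13 = 2*6 + 1, so the window is centered
    for i in range(half, len(x) - half):
        out[i] = x[i - half : i + half + 1].mean()
    return out

# Because the anomalies have the seasonal cycle removed, averaging two
# of the same calendar months at the window ends causes no bias.
smoothed = centered_13mo([0.2] * 25)
```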

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 14 months are:

YEAR MO GLOBE NH SH TROPICS
2017 01 +0.33 +0.31 +0.34 +0.10
2017 02 +0.38 +0.57 +0.19 +0.08
2017 03 +0.23 +0.36 +0.09 +0.06
2017 04 +0.27 +0.28 +0.26 +0.21
2017 05 +0.44 +0.39 +0.49 +0.41
2017 06 +0.21 +0.33 +0.10 +0.39
2017 07 +0.29 +0.30 +0.27 +0.51
2017 08 +0.41 +0.40 +0.42 +0.46
2017 09 +0.54 +0.51 +0.57 +0.54
2017 10 +0.63 +0.66 +0.59 +0.47
2017 11 +0.36 +0.33 +0.38 +0.26
2017 12 +0.41 +0.50 +0.33 +0.26
2018 01 +0.26 +0.46 +0.06 -0.12
2018 02 +0.20 +0.24 +0.15 +0.03

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through February 2018 remains at +0.13 C/decade.
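The trend quoted here is the ordinary least-squares slope through the monthly anomalies, expressed per decade. A sketch of the calculation (with a synthetic series, not the actual UAH data):

```python
import numpy as np

def trend_per_decade(anomalies):
    """OLS slope of a monthly anomaly series, in deg. C per decade."""
    years = np.arange(len(anomalies)) / 12.0
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# Synthetic check: a series warming at 0.013 C/yr gives a trend
# of about +0.13 C/decade.
fake = 0.013 * (np.arange(470) / 12.0)
trend = trend_per_decade(fake)   # ≈ 0.13
```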

The UAH LT global anomaly image for February, 2018 should be available in the next few days here.

The new Version 6 files should also be updated in the coming days, and are located here:

Lower Troposphere:
Lower Stratosphere:

Warming to 2100: A Lukewarmer Scenario

February 28th, 2018

My previous post dealt with a 1D model of ocean temperature changes to 2,000m depth, optimized to match various observed quantities: deep-ocean heat storage, surface temperature warming, the observed lagged variations between CERES satellite radiative flux and surface temperature, and warming/cooling associated with El Nino/La Nina.

While that model was meant to match global average (land+ocean) conditions, I more recently did one for oceans only (60N-60S). I changed a few things, so the models are not directly comparable. For example, I used all of the RCP6.0 radiative forcings, but with the land use and snow albedo changes removed (since the model is ocean-only). For SST observations, I used the ERSSTv5 data.

The resulting equilibrium climate sensitivity (ECS) is 1.54 deg. C (coincidentally the same as the previous, global model).

What I thought would be fun, though, would be to run the model out to 2100. This requires an estimate of future ENSO activity (I used the MEI index). After examining the history of the MEI, including its low-frequency variations (which are somewhat related to the Pacific Decadal Oscillation, PDO), I set the MEI values from February 2018 onward equal to the historical values beginning in February 1929 and running up to the present.
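The "recycled ENSO history" assumption can be sketched as follows (the function and series names here are hypothetical; the real input is the extended MEI record):

```python
import numpy as np

def extend_by_replay(index_past, n_future, replay_from):
    """Extend an ENSO index into the future by replaying its own past,
    starting at month `replay_from` -- the analog of setting MEI from
    Feb. 2018 onward equal to the record beginning in Feb. 1929."""
    replay = index_past[replay_from:]
    reps = int(np.ceil(n_future / len(replay)))
    future = np.tile(replay, reps)[:n_future]
    return np.concatenate([index_past, future])

# Toy example: a 24-month "history", extended 12 months by replaying
# from month 6 of the record.
hist = np.arange(24, dtype=float)
full = extend_by_replay(hist, 12, 6)
```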

The resulting forecast shows global average SST almost reaching 1.5 C above pre-industrial times by the end of this century:

2-Layer ocean model sea surface temperature variations. See the figure inset for model assumptions and how it was tuned.

Because I used past MEI data for the future, the lack of significant warming until the late 2040s is due to reduced El Nino activity that was observed from about 1940 to the late 1970s. The enhanced warming after 2040 is analogous to the enhanced warming from stronger El Nino activity that existed from the late 1970s to the late 1990s.

Of course, this whole exercise assumes that, without humans, the climate system would have had no temperature trend between 1765-2100. That is basically the IPCC assumption — that the climate system is in long-term energy equilibrium, not only at the top-of-atmosphere, but also in terms of changes in ocean vertical circulation, which can warm the surface and atmosphere without any TOA radiative forcing.

I don’t really believe the “climate stasis” assumption, because I believe the Medieval Warm Period and the Little Ice Age were real, and that some portion of recent warming has been natural. In that case, the model climate sensitivity would be lower, and the model warming by 2100 would be even less.

What would cause warming as we came out of the Little Ice Age? You don’t need any external forcing (e.g. the Sun) to accomplish it, although I know that’s a popular theory. My bet (but who knows?) is a change in ocean circulation, possibly accompanied by a somewhat different cloud regime. We already know that El Nino/La Nina represents a bifurcation in how the climate system wants to behave on interannual time scales. Why not multi-century time scale bifurcations in the deep ocean circulation? This possibility is simply swept under the rug by the IPCC.

A 1D Model of Global Temperature Changes, 1880-2017: Low Climate Sensitivity (and More)

February 22nd, 2018

UPDATE(2/23/18): The previous version of this post had improper latitude bounds for the HadCRUT4 Tsfc data. I’ve rerun the results… the conclusions remain the same. I have also added proof that ENSO is accompanied by its own radiative forcing, a controversial claim, which allows it to cause multi-decadal climate change. In simple terms, this is clear evidence the climate system can cause its own, natural, internally-generated climate changes. This is partly what has caused recent warming, and the climate modelling community has assumed it was all human-caused.

Executive Summary
A 1D forcing-feedback model with two equivalent-ocean layers is used to model monthly global average surface temperatures from 1880 through 2017. Reflected shortwave (SW) and thermally emitted longwave (LW) forcings and feedbacks are included in an attempt to obtain the closest match between the model and HadCRUT4 surface temperatures based upon correlation and long-term trends.

The traditional radiative forcings included are RCP estimates of volcanic aerosols (SW), anthropogenic greenhouse gases (LW), and anthropogenic direct aerosol forcing (SW). The non-traditional forcings are: (1) an ENSO-driven SW radiative forcing, based upon the observed lagged relationship between CERES satellite SW radiative flux and the Multivariate ENSO Index during 2000-2017, which shows radiative accumulation (loss) during El Nino warming (La Nina cooling); and (2) a non-radiative forcing of surface temperature proportional to ENSO activity since 1871 (the MEI “ext” index).

Heat is pumped into the deep ocean in proportion to how far the surface layer temperature deviates from energy equilibrium, with the proportionality constant chosen to match the observed average rate of heat accumulation in the 0-2000m layer between 1990 and 2017 from NODC data.

LW and SW feedbacks are adjusted in the model to optimize model agreement with observations, as are the model surface layer depth and the ENSO non-radiative forcing strength. By incrementally changing the adjustable parameters, the model and observed surface temperature trends are matched and (using monthly running 12-month averages) a correlation of 0.88 is achieved for 1880-2017. The optimum effective depth of the surface mixed layer is 38 meters (equivalent to 54 m of ocean and 0 m of land in the global average), and the resulting model equilibrium climate sensitivity is 1.54 deg. C, which is less than half the average IPCC AR5 model sensitivity of 3.4 deg. C.

Curiously, the model surface temperature trend during 1979-2017 (+0.113 C/decade) is a much closer match to our UAH LT data (+0.128 C/decade) than it is to the HadCRUT4 data (+0.180 C/decade), despite the fact the model was optimized to match HadCRUT4 during 1880-2017.

It is also demonstrated that using either the model-generated or the CERES-observed radiative fluxes during 2000-2017 to diagnose feedbacks yields a climate sensitivity that is far too high, consistent with the published papers of Spencer & Braswell on this subject. Thus CERES radiative fluxes, while useful for model comparison, should not be used to diagnose feedbacks in the climate system.

Background: CERES Radiative Fluxes Cannot be Used to Diagnose Global Feedbacks

I recently revisited the CERES-EBAF dataset of top-of-atmosphere (TOA) radiative fluxes, a multi-satellite best estimate of those fluxes now updated through the period March 2000 to September 2017. When I examined the feedback parameters (regression coefficients) diagnosed from the new, longer data record, the result for the Net flux (thermally emitted longwave [LW] plus reflected shortwave [SW]) was clearly unrealistic. Monthly global radiative flux variations are plotted in Fig. 1, for LW, SW, and Net (LW+SW) fluxes, against global average surface temperature variations from HadCRUT4.

FIG. 1. Scatterplots of monthly global average anomalies in CERES SW, LW, and Net (LW+SW) radiative fluxes versus HadCRUT4 surface temperatures, March 2000 through September 2017. The negative sign of the regression result in the bottom plot is physically impossible if interpreted as a net feedback parameter in the climate system.

Significantly, the Net flux regression result in Fig. 1 (-0.12 W/m2 K) has the wrong sign to be a net feedback parameter, and so is physically impossible under that interpretation. It would suggest that as the climate system warms, it traps even more radiative energy, which would produce an unstable climate system with runaway warming (or cooling).

The SW and LW regression results in Fig. 1 are at least possible in terms of their signs… at face value suggesting positive SW feedback, and for the longwave (compared to a temperature-only “Planck effect” value of 3.2 W/m2 K), the 1.72 W/m2 K value would suggest positive LW feedback, probably from water vapor (maybe high clouds).

As I will demonstrate, however, the regression coefficients themselves are not well related to feedback, and thus climate sensitivity. (The equilibrium climate sensitivity is computed by dividing the theoretically-expected radiative forcing from a doubling of atmospheric CO2 [2XCO2], 3.7 W/m2, by the Net feedback parameter, which must be positive for the climate system to be stable [all IPCC models have Net feedback parameters that are positive]).

We have published a few papers on this subject before, and it was the theme of my book, The Great Global Warming Blunder. I have, quite frankly, been disappointed that the climate research establishment (with the exception of Dick Lindzen) has largely ignored this issue. I hope that the work (in progress) I post here will lead to some renewed interest in the subject.

After spending some time (once again) trying to come up with some way to convincingly explain why the regression coefficients like those in Fig. 1 aren’t really a measure of feedback (without gnashing of teeth in the climate community, or journal editors resigning after publishing our paper), I decided to code up a simple 1D forcing-feedback model that would allow me to (1) explain the temperature variations since 1880 in a physically consistent way, and then (2) use the radiative output from the model during the CERES period (2000-2017) to show that the model-diagnosed feedback parameters indicate a much higher climate sensitivity than was actually specified in the model run.

In the rest of the post below, I believe I will convincingly demonstrate what I am saying… while also providing both an estimate of climate sensitivity from the last 137 years of climate variability, and explaining features like the pre-1940 warming trend, the post-1940 warming hiatus, and the post-1997 warming hiatus.

The 1D Energy Balance Forcing-Feedback Model

In striving for maximum simplicity while still explaining the observed data, I finally realized that the 20-layer ocean used in the model of Spencer & Braswell (2014) was needlessly complex, and that the resulting criticism of our ocean heat diffusion scheme was a distraction from the core conclusions of the paper.

So, I’ve now convinced myself that all that is required is a 2-layer model, where the rate of deep-ocean storage is simply proportional to how warm the surface layer gets compared to energy equilibrium. While not necessarily totally representative of how the ocean works, it does meet the IPCC expectation that as global temperatures warm, the deep ocean also warms, and it allows a sink for a portion of the energy that accumulates in the surface layer. The proportionality constant is set to reproduce the average 0-2000m warming from NODC ocean heat content (OHC) data during 1990-2017. We couldn’t do this in our original work because estimates of 0-2000m OHC had not yet been published (I contacted Sid Levitus at the time, and he said they were working on it).

The depth of the model top layer is an adjustable parameter that can be tuned to provide the best agreement with HadCRUT4 observations; it is assumed to represent a global average over an ocean mixed layer of constant depth, with no net storage (or loss) of energy by land during warming (or cooling).

The model is based upon the commonly used forcing-feedback energy budget equation for the climate system, assuming temperature deviations are from some state of energy equilibrium (I know, that’s debatable… bear with me here):

ΔTsfc/Δt = [F(t) – λ ΔTsfc]/Cp

This equation simply says that the temperature change with time of a system with heat capacity Cp is related to the time-varying forcings F (say, excess radiative energy forced into the system from anthropogenic GHG accumulation) minus the net radiative feedback (radiative loss by the system proportional to how warm it gets, with λ being the net feedback parameter with units W/m2 K). The net feedback parameter λ implicitly includes all fast surface and atmospheric feedbacks in the system: clouds, water vapor, lapse rate changes, etc.
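A minimal numerical sketch of this equation follows (parameter values are placeholders chosen only for illustration; λ = 2.4 W/m2 K happens to correspond to ECS = 3.7/2.4 ≈ 1.54 deg. C, the sensitivity quoted in the Executive Summary):

```python
import numpy as np

SECONDS_PER_MONTH = 86400 * 30.4
RHO_CP_SEAWATER = 4.19e6           # J/(m^3 K)

def run_single_layer(forcing, lam=2.4, depth_m=38.0):
    """Euler-integrate dT/dt = [F(t) - lam*T]/Cp with monthly steps.

    forcing : monthly radiative forcing anomalies, W/m^2
    lam     : net feedback parameter, W/m^2/K (placeholder value)
    depth_m : effective mixed-layer depth; 38 m is the post's tuned value
    """
    cp = RHO_CP_SEAWATER * depth_m          # J/(m^2 K)
    T = np.zeros(len(forcing) + 1)
    for i, F in enumerate(forcing):
        T[i + 1] = T[i] + (F - lam * T[i]) / cp * SECONDS_PER_MONTH
    return T[1:]

# Under constant forcing, temperature relaxes toward F/lam.
T = run_single_layer(np.full(2400, 3.7))    # 200 years of constant 2xCO2 forcing
```

Under a constant 3.7 W/m2 forcing the layer asymptotes to 3.7/λ, which is exactly how the net feedback parameter maps onto equilibrium climate sensitivity.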

In our case there are two model layers, several forcings, and a transfer of energy between the two ocean layers. Importantly, I also separate the LW and SW forcings (and feedbacks) so we can ultimately compare the model results during 2000-2017 with the CERES satellite measurements over the same period.

The model radiative forcings include the RCP6.0 anthropogenic GHGs (assumed LW), volcanic aerosols (assumed SW), and anthropogenic aerosol direct forcing (assumed SW). The indirect aerosol forcing is excluded since there is recent evidence aerosol forcing is not as strong as previously believed, so I retain only the direct forcing as a simple way to reduce the total (direct+indirect) anthropogenic aerosol forcing.

As Spencer and Braswell (2014) did, I include an ENSO-related SW radiative (and a little LW) forcing, proportional to the MEI extended index (1871-2017). I use a total value of 0.23 W/m2 per MEI unit, initially calculated as 0.20 W/m2 by regressing the average CERES SW energy accumulation (loss) over the 1 to 3 months preceding El Nino (La Nina) events during the updated CERES data record (March 2000-September 2017). The SW and LW forcing values were then adjusted slightly as the model was run, until the model’s lag regression coefficients of MEI versus radiative flux matched the same metrics from the CERES observations. I have added the following intermediate figure to demonstrate this controversial claim: that ENSO involves not only a change in the vertical temperature structure of the ocean (a non-radiative forcing of surface temperature), but that radiative changes precede ENSO; that is, ENSO provides its own radiative forcing of the climate system:

Intermediate Plot A: The CERES observed relationship between radiative flux and ENSO activity can ONLY be explained by invoking radiative forcing prior to ENSO. This significantly impacts the “feedback” interpretation of CERES radiative fluxes, decorrelating their relationship to temperature, thus giving the illusion of an excessively sensitive climate system if one interprets the regression slopes as only due to feedback.
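The lead-lag regression behind this figure can be sketched as follows, using synthetic series in place of the real CERES fluxes and MEI (the 0.23 W/m2 per MEI unit coefficient is taken from the text; everything else here is made up for illustration):

```python
import numpy as np

def lag_regression(flux, mei, lag):
    """Regression coefficient of radiative flux on MEI, with the flux
    shifted `lag` months earlier (positive lag = flux precedes MEI)."""
    if lag > 0:
        f, m = flux[:-lag], mei[lag:]
    elif lag < 0:
        f, m = flux[-lag:], mei[:lag]
    else:
        f, m = flux, mei
    return np.polyfit(m, f, 1)[0]

# Synthetic example: a flux series proportional to the index two months
# later produces its largest regression coefficient at lag = +2.
rng = np.random.default_rng(0)
mei = rng.standard_normal(210)
flux = np.r_[0.23 * mei[2:], 0.0, 0.0]   # flux leads the index by 2 months
coefs = {lag: lag_regression(flux, mei, lag) for lag in range(-3, 4)}
```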

The ENSO non-radiative forcing (e.g. warming of the surface layer during El Nino, with an energy-equivalent cooling of the deeper layer, due to a global-average reduction in the rate of ocean overturning) is directly proportional to the MEI index value, with no time lag. It is tuned to help maximize the match between modeled and observed ENSO warming and cooling episodes in surface temperatures.

Significantly, I have adjusted the MEI values by a constant so that their sum during 1871-2017 is zero. This is to avoid the expected criticism that the MEI index could be inadvertently driving a net gain or loss of energy by the model climate system over this time because it has a net high bias. (This is indeed a possibility in nature; I note that even with the mean removed, there is a small upward linear trend in the MEI, corresponding to a radiative forcing of -0.08 W/m2 in 1871 linearly increasing to +0.08 W/m2 in 2017 using my CERES-derived coefficient. I have not looked at how much this trend affects the results, and it might well be that La Nina activity was more prevalent in the late 1800s and El Nino more prevalent in the late 20th Century.) Here is what the MEI time series looks like, on an expanded scale so you can see how the 10-year trailing averages of MEI reveal interdecadal variations, which are an important component of global temperature variability:

Intermediate Plot B. The merged and bias-adjusted extended MEI time series, 1871 through 2017, revealing decadal-scale variability in the trailing 10-year averages. This decadal variability, combined with both radiative and non-radiative forcing of surface temperatures related to the MEI, causes much of the multidecadal temperature variation we have experienced in the instrumental record.

As mentioned above, the rate of deep-ocean heat storage is simply assumed to be proportional to how far the surface layer temperature departs from energy equilibrium… the warmer the surface layer gets, the faster heat is pumped into the model deep ocean. The proportionality constant is tuned until the model produces an average deep-ocean (0-2000m) heating rate of 0.51 W/m2 over the period 1990 through 2017, matching NODC data after adjustment for the fraction of the globe covered by ocean (71%), and assuming the land does not store (or lose) appreciable energy.
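The two-layer bookkeeping described here can be sketched as follows (a minimal illustration with placeholder parameter values, not the author's spreadsheet model):

```python
import numpy as np

SECONDS_PER_MONTH = 86400 * 30.4
RHO_CP_SEAWATER = 4.19e6    # J/(m^3 K)

def run_two_layer(forcing, lam=2.4, k_deep=0.7, d_sfc=38.0, d_deep=2000.0):
    """Two-layer energy balance with deep-ocean pumping.

    Heat is removed from the surface layer at the rate k_deep * T_sfc
    (W/m^2 per K of surface departure from equilibrium) and deposited
    in the 0-2000m layer; k_deep here is a placeholder, not the value
    tuned to the NODC 0.51 W/m^2 heating rate.
    """
    cp_s = RHO_CP_SEAWATER * d_sfc
    cp_d = RHO_CP_SEAWATER * d_deep
    Ts = np.zeros(len(forcing) + 1)
    Td = np.zeros(len(forcing) + 1)
    for i, F in enumerate(forcing):
        pump = k_deep * Ts[i]                          # W/m^2 into deep layer
        Ts[i + 1] = Ts[i] + (F - lam * Ts[i] - pump) / cp_s * SECONDS_PER_MONTH
        Td[i + 1] = Td[i] + pump / cp_d * SECONDS_PER_MONTH
    return Ts[1:], Td[1:]

Ts, Td = run_two_layer(np.full(1200, 3.7))   # 100 years, constant forcing
```

With the pumping term active, the surface layer settles near F/(λ + k) rather than F/λ, while the deep layer warms slowly; in the real model it is the deep-layer heating rate that gets matched to the NODC data.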

The model is entered into an Excel spreadsheet with each row being a one month time step. It is initialized in the year 1765, which is when the RCP radiative forcing is initialized to zero. Correspondingly, the model temperature is initialized at zero departure from energy equilibrium in 1765 (this is not necessary if one believes the climate system was in the Little Ice Age at that time, but for now I want to make assumptions as similar to IPCC climate model assumptions as possible).

The adjustable parameters of the model are changed to improve the model fit to the HadCRUT4 data in real time in the Excel spreadsheet. For example, one parameter (say, the surface layer thickness) is adjusted until maximum agreement is reached. Then another parameter is adjusted (say, the LW feedback parameter) in the same way until further improvement is achieved. But then the other parameters must be re-adjusted. This iterative process is rather brute-force, but within a few hours one converges on a set of adjustable parameter values which produce the best results in terms of correlation and matching temperature trends between the model and HadCRUT4 observations.
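This hand-tuning procedure is essentially coordinate descent. An automated sketch (with a made-up quadratic objective standing in for the real model-vs-HadCRUT4 misfit; the parameter names and step sizes are illustrative):

```python
def coordinate_descent(objective, params, steps, n_sweeps=50):
    """Minimize `objective` by nudging one parameter at a time, then
    re-visiting every parameter each sweep, as described in the text."""
    params = dict(params)
    best = objective(params)
    for _ in range(n_sweeps):
        for name, step in steps.items():
            for delta in (+step, -step):
                trial = dict(params)
                trial[name] += delta
                score = objective(trial)
                if score < best:
                    params, best = trial, score
    return params, best

# Toy misfit with a known minimum at depth=38 m, lw_feedback=1.7:
def misfit(p):
    return (p["depth"] - 38.0) ** 2 + (p["lw_feedback"] - 1.7) ** 2

fit, err = coordinate_descent(misfit,
                              {"depth": 20.0, "lw_feedback": 1.0},
                              {"depth": 0.5, "lw_feedback": 0.05})
```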

Model Results

Fig. 2 shows one of many model realizations that comes close to the data, in terms of correlation (here about 0.88) and matching temperature trends. Note that the observed temperature time series has a 12-month smoother applied (click for large version).

Fig. 2. One-dimensional time-dependent model of global average equivalent-ocean surface layer temperature departures from energy equilibrium (dark blue), using RCP6 radiative forcings, ENSO-related radiative and non-radiative forcing, and deep ocean storage of heat proportional to the surface layer temperature departure from equilibrium. HadCRUT4 surface temperature anomalies (12-month smoothed, red) are adjusted vertically on the graph to have the same average values as the model. The temperature trend lines (1880-2017, dashed) of the model and observations coincide, since part of the feedback tuning is to force the trends to match. The UAH LT temperature variations are shown in light blue.

Following are several significant findings from this modeling exercise:

1. The specified model feedback parameters correspond to an equilibrium climate sensitivity of only 1.54 deg. C. This is less than half of the IPCC AR5 model average of 3.4 deg. C, and in close agreement with the best estimate of 1.6 deg. C from Lewis and Curry (2015). As we already know, the IPCC models tend to overestimate warming compared to what has been observed, and the current study suggests their excess warming is due to the models’ climate sensitivity being too high.

2. Note that the ENSO activity during the 20th Century largely explains the anomalous warmth around the 1940s. In fact, this feature exists even with the anthropogenic aerosol forcing removed, in which case a warming hiatus exists from the 1940s to the 1980s. This is the result of the ENSO radiative forcing term (0.23 W/m2 per MEI index value) combined with stronger El Ninos before the 1940s and weaker ones from the 1940s until the late 1970s.

3. The warming hiatus from 1997 to 2016 is evident in the model.

4. The model trend during the satellite temperature record (1979-2017) shows much better agreement with the UAH LT (lower troposphere) temperatures than with HadCRUT4, even though HadCRUT4 was used to optimize the model (!):

Here are the 1979-2017 trends, and each dataset’s correlation with the model:

Model: +0.113 C/decade

UAH LT: +0.128 C/decade (r=0.81)

HadCRUT4: +0.180 C/decade (r=0.85)

Compared to the model, the UAH LT trend is only 0.015 C/decade higher, but the HadCRUT4 trend is 0.067 C/decade higher.

5. We can take the model output radiative fluxes, which include both forcing and feedback, during the CERES satellite period of record (March 2000 through September 2017) to see if the “feedbacks” diagnosed from regression are consistent with the actual feedbacks specified in the model. What we find (Fig. 3) is that, just as Spencer & Braswell have been arguing, the feedback parameters diagnosed from the radiative flux and temperature variations lead to regression coefficients quite far from those specified:

Fig. 3. Model-diagnosed feedback parameters for the same period as the CERES satellite radiative flux record (March 2000 through September 2017) shown in Fig. 1. Significantly, the model-diagnosed feedback parameters (regression slopes) are far from those specified in the model, leading to a gross overestimation of climate sensitivity if they are interpreted as feedback parameters.

The ECS thus (incorrectly) diagnosed from the model radiative fluxes is 3.25 deg. C, even though the feedbacks specified in the model have an ECS of 1.54 deg. C! This supports our contention that use of CERES radiative fluxes to estimate ECS will lead to overestimation of climate sensitivity (e.g. Spencer & Braswell, 2011). The cause of the problem is time-varying radiative forcing internal to the climate system contaminating the radiative feedback signal.

Note that there is less scatter in the model plots (Fig. 3) than in the observations (Fig. 1). This is mainly because the observations in Fig. 1 contain far more sources of internal radiative forcing than the single (ENSO-related) one specified in the model. Contrary to what the IPCC seems to believe (and what Andy Dessler has argued to me before), there are all kinds of non-feedback radiative variations in the climate system, internally generated by chaotic variability not caused by temperature changes. Cloud (and thus SW radiative flux) variations are NOT simply a response to surface temperature changes; some of those temperature changes are due to cloud variations caused by any number of atmospheric circulation-related changes.

Put more simply, causation works in both directions between temperature and radiative flux; if causation is assumed in only one direction (temperature change => radiative flux change), then diagnosing feedback parameters from the data will lead to a bias toward high climate sensitivity.
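This bias is easy to reproduce in a toy simulation (a sketch, not the paper’s actual model): specify a true feedback parameter, drive temperature with internally generated radiative noise, and the regression of radiative flux on temperature recovers a slope far below the specified value.

```python
import numpy as np

SECONDS_PER_MONTH = 86400 * 30.4
CP = 4.19e6 * 38.0        # heat capacity of a 38 m mixed layer, J/(m^2 K)
LAM_TRUE = 2.4            # specified net feedback parameter, W/m^2/K

rng = np.random.default_rng(1)
n = 2400

# Internal (non-feedback) radiative forcing: red noise standing in for
# chaotic cloud variations (amplitude arbitrary).
N = np.zeros(n)
for i in range(1, n):
    N[i] = 0.9 * N[i - 1] + rng.standard_normal()

# Temperature responds to the noise through the energy budget equation.
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + (N[i - 1] - LAM_TRUE * T[i - 1]) / CP * SECONDS_PER_MONTH

# "Measured" net radiative loss = feedback response minus internal forcing.
loss = LAM_TRUE * T - N
lam_diagnosed = np.polyfit(T, loss, 1)[0]
# lam_diagnosed comes out far below LAM_TRUE (it can even go negative),
# so ECS = 3.7/lam_diagnosed is grossly inflated -- the decorrelation
# effect described in the text.
```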


Conclusions

The 1D model fit to the HadCRUT4 data is quite good, despite the simplicity of the model. The model climate sensitivity of only 1.54 deg. C is just within the IPCC’s likely ECS range of 1.5 to 4.5 deg. C, and well below the AR5 model average ECS of 3.4 deg. C.

I believe this is some of the strongest evidence yet that (1) the real climate system is relatively insensitive, and (2) TOA radiative fluxes cannot be used to diagnose feedbacks, and thus climate sensitivity.

The above must be considered as a work in progress. Publication (if it is ever allowed by the IPCC gatekeepers) will require demonstration of the sensitivity of the model results to changes in the adjustable parameters. I do posts like this partly to help guide and organize my thinking on the problem.

It is also worth noting that one can do all kinds of experiments with such a simple model, such as exploring the effect of the inclusion or exclusion of various forcings on the model results. Some of this was done by Spencer and Braswell (2014) who found that inclusion of ENSO effects substantially reduced the model’s climate sensitivity.


References

Lewis, N., and J.A. Curry, 2015: The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Climate Dynamics, 45(3-4), 1009-1023.

Spencer, R.W., and W.D. Braswell, 2011: On the misdiagnosis of surface temperature feedbacks from variations in Earth's radiant energy balance. Remote Sensing, 3, 1603-1613, doi:10.3390/rs3081603.

Spencer, R.W., and W.D. Braswell, 2014: The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, 50(2), 229-237.

Diagnosing Climate Sensitivity Assuming Some Natural Warming

February 16th, 2018

Climate sensitivity has been diagnosed based upon energy budget considerations by several authors in recent years using observational data combined with estimates of anthropogenic radiative forcing (e.g. Otto et al., 2013; Lewis & Curry, 2014).

Significantly, they generally calculate a lower equilibrium climate sensitivity (ECS) than the average of the IPCC AR5 climate models. Whereas the IPCC models average about 3.4 deg. C of warming from a doubling of atmospheric CO2 (2XCO2), these diagnostic studies get ECS from about 1.6 to 2.0 deg. C. Nic Lewis has provided detailed analysis over at Judith Curry’s blog about what goes into these estimates and the uncertainties of each observational variable.

The ECS estimate is based upon conservation of energy, and uses four variables in a single equation involving differences in the climate system between two different times (say, two different decades) sufficiently separated that there has been a large surface temperature response to an assumed radiative forcing. Note that the climate response is assumed to be a response to anthropogenic radiative forcing plus volcanoes, and the analysis is usually restricted to the oceans (where heat storage can be more accurately estimated):

ECS = F2XCO2 [ΔT/(ΔF – ΔQ)],

where:

F2XCO2 = 3.7 W/m2, the assumed radiative forcing from a doubling of atmospheric CO2;

ΔT = the change in global average surface temperature between two periods (deg. C);

ΔF = the change in radiative forcing (imposed energy imbalance on the climate system at top of atmosphere) between two time periods (W/m2);

ΔQ = the change in ocean heat storage between two time periods (W/m2).

In the aforementioned papers, the earlier time period has been chosen to be in the mid- to late- 1800s, while the second has been some subset of the period 1970-2011.

I have verified the above equation using a time-dependent energy balance model of a 2-layer ocean extending to 2,000m depth using either the RCP6.0 radiative forcing history, or an instantaneously imposed doubling of CO2 back in the 1800s, and I get the same ECS calculated from the model output as I prescribed as input to the model. The equation works.
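That check is easy to sketch: run a toy two-layer model with a prescribed feedback parameter λ under an instantaneous CO2 doubling, then recover ECS from the equation using only model output (all parameter values below are placeholders, not the ones used in the actual verification):

```python
SECONDS_PER_MONTH = 86400 * 30.4
RHO_CP = 4.19e6               # J/(m^3 K), seawater
F2XCO2 = 3.7                  # W/m^2
LAM = 2.0                     # prescribed feedback => true ECS = 3.7/2.0 = 1.85

d_sfc, d_deep, k_deep = 50.0, 2000.0, 0.7
cp_s, cp_d = RHO_CP * d_sfc, RHO_CP * d_deep

Ts = Td = 0.0
for _ in range(600):          # 50 years after an instantaneous 2xCO2
    pump = k_deep * Ts        # W/m^2 pumped into the deep layer
    dTs = (F2XCO2 - LAM * Ts - pump) / cp_s * SECONDS_PER_MONTH
    dTd = pump / cp_d * SECONDS_PER_MONTH
    Ts, Td = Ts + dTs, Td + dTd

# Diagnose ECS from the energy-budget equation, with the heat storage
# rate dQ computed from the model's own layer tendencies:
dF = F2XCO2
dQ = (cp_s * dTs + cp_d * dTd) / SECONDS_PER_MONTH
ecs = F2XCO2 * Ts / (dF - dQ)     # recovers the prescribed 1.85 deg. C
```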

What if a Portion of Recent Warming Was Natural?

As you might recall, the IPCC is quite certain that the dominant cause of warming since the mid-20th Century was due to anthropogenic forcings.

What does “dominant” mean? Well, I’m sure it means over 50%. This implies that they are leaving the door open to the possibility that some of the recent warming has been natural, right?

Well, we can use the above equation to do a first-cut estimate of what the diagnosed climate sensitivity would be if some fraction of the surface and deep-ocean warming was natural.

All we have to do is replace ΔQ with fΔQ, where f is the fraction of ocean warming which is human-caused. We also do the same thing for the surface warming term: fΔT.

When we do this for anthropogenic fractions from 0% to 100%, here’s what we get:

How the data-diagnosed equilibrium climate sensitivity changes assuming different fractions of the warming due to humans (and the rest natural).

Note that even assuming 70% of recent ocean warming is due to humans (consistent with their claim that humans “dominate” warming), the diagnosed climate sensitivity is only 1.3 deg. C, which is below even the range the IPCC (AR5) considers likely (1.5 to 4.5 deg. C).
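The calculation behind this figure can be sketched as follows. The ΔT, ΔF, and ΔQ values below are round illustrative numbers of roughly the magnitudes used in such studies, not the exact published values, so the resulting sensitivities are illustrative too:

```python
def ecs_with_human_fraction(f, dT, dF, dQ, f2x=3.7):
    """Energy-budget ECS when only a fraction f of the observed surface
    warming and ocean heat uptake is anthropogenic:
    dT -> f*dT and dQ -> f*dQ, per the text."""
    return f2x * (f * dT) / (dF - f * dQ)

dT, dF, dQ = 0.75, 2.0, 0.4    # deg. C, W/m^2, W/m^2 (placeholders)

curve = {f: round(ecs_with_human_fraction(f, dT, dF, dQ), 2)
         for f in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5)}
# Diagnosed ECS falls steadily as the assumed human-caused fraction shrinks.
```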

Now, this raises an interesting issue… almost a dichotomy. I have heard some IPCC-type folks claim that recent anthropogenic warming could have been damped by some natural cooling mechanism. After all, the models are warming (on average) about twice as fast as the measurements of the lower troposphere. If they really believe the models, and also believe there has been some natural cooling mechanism going on suppressing anthropogenic warming, why doesn’t the IPCC simply claim ALL recent warming was due to human causation? That would be the logical conclusion.

But the way the AR5 was written, they are suggesting that a portion of recent warming could be natural, which is the basis for my analysis, above, which produces a very low climate sensitivity number.

They can’t have it both ways.

UAH Global Temperature Update for January, 2018: +0.26 deg. C

February 1st, 2018

Coolest tropics since June, 2012 at -0.12 deg. C.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for January, 2018 was +0.26 deg. C, down from the December, 2017 value of +0.41 deg. C:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed as has the distinction between calendar months.

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 13 months are:

YEAR MO GLOBE NH SH TROPICS
2017 01 +0.33 +0.31 +0.34 +0.10
2017 02 +0.38 +0.57 +0.20 +0.08
2017 03 +0.23 +0.36 +0.09 +0.06
2017 04 +0.27 +0.28 +0.26 +0.21
2017 05 +0.44 +0.39 +0.49 +0.41
2017 06 +0.21 +0.33 +0.10 +0.39
2017 07 +0.29 +0.30 +0.27 +0.51
2017 08 +0.41 +0.40 +0.42 +0.46
2017 09 +0.54 +0.51 +0.57 +0.54
2017 10 +0.63 +0.66 +0.59 +0.47
2017 11 +0.36 +0.33 +0.38 +0.26
2017 12 +0.41 +0.50 +0.33 +0.26
2018 01 +0.26 +0.46 +0.06 -0.12

Note that La Nina cooling in the tropics has finally penetrated the troposphere, with a -0.12 deg. C departure from average. The last time the tropics were cooler than this was June, 2012 (-0.15 deg. C). Out of the 470-month satellite record, the 0.38 deg. C one-month drop in January tropical temperatures was tied for the 3rd largest, beaten only by October, 1991 (a 0.51 deg. C drop) and August, 2014 (a 0.41 deg. C drop).

The last time the Southern Hemisphere was this cool (+0.06 deg. C) was July, 2015 (+0.04 deg. C).

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through January 2018 remains at +0.13 C/decade.
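The trend figure is an ordinary least-squares slope through the monthly anomalies, converted to deg. C per decade. Below is a minimal sketch of that calculation using a short synthetic series rather than the actual 470-month record; the UAH computation may differ in detail:

```python
# Ordinary least-squares trend through a monthly anomaly series,
# converted from deg. C per month to deg. C per decade.

def trend_per_decade(anomalies):
    n = len(anomalies)
    xs = range(n)                      # time axis in months: 0, 1, 2, ...
    mean_x = sum(xs) / n
    mean_y = sum(anomalies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope_per_month = num / den
    return slope_per_month * 120       # 120 months per decade

# Synthetic example: a series warming exactly 0.001 deg. C per month
# should show a trend of 0.12 deg. C per decade.
example = [0.001 * m for m in range(480)]
```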

The UAH LT global anomaly image for January, 2018 should be available in the next few days here.

The new Version 6 files should also be updated in the coming days, and are located here:

Lower Troposphere:
Lower Stratosphere:

U.S. Corn Yield a New Record – Again

January 29th, 2018

Global warming be damned — full speed ahead on the Maize Train.

Kentucky Corn Growers Association

The numbers are in from USDA, and 2017 saw a new record in average corn yield, with 176.6 bushels per acre.

In fact, the last four growing seasons (2014, 2015, 2016, 2017) had higher yields than any previous years. The last time that happened was in 1964.

And compared to 1964, the U.S. is producing nearly three times as much corn per acre as we did back then.

There is no indication of a slowdown in the long-term upward trends in corn yields. While the 176.6 bpa U.S. average for 2017 is a huge increase compared to just 50 years ago, the latest winner for the highest yield produced by a single farmer has risen again to over 542 bpa, which is fully three times the U.S. average yield.

While the global warmmongers continue to wring their hands over rising temperatures hurting yields (the Corn Belt growing season has indeed warmed slightly since 1960), improved varieties and the “global greening” benefits of more atmospheric CO2 have more than offset any negative weather effects — if those even exist.

Globally, grain yields of all major crops have trended upward in recent decades. Of course, droughts and floods cause regional crop failures almost every year; that is normal and expected. But there has been no global average increase in these events over the last century.

In his latest movie, Al Gore claimed just the opposite for wheat yields in China. While I hesitate to call him a liar, since I don’t know where he got his information — Gore was just plain wrong.

The sky is not falling. Life on Earth depends upon CO2, even though there is so little of it — now 4 parts per 10,000 of the atmosphere, compared to 3 parts a century ago. No matter how much we emit, nature gobbles up 50% of it.

Most of the evidence suggests that life is now breathing more freely than any time in human history, thanks to our CO2 emissions.

Sydney Heat and “Bomb” Snowstorm: Pimped Out for Climate Change

January 7th, 2018

It’s been an eventful weather week in some portions of the globe. In fact, it is always an eventful weather week – somewhere.

But what really drives the narrative is when weather extremes — which always have, and always will, occur — happen to hit major metropolitan areas. Many people are already aware of the relentless guffawing resulting from Al Gore’s tweet that Michael Mann says the Northeast’s current cold wave is just what global warming predicts. (As I recall, Mann is a mathematician, not a meteorologist. Correction: Mann is a geologist/geophysicist, a background equally removed from atmospheric dynamics.)

Yesterday, Kristine Phillips of The Washington Post wrote about the recent “bomb” snowstorm in New England, the ensuing cold wave, and the extreme heat (110+ deg. F) that has just hit Sydney, Australia.

To her credit, she did not explicitly put the blame on climate change for these events, but her legal-background prose came pretty darn close… just close enough so that the casual reader would make the connection. Wink-wink, nod-nod.

The trouble is that neither of these two events is exceptional from a meteorological perspective. That is, they have happened before (Sydney’s 117 deg. F peak was exceeded in 1939), and they will happen again.

It is only when we can demonstrate that such events are increasingly occurring over, say, 50 to 100 years that we can begin to invoke climate change. (And even then we must debate the various causes of climate change.) So far, that evidence is sorely lacking.

The Sydney Heat Wave

Here’s the GFS forecast model analysis of surface temperature departures from average for about the time that peak temperatures were reached in Sydney yesterday. Maybe you can tell me which of these cold and warm patterns are consistent with global warming theory and which aren’t? (Hint: Warming should be occurring basically everywhere):

GFS analysis of surface temperature departures from normal at about the time 110 deg. F temperatures were reached in Sydney, Australia (graphic).

See that hotspot in the Sydney Basin? That is a localized effect of downslope winds from the highlands to the west, which cause enhanced warming of the air as well as bushfires. It clearly does not represent what is happening across Australia as a whole. Australia is exceedingly hot this time of year anyway, and the heat is made even worse because the sun is closer to the Earth in January than in July (leading to a roughly 7% annual range in solar radiation reaching the Earth).
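That ~7% figure follows from the inverse-square law: the Earth-Sun distance varies between about 0.983 AU at perihelion (early January) and 1.017 AU at aphelion (early July), and solar flux scales as 1/r². A quick sanity check:

```python
# Inverse-square check of the ~7% annual range in solar flux at Earth.
r_perihelion = 0.9833   # AU, early January (standard orbital value)
r_aphelion = 1.0167     # AU, early July

# Flux at perihelion relative to flux at aphelion:
ratio = (r_aphelion / r_perihelion) ** 2
range_percent = (ratio - 1.0) * 100   # works out to roughly 7%
```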

The “Bomb” Blizzard

Meteorologist Fred Sanders coined the term “bomb” in 1980 to refer to a non-tropical cyclone whose central pressure drops by at least 24 millibars in 24 hours.

They happen every year.
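Sanders’ threshold is simple enough to state as code. Here is a minimal sketch of the criterion; note that the published version of the definition normalizes the 24-mb threshold by latitude, while the simple unadjusted form is used below:

```python
# "Bomb" (explosive cyclogenesis) test: the central pressure of a
# non-tropical cyclone falls at a rate of at least 24 millibars
# in 24 hours, as described above.

def is_bomb(pressure_start_mb, pressure_end_mb, hours=24.0):
    """True if the deepening rate meets the 24 mb / 24 h criterion."""
    drop = pressure_start_mb - pressure_end_mb
    return (drop / hours) * 24.0 >= 24.0

# Example: a low deepening from 996 mb to 968 mb over 24 hours
# (a 28 mb drop) qualifies; a 10 mb drop over 24 hours does not.
```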

But what doesn’t happen every year is for one to strike a major metro area. So, the recent nor’easter snowstorm that hit the Mid-Atlantic and New England was also a “bomb” because its low pressure center intensified so rapidly. Such events happen every year in, for example, the North Atlantic and North Pacific.

We meteorologists used to talk about “bombs” fairly regularly in the 1980s, but not so much in recent years. I wonder if maybe climate change is making winter storms weaker? Hmmm…

And to attribute every winter cold wave or heat wave to global warming is just plain silly. These things happen even without global warming (which, by the way, I do believe is occurring, just not very strongly, dangerously, or maybe not even mostly due to human causation). Seasoned New Englanders can tell you that.

Meanwhile, The Weather Channel (aka “The Disaster Channel”) serves up a steady stream of weather porn to titillate the senses.

And before you believe that warmth in January is unusual, “January thaws” are a routine phenomenon, too, which is why the term was coined. According to the Glossary of Meteorology:

“The daily temperature averages at Boston, computed for the years 1873 to 1952, show a well-marked peak on 20-23 January; the same peak occurs in the daily temperatures of Washington, D.C., and New York City. Statistical tests show a high probability that it is a real singularity. The January thaw is associated with the frequent occurrence on the above-mentioned dates of southerly winds on the back side of an anticyclone off the southeastern United States.”

Nevertheless, the weird-weather-is-climate-change narrative will continue until the populace finally agrees with the warmongers that we can control our weather through taxation and regulation.

Da Bomb

January 4th, 2018

GOES-16 image of the intense extra-tropical cyclone at 8:45 EST January 4, 2018.

The rapidly intensifying non-tropical cyclone producing heavy snow and blizzard conditions over the mid-Atlantic and New England is meeting expectations, with localized snowfalls of over 6 inches already this morning.

The latest NAM model forecast of additional snowfall after 7 a.m. this morning until tomorrow morning shows up to 12-18 inches of snow over portions of Massachusetts, Connecticut, Rhode Island, Vermont, and Maine (graphic courtesy of

Maximum additional snow accumulations from 7 a.m. Thursday Jan. 4 to 7 a.m. Friday, from the NAM weather forecast model.

As of 9 a.m. EST, all 5 NWS reporting stations in Rhode Island have heavy snow falling.

The term “bomb” was coined by meteorologist Fred Sanders in 1980 to refer to a non-tropical low pressure area that intensifies by at least 24 millibars in 24 hours. They happen every year, and are usually centered offshore in the winter, where cold continental air masses meet warm oceanic air masses, providing maximum energy to the intensification process.

UAH Global Temperature Update for December, 2017: +0.41 deg. C

January 2nd, 2018

2017 Third Warmest in the 39-Year Satellite Record

Global Satellite Monitoring of Temperature Enters its 40th Year

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for December, 2017 was +0.41 deg. C, up a little from the November, 2017 value of +0.36 deg. C:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed as has the distinction between calendar months.

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 24 months are (columns: year, month, globe, Northern Hemisphere, Southern Hemisphere, tropics):

2016 01 +0.55 +0.72 +0.38 +0.85
2016 02 +0.85 +1.18 +0.53 +1.00
2016 03 +0.76 +0.98 +0.54 +1.10
2016 04 +0.72 +0.85 +0.58 +0.93
2016 05 +0.53 +0.61 +0.44 +0.70
2016 06 +0.33 +0.48 +0.17 +0.37
2016 07 +0.37 +0.44 +0.30 +0.47
2016 08 +0.43 +0.54 +0.32 +0.49
2016 09 +0.45 +0.51 +0.39 +0.37
2016 10 +0.42 +0.43 +0.42 +0.47
2016 11 +0.46 +0.43 +0.49 +0.38
2016 12 +0.26 +0.26 +0.27 +0.24
2017 01 +0.32 +0.31 +0.34 +0.10
2017 02 +0.38 +0.57 +0.19 +0.07
2017 03 +0.22 +0.36 +0.09 +0.05
2017 04 +0.27 +0.28 +0.26 +0.21
2017 05 +0.44 +0.39 +0.49 +0.41
2017 06 +0.21 +0.33 +0.10 +0.39
2017 07 +0.29 +0.30 +0.27 +0.51
2017 08 +0.41 +0.40 +0.41 +0.46
2017 09 +0.54 +0.51 +0.57 +0.54
2017 10 +0.63 +0.67 +0.59 +0.47
2017 11 +0.36 +0.33 +0.38 +0.26
2017 12 +0.41 +0.50 +0.33 +0.26

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through December 2017 remains at +0.13 C/decade.

2017 ended up being the 3rd warmest year in the satellite record for the globally-averaged lower troposphere, at +0.38 deg. C above the 1981-2010 average, behind 1st place 2016 with +0.51 deg. C, and 2nd place 1998 at +0.48 deg. C.

The UAH LT global anomaly image for December, 2017 should be available in the next few days here.

The new Version 6 files should also be updated in the coming days, and are located here:

Lower Troposphere:
Lower Stratosphere:

U.S. Average Temperature Plummets to 11 deg. F

January 1st, 2018

This morning at 7 a.m. EST, the area average temperature across the contiguous 48 states was a frigid 11 deg. F.

Here’s the high-resolution surface temperature analysis from NCEP, graphic courtesy of

Surface temperature analysis at 7 a.m. EST January 1, 2018.

Over 85% of the nation is below freezing, and nearly 1/3 is below 0 deg. F. The forecast is for cold air to continue to flow down out of Canada into the central and eastern U.S. for most of the coming week.