Archive for October, 2019

Does the Climate System Have a Preferred Average State? Chaos and the Forcing-Feedback Paradigm

Friday, October 25th, 2019

NOTE: I have written on this subject before, but it is important enough that we need to keep thinking about it. It is also related to the forcing-feedback paradigm of climate change, which I usually defend — but which I will here take a skeptical view toward in the context of long-term climate change.

Winter Landscape with Snowfall near Antwerp (1575) by Lucas van Valckenborch. Städel Museum/Wikimedia Commons

The UN IPCC scientists who write the reports which guide international energy policy on fossil fuel use operate under the assumption that the climate system has a preferred, natural and constant average state which is only deviated from through the meddling of humans. They construct their climate models so that the models do not produce any warming or cooling unless they are forced to through increasing anthropogenic greenhouse gases, aerosols, or volcanic eruptions.

This imposed behavior of their “control runs” is admittedly necessary because various physical processes in the models are not known well enough from observations and first principles, and so the models must be tinkered with until they produce what might be considered to be the “null hypothesis” behavior, which in their worldview means no long-term warming or cooling.

What I’d like to discuss here is NOT whether there are other ‘external’ forcing agents of climate change, such as the sun. That is a valuable discussion, but not what I’m going to address. I’d like to address the question of whether there really is an average state that the climate system is constantly re-adjusting itself toward, even if it is constantly nudged in different directions by the sun.

If there is such a preferred average state, then the forcing-feedback paradigm of climate change is valid. In that system of thought, any departure of the global average temperature from the Nature-preferred state is resisted by radiative “feedback”, that is, changes in the radiative energy balance of the Earth in response to the too-warm or too-cool conditions. Those radiative changes would constantly be pushing the system back to its preferred temperature state.

But what if there isn’t only one preferred state?

I am of the opinion that the F-F paradigm does indeed apply for at least year-to-year fluctuations, because phase space diagrams of the co-variations between temperature and radiative flux look just like what we would expect from a F-F perspective. I touched on this in yesterday’s post.

Where the F-F paradigm might be inapplicable is in the context of long-term climate changes which are the result of internal fluctuations.

Chaos in the Climate System

Everyone agrees that the ocean-atmosphere fluid flows represent a nonlinear dynamical system. Such systems, although deterministic (that is, describable with known physical equations), are difficult to predict far into the future because any small error in the specification of the current state grows rapidly. This is called "sensitive dependence on initial conditions", and it is why weather cannot be forecast more than a week or so in advance.
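To illustrate (a toy demonstration of my own, using the textbook Lorenz-63 equations, not anything resembling a weather model): two runs that differ by one part in a billion in their starting point diverge onto completely different trajectories within a few thousand time steps.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One forward-Euler step of the Lorenz-63 system, the classic
    # chaotic toy model: three equations, fully deterministic.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # differs by one part in a billion

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        # The separation grows roughly exponentially until it saturates
        # at the size of the attractor: predictability is lost.
        print(step, np.linalg.norm(a - b))
```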

The reason why most climate researchers do not think this is important for climate forecasting is that they are dealing with how the future climate might differ from today's climate in a time-averaged sense... due not to changes in initial conditions, but to changes in the "boundary conditions", that is, increasing CO2 in the atmosphere. Humans are slightly changing the rules by which the climate system operates — that is, the estimated ~1-2% change in the rate of cooling of the climate system to outer space as a result of increasing CO2.

There are still chaotic variations in the climate system, which is why any given climate model forced with the same amount of increasing CO2 but initialized with different initial conditions in 1760 will produce a different globally-averaged temperature in, say, 2050 or 2060.

But what if the climate system undergoes its own, substantial chaotic changes on long time scales, say 100 to 1,000 years? The IPCC assumes this does not happen. But the ocean has inherently long time scales — decades to millennia. An unusually large amount of cold bottom water formed at the surface in the Arctic in one century might take hundreds or even thousands of years before it re-emerges at the surface, say in the tropics. This time lag can introduce a wide range of complex behaviors in the climate system, and is capable of producing climate change all by itself.

Even the sun, which we view as a constantly burning ball of gas, produces an 11-year cycle in sunspot activity, and even that cycle changes in strength over hundreds of years. It would seem that every process in nature organizes itself on preferred time scales, with some amount of cyclic behavior.

This chaotic climate change behavior would impact the validity of the forcing-feedback paradigm as well as our ability to determine future climate states and the sensitivity of the climate system to increasing CO2. If the climate system has different, but stable and energy-balanced, states, it could mean that climate change is too complex to predict with any useful level of accuracy.

El Nino / La Nina as an Example of a Chaotic Cycle

Most climate researchers view the warm El Nino and cool La Nina episodes conceptually as departures from an average climate state. But I believe that they are more accurately viewed as a bifurcation in the chaotic climate system. In other words, during Northern Hemisphere winter, there are two different climate states (El Nino or La Nina) that the climate system tends toward. Each has its own relatively stable configuration of Pacific trade winds, sea surface temperature patterns, cloudiness, and global-average temperature.

So, in a sense, El Nino and La Nina are different climate states which Earth has difficulty choosing between each year. One is a globally warm state, the other globally cool. This chaotic “bifurcation” behavior has been described in the context of even extremely simple systems of nonlinear equations, vastly simpler than the equations describing the time-evolving real climate system.
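A minimal sketch of this kind of bifurcation, using the textbook logistic map (my illustration, not a model of ENSO): below a critical parameter value the system settles into a single state, and beyond it the system forever flips between two (and then four, and more) states.

```python
def logistic_attractor(r, x0=0.5, burn=1000, keep=8):
    # Iterate x -> r*x*(1-x), discard the transient, then record the
    # distinct values the system keeps visiting.
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    visited = set()
    for _ in range(keep):
        x = r * x * (1.0 - x)
        visited.add(round(x, 4))
    return sorted(visited)

for r in (2.8, 3.2, 3.5):
    # r=2.8: one stable state; r=3.2: two states (a bifurcation);
    # r=3.5: four states, on the road to chaos.
    print(r, logistic_attractor(r))
```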

The Medieval Warm Period and Little Ice Age

Most historical records and temperature proxy evidence point to the Medieval Warm Period and Little Ice Age as real, historical events. I know that most people try to explain these events as the response to some sort of external forcing agent, say indirect solar effects from long-term changes in sunspot activity. This is a natural human tendency… we see a change, and we assume there must be a cause external to the change.

But a nonlinear dynamical system needs no external forcing to experience change. I’m not saying that the MWP and LIA were not externally forced, only that their explanation does not necessarily require external forcing.

There could be internal modes of chaotic fluctuations in the ocean circulation which produce their own stable climate states which differ in global-average temperature by, say, 1 deg. C. One possibility is that they would have slightly different sea surface temperature patterns or oceanic wind speeds, which can cause slightly different average cloud amounts, thus altering the planetary albedo and so the amount of sunlight the climate system has to work with. Or, the precipitation systems produced by the different climate states could have slightly different precipitation efficiencies, which then would affect the average amount of the atmosphere’s main greenhouse gas, water vapor.

Chaotic Climate Change and the Forcing-Feedback Paradigm

If the climate system has multiple, stable climate states, each with its own set of slightly different energy flows that still produce global energy balance and relatively constant temperatures (whether warmer or cooler), then the “forcing-feedback framework” (FFF, as my Australian friend Christopher Game likes to call it) would not apply to these climate variations, because there is no normal, average climate state to which ‘feedback’ is constantly nudging the system back toward.

Part of the reason for this post is the ongoing discussion I have had over the years with Christopher on this issue, and I want him to know that I am not totally deaf to his concerns about the FFF. As I described yesterday, we do see forcing-feedback type behavior in short-term climate fluctuations, but I agree that the FFF might not be applicable to longer-term fluctuations. In this sense, I believe Christopher Game is correct.

The UN IPCC Will Not Address This Issue

It is clear that the UN IPCC, by its very charter, is primarily focused on human-caused climate change. As a result of political influence (related to the desire of governmental regulation over the private sector) it will never seriously address the possibility that long-term climate change might be part of nature. Only those scientists who are supportive of this anthropocentric climate view are allowed to play in the IPCC sandbox.

Substantial chaos in the climate system injects a large component of uncertainty into all predictions of future climate change, including our ability to determine climate sensitivity. It reduces the practical value of climate modelling efforts, which cost billions of dollars and support the careers of thousands of researchers. While I am generally supportive of climate modeling, I am appropriately skeptical of the ability of current climate models to provide enough confidence to make high-cost energy policy decisions.

Comments on the Gregory et al. Climate Sensitivity Paper and Nic Lewis’s Criticism

Thursday, October 24th, 2019

NOTE: Comments for this post have all been flagged as pending for some reason. I’m testing the spam blocker to see what the problem might be. Until it is fixed, I might have to manually approve comments as I have time during the day.

A recent paper by Jonathan Gregory and co-authors in Climate Dynamics entitled How accurately can the climate sensitivity to CO2 be estimated from historical climate change? addresses in considerable detail the issues which limit our ability to determine that global warming holy grail, “equilibrium climate sensitivity” (ECS, the eventual global average surface warming response to a doubling of atmospheric CO2). Despite decades of research, climate models still exhibit climate sensitivities that range over a factor of three (about 1.5 to 4.5 deg. C for 2XCO2), and a minority of us believe the true sensitivity could be less than 1.5 deg. C.

Obviously, if one could confidently determine the climate sensitivity from observations, then the climate modelers could focus their attention on adjusting their models to reproduce that known sensitivity. But so far, there is no accepted way to determine climate sensitivity from observations. So, instead the climate modeling groups around the world try different approaches to modeling the various physical processes affecting climate change and get a rather wide range of answers for how much warming occurs in response to increasing atmospheric CO2.

One of the problems is that increasing CO2 as a climate forcing is unique in the modern instrumental record. Even if we can measure radiative feedbacks in specific situations (e.g., month to month changes in tropical convection) there is no guarantee that these are the same feedbacks that determine long-term sensitivity to increasing CO2. [If you are one of those who believe the feedback paradigm should not be applied to climate change — you know who you are — you might want to stop reading now to avoid being triggered.]

The Lewis Criticism

The new paper uses climate models as a surrogate for the real climate system to demonstrate the difficulty in measuring the “net feedback parameter” which in turn determines climate sensitivity. While I believe this is a worthwhile exercise, Nic Lewis has objected (originally here, then reposted here and here) to one of the paper’s claims regarding errors in estimating feedbacks through statistical regression techniques. It is a rather obscure point buried in the very long and detailed Gregory et al. paper, but it is nonetheless important to the validity of Lewis and Curry (2018) published estimates of climate sensitivity based upon energy budget considerations. Theirs is not really a statistical technique (which the new paper criticizes), but a physically-based technique applied to the IPCC’s own estimates of the century time scale changes in global radiative forcing, ocean heat storage, and surface temperature change.

From what I can tell, Nic’s objection is valid. Even though it applies to only a tiny portion of the paper, it has significant consequences because the new paper appears to be an effort to de-legitimize any observational estimates of climate sensitivity. I am not questioning the difficulty and uncertainty in making such estimates with current techniques, and I agree with much of what the paper says on the issue (as far as it goes, see the Supplement section, below).

But the authors appear to have conflated those difficulties with the very specific and more physics-based (not statistics-based) climate sensitivity estimates of the Lewis and Curry (2018) paper. Based upon the history of the UN IPCC process of writing its reports, the Gregory et al. paper could now be invoked to claim that the Lewis & Curry estimates are untrustworthy. The fact is that L&C assumes the same radiative forcing as the IPCC does and basically says, the century time scale warming that has occurred (even if it is assumed to be 100% CO2-caused) does not support high climate sensitivity. Rather than getting climate sensitivity from a model that produces too much warming, L&C instead attempt to answer the question, “What is the climate sensitivity based upon our best estimates of global average temperature change, radiative forcing, and ocean heat storage over the last century?”

Vindication for the Spencer and Braswell Studies

I feel a certain amount of vindication upon reading the Gregory et al. paper. It’s been close to 10 years now since Danny Braswell and I published a series of papers pointing out that time-varying radiative forcing generated naturally in the climate system obscures the diagnosis of radiative feedback. Probably the best summary of our points was provided in our paper On the diagnosis of radiative feedback in the presence of unknown radiative forcing (2010). Choi and Lindzen later followed up with papers that further explored the problem.

The bottom line of our work is that standard ordinary least-squares (OLS) regression techniques applied to observed co-variations between top-of-atmosphere radiative flux (from ERBE or CERES satellites) and temperature will produce a low bias in the feedback parameter, and so a high bias in climate sensitivity. [I provide a simple demonstration at the end of this post]. The reason why is that time-varying internal radiative forcing (say, from changing cloud patterns reflecting more or less sunlight to outer space) de-correlates the data (example below). We were objecting to the use of such measurements to justify high climate sensitivity estimates from observations.
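In the simplest terms, writing λ for the net feedback parameter, T for the temperature anomaly, f for time-varying internal radiative forcing, and N = λT − f for the observed net radiative loss anomaly (my notation here, and a standard regression identity rather than anything from the papers):

$$ \hat{\lambda}_{\mathrm{OLS}} \;=\; \frac{\mathrm{cov}(N,T)}{\mathrm{var}(T)} \;=\; \lambda \;-\; \frac{\mathrm{cov}(f,T)}{\mathrm{var}(T)} $$

Since internal radiative forcing acts to warm the system, cov(f, T) > 0, so the estimated feedback parameter is biased low, and the inferred climate sensitivity (which goes as 1/λ) is biased high.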

Our papers were, of course, widely criticized, with even the editor of Remote Sensing being forced to resign for allowing one of the papers to be published (even though the paper was never retracted). Andrew Dessler objected to our conclusions, claiming that all cloud variations must ultimately be due to feedback from some surface temperature change somewhere at some time (an odd assertion from someone who presumably knows some meteorology and cloud physics).

So, even though the new Gregory et al. paper does not explicitly list our papers as references, it does heavily reference Proistosescu et al. (2018) which directly addresses the issues we raised. These newer papers show that our points were valid, and they come to the same conclusions we did — that high climate sensitivity estimates from the observed co-variations in temperature and radiative flux were not trustworthy.

The Importance of the New Study

The new Gregory et al. paper is extensive and makes many good conceptual points which I agree with. Jonathan Gregory has a long history of pioneering work in feedback diagnosis, and his published research cannot be ignored. The paper will no doubt figure prominently in future IPCC report writing.

But I am still trying to understand the significance of CMIP5 model results to our efforts to measure climate sensitivity from observations, especially the model results in their Fig. 5. It turns out that what they are doing with the model data differs substantially from what we try to do with radiative budget observations from our limited (~20 year) satellite record.

First of all, they don’t actually regress top of atmosphere total radiative fluxes from the models against temperature; they first subtract out their best estimate of the radiative forcing applied to those models. This helps isolate the radiative feedback signal responding to the radiative forcing imposed upon the models. Furthermore, they beat down the noise of natural internal radiative and non-radiative variability by using only annual averages. Even El Nino and La Nina events in the models will have trouble surviving annual averaging. Almost all that will remain after these manipulations is the radiative feedback to just the CO2 forcing-induced warming. This also explains why they do not de-trend the 30-year periods they analyze — that would remove most of the temperature change and thus radiative feedback response to temperature change. They also combine model runs together before feedback diagnosis in some of their calculations, further reducing “noise” from internal fluctuations in the climate system.

In other words, their methodology would seem to have little to do with determination of climate sensitivity from natural variations in the climate system, because they have largely removed the natural variations from the climate model runs. The question they seem to be addressing is a very special case: How well can the climate sensitivity in models be diagnosed from 30-year periods of model data when the radiative forcing causing the temperature change is already known and can be subtracted from the data? (Maybe this is why they term theirs a “perfect model” approach.) If I am correct, then they really haven’t fully addressed the more general question posed by their paper’s title: How accurately can the climate sensitivity to CO2 be estimated from historical climate change? The “historical climate change” in the title has nothing to do with natural climate variations.
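To make the "forcing already known" point concrete, here is a toy version in Python (my own construction with made-up numbers, not their actual procedure): when the forcing that produced the warming can be subtracted exactly, regression recovers the specified feedback almost trivially.

```python
import numpy as np

rng = np.random.default_rng(1)

lam_true = 1.3                          # specified feedback, W m^-2 K^-1
years = np.arange(140)
F = 3.7 * np.log2(1.0 + years / 140.0)  # hypothetical ramp of CO2 forcing
T = F / lam_true + 0.1 * rng.normal(size=years.size)       # crude "model" T
N = F - lam_true * T + 0.3 * rng.normal(size=years.size)   # TOA net flux

# The "perfect model" step: the forcing is KNOWN exactly, so it can be
# subtracted before regressing, isolating the feedback response.
slope = np.polyfit(T, N - F, 1)[0]
print(f"diagnosed feedback: {-slope:.2f} (specified: {lam_true})")
```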

Unfortunately — and this is me reading between the lines — these newer papers appear to be building a narrative that observations of the climate system cannot be used to determine the sensitivity of the climate system; instead, climate model experiments should be used. Of course, since climate models must ultimately agree with observations, any model estimate of climate sensitivity must still be observations-based. We at UAH continue to work on other observational techniques, not addressed in the new papers, to tease out the signature of feedback from the observations in a simpler and more straightforward manner, from natural year-to-year variations in the climate system. While there is no guarantee of success, the importance of the climate sensitivity issue requires this.

And, again, Nic Lewis is right to object to their implicitly lumping the Lewis & Curry observational determination of climate sensitivity from energy budget calculations in with statistical diagnoses of climate sensitivity, the latter of which I agree cannot yet be reliably used to diagnose ECS.

Supplement: A Simple Demonstration of the Feedback Diagnosis Problem

Whether you like the term “feedback” or not (many engineering types object to the terminology), feedback in the climate sense quantifies the level to which the climate system adjusts radiatively to resist any imposed temperature change. This radiative resistance (dominated by the “Planck effect”, the T^4 dependence of outgoing IR radiation on temperature) is what stabilizes every planetary system against runaway temperature change (yes, even on Venus).

The strength of that resistance (e.g., in Watts per square meter of extra radiative loss per deg. C of surface warming) is the “net feedback parameter”, which I will call λ. If that number is large (high radiative resistance to an imposed temperature change), climate sensitivity (proportional to the reciprocal of the net feedback parameter) is low. If the number is small (weak radiative resistance to an imposed temperature change) then climate sensitivity is high.
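For a sense of scale, here is that arithmetic with standard textbook numbers (a 255 K effective radiating temperature and 3.7 W m-2 of 2XCO2 forcing; my illustration, not a calculation from this post):

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0       # Earth's effective radiating temperature, K
F_2XCO2 = 3.7       # canonical radiative forcing for doubled CO2, W m^-2

# Planck-only ("no feedback") resistance: d(sigma*T^4)/dT = 4*sigma*T^3
lam_planck = 4.0 * SIGMA * T_EFF**3
print(f"Planck-only feedback parameter: {lam_planck:.2f} W m-2 K-1")  # ~3.8

# Sensitivity is the forcing divided by the net feedback parameter, so a
# smaller lambda (weaker radiative resistance) means higher sensitivity.
for lam in (3.8, 2.0, 1.0):
    print(f"lambda = {lam:.1f}  ->  ECS ~ {F_2XCO2 / lam:.1f} deg C")
```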

[If you object to calling it a “feedback”, fine. Call it something else. The physics doesn’t care what you call it.]

I first saw the evidence of the different signatures of radiative forcing and radiative feedback when looking at the global temperature response to the 1991 eruption of Mt. Pinatubo. When the monthly, globally averaged ERBE radiative flux data were plotted against temperature changes, and the data dots connected in chronological order, it traced out a spiral pattern. This is the expected result of a radiative forcing (in this case, reduced sunlight) causing a change in temperature (cooling) that lags the forcing due to the heat capacity of the oceans. Importantly, this involves a direction of causation opposite to that of feedback (a temperature change causing a radiative change).

The newer CERES instruments provide the longest and most accurate record of changes in top-of-atmosphere radiative balance. Here’s the latest plot for 19 years of monthly Net (reflected shortwave SW plus emitted longwave LW) radiative fluxes versus our UAH lower tropospheric temperatures.

Fig. 1. Observed monthly global average anomalies in UAH lower tropospheric temperatures (LT) versus anomalies in CERES Net radiative flux at the top-of-atmosphere, March 2000 through April 2019.

Note I have connected the data dots in chronological order. We see that "on average" (from the regression line) there appears to be about 2 W/m2 of energy lost per degree of warming of the lower troposphere. I say "appears" because some of the radiative variability in that plot is not due to feedback, and it decorrelates the data, leading to uncertainty in the slope of the regression line, which we would like to be an estimate of the net feedback parameter.

This contaminating effect of internal radiative forcing can be demonstrated with a simple zero-dimensional time-dependent forcing-feedback model of temperature change of a swamp ocean:

Cp [dΔT(t)/dt] = F(t) – λ ΔT(t)

where the left side is the change in heat content of the swamp ocean with time, and on the right side F is all of the radiative and non-radiative forcings of temperature change (in W/m2) and λ is the net feedback parameter, which multiplies the temperature departure ΔT(t) from an assumed energy equilibrium state.

While this is probably the simplest time-dependent model you can create of the climate system, it shows behavior that we see in the climate system. For example, if I make time series of low-pass filtered random numbers about zero to represent the known time scales of intraseasonal oscillations and El Nino/La Nina, and add in another time series of low-pass filtered “internal radiative forcing”, I can roughly mimic the behavior seen in Fig. 1.

Fig. 2. As in Fig. 1, but produced by a simple time-dependent forcing feedback model with a “swamp” ocean of assumed 15 m depth, and low-pass filtered random forcings which are approximately 60% radiative (e.g. random cloud variations) and 40% non-radiative (e.g. intraseasonal oscillations and ENSO). The model time step is one day, and the model output is averaged to 30 days, and run for the same period of time (230 months) as in Fig. 1.

Now, the key issue for feedback diagnosis is that even though the regression line in Fig. 2 has a slope of 1.8 W m-2 K-1, the feedback I specified in the model run was 4 W m-2 K-1. Thus, if I had interpreted that slope as indicating the sensitivity of the simple model climate system, I would have gotten 2.1 deg. C, when in fact the true specified sensitivity was only 0.9 deg. C (assuming 2XCO2 causes 3.7 W m-2 of radiative forcing).
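For readers who want to experiment, here is a minimal sketch of that kind of simulation in Python. The parameter values follow the Fig. 2 caption where given; the construction of the low-pass filtered forcings is my own guess, so the diagnosed slope will not match Fig. 2 exactly, but it should come out well below the specified feedback of 4 for the same reason.

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameters from the Fig. 2 caption; the forcing construction is my guess.
depth = 15.0                      # swamp ocean depth, m
cp = 4.19e6 * depth               # heat capacity, J m^-2 K^-1
lam = 4.0                         # specified net feedback, W m^-2 K^-1
dt = 86400.0                      # one-day time step, s
ndays = 230 * 30                  # 230 "months" of 30 days

def lowpass_noise(n, tau_days):
    # AR(1) red noise: a crude stand-in for "low-pass filtered random numbers".
    x = np.zeros(n)
    a = np.exp(-1.0 / tau_days)
    for i in range(1, n):
        x[i] = a * x[i - 1] + rng.normal()
    return x / x.std()

f_rad = 1.0 * lowpass_noise(ndays, 90)      # internal radiative forcing
f_non = 0.7 * lowpass_noise(ndays, 90)      # non-radiative (ENSO-like) forcing

T = np.zeros(ndays)               # temperature departure from equilibrium
N = np.zeros(ndays)               # "observed" TOA net loss anomaly
for i in range(1, ndays):
    T[i] = T[i - 1] + dt * (f_rad[i] + f_non[i] - lam * T[i - 1]) / cp
    N[i] = lam * T[i] - f_rad[i]  # feedback response minus radiative forcing

# Average to 30-day "months", then fit the regression line as in Fig. 2.
Tm = T.reshape(-1, 30).mean(axis=1)
Nm = N.reshape(-1, 30).mean(axis=1)
print(f"specified lambda: {lam}, regression slope: {np.polyfit(Tm, Nm, 1)[0]:.2f}")
```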

This is just meant to demonstrate how internal radiative variability in the climate system corrupts the diagnosis of feedback from observational data, which is also a conclusion of the newer published studies referenced above.

And, as I have mentioned above, even if we can diagnose feedbacks from such short term variations in the climate system, we have no guarantee that they also determine (or are even related to) the long-term sensitivity to increasing CO2.

So (with the exception of studies like L&C) be prepared for increased reliance on climate models to tell us how sensitive the climate system is.

Record Antarctic Stratospheric Warming Causes Sept. 2019 Global Temperature Update Confusion

Friday, October 4th, 2019

While the vast majority of our monthly global temperature updates are pretty routine, September 2019 is proving to be a unique exception. The bottom line is that there is nothing wrong with the UAH temperatures we originally reported. But what I discovered about last month is pretty unusual.

It all started when our global lower tropospheric (LT) temperature came in at an unexpectedly high +0.61 deg. C above the 1981-2010 average. I say “unexpected” because, as WeatherBell’s Joe Bastardi has pointed out, the global average surface temperature from NOAA’s CFS model had been running about 0.3 C above normal, and our numbers are usually not that different from that model product.

[By way of review, the three basic layers for which we compute average temperatures from the satellites are, in increasing altitude, the mid-troposphere (MT), tropopause region (TP), and lower stratosphere (LS). From these three deep-layer temperatures, we compute the lower tropospheric (LT) product using a linear combination of the three main channels, LT = 1.538MT – 0.548TP + 0.010LS.]
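As a quick arithmetic aside (my sketch, with made-up channel anomalies): the weights sum to one, and the large negative weight on TP means an anomalously cold tropopause layer pushes the computed LT anomaly warm, which becomes relevant below.

```python
# Weights of the LT linear combination quoted above (they sum to one,
# as an anomaly-preserving weighted combination should).
W_MT, W_TP, W_LS = 1.538, -0.548, 0.010
assert abs(W_MT + W_TP + W_LS - 1.0) < 1e-9

def lt_anomaly(mt, tp, ls):
    # Channel anomalies in deg C (made-up illustrative inputs below).
    return W_MT * mt + W_TP * tp + W_LS * ls

# Because the TP weight is negative, a drop in TP alone raises LT:
print(round(lt_anomaly(0.40, 0.10, 0.50), 3))   # 0.565
print(round(lt_anomaly(0.40, -0.20, 0.50), 3))  # 0.73
```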

Yesterday, John Christy noticed that the Southern Hemisphere was unusually warm in our lower stratosphere (LS) temperature product, while the Northern Hemisphere was unusually cool. This led me to look at the tropical results for our mid-troposphere (MT) and ‘tropopause’ (TP) products, which in the tropics usually track each other. A scatterplot of them revealed September 2019 to be a clear outlier, that is, the TP temperature anomaly was too cool for the MT temperature anomaly.

So, John put a notice on his monthly global temperature update report, and I added a notice to the top of my monthly blog post, that we suspected maybe one of the two satellites we are currently using (NOAA-19 and Metop-B) had problems.

As it turns out, there were no problems with the data. Just an unusual regional weather event that produced an unusual global response.

Blame it on Antarctica

Some of you might have seen news reports several weeks ago that a strong stratospheric warming (SSW) event was expected to form over Antarctica, potentially impacting weather in Australia. These SSW events are more frequent over the Arctic, and occur in winter when (put very simply) winds in the stratosphere flow inward and force air within the cold circumpolar vortex to sink (that's called subsidence). Since the stratosphere is statically stable (its temperature profile is nearly isothermal), any sinking leads to a strong temperature increase. CIRES in Colorado has provided a nice description of the current SSW event, from which I copied this graphic showing the vertical profile of temperature normally (black line) compared to that for September (red line).

By mass continuity, the air required for this large-scale subsidence must come from lower latitudes, and similarly, all sinking air over Antarctica must be matched by an equal mass of rising air, with temperatures falling. This is part of what is called the global Brewer-Dobson circulation in the stratosphere. (Note that because all of this occurs in a stable environment, it is not ‘convection’, but must be forced by dynamical processes).

As can be seen in this GFS model temperature field for today at the 30 mb level (about 22 km altitude), the SSW is still in play over Antarctica.

GFS model temperature departures from normal at about 22 km altitude in the region around Antarctica, 12 UTC 4 October 2019. Graphic from WeatherBell.com.

The following plot of both Arctic and Antarctic UAH LS temperature anomalies shows just how strong the September SSW event was, with a +13.7 deg. C anomaly averaged over the area poleward of 60 deg. S latitude. The LS product covers the layer from about 15 to 20 km altitude.

As mentioned above, when one of these warm events happens, there is cooling that occurs from the rising air at the same altitudes, even very far away. Because the Brewer-Dobson circulation connects the tropical stratosphere to the mid-latitudes and the poles, a change in one region is mirrored with opposite changes elsewhere.

As evidence of this, if I compute the month-to-month changes in lower stratospheric temperatures for a few different regions, I find the following correlations between regions (January 1979 through September 2019). These negative correlations are evidence of this see-saw effect in stratospheric temperature between different latitudes (and even hemispheres).

Tropics vs. Extratropics: -0.78

Arctic vs. S. Hemisphere: -0.70

Antarctic vs. N. Hemisphere: -0.50

N. Hemis. vs. S. Hemis.: -0.75
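For anyone wanting to reproduce this kind of number, the computation is just a correlation of first differences. A sketch with synthetic stand-in series (the real inputs would be the regional monthly LS anomaly columns from the files linked at the end of this archive):

```python
import numpy as np

def change_correlation(a, b):
    # Correlate the month-to-month CHANGES of two anomaly series,
    # not the anomalies themselves.
    return np.corrcoef(np.diff(a), np.diff(b))[0, 1]

# Synthetic stand-in series (489 months, Jan 1979 - Sep 2019); series b
# partly mirrors series a, mimicking the stratospheric see-saw.
rng = np.random.default_rng(0)
a = rng.normal(size=489)
b = -0.7 * a + 0.7 * rng.normal(size=489)
print(round(change_correlation(a, b), 2))   # strongly negative, near -0.7
```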

The intense stratospheric warming over Antarctica caused an unusually large difference between the NH and SH anomalies, which raised a red flag for John Christy.

Next I can show that the SSW event extended to lower altitudes, influencing the TP channel which we use to compute the LT product. This is important because sinking and warming at the altitudes of the TP product (roughly 8-14 km altitude) can cause cooling at those same altitudes very far away. This appears to be why I noticed the tropics having the lowest-ever TP temperature anomaly relative to the MT anomaly in September, which raised a red flag for me.

In this plot of the difference between those two channels [TP-MT] over the Antarctic, we again see that September 2019 was a clear outlier.

Conceptually, that plot shows that the SSW subsidence warming extends down into altitudes normally considered to be the upper troposphere (consistent with the CIRES plot above). I am assuming that this led to unusual cooling in the tropical upper troposphere, leading to what I thought was anomalous data. It was indeed anomalous, but the reason wasn’t an instrument problem, it was from Mother Nature.

Finally, Danny Braswell ran our software, leaving out either NOAA-19 or Metop-B, to see if there was an unusual difference between the two satellites we combine together. The global LT anomaly using only NOAA-19 was +0.63 deg. C, while that using only Metop-B was +0.60 deg. C, which is pretty close. This essentially rules out an instrument problem for the unusually warm LT value in September, 2019.

UAH Global Temperature Update for September, 2019: +0.61 deg. C (see update, below)

Tuesday, October 1st, 2019

UPDATE: (10/3/2019, 4:55 p.m. CDT): We have discovered that the last 1-2 months of LT data could be biased high. This is based upon a quick analysis of tropical temperatures where our mid-tropospheric (MT) and upper-tropospheric (TP) product anomalies are usually in good agreement. September 2019 is a clear outlier, with TP much too cold compared to MT. MT was cooler in the tropics in September than in August, but because TP fell so much more, their weighted difference produced a spuriously warm result for LT. Furthermore, the LS (lower stratospheric) temperature is at a record low in the tropics, a result which I do not believe. I will provide an update when we figure out the problem.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for September, 2019 was +0.61 deg. C, up considerably from the August value of +0.38 deg. C.

The linear warming trend since January, 1979 remains at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1981-2010) average for the last 21 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2018 01 +0.29 +0.51 +0.06 -0.10 +0.70 +1.39 +0.52
2018 02 +0.24 +0.28 +0.21 +0.05 +0.99 +1.22 +0.35
2018 03 +0.28 +0.43 +0.12 +0.08 -0.19 -0.32 +0.76
2018 04 +0.21 +0.32 +0.09 -0.14 +0.06 +1.02 +0.84
2018 05 +0.16 +0.38 -0.05 +0.01 +1.90 +0.14 -0.24
2018 06 +0.20 +0.33 +0.06 +0.11 +1.11 +0.76 -0.42
2018 07 +0.30 +0.38 +0.22 +0.28 +0.41 +0.24 +1.48
2018 08 +0.18 +0.21 +0.16 +0.11 +0.02 +0.11 +0.37
2018 09 +0.13 +0.14 +0.13 +0.22 +0.89 +0.23 +0.27
2018 10 +0.19 +0.27 +0.12 +0.30 +0.20 +1.08 +0.43
2018 11 +0.26 +0.24 +0.27 +0.45 -1.16 +0.68 +0.55
2018 12 +0.25 +0.35 +0.15 +0.30 +0.25 +0.69 +1.20
2019 01 +0.38 +0.35 +0.41 +0.35 +0.53 -0.15 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.04 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.96 +0.58
2019 04 +0.44 +0.38 +0.51 +0.53 +0.50 +0.92 +0.91
2019 05 +0.32 +0.29 +0.35 +0.39 -0.61 +0.98 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.90 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.33 +0.87
2019 08 +0.38 +0.38 +0.39 +0.42 +0.17 +0.44 +0.23
2019 09 +0.61 +0.64 +0.58 +0.60 +1.21 +0.75 +0.57

This makes September, 2019 the warmest September in the 41 year satellite record.
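As an aside on the arithmetic, the trend quoted above is an ordinary least-squares slope over the full monthly record, converted to degrees per decade. A sketch (applied here, purely to illustrate the computation, to the 21 tabulated GLOBE values; the quoted +0.13 C/decade comes from the full record in the files below):

```python
import numpy as np

def decadal_trend(anom):
    # OLS slope of a monthly anomaly series, converted to deg C per decade.
    months = np.arange(anom.size)
    return np.polyfit(months, anom, 1)[0] * 120.0

# Illustration only: the 21 GLOBE values tabulated above (far too short
# a span for a climate trend; this just shows the arithmetic).
recent = np.array([0.29, 0.24, 0.28, 0.21, 0.16, 0.20, 0.30, 0.18, 0.13,
                   0.19, 0.26, 0.25, 0.38, 0.37, 0.34, 0.44, 0.32, 0.47,
                   0.38, 0.38, 0.61])
print(round(decadal_trend(recent), 2))
```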

The UAH LT global anomaly image for September, 2019 should be available in the next few days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt