Archive for the ‘Blog Article’ Category

Tomorrow’s Total Lunar Eclipse, and a Mystery

Saturday, January 19th, 2019

Tomorrow night (January 20-21) will present the whole U.S. with a total lunar eclipse, the best one until May 15, 2022.

Totality here in Alabama will occur approximately from 10:40 to 11:40 p.m. CST. Clear weather will be restricted mostly to the southeastern U.S., and portions of the Northern Plains and Great Lakes:

A Mystery (to me, anyway)

There’s one aspect of the eclipse I cannot figure out. I’m sure the explanation will be simple, and when someone explains it to me, my response will be, “DOH!”.

The illumination of the moon during totality is due to light scattered through Earth’s atmosphere. Just as we see red sunsets, that red light will be shining on the moon from an annulus of red sunset light circling the Earth.

What I don’t understand, though, is the role of sunlight refraction (bending of sunlight) as it passes through the atmosphere at an oblique angle. The refraction occurs whether it is the moon or the sun being viewed through the limb, and I will use the example of moonlight shining through the limb.

My understanding is that light (from either the moon or sun) bends as I crudely show in the following cartoon. The “mystery” arises from the fact that we know the moon appears flattened due to refraction (this is NOT a diagram of what is happening during the eclipse… it’s a general question about how either sunlight or moonlight is refracted as it passes close to Earth’s limb):

The moon composite photo is from the ISS, so it is exactly analogous to the situation shown in the drawing.

So, the mystery: Why is the moon flattened rather than elongated? I simply don’t know. But I’m sure the explanation is simple.

Update: Mystery Solved

As I suspected, the problem was in the way I was looking at it. As Brent Auvermann suggests in the comments, here’s the proper way to look at it. The eye sees the top and bottom of the moon at 2 slightly different angles, which are normally separated by 0.5 deg. But when the view in the direction of the bottom of the moon (0.5 deg. below the top of the moon) goes through a lot of atmosphere, it gets refracted downward, and the view from that direction comes from below the moon. In other words, what is a 0.5 degree subtended angle viewed by the eye actually originates from a bigger angle than that on the other side of the Earth’s limb. That causes the bottom of the moon to be compressed into a smaller angle (flattened):
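For anyone who wants to put numbers on the flattening, here is a small Python sketch (my own illustration, not from the original post) using Bennett's standard refraction formula; the exact values depend on air temperature and pressure, so treat the output as approximate:

import math

def bennett_refraction_arcmin(apparent_alt_deg):
    # Approximate atmospheric refraction (arcminutes) at a given apparent
    # altitude, from Bennett's empirical formula for standard conditions.
    h = apparent_alt_deg
    return 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))

# Put the bottom limb of a ~0.5 deg (30 arcmin) moon on the horizon and the
# top limb 0.5 deg higher; the lower line of sight is lifted more by refraction.
r_bottom = bennett_refraction_arcmin(0.0)   # roughly 34 arcmin of lift
r_top    = bennett_refraction_arcmin(0.5)   # roughly 29 arcmin of lift

apparent_height = 30.0 - (r_bottom - r_top)
print(f"bottom limb lifted {r_bottom:.1f}', top limb lifted {r_top:.1f}'")
print(f"apparent vertical size: {apparent_height:.1f}' (flattened, not elongated)")

The bottom of the disk is lifted roughly 5 to 6 arcminutes more than the top, so a 30-arcminute moon appears squeezed to about 24 arcminutes vertically, consistent with the flattening seen in the ISS composite.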

Ocean Warming in Climate Models Varies Far More than Recent Study Suggests

Thursday, January 17th, 2019

I wanted to expand upon something that was mentioned in yesterday’s blog post about the recent Cheng et al. paper, which was widely reported with headlines suggesting a newer estimate of the rate of ocean warming is 40% higher than old estimates from the IPCC AR5 report in 2013. I demonstrated that the new dataset was only 11% warmer when compared to the AR5 best estimate of ocean warming during 1971-2010.

The point I want to reemphasize today is the huge range in ocean warming between the 33 models included in that study. Here’s a plot based upon data from Cheng’s website which, for the period in question (1971-2010), shows a factor of 8 range between the model with the least ocean warming and the model with the most warming, based upon linear trends fitted to the model curves:

Yearly ocean heat content (OHC) changes since 1971 in 33 models versus the recent Cheng reanalysis of XBT and Argo ocean temperature data for the surface to 2,000m layer. The vertical scale is in both ZettaJoules (10^21 Joules) and in deg. C (assuming an ocean area of 3.6 x 10^14 m^2). The Cheng et al. confidence interval has been inflated by 1.43 to account for the difference between the surface area of the Earth (Cheng et al. usage) and the actual ocean surface area.
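For readers who want to check the two vertical scales and the 1.43 factor in the caption, here is a rough conversion sketch; the 2,000 m layer depth and the volumetric heat capacity of seawater are round-number assumptions of mine, not values taken from the paper:

# Convert a 0-2000m ocean heat content change in ZettaJoules to an
# equivalent layer-average temperature change in deg. C.
OCEAN_AREA  = 3.6e14   # m^2, ocean area quoted in IPCC AR5
EARTH_AREA  = 5.1e14   # m^2, total surface area of the Earth
LAYER_DEPTH = 2000.0   # m
RHO_CP      = 4.1e6    # J m^-3 K^-1, approximate volumetric heat capacity of seawater

def zj_to_degC(delta_ohc_zj):
    # Temperature change of the 0-2000m layer implied by an OHC change.
    return delta_ohc_zj * 1e21 / (RHO_CP * OCEAN_AREA * LAYER_DEPTH)

print(f"100 ZJ corresponds to about {zj_to_degC(100):.3f} deg. C in the 0-2000m layer")

# The 1.43 inflation factor simply undoes Cheng et al.'s scaling of fluxes
# by ~0.7, the approximate ocean fraction of the Earth's surface:
print(f"1 / 0.7 = {1/0.7:.2f}  (exact area ratio: {EARTH_AREA/OCEAN_AREA:.2f})")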

I have also included Cheng’s reanalysis of ocean heat content (OHC) data over the same period of time, showing how well it fits the *average* of all 33 models included in the study. Cheng’s OHC dataset is now the warmest of all reanalyzed OHC datasets, which means (mark my words) it will gain the greatest favor in the next IPCC report.

Mark. My. Words.

What is disconcerting is the huge (8x) range in ocean warming between models for the period 1971-2010. This is partly due to continuing uncertainty in climate sensitivity (ranging over a factor of ~3 according to the IPCC), but also due to uncertainties in how much aerosol forcing has occurred, especially in the first half of the period in question. The amount of climate system warming in models or in nature is a function of both forcing and the system response to that forcing.
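The factor-of-8 number comes from ordinary least-squares trends fitted to each model's OHC curve. A minimal sketch of that calculation, assuming the 33 model series have been loaded into a NumPy array (the placeholder data below merely stand in for the real files on Cheng's website):

import numpy as np

years = np.arange(1971, 2011)

# Placeholder for the real data: a (40, 33) array of yearly 0-2000m OHC
# anomalies (ZJ), one column per CMIP5 model, with widely differing trends.
rng = np.random.default_rng(0)
per_model_trend = rng.uniform(2.0, 16.0, size=33)
ohc_models = per_model_trend * (years - years[0])[:, None] + rng.normal(0, 10, (40, 33))

# Least-squares linear trend (ZJ per year) for each model.
trends = np.array([np.polyfit(years, ohc_models[:, j], 1)[0]
                   for j in range(ohc_models.shape[1])])

print(f"smallest model trend: {trends.min():.2f} ZJ/yr")
print(f"largest model trend:  {trends.max():.2f} ZJ/yr")
print(f"ratio (largest/smallest): {trends.max()/trends.min():.1f}")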

If models are based upon fundamental physical principles, as we are often told, how can they give such a wide range of results? The answer, of course, is that there are some physical processes which are not well known, for example how clouds and upper tropospheric water vapor change with warming. The devil is in the details.

Dodgy Statistics

One of the problems with the results in the Cheng et al. study is how the 90% confidence intervals are computed. Most people will simply assume they are related to how well the stated numbers are known. That is, how good the various observational and model estimates of ocean warming are.

But they would be wrong.

The confidence intervals given in the paper (we are told at the end of the Supplementary Materials section) simply refer to how well each time series of OHC (whether observations or models) is fit by a regression line.

They have nothing to do with how good a certain OHC dataset is. In fact, they assume (as John Christy pointed out to me) each dataset is perfect!

In the above plot I show the difference between the quoted 90% confidence interval in the paper for the models and the 90% confidence interval I computed from the variability between the models’ warming trends, which is much more informative to most readers. The difference is huge.
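To make the distinction concrete, here is a toy comparison of the two kinds of 90% interval: the standard error of a regression fit to the ensemble-mean curve (what the paper quotes) versus the spread of the individual models' trends (what most readers would expect). The numbers are made up; only the contrast matters:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1971, 2011)

# Toy ensemble: 33 models whose underlying trends differ widely.
true_trends = rng.uniform(0.1, 0.9, size=33)
ensemble = np.array([t * (years - years[0]) + rng.normal(0, 0.5, years.size)
                     for t in true_trends])

# (a) 90% interval from fitting a line to the ensemble MEAN (paper's approach):
fit = stats.linregress(years, ensemble.mean(axis=0))
ci_fit = 1.645 * fit.stderr

# (b) 90% interval from the spread of the individual models' trends:
model_trends = np.array([stats.linregress(years, m).slope for m in ensemble])
ci_spread = 1.645 * model_trends.std(ddof=1)

print(f"CI from regression fit to the mean: +/- {ci_fit:.3f}")
print(f"CI from inter-model trend spread:   +/- {ci_spread:.3f}")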

What Cheng et al. provided for confidence intervals isn’t “wrong”. It’s simply misleading for most readers who are trying to figure out how good these various observational OHC trends are, or how uncertain the climate model OHC trends are.

Is the Average of the Climate Models Better than the Individual Models?

Cheng et al. only deal with the 33-model average, and don’t mention the huge inter-model differences. One might claim that the average of the 33 models is probably better than the individual models, anyway.

But I’m not so sure one can make such an argument.

The various climate models cannot be viewed as some sort of “truth model” about which the various modeling groups around the world have added noise. If that were the case then, yes, the average of all the models would give the best result.

Instead, each modeling group makes their own best estimate of a wide variety of forcings and physical processes in the models, and they get a wide variety of results. It is not clear which of them is closest to the truth. It could be that an outlier model is best. For example, the model with the closest agreement with our (UAH) satellite tropospheric temperatures since 1979 is the Russian model, which wasn’t even included in the new study. That model has the lowest rate of tropospheric warming since 1979 out of over 100 models we have checked.

The new OHC dataset might reduce uncertainty somewhat (although we still don’t know how accurate it is), but one also has to evaluate surface temperature trends, tropospheric temperature trends (which I believe are telling us water vapor feedback isn’t as strong as in the models), as well as uncertainties in forcings which, even if the models contained perfect physics, would still lead to different projected rates of warming.

Given all of the uncertainties, I think we are still far from understanding just how much future warming will occur from increasing CO2 concentrations in the atmosphere.

Media Reports of +40% Adjustment in Ocean Warming Were Greatly Exaggerated

Wednesday, January 16th, 2019

Summary: The recently reported upward adjustment in the 1971-2010 Ocean Heat Content (OHC) increase compared to the last official estimate from the IPCC is actually 11%, not 40%. The 40% increase turns out to be relative to the average of various OHC estimates the IPCC addressed in their 2013 report, most of which were rejected. Curiously, the new estimate is almost identical to the average of 33 CMIP climate models, yet the models themselves range over a factor of 8 in their rates of ocean warming. Also curious is the warmth-enhancing nature of temperature adjustments over the years from surface thermometers, radiosondes, satellites, and now ocean heat content, with virtually all data adjustments leading to more warming rather than less.

I’ve been trying to make sense out of the recent Science paper by Cheng et al. entitled How Fast are the Oceans Warming? The news headlines I saw which jumped out at me (and several others who asked me about them) were:

World’s Oceans Warming 40% Faster than Previously Thought (EcoWatch.com),

The oceans are heating up 40% faster than scientists realized which means we should prepare for more disastrous flooding and storms (businessinsider.com)

For those who read the paper, let me warn you: The paper itself does not have enough information to figure out what the authors did, but the Supplementary Materials for the paper provide some of what is needed. I suspect this is due to editorial requirements by Science to make articles interesting without excessive fact mongering.

One of the conclusions of the paper is that Ocean Heat Content (OHC) has been rising more rapidly in the last couple decades than in previous decades, but this is not a new finding, and I will not discuss it further here.

Of more concern is the implication that this paper introduces some new OHC dataset that significantly increases our previous estimates of how much the oceans have been warming.

As far as I can tell, this is not the case.

Dazed and Confused

Most of the paper deals with just how much the global oceans from the surface to 2,000 m depth warmed during the period 1971-2010 (40 years) which was also a key period in the IPCC 5th Assessment Report (AR5).

And here’s where things get confusing, and I wasted hours figuring out how they got their numbers because the authors did not provide sufficient information.

Part of the confusion comes from the insistence of the climate community on reporting ocean warming in energy content units of zettajoules (a zettajoule is 1,000,000,000,000,000,000,000 Joules, which is a billion trillion Joules… also a sextillion Joules, but male authors fear calling it that), rather than in what is actually measured (degrees). This leads to confusion because almost nowhere is it ever stated what assumed area of ocean was used in the computation of OHC (which is proportional to both temperature change and the volume of seawater involved in that temperature change). I’ve looked in this paper and other papers (including Levitus), and only in the 2013 IPCC report (AR5) did I find the value 3.6 x 10^14 square meters given for ocean area. (Just because we know the area of the global oceans doesn’t mean that is what is monitored, or what was used in the computation of OHC).

Causing still further confusion is that Cheng et al. then (apparently) take the ocean area, and normalize it by the entire area of the Earth, scaling all of their computed heat fluxes by 0.7. I have no idea why, since their paper does not deal with the small increase in heat content of the land areas. This is just plain sloppy, because it complicates and adds uncertainty when others try to replicate their work.

It also raises the question of why energy content? We don’t do that for the atmosphere. Instead, we use what is measured — degrees. The only reason I can think of is that the ocean temperature changes involved are exceedingly tiny, either hundredths or thousandths of a degree C, depending upon what ocean layer is involved and over what time period. Such tiny changes would not generate the alarm that a billion-trillion Joules would (or the even scarier Hiroshima bomb-equivalents).

But I digress.

The Results

I think I finally figured out what Cheng et al. did (thanks mostly to finding the supporting data posted at Cheng’s website).

The “40%” headlines derive from this portion of the single figure in their paper, where I have added in red information which is either contained in the Supplementary Materials (3-letter dataset IDs from the authors’ names) or are my own annotations:

The five different estimates of 40-year average ocean heating rates from the AR5 report (gray bars) are around 40% below the newer estimates (blue bars), but the AR5 report did not actually use these five in their estimation — they ended up using only the highest of these (Domingues et al., 2008). As Cheng mentions, the pertinent section of the IPCC report is the “Observations: Oceans” section of Working Group 1, specifically Box 3.1 which contains the numerical facts one can factmonger with.

From the discussion in Box 3.1, one can compute that the AR5-estimated energy accumulation rate in the 0-2000 m ocean layer (NOT adjusted for total area of the Earth) during 1971-2010 corresponds to an energy flux of 0.50 Watts per sq. meter. This can then be compared to newer estimates computed from Cheng’s website data (which is stated to be the data used in the Science study) of 0.52 W/m2 (DOM), 0.51 W/m2 (ISH), and 0.555 W/m2 (CHG).

Significantly, even if we use the highest of these estimates (Cheng’s own dataset) we only get an 11% increase above what the IPCC claimed in 2013 — not 40%.
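The arithmetic behind that 11% figure, with each heating rate also converted to the total 1971-2010 energy gain it implies (a quick check using the numbers quoted above; the ocean area is the AR5 value):

# Heating rates in W per m^2 of ocean area (NOT scaled by 0.7), as quoted above.
ar5_best = 0.50
new_estimates = {"DOM": 0.52, "ISH": 0.51, "CHG": 0.555}

OCEAN_AREA   = 3.6e14                 # m^2
SECONDS_40YR = 40 * 365.25 * 86400    # 1971-2010

for name, flux in new_estimates.items():
    pct = 100.0 * (flux - ar5_best) / ar5_best
    zj  = flux * OCEAN_AREA * SECONDS_40YR / 1e21
    print(f"{name}: {flux:.3f} W/m^2  ->  +{pct:.0f}% vs. the AR5 estimate, ~{zj:.0f} ZJ over 40 years")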

Agreement Between Models and Observations

Cheng’s website also contains the yearly 0-2000m OHC data from 33 CMIP5 models, from which I calculated the average warming rate, getting 0.549 W/m2 (again, not scaled by 0.7 to get a whole-Earth value). This is amazingly close to the 0.555 W/m2 Cheng gets from his reanalysis of the deep-ocean temperature data.

This is pointed to as evidence that observations support the climate models which, in turn, are of course the basis for proposed energy policy changes and CO2 emissions reduction.

How good is that multi-model warming rate? Let me quote the Science article (again, these numbers are scaled by 0.7):

“The ensemble average of the models has a linear ocean warming trend of 0.39 +/- 0.07 W/m2 for the upper 2000 m from 1971-2010 compared with recent observations ranging from 0.36 to 0.39 W/m2.”

See that +/- 0.07 error bar on the model warming rate? That is not a confidence interval on the warming rate. It’s the estimated error in the fit of a regression line to the 33-model average warming trace during 1971-2010. It says nothing about how confident we are in the warming rate, or even the range of warming rates BETWEEN models.

And that variation between the models is where things REALLY get interesting. Here’s what those 33 models’ OHC warming profiles look like, relative to the beginning of the period (1971), which shows they range over a factor of 8X (from 0.11 W/m2 to 0.92 W/m2) for the period 1971-2010!

What do we make of a near-perfect level of agreement (between Cheng’s reanalysis of OHC warming from observational data, and the average of 33 climate models), when those models themselves disagree with each other by up to a factor of 8 (700%)?

That is a remarkable stroke of luck.

It’s Always Worse than We Thought

It is also remarkable that virtually every observational dataset, whether (1) surface temperature from thermometers, (2) deep-ocean temperature measurements, or atmospheric temperature from (3) satellites and (4) radiosondes, ends up with more (not less) warming when reanalyzed for the same period. What are the chances of this? It’s like flipping a coin and almost always getting heads.

Again, a remarkable stroke of luck.

Chuck Todd Devotes an Hour to Attacking a Strawman

Thursday, January 3rd, 2019

or, All Credentialed Journalists are Sex Abusers

Meet the Depressed host Chuck Todd, sans brain.

Chuck Todd, on a recent episode of Meet the Press, highlighted the issue of global warming and climate change. He unapologetically made it clear that he wasn’t interested in hearing from people on the opposing side of the scientific issue, stating:

“We’re not going to debate climate change, the existence of it. The Earth is getting hotter. And human activity is a major cause, period. We’re not going to give time to climate deniers. The science is settled, even if political opinion is not.”

This is what’s called a “strawman” argument, where you argue against something your opponent never even claimed.

I cannot think of a single credentialed, published skeptical climate scientist who doesn’t believe in the “existence” of climate change, or that “the Earth is getting hotter”, or even that human activity is likely a “major cause”. Pat Michaels, Richard Lindzen, Judith Curry, John Christy, and myself (to name a few) all believe these things. That journalists continue to characterize us as having extremist views shows just how far journalism has fallen as a (somewhat) respectable profession.

What if I claimed that all journalists are sex abusers? Of course, no reasonable person would believe that. Yet, I would wager that up to half of the U.S. population has been led to believe that climate change skeptics are “deniers” (as in, Holocaust deniers), about whom journalist Ellen Goodman said 12 years ago,

“Let’s just say that global warming deniers are now on a par with Holocaust deniers”

At least my hypothetical claim that “journalists are sex abusers” is statistically more accurate than journalists’ claims that we skeptical scientists “deny” this, that, and the other thing (for those allegations, see Mark Halperin, Matt Lauer, Tom Brokaw, Charlie Rose, Tavis Smiley, Michael Oreskes, and others).

The fact is that even if humans are, say, 60% responsible for the warming of the global ocean and atmosphere over the last 60 years (which would be consistent with both the UN IPCC’s and Todd’s phrasing), the latest analyses (Lewis & Curry, 2018) of what this would mean lead to an eventual warming of only 1 deg. C from a doubling of atmospheric CO2 (we are currently about halfway to that doubling). That’s only 1/3 of what the IPCC claims is going to happen, and an even smaller fraction of what the ratings-boosting extremists whom journalists like to trot out will claim.
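One way to see where a number like 1 deg. C can come from (my reading of the arithmetic, not spelled out above) is to scale an energy-budget sensitivity of roughly 1.6-1.7 deg. C per CO2 doubling by the assumed 60% human contribution:

# Hypothetical attribution arithmetic: energy-budget sensitivity times the
# assumed human fraction of the observed warming.
ecs_energy_budget = 1.66   # deg. C per doubling, roughly the Lewis & Curry (2018) estimate
human_fraction    = 0.60   # the "60% responsible" figure used above

print(f"human-caused warming per CO2 doubling: ~{ecs_energy_budget * human_fraction:.1f} deg. C")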

A Nuance Chuck Todd is Ill-Prepared to Discuss

Journalists are notoriously under-informed on science issues. For example, let’s look at the claim that recent warming has been human-caused. It is easy to show that such attribution is more faith-based than science-based.

Between 2005 and 2017, the global network of thousands of Argo floats has measured an average temperature increase of the upper half of the ocean of 0.04 deg. C. That’s less than 0.004 C/year, an inconceivably small number.

Significantly, it represents an imbalance in energy flows in and out of the climate system of only 1 part in 260. That’s less than 0.5%, and climate science does not know any of the NATURAL flows of energy to that level of accuracy. The tiny energy imbalance causing the warming is simply ASSUMED to be the fault of humans and not part of some natural cycle in the climate system. Climate models are adjusted in a rather ad hoc manner until their natural energy flows balance, then increasing CO2 from fossil fuels is used as the forcing (imposed energy imbalance) causing warming.
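Here is the sort of back-of-envelope arithmetic behind those two numbers. The layer depth, seawater heat capacity, and the ~240 W/m2 value for globally averaged absorbed sunlight are round-number assumptions of mine, so the result is order-of-magnitude only:

# Energy imbalance implied by the Argo-era ocean warming quoted above.
DELTA_T  = 0.04      # deg. C warming of the upper ~half of the ocean, 2005-2017
YEARS    = 12
LAYER_M  = 2000.0    # approximate depth of the Argo-sampled layer (m)
OCEAN_A  = 3.6e14    # m^2
RHO_CP   = 4.1e6     # J m^-3 K^-1, volumetric heat capacity of seawater

heat_gain_J   = RHO_CP * OCEAN_A * LAYER_M * DELTA_T
imbalance_wm2 = heat_gain_J / (YEARS * 365.25 * 86400) / OCEAN_A

ABSORBED_SOLAR = 240.0   # W/m^2, typical globally averaged absorbed sunlight (assumption)
print(f"implied imbalance: ~{imbalance_wm2:.1f} W/m^2")
print(f"that is roughly 1 part in {ABSORBED_SOLAR/imbalance_wm2:.0f} of the natural energy flow")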

That’s circular reasoning. Or, some might say, garbage in, garbage out.

The belief in human-caused warming exceeding a level that would be relatively benign, and maybe even beneficial, is just that — a belief. It is not based upon known, established, and quantified scientific principles. It is based upon the assumption that natural climate change does not exist.

So, journalists do a lot of talking about things of which they know nothing. As Scarecrow from the Wizard of Oz said in 1939,

UAH Global Temperature Update for December 2018: +0.25 deg. C

Wednesday, January 2nd, 2019

2018 was 6th Warmest Year Globally of Last 40 Years

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for December, 2018 was +0.25 deg. C, down a little from +0.28 deg. C in November:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed, and so has the distinction between calendar months.
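For anyone reproducing the smoothed curve, the 13-month centered average is just a symmetric running mean of the monthly anomalies; a minimal pandas sketch (the Series name is hypothetical):

import pandas as pd

def centered_13_month(anoms: pd.Series) -> pd.Series:
    # Symmetric 13-month running mean, so the smoothed curve plots with no
    # time lag relative to the monthly anomalies.
    return anoms.rolling(window=13, center=True, min_periods=13).mean()

# Usage (assuming `lt_monthly` is a monthly Series of global LT anomalies):
# smoothed = centered_13_month(lt_monthly)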

Various regional LT departures from the 30-year (1981-2010) average for the last 24 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2017 01 +0.33 +0.32 +0.35 +0.11 +0.28 +0.95 +1.22
2017 02 +0.39 +0.58 +0.20 +0.08 +2.16 +1.33 +0.22
2017 03 +0.23 +0.37 +0.10 +0.06 +1.22 +1.24 +0.98
2017 04 +0.28 +0.29 +0.27 +0.22 +0.90 +0.23 +0.40
2017 05 +0.45 +0.40 +0.50 +0.42 +0.11 +0.21 +0.06
2017 06 +0.22 +0.34 +0.10 +0.40 +0.51 +0.10 +0.34
2017 07 +0.29 +0.31 +0.28 +0.51 +0.61 -0.27 +1.03
2017 08 +0.41 +0.41 +0.42 +0.47 -0.54 +0.49 +0.78
2017 09 +0.55 +0.52 +0.58 +0.54 +0.30 +1.06 +0.60
2017 10 +0.64 +0.67 +0.60 +0.48 +1.22 +0.83 +0.86
2017 11 +0.36 +0.34 +0.39 +0.27 +1.36 +0.68 -0.12
2017 12 +0.42 +0.50 +0.33 +0.26 +0.45 +1.37 +0.36
2018 01 +0.26 +0.46 +0.06 -0.11 +0.59 +1.36 +0.43
2018 02 +0.20 +0.25 +0.16 +0.04 +0.92 +1.19 +0.18
2018 03 +0.25 +0.40 +0.10 +0.07 -0.32 -0.33 +0.60
2018 04 +0.21 +0.32 +0.11 -0.12 -0.00 +1.02 +0.69
2018 05 +0.18 +0.41 -0.05 +0.03 +1.93 +0.18 -0.39
2018 06 +0.21 +0.38 +0.04 +0.12 +1.20 +0.83 -0.55
2018 07 +0.32 +0.43 +0.22 +0.29 +0.51 +0.29 +1.37
2018 08 +0.19 +0.22 +0.17 +0.13 +0.07 +0.09 +0.26
2018 09 +0.15 +0.15 +0.14 +0.24 +0.88 +0.21 +0.19
2018 10 +0.22 +0.31 +0.13 +0.34 +0.25 +1.11 +0.39
2018 11 +0.28 +0.27 +0.30 +0.50 -1.13 +0.69 +0.53
2018 12 +0.25 +0.32 +0.19 +0.32 +0.20 +0.65 +1.19

The 2018 globally averaged temperature anomaly, adjusted for the number of days in each month, is +0.23 deg. C, making 2018 the 6th warmest year in the now-40 year satellite record of global lower tropospheric temperature variations.
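The "adjusted for the number of days in each month" figure is just a day-weighted average of the 12 monthly global anomalies; a quick check using the 2018 values from the GLOBE column of the table above:

# Day-weighted 2018 annual-mean anomaly from the GLOBE column above.
monthly_2018 = [0.26, 0.20, 0.25, 0.21, 0.18, 0.21,
                0.32, 0.19, 0.15, 0.22, 0.28, 0.25]          # deg. C
days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

annual = sum(a * d for a, d in zip(monthly_2018, days_in_month)) / sum(days_in_month)
print(f"2018 day-weighted mean anomaly: {annual:+.2f} deg. C")

which reproduces the +0.23 deg. C quoted above.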

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through December 2018 remains at +0.13 C/decade.

The UAH LT global anomaly image for December, 2018 should be available in the next few days here.

The new Version 6 files should also be updated at that time, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Government Shutdown Delays UAH Global Temperature Update

Tuesday, January 1st, 2019

The NOAA CLASS system we obtain our satellite orbit files (raw data) from has been taken offline until the government shutdown ends. As a result, our UAH monthly global temperature update is delayed.

UPDATE: We have a separate data feed and so I’ll be able to post results tomorrow, Jan. 2.

Giving Credit to Willis Eschenbach

Monday, December 31st, 2018

The non-greenhouse theory of Nikolov (and now Zeller-Nikolov) continues to live on, most recently in this article I’ve been asked about on social media.

In short, it is the theory that there really isn’t a so-called “greenhouse effect”, and that the excess of planetary surface temperatures on Earth, Venus, and other planets above the Stefan-Boltzmann (SB) temperature calculated from the rate of absorbed solar radiation is due to compressional heating by the atmosphere.

This is a popular alternative explanation that I am often asked about. Of course, if there is no “greenhouse effect”, we don’t have to worry about increasing CO2 in the atmosphere and all of the global warmmongers can go home.

I have posted on this blog many times over the years all of the evidence I can think of to show there really is a greenhouse effect, but it is never enough to change the minds of those who have already convinced themselves that planetary surface temperatures are only a function of (1) absorbed sunlight and (2) atmospheric pressure, as Zeller and Nikolov claim.

I’ve always had the nagging suspicion there was a simpler proof that the Zeller-Nikolov theory was wrong, but I could never put my finger on it. My co-worker, Danny Braswell (a PhD computational physicist) and I have joked over the years that we tend to make problems too difficult… we’ve spent days working a problem when the simple solution was staring us in the face all along.

Enter citizen scientist Willis Eschenbach, a frequent contributor at Wattsupwiththat.com, who back in 2012 posted there a “proof” that Nikolov was wrong. The simplicity of the proof makes it powerful, indeed. I don’t know why I did not notice it at the time. My apologies to Willis.

Basically, the proof starts with the simplified case of the average planetary temperature without an atmosphere, which can be calculated using a single equation (the Stefan-Boltzmann equation). Conceptually, in the absence of an atmosphere, sunlight will heat the surface and the temperature will rise until the rate of emitted infrared radiation from the surface to outer space equals the rate of absorbed solar energy. (To be accurate, one also needs to take into account the fact that the planet is rotating and spherical, the rate of heat conduction into the sub-surface, and the planet’s albedo (solar reflectivity) and infrared emissivity.)

The SB equation always results in a surface temperature that is too cold compared to surface temperatures when an atmosphere is present, and greenhouse theory is traditionally invoked to explain the difference.

Significantly, Willis pointed out that if atmospheric pressure is instead what raises the temperature above the S-B value, as the Zeller-Nikolov theory claims, the rate of energy loss by infrared radiation will then go up (for the same reason a hotter fire feels hotter on your skin at a distance). But now the energy loss by the surface is greater than the energy gained, and energy is no longer conserved. Thus, warming cannot occur from increasing pressure alone.

In other words, without the inclusion of the greenhouse effect (which has downward IR emission by the atmosphere reducing the net loss of IR by the surface), the atmospheric pressure hypothesis of Zeller-Nikolov cannot explain surface temperatures above the Stefan-Boltzmann value without violation of the fundamental 1st Law of Thermodynamics: Conservation of Energy.
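To put rough numbers on the argument, here is a short sketch using Earth-like values; the albedo and the assumption of unit infrared emissivity are mine, chosen only to keep the arithmetic simple:

SIGMA  = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR  = 1361.0    # solar constant, W/m^2
ALBEDO = 0.30      # round-number Earth albedo (assumption)

absorbed = SOLAR * (1 - ALBEDO) / 4.0        # ~238 W/m^2, averaged over the sphere
T_sb = (absorbed / SIGMA) ** 0.25            # ~255 K, the no-atmosphere (S-B) value

T_surface = 288.0                            # observed global-mean surface temperature, K
emitted = SIGMA * T_surface ** 4             # ~390 W/m^2 emitted by a 288 K surface

print(f"absorbed solar:              {absorbed:6.1f} W/m^2")
print(f"Stefan-Boltzmann temp:       {T_sb:6.1f} K")
print(f"emission from 288 K surface: {emitted:6.1f} W/m^2")
print(f"shortfall to be supplied by downward IR: {emitted - absorbed:6.1f} W/m^2")

Without roughly 150 W/m2 of downward infrared emission from the atmosphere, a 288 K surface would be losing far more energy than it absorbs, which is exactly the energy-conservation problem Willis identified with the pressure-only explanation.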

This is a simple and elegant proof that radiation from the atmosphere does indeed warm the surface above the S-B value. This will be my first go-to argument from now on when asked about the no-greenhouse theory.

I like to give credit where credit is due, and Willis provided a valuable contribution here.

(For those who are not so scientifically inclined, I still like the use of a simple hand-held IR thermometer to demonstrate that the cold atmosphere can actually cause a warmer surface to become warmer still [and, no, the 2nd Law of Thermodynamics is not violated]).

2018 6th Warmest Year Globally of Last 40

Thursday, December 20th, 2018

Even before our December numbers are in, we can now say that 2018 will be the 6th warmest year in the UAH satellite measurements of global-average lower atmospheric temperatures, at +0.23 deg. C (+0.41 deg. F) above the thirty-year (1981-2010) average.

(Jan. 2, 2019 update confirms this.)

The following plot ranks all of the years from warmest to coolest, with the ten warmest and ten coolest years indicated:

The first (1979) and last (2018) years in the record are indicated in purple.

2018 is also the 40th year of satellite data for monitoring global atmospheric temperatures.

We are currently working on Version 6.1 of the dataset, which will have new diurnal drift corrections. Preliminary results suggest that the resulting linear warming trend over the 40 years (+0.13 C/decade) will not change substantially, and thus will remain considerably cooler than the average rate of warming across the IPCC climate models used for energy policy, CO2 emissions reductions, and the Paris Agreement.

The Five Questions Global Warming Policy Must Answer

Tuesday, December 18th, 2018

It is no secret that I doubt increasing CO2 in the atmosphere will have enough negative effects on the global environment to warrant the extreme cost to humanity of substantially reducing those effects. Note that this statement has both science and energy policy components. In fact, with “global greening” we should consider the possibility of net positive benefits.

The public perception of global warming risks has involved a mixture of exaggerated claims regarding both the science and the energy policy, instigated by a minority of activist scientists and amplified by an eager news media. In my Kindle e-book Global Warming Skepticism for Busy People, I list 5 questions I believe must be answered in the affirmative before embarking on any large-scale decarbonization of the global economy:

The Five Big Questions
1) Is warming and associated climate change mostly human-caused?
2) Is the human-caused portion of warming and associated climate change large enough to be damaging?
3) Do the climate models we use for proposed energy policies accurately predict climate change?
4) Would the proposed policy changes substantially reduce climate change and resulting damage?
5) Would the policy changes do more good than harm to humanity?

As I state in my book, it is not obvious that the answer to any of the five questions is yes, let alone all of them. The first three questions deal with the science, and the last two deal with energy policy.

Regarding the first question, I might concede it is indeed possible most of the warming since the 1950s is human-caused. This is a core conclusion of the IPCC’s 5th Assessment Report (AR5).

But so what? What it acknowledges is rather unremarkable, given (as we will see) the rather slow rate of global warming. As the second question asks, is the human component large enough to cause substantial damage? There has yet to be any good evidence produced that weather extremes are worse in recent decades than in previous centuries. Even warming itself appears to have begun in centuries past, before humans could be blamed; proxy evidence of receding glaciers and low Arctic sea ice extent in earlier eras (and very high extent during the Little Ice Age) begs the questions of just how large natural climate variations have been, and what the naturally preferred state of the climate system is anyway.

This leads to the third question, which has to do with the fact that the latest generation of climate models produces, on average, about 2 times too much warming compared to the rates at which the global surface temperature and deep ocean have been observed to warm, with the latest energy budget study (which makes the same climate forcing assumptions over the last 100+ years as the models) suggesting more like 1.6 deg. C of eventual warming from a doubling of atmospheric CO2, rather than the 3.2 deg. C projected by the average climate model. (And even THAT assumes ALL of the warming is human-caused!)
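The energy-budget estimates mentioned above boil down to one ratio: observed warming divided by the net forcing that produced it, scaled to the forcing of a CO2 doubling. A sketch of that standard formula with placeholder inputs (these are illustrative values, not the actual Lewis & Curry numbers):

def energy_budget_ecs(delta_T, delta_F, delta_Q, F_2x=3.7):
    # Standard energy-budget sensitivity estimate:
    #   ECS = F_2x * dT / (dF - dQ)
    # delta_T: observed warming (deg. C); delta_F: forcing change (W/m^2);
    # delta_Q: change in the Earth's energy imbalance (W/m^2);
    # F_2x: forcing from doubled CO2 (W/m^2).
    return F_2x * delta_T / (delta_F - delta_Q)

# Illustrative inputs only: ~0.8 deg. C of warming, ~2.5 W/m^2 of forcing
# change, ~0.6 W/m^2 taken up by the ocean.
print(f"ECS ~ {energy_budget_ecs(0.8, 2.5, 0.6):.1f} deg. C per CO2 doubling")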

In the last 40 years, the discrepancy between models and observations for the globally-averaged lower atmosphere looks like this:

The discrepancy in the surface temperatures is less dramatic, but growing:

How can such models, which are increasingly portrayed as accurate, be defended with a straight face for energy policy decisions? The amount of warming they produce is not based upon physical first principles, as is often claimed. That some warming should occur is based upon fairly solid principles, but the amount of warming from increasing CO2 is entirely debatable.

The dirty little secret is that the models are tuned so that only increasing CO2 causes warming, since the various uncertain sources of natural climate change are either not known well enough to include, or are purposely programmed out of the models. (How do I know? Because NONE of the natural energy flows in and out of the climate system are known to the accuracy [about 1%] needed to blame recent warming on increasing CO2, rather than on Mother Nature. Those natural energy flows in the models are simply forced to be in balance, and so the cause of model warming ends up being anthropogenic. Thus the models use circular reasoning to establish human causation.)

The fourth and fifth questions have to do with whether we can really reduce CO2 emissions as long as humanity needs fossil fuels to reduce poverty and create prosperity. I have nothing against alternative energy sources per se, as long as they are practical and cost-competitive. Everything humanity does requires energy, and as long as China and India continue to reduce poverty with ever-growing usage of fossil fuels, global CO2 emissions will continue to increase, no matter what the United States does. With about 1 billion people in the world still without electricity, I believe it is immoral to deprive them of access to affordable energy.

Out of the 5 Big Questions, which are most important? Ultimately, economics is what rules people’s lives. Poverty kills, and forcing people to use more expensive energy will worsen poverty.

In France we are seeing the violent push-back against green energy policies (among other, mainly economic, issues), and we haven’t even yet reached the point where policies will reduce future CO2 emissions by enough to measure the effect by the end of this century in terms of global temperature. So, if you think the Paris riots are bad, wait until you see the public response to policies that will reduce future CO2 emissions by, say, 50%.

But we cannot ignore the science. What if the science was absolutely certain we were in for 20 deg. C of warming and a collapse of the Antarctic ice sheet, with 200 ft. of sea level rise? Then humanity might be willing to make large sacrifices to save itself. So, the science does matter… the question is, can it be trusted?

Based upon the observed rate of global warming (which is too small for any individual to feel in their lifetime), and failed climate model projections, I’d say the current state of the science is not yet ready for primetime.

For now, the science supports some modest and mostly harmless warming, but not enough warming to justify CO2 emissions reductions that would destroy the global economy, worsen global poverty, and have no measurable effect on global temperatures by the end of this century anyway.

Allstate Should Pull this Ad and Apologize for Misleading the Public

Monday, December 17th, 2018

I’ve been meaning to comment about this TV ad for Allstate insurance, which enraged me the first time I saw it. Allstate knows better (the insurance business deals with probability and statistics) and they knew this was a lie when they put the ad together:

In the ad, actor Dennis Haysbert says:

“A once-in-500-year storm should happen once every 500 years, right? The fact is, there have been 26 in the last decade.”

Setting aside the fact that we don’t have enough statistics to say anything meaningful about what happens over 500 years (hydrologists generally prefer to stick to 100-year return periods when talking about rare events), Allstate knows very well that such statistics refer to the repeat period for the same location… not for (say) the whole United States. It is not unusual for once-in-100-year weather events to occur more than once, maybe several times, each year somewhere in the U.S.
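A toy calculation shows why a couple dozen "1-in-500-year" events per decade is unremarkable when you count over many places at once; the number of effectively independent locations below is purely illustrative:

# Expected number of "1-in-500-year" exceedances in a decade, counted across
# many effectively independent locations rather than at a single location.
annual_prob  = 1.0 / 500.0
years        = 10
n_locations  = 1300        # illustrative count of independent U.S. locations

expected = n_locations * years * annual_prob
print(f"expected 500-year events per decade, nationwide: ~{expected:.0f}")

With on the order of a thousand independent locations, a couple dozen such events per decade is just what the statistics predict, not evidence of anything unusual.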

I consider this false advertising because Allstate knows better, and is purposely misleading the public to make more money.