Archive for 2016

First Week of 2017: Record Cold, 48 States Going Below Freezing

Wednesday, December 28th, 2016

It is increasingly looking like the first full week of 2017 will be greeted with a cold air outbreak over the Lower 48 states that will be widespread and persistent.

Early next week the cold air will enter the U.S. through Montana and the Dakotas, where temperatures will likely plunge into the minus 30 deg F (or colder) range.

By the end of the week, single digits could extend into the southeast U.S., and a hard freeze could push into central Florida (graphic courtesy of Weatherbell.com):

GFS model forecast surface temperatures for Friday morning, Jan. 6, 2017.

As can be seen, substantial portions of all 48 states might well be below 32 deg. F.

At the longer range, there appears to be a reinforcing plunge of even more frigid air heading south out of northwest Canada in the second week of January.

Science Under President Trump: End the Bias in Government-Funded Research

Wednesday, December 21st, 2016

You might expect that my background in climate research would mean my suggestions to a Trump Administration would be all climate-related. And there’s no question that climate would be a primary focus, especially neutering the EPA’s Endangerment Finding, which, if left unchecked, will weaken our economy and destroy jobs with no measurable benefit to the climate system.

But there’s a bigger problem in U.S. government funded research of which the climate issue is just one example. It involves bias in the way that government agencies fund science.

Government funds science to support pre-determined policy outcomes

So, you thought government-funded science is objective?

Oh, that’s adorable.

Since politicians are ultimately in charge of deciding how much money agencies receive to dole out to the research community, it is inevitable that politics and desired outcomes influence the science the public pays for.

Using climate as an example, around thirty years ago various agencies started issuing requests for proposals (RFPs) for scientists to research the ways in which humans are affecting climate. Climate research up until that time was mostly looking into natural climate fluctuations, since the ocean-atmosphere is a coupled nonlinear dynamical system, capable of producing climate change without any external forcing whatsoever.

Giddy from the regulatory success of the 1987 Montreal Protocol in limiting production of ozone-destroying chemicals, the government turned its sights on carbon dioxide and global warming.

While ozone was a relatively minor issue with minor regulatory impact, CO2 is the Big Kahuna. Everything humans do requires energy, and for decades to come that energy will mostly come from fossil fuels, the burning of which produces CO2.

The National Academies, which are supposed to provide independent advice to the nation on new directions in science, were asked by the government to tell the government to study human causes of climate change. (See how that works?)

Research RFPs were worded in such a way that researchers could blame virtually any change they saw on humans, not Mother Nature. And as I like to say, if you offer scientists billions of dollars to find something… they will do their best to find it. As a result, every change researchers saw in nature was suddenly mankind’s fault.

The problem with attribution in global warming research is that any source of warming will look about the same, whether human-caused or nature-caused. The land will warm faster than the ocean. The high northern latitudes will warm the most. Winters will warm somewhat more than summers. The warming will be somewhat greater at 10 km altitude than at the surface. It doesn’t matter what caused the warming. So, it’s easy for the experts to say the warming is “consistent with” human causation, without mentioning it could also be “consistent with” natural causation.

The result of this pernicious, incestuous relationship between government and the research community is biased findings by researchers tasked to find that which they were paid to find. The problem has been studied at the Cato Institute by Pat Michaels, among others; Judith Curry has provided a good summary of some of the related issues.

The problem is bigger than climate research

The overarching goal of every regulatory agency is to write regulations. That’s their reason for existence.

It’s not to strengthen the economy. Or protect jobs. It’s to regulate.

As a result, the EPA continues the push to make the environment cleaner and cleaner, no matter the cost to society.

How does the EPA justify, on scientific grounds, the effort to push our pollution levels to near-zero?

It comes from the widespread assumption that, if huge amounts of some substance are known to be a danger, then even tiny amounts must be a danger as well.

This is how the government can take, say, extreme radiation exposure, which is lethal, and extrapolate it to the claim that thousands of people die every year from even low levels of radiation exposure.

The only problem is that it is probably not true; it is the result of bad statistical analysis. The assumption that any amount of a potentially dangerous substance is also dangerous is the so-called linear no-threshold (LNT) assumption, which undergirds much of our over-regulated society.

In fact, decades of research by people like Ed Calabrese has suggested that exposure to low levels of things which are considered toxic in large amounts actually strengthens the human body and makes it more resilient — even exposure to radiation. It is why you let your children get sick: it strengthens their immune systems later in life. If you protected them from all illnesses, that protection could prove fatal later in life. Read about the Russian family Lost in the Taiga for 40 years, and how their eventual exposure to others led to their deaths due to disease.

The situation in climate change is somewhat similar. It is assumed that any climate change is bad, as if climate never changed before, or as if there is some preferred climate state that keeps all forms of life in perpetual peace and harmony.

But, if anything, some small amount of warming is probably beneficial to most forms of life on Earth, including humans. The belief that all human influence on the environment is bad is not scientific, but religious, and is held by most researchers in the Earth sciences.

In my experience, it is unavoidable that scientists’ culture, worldview, and even religion impact the way they interpret data. But let that bias be balanced by other points of view. Since CO2 is necessary for life on Earth, an unbiased scientist would take that into account before pontificating on the supposed dangers of CO2 emissions. That level of balance is seldom seen in today’s research community. If you don’t toe the line and produce research results that support desired government policy outcomes, you won’t get funded.

Over-regulation kills people

You might ask, what’s wrong with making our environment ever-cleaner? Making our food ever-safer? Making our radiation exposure ever-lower?

The answer is that it is expensive. And as any economist will tell you (except maybe Paul Krugman), the money we spend on such efforts is not available to address more pressing problems.

Since poverty is arguably the most lethal of killers, I believe we have a moral obligation to critically examine any regulations which have the potential of making poverty worse.

And that’s what is wrong with the Precautionary Principle, a popular concept in environmental circles, which states that we should avoid technologies which carry potential risk for harm.

The trouble is that you also add risk when you deny society technological benefits based upon your risk-averse view of their potential side effects. Costs always have to be weighed against benefits. That’s the way everyone lives their lives, every day.

Are you going to stop feeding your children because they might choke on food and die? Are you going to stop driving your car because there are 40,000 automobile deaths per year?

Oh, you don’t drive? Well, are you going to stop crossing the street? That’s also a dangerous activity.

Every decision humans make involves a cost-vs-benefit tradeoff. We do it consciously and subconsciously.

Conclusions & Recommendations

In my opinion, we are an over-regulated society. Over-regulation not only destroys prosperity and jobs, it ends up killing people. And political pressures in government to perform scientific research that favors pre-determined policy outcomes are part of the problem.

Science is being misused, prostituted if you wish.

Yes, we need regulations to help keep our air, water, and food reasonably clean. But government agencies must be required to take into account the costs and risks their regulations impose upon society.

Just as too much pollution can kill people, so too can too much regulation of pollution.

I don’t believe that cutting off funding for research into human causes of climate change is the answer. Instead, require that a portion of existing climate funding be put into investigating natural causes of climate change, too. Maybe call it a Red Team approach. This then removes the bias in the existing way such research programs are worded and funded.

I’ve found that the public is very supportive of the idea that climate changes naturally, and until we determine how much of the change we’ve seen is natural, we cannot say how much is human-caused.

While any efforts to reduce the regulatory burden will be met with claims that the new administration is out to kill your children, I would counter these objections with, “No, expensive regulations will kill our children, due to the increased poverty and societal decay they will cause. 22,000 children die each day in the world due to poverty; in contrast, we aren’t even sure if anyone has ever died due to human-caused global warming.”

Using a simple analogy, you can make your house 90% clean and safe relatively easily, but if you have to pay to make it 100% clean and safe (an impossible goal), you will no longer be able to afford food or health care. Is that what we want for our children?

The same is true of our government’s misguided efforts to reduce human pollution to near-zero.

U.S. Colder Now than All of Last Winter

Sunday, December 18th, 2016

If it seems like the current cold snap is unusual, you are right.

As of 7 a.m. EST this morning, Sunday, Dec. 18, the average temperature across the Lower 48 states of the U.S. is colder than at any time all last winter.

As this plot of hourly temperatures shows, the average temperature is 16 deg. F, which is 4 deg. colder than at any time last winter (graphic courtesy of Weatherbell.com):

Even the unusual warmth remaining in the Southeast U.S. is not enough to offset the frigid airmass much of the country is now experiencing. And the coldest part of winter is still six weeks away.

New Location for UAH Version 6 Text Files

Tuesday, December 13th, 2016

Now that our paper describing the UAH Version 6 methodology is in-press (publication date unknown), the text files containing monthly global and regional deep-layer temperature anomalies are now in a new location, without the “beta” identifier:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt
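For anyone who wants to script the download, here is a minimal Python sketch of how one might pull the global LT anomalies out of the new text file. It is not an official UAH tool, and the column layout assumed here (year, month, then the global anomaly in the third column) should be verified against the header line in the file itself:

import urllib.request

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"

def read_uah_global(url=URL):
    """Return a list of (year, month, global_anomaly) tuples from the UAH text file."""
    rows = []
    with urllib.request.urlopen(url) as f:
        for raw in f.read().decode("ascii", errors="replace").splitlines():
            parts = raw.split()
            # Data rows start with a 4-digit year; header and trailer lines do not, so skip them.
            if len(parts) >= 3 and parts[0].isdigit() and len(parts[0]) == 4:
                rows.append((int(parts[0]), int(parts[1]), float(parts[2])))
    return rows

if __name__ == "__main__":
    data = read_uah_global()
    print(data[-1])   # most recent month in the file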

UAH Global Temperature Update for November 2016: +0.45 deg. C

Thursday, December 1st, 2016

November Temperature Up a Little from October; 2016 Almost Certain to be Warmest in 38 Year Satellite Record

NOTE: This is the twentieth monthly update with our new Version 6.0 dataset. Differences versus the old Version 5.6 dataset are discussed here. The paper describing the methodology has been accepted for publication.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for November 2016 is +0.45 deg. C, up a little from the October value of +0.41 deg. C (click for full size version):

[Figure: UAH Version 6 global lower tropospheric temperature anomalies, 1979 through November 2016.]

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 23 months are:

YEAR MO GLOBE NHEM. SHEM. TROPICS
2015 01 +0.30 +0.44 +0.15 +0.13
2015 02 +0.19 +0.34 +0.04 -0.07
2015 03 +0.18 +0.28 +0.07 +0.04
2015 04 +0.09 +0.19 -0.01 +0.08
2015 05 +0.27 +0.34 +0.20 +0.27
2015 06 +0.31 +0.38 +0.25 +0.46
2015 07 +0.16 +0.29 +0.03 +0.48
2015 08 +0.25 +0.20 +0.30 +0.53
2015 09 +0.23 +0.30 +0.16 +0.55
2015 10 +0.41 +0.63 +0.20 +0.53
2015 11 +0.33 +0.44 +0.22 +0.52
2015 12 +0.45 +0.53 +0.37 +0.61
2016 01 +0.54 +0.69 +0.39 +0.84
2016 02 +0.83 +1.17 +0.50 +0.99
2016 03 +0.73 +0.94 +0.52 +1.09
2016 04 +0.71 +0.85 +0.58 +0.94
2016 05 +0.55 +0.65 +0.44 +0.72
2016 06 +0.34 +0.51 +0.17 +0.38
2016 07 +0.39 +0.48 +0.30 +0.48
2016 08 +0.43 +0.55 +0.32 +0.49
2016 09 +0.44 +0.49 +0.39 +0.37
2016 10 +0.41 +0.42 +0.39 +0.46
2016 11 +0.45 +0.41 +0.50 +0.37

To see how we are now progressing toward a record warm year in the satellite data, the following chart shows the average rate of cooling for the rest of 2016 that would be required to tie 1998 as warmest year in the 38-year satellite record:

[Figure: UAH v6.0 LT anomalies with the 2016 projection relative to 1998.]

Based upon this chart, it now seems virtually impossible for 2016 to not be a record warm year in the UAH dataset.

UPDATE: It should be pointed out that 2016 will end up being 0.03-0.04 deg. C warmer than 1998, which is probably not a statistically significant difference given the uncertainties in the satellite dataset adjustments.
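A quick back-of-the-envelope check, using the Jan-Nov numbers in the table above, shows why a record is virtually locked in. The 1998 annual mean used below (about +0.48 deg. C) is an approximate read of the satellite record rather than an official value, so treat it as an assumption; a minimal Python sketch:

monthly_2016 = [0.54, 0.83, 0.73, 0.71, 0.55, 0.34,
                0.39, 0.43, 0.44, 0.41, 0.45]        # Jan-Nov 2016, deg. C (from the table above)
annual_1998 = 0.48                                    # assumed 1998 annual mean, deg. C

# December anomaly that would make the 2016 annual mean exactly tie 1998
dec_needed = 12 * annual_1998 - sum(monthly_2016)
print(f"December 2016 anomaly needed to tie 1998: {dec_needed:+.2f} deg. C")
# roughly -0.06 deg. C, i.e. about half a degree colder than November in a single month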

The “official” UAH global image for November, 2016 should be available in the next several days here.

The new Version 6 files (use the ones labeled “beta5”) should be updated soon, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta5.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tmt/uahncdc_mt_6.0beta5.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/ttp/uahncdc_tp_6.0beta5.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tls/uahncdc_ls_6.0beta5.txt

Global Warming: Policy Hoax versus Dodgy Science

Thursday, November 17th, 2016

In the early 1990s I was visiting the White House Science Advisor, Sir Prof. Dr. Robert Watson, who was pontificating on how we had successfully regulated Freon to solve the ozone depletion problem, and now the next goal was to regulate carbon dioxide, which at that time was believed to be the sole cause of global warming.

I was a little amazed at this cart-before-the-horse approach. It really seemed to me that the policy goal was being set in stone, and now the newly-formed United Nations Intergovernmental Panel on Climate Change (IPCC) had the rather shady task of generating the science that would support the policy.

Now, 25 years later, public concern over global warming (aka climate change) remains at the bottom of the list of environmental concerns.

Why is that?

Maybe because people don’t see its effects in their daily lives.

1) By all objective measures, severe weather hasn’t gotten worse.

2) Warming has been occurring at only half the rate that climate models and the IPCC say it should be.

3) CO2 is necessary for life on Earth. It has taken humanity 100 years of fossil fuel use to increase the atmospheric CO2 content from 3 parts to 4 parts per 10,000. (Please don’t compare our CO2 problem to Venus, which has 230,000 times as much CO2 as our atmosphere).

4) The extra CO2 is now being credited with causing global greening.

5) Despite handwringing over the agricultural impacts of climate change, current yields of corn, soybeans, and wheat are at record highs.

As an example of the disconnect between reality and the climate models being relied upon to guide energy policy, here are the yearly growing-season average temperatures in the U.S. 12-state corn belt (official NOAA data), compared to the average of the climate model projections used by the IPCC:

[Figure: U.S. corn belt growing-season (JJA) average temperatures through 2016 versus the average of 42 CMIP5 climate models.]

Yes, there has been some recent warming. But so what? What is its cause? Is it unusual compared to previous centuries? Is it necessarily a bad thing?

And, most important from a policy perspective: what can we do about it anyway?

The Policy Hoax of Global Warming

Rush Limbaugh and I have had a good-natured mini-disagreement over his characterization of global warming as a “hoax”. President-elect Trump has also used the “hoax” term.

I would like to offer my perspective on the ways in which global warming is indeed a “hoax”, but also a legitimate subject of scientific study.

While it might sound cynical, global warming has been used politically in order for governments to gain control over the private sector. Bob Watson’s view was just one indication of this. As a former government employee, I can attest to the angst civil servants feel over remaining relevant to the taxpayers who pay their salaries, and to the resulting desire to continually increase the role of government in our daily lives.

In 1970, the Environmental Protection Agency (EPA) was given a legitimate mandate to clean up our air and water. I remember the pollution crises we were experiencing in the 1960s. But as those problems were solved, the EPA found itself in the precarious position of possibly outliving its usefulness.

So, the EPA embarked on a mission of ever-increasing levels of regulation. Any manmade substance that had any evidence of being harmful in large concentrations was a target for regulation. I was at a Carolina Air Pollution Control Association (CAPCA) meeting years ago where an EPA employee stated to the group that “we must never stop making the environment cleaner” (or something to that effect).

There were gasps from the audience.

You see, there is a legitimate role of the EPA to regulate clearly dangerous or harmful levels of manmade pollutants.

But it is not physically possible to make our environment 100% clean.

As we try to make the environment ever cleaner, the cost goes up dramatically. You can make your house 90% cleaner relatively easily, but making it 99% cleaner will take much more effort.

As any economist will tell you, money you spend on one thing is not available for other things, like health care. So, the risk of over-regulating pollution is that you end up killing more people than you save, because if there is one thing we know kills millions of people every year, it is poverty.

Global warming has become a reason for government to institute policies, whether they be a carbon tax or whatever, using a regulatory mechanism which the public would never agree to if they knew (1) how much it will cost them in reduced prosperity, and (2) how little effect it will have on the climate system.

So, the policy prescription does indeed become a hoax, because the public is being misled into believing that their actions are going to somehow make the climate “better”.

Even using the IPCC’s (and thus the EPA’s) numbers, there is nothing we can do energy policy-wise that will have any measurable effect on global temperatures.

In this regard, politicians using global warming as a policy tool to solve a perceived problem is indeed a hoax. The energy needs of humanity are so large that Bjorn Lomborg has estimated that in the coming decades it is unlikely that more than about 20% of those needs can be met with renewable energy sources.

Whether you like it or not, we are stuck with fossil fuels as our primary energy source for decades to come. Deal with it. And to the extent that we eventually need more renewables, let the private sector figure it out. Energy companies are in the business of providing energy, and they really do not care where that energy comes from.

The Dodgy Science of Global Warming

The director of NASA/GISS, Gavin Schmidt, has just thrown down the gauntlet, warning President-elect Trump not to mess with their global warming research.

Folks, it’s time to get out the popcorn.

Gavin is playing the same card that the former GISS director, James Hansen, played years ago when the Bush administration tried to “rein in” Hansen from talking unimpeded to the press and Congress.

At the time, I was the Senior Scientist for Climate Studies at NASA/MSFC, and NASA had strict regulations regarding talking to the press and Congress. I abided by those regulations; Hansen did not. When I grew tired of them restricting my “freedoms,” I exercised my freedom — to resign from NASA and go to work at a university.

Hansen instead decided to play the ‘persecuted scientist’ card. After all, he (and his supporters in the environmental community) were out to Save The Earth™, and Gavin is now going down that path as well.

I can somewhat sympathize with Gavin that “climate change” is indeed a legitimate area of study. But he needs to realize that the EPA-like zeal that the funding agencies (NASA, NOAA, DOE, NSF) have used to characterize ALL climate change as human-caused AND as dangerous would eventually cause a backlash among those who pay the bills.

We The People aren’t that stupid.

So now climate research is finding itself at a crossroads. Scientists need to stop mischaracterizing global warming as settled science.

I like to say that global warming research isn’t rocket science — it is actually much more difficult. At best it is dodgy science, because there are so many uncertainties that you can get just about any answer you want out of climate models by using those uncertainties as tuning knobs.

The only part that is relatively settled is that adding CO2 to the atmosphere has probably contributed to recent warming. That doesn’t necessarily mean it is dangerous.

And it surely does not mean we can do anything about it… even if we wanted to.

Super-zoom videos of supermoon rising

Tuesday, November 15th, 2016

The last couple nights I tried out my new Nikon Coolpix P900 super-zoom camera on the rising moon, at 2000 mm focal length. The two videos that follow are real-time, not time lapse, and are HD so best viewed full-screen.

This is a frame grab from one of the clips I took last night showing a jet passing by; I calculate it was about 100 miles away:

[Figure: frame grab of a jet passing in front of the supermoon.]

This video of the moonrise last night was the most spectacular here in Huntsville, as our skies have been pretty smoky from the SE U.S. wildfires, and the smoke thinned enough to see the moon as it peeked over the horizon. The tree line is about 2 miles away, and a bat flies by starting at about 1:42:

The next video was taken two nights ago, about 20 minutes after moonrise. The TV tower is almost 2 miles away.

UAH Global Temperature Update for October 2016: +0.41 deg. C

Tuesday, November 1st, 2016

October Temperature Down a Little from September

NOTE: This is the nineteenth monthly update with our new Version 6.0 dataset. Differences versus the old Version 5.6 dataset are discussed here. Note we are now at “beta5” for Version 6, and the paper describing the methodology has just been accepted for publication.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for October 2016 is +0.41 deg. C, down a little from the September value of +0.44 deg. C (click for full size version):

[Figure: UAH Version 6 global lower tropospheric temperature anomalies, 1979 through October 2016.]

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 22 months are:

YEAR MO GLOBE NHEM. SHEM. TROPICS
2015 01 +0.30 +0.44 +0.15 +0.13
2015 02 +0.19 +0.34 +0.04 -0.07
2015 03 +0.18 +0.28 +0.07 +0.04
2015 04 +0.09 +0.19 -0.01 +0.08
2015 05 +0.27 +0.34 +0.20 +0.27
2015 06 +0.31 +0.38 +0.25 +0.46
2015 07 +0.16 +0.29 +0.03 +0.48
2015 08 +0.25 +0.20 +0.30 +0.53
2015 09 +0.23 +0.30 +0.16 +0.55
2015 10 +0.41 +0.63 +0.20 +0.53
2015 11 +0.33 +0.44 +0.22 +0.52
2015 12 +0.45 +0.53 +0.37 +0.61
2016 01 +0.54 +0.69 +0.39 +0.84
2016 02 +0.83 +1.17 +0.50 +0.99
2016 03 +0.73 +0.94 +0.52 +1.09
2016 04 +0.71 +0.85 +0.58 +0.94
2016 05 +0.55 +0.65 +0.44 +0.72
2016 06 +0.34 +0.51 +0.17 +0.38
2016 07 +0.39 +0.48 +0.30 +0.48
2016 08 +0.43 +0.55 +0.32 +0.50
2016 09 +0.44 +0.50 +0.39 +0.37
2016 10 +0.41 +0.42 +0.39 +0.46

To see how we are now progressing toward a record warm year in the satellite data, the following chart shows the average rate of cooling for the rest of 2016 that would be required to tie 1998 as warmest year in the 38-year satellite record:

[Figure: UAH v6.0 LT anomalies with the 2016 projection relative to 1998.]

Based upon this chart, it would require strong cooling for the next two months to avoid 2016 being a new record-warm year (since the satellite record began in 1979) in the UAH dataset.

The “official” UAH global image for October, 2016 should be available in the next several days here.

The new Version 6 files (use the ones labeled “beta5”) should be updated soon, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta5.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tmt/uahncdc_mt_6.0beta5.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/ttp/uahncdc_tp_6.0beta5.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tls/uahncdc_ls_6.0beta5.txt

What Do 16 Years of CERES Data Tell Us About Global Climate Sensitivity?

Friday, October 21st, 2016

Short Answer: It all depends upon how you interpret the data.

It has been quite a while since I have addressed feedback in the climate system, which is what determines climate sensitivity and thus how strong human-caused global warming will be. My book The Great Global Warming Blunder addressed how climate researchers have been misinterpreting satellite measurements of variations the Earth’s radiative energy balance when trying to estimate climate sensitivity.

The bottom line is that misinterpretation of the data has led to researchers thinking they see positive feedback, and thus high climate sensitivity, when in fact the data are more consistent with negative feedback and low climate sensitivity. There have been a couple of papers — and many blog posts — disputing our work in this area, and without going into details, I will just say that I am as certain of the seriousness of the issue as I have ever been. The vast majority of our critics just repeat talking points based upon red herrings or strawmen, and really haven’t taken the time to understand what we are saying.

What is somewhat dismaying is that, even though our arguments are a natural outgrowth of, and consistent with, previous researchers’ published work on feedback analysis, most of those experts still don’t understand the issue I have raised. I suspect they just don’t want to take the time to understand it. Fortunately, Dick Lindzen took the time, and has also published work on diagnosing feedbacks in a manner that differs from tradition.

Since we now have over 16 years of satellite radiative budget data from the CERES instruments, I thought it would be good to revisit the issue, which I lived and breathed for about four years. The following in no way exhausts the possibilities for how to analyze satellite data to diagnose feedbacks; Danny Braswell and I have tried many things over the years. I am simply trying to demonstrate the basic issue and how the method of analysis can yield very different results. The following just gives a taste of the problems with analyzing satellite radiative budget data to diagnose climate feedbacks. If you want further reading on the subject, I would say our best single paper on the issue is this one.

The CERES Dataset

There are now over 16 years of CERES global radiative budget data: thermally emitted longwave radiation “LW”, reflected shortwave sunlight “SW”, and a “Net” flux which is meant to represent the total net rate of radiative energy gain or loss by the system. All of the results I present here are from monthly average gridpoint data, area-averaged to global values, with the average annual cycle removed.

The NASA CERES dataset from the Terra satellite started in March of 2000. It was followed by the Aqua satellite with the same CERES instrumentation in 2002. These datasets are combined into the EBAF dataset I will analyze here, which now covers the period March 2000 through May 2016.
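For readers who want to reproduce this kind of preprocessing, here is a rough Python sketch of the two steps described above: area-weighted global averaging, then removal of the mean annual cycle. The array shapes and variable names are illustrative assumptions, not CERES/EBAF conventions:

import numpy as np

def global_monthly_anomalies(flux, lat):
    """Area-weighted global means of a (n_months, n_lat, n_lon) field,
    with the mean annual cycle removed."""
    w = np.cos(np.deg2rad(lat))                  # grid-cell area weight ~ cos(latitude)
    w = w / w.sum()
    gmean = (flux.mean(axis=2) * w).sum(axis=1)  # zonal mean, then latitude-weighted sum
    anom = gmean.copy()
    for m in range(12):                          # subtract each calendar month's long-term mean
        anom[m::12] -= gmean[m::12].mean()
    return anom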

Radiative Forcing vs. Radiative Feedback

Conceptually, it is useful to view all variations in Earth’s radiative energy balance as some combination of (1) radiative forcing and (2) radiative feedback. Importantly, there is no known way to separate the two… they are intermingled.

But they should have very different signatures in the data when compared to temperature variations. Radiative feedback should be highly correlated with temperature, because the atmosphere (where most feedback responses occur) responds relatively rapidly to a surface temperature change. Time-varying radiative forcing, however, is poorly correlated with temperature because it takes a long time — months if not years — for surface temperature to fully respond to a change in the Earth’s radiative balance, owing to the heat capacity of the land or ocean.

In other words, the different directions of causation between temperature and radiative flux involve very different time scales, and that will impact our interpretation of feedback.

Radiative Feedback

Radiative feedback is the radiative response to a temperature change which then feeds back upon that temperature change.

Imagine if the climate system instantaneously warmed by 1 deg. C everywhere, without any other changes. Radiative transfer calculations indicate that the Earth would then give off an average of about 3.2 Watts per sq. meter more LW radiation to outer space (3.2 is a global area average… due to the nonlinearity of the Stefan-Boltzmann equation, its value is larger in the tropics, smaller at the poles). That Planck effect of 3.2 W m-2 K-1 is what stabilizes the climate system, and it is one of the components of the total feedback parameter.

But not everything would stay the same. For example, clouds and water vapor distributions might change. The radiative effect of any of those changes is called feedback, and it adds or subtracts from the 3.2 number. If it makes the number bigger, that is negative feedback and it reduces global warming; if it makes the number smaller, that is positive feedback which increases global warming.

But at no time (and in no climate model) would the global average number go below zero, because that would be an unstable climate system. If it went below zero, that would mean that our imagined 1 deg. C increase would cause a radiative change that causes even more radiative energy to be gained by the system, which would lead to still more warming, then even more radiative energy accumulation, in an endless positive feedback loop. This is why, in a traditional engineering sense, the total climate feedback is always negative. But for some reason climate researchers do not consider the 3.2 component a feedback, which is why they can say they believe most climate feedbacks are positive. It’s just semantics and does not change how climate models operate… but leads to much confusion when trying to discuss climate feedback with engineers.
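As a sanity check on the 3.2 number, you can differentiate the Stefan-Boltzmann law at the Earth’s effective emitting temperature (roughly 255 K). This back-of-envelope estimate lands near 3.8 W m-2 K-1, a bit larger than the ~3.2 W m-2 K-1 obtained from full radiative transfer calculations, but in the same ballpark; a sketch:

# Back-of-envelope Planck response: d(sigma*T^4)/dT = 4*sigma*T^3
sigma = 5.67e-8     # Stefan-Boltzmann constant, W m-2 K-4
T_eff = 255.0       # assumed effective emitting temperature, K

planck_response = 4 * sigma * T_eff**3
print(f"{planck_response:.1f} W m-2 K-1")   # ~3.8; full radiative transfer gives ~3.2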

Radiative Forcing

Radiative forcing is a radiative imbalance between absorbed sunlight and emitted infrared energy which is not the result of a temperature change, but which can then cause a temperature change (and, in turn, radiative feedback).
For example, our addition of carbon dioxide to the atmosphere through fossil fuel burning is believed to have reduced the LW cooling of the climate system by almost 3 Watts per sq. meter, compared to global average energy flows in and out of the climate system of around 240 W m-2. This is assumed to be causing warming, which will then cause feedbacks to kick in and either amplify or reduce the resulting warming. Eventually, the radiative imbalance caused by the forcing produces a temperature change that restores the system to radiative balance. The radiative forcing still exists in the system, but radiative feedback exactly cancels it out at a new equilibrium average temperature.
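For reference, the CO2 component of that forcing is usually estimated with the logarithmic approximation of Myhre et al. (1998); how close it comes to the “almost 3 W m-2” figure depends on which gases and time period are included, so take this only as a sketch of the arithmetic:

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing (W m-2) relative to a baseline concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(400.0), 2))   # ~1.9 W m-2 for 280 -> 400 ppm
print(round(co2_forcing(560.0), 2))   # ~3.7 W m-2 for a doubling (the "2XCO2" value used below)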

CERES Radiative Flux Data versus Temperature

By convention, radiative feedbacks are related to a surface temperature change. This makes some sense, since the surface is where most sunlight is absorbed.

If we plot anomalies in global average CERES Net radiative flux (absorbed solar and emitted infrared combined, accounting for the +/-0.1% variations in solar flux during the solar cycle) against surface temperature anomalies, we get the following relationship:

Fig. 1. Monthly global average HadCRUT4 surface temperature versus CERES -Net radiative flux, March 2000 through May 2016.

I’m going to call this the Dessler-style plot, which is the traditional way that people have tried to diagnose feedbacks, including Andrew Dessler. A linear regression line is typically added, and in this case its slope is quite low, about 0.58 W m-2 K-1. If that value is then interpreted as the total feedback parameter, it means that strong positive feedbacks in the climate system are pushing the 3.2 W m-2 K-1 Planck response to 0.58, which when divided into the estimated 3.8 W m-2 radiative forcing from a doubling of atmospheric CO2, results in a whopping 6.5 deg. C of eventual warming from 2XCO2!

Now that’s the kind of result that you could probably get published these days in a peer-reviewed journal!
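For readers who want to see the mechanics, the Dessler-style diagnosis is nothing more than a least-squares regression of the monthly flux anomalies on the monthly temperature anomalies, with the slope read off as the feedback parameter. A minimal sketch, assuming you already have the two global anomaly time series (with the flux series defined so that larger values mean more radiative loss to space, consistent with the “-Net” convention in Fig. 1):

import numpy as np

def diagnose_feedback(temp_anom, flux_anom, f_2xco2=3.8):
    """Return (regression slope in W m-2 K-1, implied 2XCO2 warming in deg. C,
    explained variance) from a Dessler-style regression."""
    slope = np.polyfit(temp_anom, flux_anom, 1)[0]
    r2 = np.corrcoef(temp_anom, flux_anom)[0, 1] ** 2
    return slope, f_2xco2 / slope, r2

# With the Fig. 1 data this yields roughly (0.58, 6.5, 0.03).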

What about All That Noise?

If the data in Fig. 1 all fell quite close to the regression line, I would be forced to agree that it does appear that the data support high climate sensitivity. But with an explained variance of 3%, clearly there is a lot of uncertainty in the slope of the regression line. Dessler appears to just consider it noise and puts error bars on the regression slope.

But what we discovered (e.g. here) is that the huge amount of scatter in Fig. 1 isn’t just noise. It is evidence of radiative forcing contaminating the radiative feedback signal we are looking for. We demonstrated with a simple forcing-feedback model that in the presence of time-varying radiative forcing, most likely caused by natural cloud variations in the climate system, a regression line like that in Fig. 1 can be obtained even when feedback is strongly negative!

In other words, the time-varying radiative forcing de-correlates the data and pushes the slope of the regression line toward zero, which would be a borderline unstable climate system.
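A minimal sketch of the kind of simple forcing-feedback model described above is given below; the parameter values (mixed-layer depth, forcing persistence and amplitude) are illustrative only. The point is that the regression-diagnosed slope comes out well below the feedback parameter that was actually specified:

import numpy as np

rng = np.random.default_rng(0)
n_months = 200
dt = 2.63e6                 # seconds per month
C = 2.1e8                   # heat capacity of ~50 m ocean mixed layer, J m-2 K-1 (assumption)
lam = 3.0                   # specified (strongly stabilizing) feedback, W m-2 K-1

# Red-noise internal radiative forcing, e.g. non-feedback cloud variations
N = np.zeros(n_months)
for k in range(1, n_months):
    N[k] = 0.9 * N[k - 1] + rng.normal(0.0, 0.5)

# Temperature responds to forcing minus feedback
T = np.zeros(n_months)
for k in range(1, n_months):
    T[k] = T[k - 1] + dt / C * (N[k - 1] - lam * T[k - 1])

# "Measured" radiative loss anomaly mixes feedback and forcing
R = lam * T - N
slope = np.polyfit(T, R, 1)[0]
print(f"specified feedback: {lam:.2f}, regression-diagnosed: {slope:.2f} W m-2 K-1")
# The diagnosed slope comes out far below the specified value: the forcing term
# de-correlates flux and temperature and drags the regression slope toward zero.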

This raises a fundamental problem with standard least-squares regression analysis in the presence of a lot of noise. The noise is usually assumed to be in only one of the variables, that is, one variable is assumed to be a noisy version of the other.

In fact, what we are really dealing with is two variables that are very different, and the disagreement between them can’t just be blamed on one or the other variable. But, rather than go down that statistical rabbit hole (there are regression methods assuming errors in both variables), I believe it is better to examine the physical reasons why the noise exists, in this case the time-varying internal radiative forcing.

So, how can we reduce the influence of this internal radiative forcing, to better get at the radiative feedback signal? After years of working on the problem, we finally decided there is no magic solution. If you knew what the radiative forcing component was, you could simply subtract it from the CERES fluxes before doing the statistical regression. But you don’t know what it is, so you cannot.

Nevertheless, there are things we can do that, I believe, give us a more realistic indication of what is going on with feedbacks.

Switching from Surface Temperature to Tropospheric Temperature

During the CERES period of record there is an approximate 1:1 relationship between surface temperature anomalies and our UAH lower troposphere LT anomalies, with some scatter. So, one natural question is, what does the relationship in Fig. 1 look like if we substitute LT for surface temperature?

Fig. 2. As in Fig. 1, but surface temperature has been replaced by satellite lower tropospheric temperature (LT).

Fig. 2 shows that the correlation goes up markedly, with 28% explained variance versus 3% for the surface temperature comparison in Fig. 1.

The regression slope is now 2.01 W m-2 K-1, which when divided into the 2XCO2 radiative forcing value of 3.8 gives only 1.9 deg. C warming.

So, we already see that just by changing from surface temperature to lower tropospheric temperature, we achieve a much better correlation (indicating a clearer feedback signal), and a greatly reduced climate sensitivity.

I am not necessarily advocating this is what should be done to diagnose feedbacks; I am merely pointing out how different a result you can obtain when you use a temperature variable that is better correlated with radiative flux, as feedback should be.

Looking at only Short Term Variability

So far our analysis has not considered the time scales of the temperature and radiative flux variations. Everything from the monthly variations to the 16-year trends is contained in the data.

But there’s a problem with trying to diagnose feedbacks from long-term variations: the radiative response to a temperature change (feedback) needs to be measured on a short time scale, before the temperature has time to respond to the new radiative imbalance. For example, you cannot relate decadal temperature trends and decadal radiative flux trends and expect to get a useful feedback estimate because the long period of time involved means the temperature has already partly adjusted to the radiative imbalance.

So, one of the easiest things we can do is to compute the month-to-month differences in temperature and radiative flux. If we do this for the LT data, we obtain an even better correlation, with an explained variance of 46% and a regression slope of 4.69 W m-2 K-1.

Fig. 3. As in Fig. 2, but for month-to-month differences in each variable.

If that were the true feedback operating in the climate system, it would imply only (3.8/4.69 =) 0.8 deg. C of climate sensitivity for doubled CO2 in the atmosphere(!)
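To put the three regressions side by side, the arithmetic used throughout this post is simply the assumed 2XCO2 forcing divided by the diagnosed feedback parameter; a short sketch:

F_2XCO2 = 3.8   # W m-2, the 2XCO2 forcing value used in the text

slopes = {"Fig. 1 (HadCRUT4 surface T)": 0.58,
          "Fig. 2 (UAH LT)":             2.01,
          "Fig. 3 (month-to-month LT)":  4.69}   # W m-2 K-1

for label, lam in slopes.items():
    print(f"{label}: {F_2XCO2 / lam:.1f} deg. C per CO2 doubling")
# approximately the 6.5, 1.9 and 0.8 deg. C values quoted above (small differences are rounding)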

Conclusions

I don’t really know for sure which of the three plots above is most closely related to feedback. I DO know that the radiative feedback signal should involve a high correlation, whereas the radiative forcing signal will involve a low correlation (basically, the latter often involves spiral patterns in phase-space plots of the data, due to the time lag associated with the heat capacity of the surface).

So, what the CERES data tell us about feedbacks entirely depends upon how you interpret the data… even if the data have no measurement errors at all (which is not possible).

It has always bothered me that the net feedback parameter that is diagnosed by linear regression from very noisy data goes to zero as the noise becomes large (see Fig. 1). A truly zero value for the feedback parameter has great physical significance — a marginally unstable climate system with catastrophic global warming — yet that zero value can also occur just due to any process that de-correlates the data, even when feedback is strongly negative.

That, to me, is an unacceptable diagnostic metric for analyzing climate data. Yet, because it yields values in the direction the Climate Consensus likes (high climate sensitivity), I doubt it will be replaced anytime soon.

And even if the strongly negative feedback signal in Fig. 3 is real, there is no guarantee that its existence in monthly climate variations is related to the long-term feedbacks associated with global warming and climate change. We simply don’t know.

I believe the climate research establishment can do better. Someday, maybe after Lindzen and I are long gone, the IPCC might recognize the issue and do something about it.

New Santer et al. Paper on Satellites vs. Models: Even Cherry Picking Ends with Model Failure

Tuesday, October 18th, 2016

(the following is mostly based upon information provided by Dr. John Christy)

Dr. John Christy’s congressional testimonies on 8 Dec 2015 and 2 Feb 2016, in which he stated that climate models over-forecast climate warming by a factor of 2.5 to 3, apparently struck a nerve in Climate Consensus land.

In a recently published paper in J. Climate entitled Comparing Tropospheric Warming in Climate Models and Satellite Data, Santer et al. use a combination of lesser-known satellite datasets and neglect of radiosonde data to reduce the model bias to only 1.7 times too much warming.

Wow. Stop the presses.

Part of the new paper’s obfuscation is a supposed stratospheric correction to the mid-tropospheric temperature channel the satellite datasets use. Of course, Christy’s comparisons between models and satellite data are always apples-to-apples, so the small influence of the stratosphere on the MT channel is included in both satellite and climate model data. The stratospheric correction really isn’t needed in the tropics, where the model-observation bias is the largest, because there is virtually no stratospheric influence on the MT channel there.

Another obfuscation is the reference the authors make to previously-published radiosonde comparisons:

“we do not compare model results with radiosonde-based atmospheric temperature measurements, as has been done in a number of previous studies (Gaffen et al. 2000; Hegerl and Wallace 2002; Thorne et al. 2007, 2011; Santer et al. 2008; Lott et al. 2013).”

Conveniently omitted from the list are the most extensive radiosonde comparisons published (Christy, J.R., R.W. Spencer and W.B Norris, 2011: The role of remote sensing in monitoring global bulk tropospheric temperatures. Int. J. Remote Sens. 32, 671-685, and references therein). This is the same kind of marginalization I have experienced in my previous research life in satellite rainfall estimation. By publishing a paper and ignoring the published work of others, they can marginalize your influence on the research community at large. They also keep people from finding information that might undermine the case they are trying to build in their paper.

John Christy provides this additional input:

My testimony in Dec 2015 and Feb 2016 included all observational datasets in their latest versions at that time. Santer et al. neglected the independent datasets generated from balloon measurements. The brand new “hot” satellite dataset (NOAAv4.0) used by Santer et al. to my knowledge has no documentation.

Here is my testimony of 2 Feb 2016 (pg 5):

I’ve shown here that for the global bulk atmosphere, the models overwarm the atmosphere by a factor of about 2.5. As a further note, if one focuses on the tropics, the models show an even stronger greenhouse warming in this layer … the models over-warm the tropical atmosphere by a factor of approximately 3.

Even when we use the latest satellite datasets used by Santer et al., the results back up my testimony.

Global MT trends (1979-2015, C/decade) & magnification factor, models vs. dataset:


102ModelAvg  +0.214
UWein(2)     +0.090   2.38x   radiosonde
RATPAC       +0.087   2.47x   radiosonde
UNSW         +0.092   2.33x   radiosonde
UAHv6.0      +0.072   2.97x   satellite
RSSv4.0      +0.129   1.66x   satellite
NOAAv4.0     +0.136   1.57x   satellite
ERA          +0.082   2.25x   reanalysis

The range of model warming-rate magnification versus the observational datasets goes from 1.6x (NOAAv4.0) to 3.0x, with a median value of 2.3x; in every case the models warm faster than the observations.

Tropical MT trends (1979-2015, C/decade) & magnification factor, models vs. dataset:


102ModelAvg  +0.271
UWein(2)     +0.095   2.85x   radiosonde
RATPAC       +0.068   3.96x   radiosonde
UNSW         +0.073   3.69x   radiosonde
UAHv6.0      +0.065   4.14x   satellite
RSSv4.0      +0.137   1.98x   satellite
NOAAv4.0     +0.160   1.69x   satellite
ERA          +0.082   3.31x   reanalysis

The range goes from 1.7x (NOAAv4.0) to 4.1x, with a median value of 3.3x; again, the models warm faster than the observations.
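For clarity, the “magnification factor” in these tables is simply the 102-model average trend divided by each dataset’s observed trend, and the summary numbers are medians across the datasets; a quick sketch of the arithmetic:

from statistics import median

model_avg = 0.214                      # global MT trend, C/decade (102-model average)
uah_v6 = 0.072                         # e.g. the UAHv6.0 observed trend
print(round(model_avg / uah_v6, 2))    # ~2.97x, as listed in the global table

global_factors = [2.38, 2.47, 2.33, 2.97, 1.66, 1.57, 2.25]   # factors from the global table
print(median(global_factors))          # ~2.3x, the quoted median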

Therefore, the testimony of 2 Feb 2016 is corroborated by the evidence.

Overall, it looks to me like Santer et al. twist themselves into a pretzel by cherry picking data, using a new “hot” satellite dataset that appears to be undocumented, and ignoring independent (radiosonde) evidence (since it does not support their desired conclusion), yet they still arrive at a substantial 1.7x average bias in the climate models’ warming rates.