Our Initial Comments on the Abraham et al. Critique of the Spencer & Braswell 1D model

October 23rd, 2014

Our 1D forcing-feedback-mixing model published in January 2014 (and not paywalled) addressed the global average ocean temperature changes observed from the surface to 700 m depth, with the model extending to 2,000 m depth.

We used the 1D model to obtain a consensus-supporting climate sensitivity when traditional forcings were used (mostly anthropogenic GHGs, aerosols, and volcanoes), but a much smaller climate sensitivity of 1.3 deg. C if the observed history of ENSO was included. CERES satellite measurements show ENSO naturally modulating the Earth’s radiative budget (what we called “internal radiative forcing” of the climate system).

Abraham et al. recently published an open-access paper addressing the various assumptions in our model. While we have had only a couple of days to look at it, in response to multiple requests for comment I am posting some initial reactions now.

Abraham et al. take great pains to fault the validity of a simple 1D climate model to examine climate sensitivity. But as we state in our paper (and as James Hansen has even written), in the global average all that really matters for the rate of rise of temperature is (1) forcing, (2) feedback, and (3) ocean mixing. These three basic processes can be addressed in a 1D model. Advective processes (horizontal transports) vanish in the global ocean average.
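
Here is a minimal sketch of the kind of model we mean, written for this comment rather than taken from our actual code; the layer geometry, feedback parameter, effective diffusivity, and forcing are all placeholder assumptions:

```python
import numpy as np

# Minimal 1D forcing-feedback-mixing sketch (illustrative only; not the
# Spencer & Braswell code, and all numerical values are placeholders).
NLAYERS = 40           # 40 layers x 50 m = 2,000 m model depth
DZ = 50.0              # layer thickness (m)
RHO_CP = 4.1e6         # volumetric heat capacity of seawater (J m^-3 K^-1)
LAMBDA = 3.7 / 2.2     # feedback (W m^-2 K^-1) giving ~2.2 C per CO2 doubling
KAPPA = 2.0e-4         # *effective* vertical diffusivity (m^2 s^-1)
DT = 30 * 86400.0      # one-month time step (s)

def step(T, forcing_wm2):
    """Advance the layer temperature anomalies T (K) by one time step."""
    net_top = forcing_wm2 - LAMBDA * T[0]         # TOA imbalance (W m^-2)
    flux = -KAPPA * RHO_CP * np.diff(T) / DZ      # downward heat flux (W m^-2)
    dT = np.empty_like(T)
    dT[0] = (net_top - flux[0]) * DT / (RHO_CP * DZ)
    dT[1:-1] = (flux[:-1] - flux[1:]) * DT / (RHO_CP * DZ)
    dT[-1] = flux[-1] * DT / (RHO_CP * DZ)        # insulated bottom (point #4)
    return T + dT

T = np.zeros(NLAYERS)
for _ in range(60 * 12):                          # 60 years of monthly steps
    T = step(T, forcing_wm2=1.0)                  # constant 1 W m^-2 forcing
print(T[0])                                       # surface anomaly (K)
```

With ENSO included, the forcing series would also carry the internally generated radiative term; the point is only that forcing, feedback, and vertical mixing are all the global-average budget requires.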

They further ignore the evidence we present (our Fig. 1 in Spencer & Braswell, 2014) that a 1D model might actually be preferable from the standpoint of energy conservation, since the 3D models do not appear to conserve energy – a basic requirement of virtually any physical modeling enterprise. In some of the CMIP3 models, the deep-ocean temperature changes in apparent contradiction to the radiative forcing imposed from above. Since the 3D models do not include a changing geothermal heat flux, this suggests a violation of the 1st Law of Thermodynamics. (Three of the 13 models we examined cooled most of the deep ocean since 1955, despite increasing energy input from above. How does that happen?)

On this point, how is it that Abraham et al. nitpick a 1D model that CAN explain the observations, but the authors do not fault the IPCC 3D models which CANNOT explain the observations, and possibly don’t even conserve energy in the deep ocean?

Regarding their specific summary points (in bold):

1. The model treats the entire Earth as ocean-covered.
Not true, and a red herring anyway. We model the observed change in ocean heat content since 1955, and it doesn’t matter whether the ocean covers 20% of the globe or 100%. They incorrectly state that ignoring the 30% of the Earth covered by land will bias the sensitivity estimates. This is wrong: all energy fluxes are expressed per square meter, so the calculations are independent of the area covered by ocean. We are surprised the authors (and the reviewers) did not grasp this basic point.

2. The model assigns an ocean process (El Nino cycle) which covers a limited geographic region in the Pacific Ocean as a global phenomenon…
This is irrelevant. We modeled the OBSERVED change in global average ocean heat content, including the observed GLOBAL average expression of ENSO in the upper 200 m of the GLOBAL average ocean temperature.

3. The model incorrectly simulates the upper layer of the ocean in the numerical calculation.
There are indeed different assumptions which can be made regarding how the surface temperature relates to the average temperature of the first layer, which is assumed to be 50 m thick. How these various assumptions change the final conclusion will require additional work on our part.
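
As a purely hypothetical illustration (not from our paper), the competing assumptions can be reduced to a single scaling between the surface anomaly and the layer-1 mean:

```python
# Hypothetical closure sketch: alpha = 1.0 is the assumption in our paper
# (surface anomaly equals the 50 m layer-1 mean); alpha > 1 would make
# surface changes, and hence the feedback flux, somewhat larger.
def surface_temp_anomaly(T_layer1_mean, alpha=1.0):
    return alpha * T_layer1_mean

# In the model sketch above, the feedback term would become:
# net_top = forcing_wm2 - LAMBDA * surface_temp_anomaly(T[0], alpha)
```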

4. The model incorrectly insulates the ocean bottom at 2000 meters depth.
This approximation should not matter substantially for the purpose for which the model is being used. We stopped at 2,000 m depth because the results did not depend substantially on going any deeper.

5. The model leads to diffusivity values that are significantly larger than those used in the literature.

We are very surprised this is even an issue, since we took great pains to point out in our paper that the *effective* diffusivity values used in the model are meant to represent *all* modes of vertical heat transport, not just diffusivity per se (this is also related to point #8, below). If the authors read our paper, they should know this. And why did the reviewers not catch this basic oversight? Did they even check our paper to see whether Abraham et al. were misrepresenting what it claimed? All the model requires is a way to distribute heat vertically, and a diffusion-type operator is one convenient method for doing that.

6. The model incorrectly uses an asymmetric diffusivity to calculate heat transfer between adjacent layers, and
7. The model contains incorrect determination of element interface diffusivity.

The authors discuss ways in which the diffusion operator can be implemented more accurately. This might well be the case (we need to study it more), but it should not affect the final conclusions, because we adjust the assumed effective diffusivities to best match the observations of how the ocean warms and cools at various depths. If there were a bias in the numerical implementation of the diffusion operator (even one off by a factor of 10), the effective diffusivity values would simply adjust until the model matches the observations. The important thing is that, as the surface warms, the extra heat is mixed downward in a fashion that matches the observations; arguing over the numerical implementation obscures this basic fact. Finally, a better implementation of the diffusivity calculation must still be run with a variety of effective diffusivities (and climate sensitivities) until a match with the observations is obtained, which as far as we can tell the authors did not do. The same applies to a 3D model simulation: when one major change is implemented, other model changes are often necessary to get realistic results.
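
To make points 6 and 7 concrete, here is a sketch of two standard choices for the diffusivity at a layer interface when the effective diffusivity varies with depth (illustrative only; this reproduces neither our code nor the scheme Abraham et al. advocate). Whichever is chosen, the tuning step described above re-absorbs any systematic bias:

```python
import numpy as np

# kappa: effective diffusivity assigned to each of the N model layers (m^2/s)
def interface_kappa_arithmetic(kappa):
    """Arithmetic mean at each of the N-1 interior interfaces."""
    return 0.5 * (kappa[:-1] + kappa[1:])

def interface_kappa_harmonic(kappa):
    """Harmonic mean, which weights the smaller of the two layer values."""
    return 2.0 * kappa[:-1] * kappa[1:] / (kappa[:-1] + kappa[1:])

kappa = np.array([5e-4, 2e-4, 1e-4])        # placeholder depth profile
print(interface_kappa_arithmetic(kappa))     # ~[3.5e-4, 1.5e-4]
print(interface_kappa_harmonic(kappa))       # ~[2.9e-4, 1.3e-4]
```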

8. The model neglects advection (water flow) on heat transfer.
Again, there is no advection in the global average ocean. The authors should know this, and so should the reviewers of their paper. Our *effective* diffusivity, as we state in the paper, is meant to represent all processes that cause vertical mixing of heat in the ocean, including the formation of cold deep water at high latitudes. Why did neither the authors nor the reviewers catch this basic oversight? Again, we wonder how closely anyone read our paper.

9. The model neglects latent heat transfer between the atmosphere and the ocean surface.
Not true. As we said in our paper, processes like surface evaporation, convective heat transfer, and latent heat release, while not explicitly included, are implicitly included because the atmosphere is assumed to be in convective equilibrium with the surface. Our use of a 3.2 W/m2 change in OLR per 1 deg. C of surface temperature change is the generally assumed global-average value for the radiative response of the surface-atmosphere system. This is how a surface temperature change is realistically translated into a change in top-of-atmosphere OLR, without having to explicitly include latent heat transfer, atmospheric convection, the temperature lapse rate, etc.
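
As a trivial worked example of that translation (the 3.2 W/m2 figure is from our paper; the snippet itself is merely illustrative):

```python
# Surface warming -> TOA OLR change via the assumed global-average
# 3.2 W m^-2 per deg. C; no explicit latent heat, convection, or
# lapse-rate calculation is needed at this level of abstraction.
def delta_olr_wm2(delta_t_sfc_degC, rate=3.2):
    return rate * delta_t_sfc_degC

print(delta_olr_wm2(1.0))   # 3.2 W/m2 for 1 deg. C of surface warming
print(delta_olr_wm2(0.5))   # 1.6 W/m2 for 0.5 deg. C
```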

Final Comments
If our model is so far from reality, maybe Abraham et al. can tell us why the model works when we run it in the non-ENSO mode (mainly greenhouse gas, aerosol, and volcanic forcing), yielding a climate sensitivity similar to many of the CMIP models (2.2 deg. C). If the model deficiencies are that great, shouldn’t the model lead to a biased result for this simple case? Again, they cannot obtain a “corrected” model run by changing only one thing (e.g. the numerical diffusion scheme) without sweeping the other model parameters (e.g. the effective diffusivities) to get a best match to the observations, as sketched below.
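
Here is a sketch of what that sweep looks like in practice; `run_model` (returning a modeled warming field) and `obs` (the corresponding observations) are hypothetical stand-ins:

```python
import itertools
import numpy as np

def best_fit(run_model, obs, kappas, lambdas):
    """Grid-search effective diffusivity and feedback parameter for the
    combination that best matches observed ocean warming; any change to
    the numerics requires redoing this sweep before comparing results."""
    def rmse(kappa, lam):
        return np.sqrt(np.mean((run_model(kappa=kappa, lam=lam) - obs) ** 2))
    return min(itertools.product(kappas, lambdas), key=lambda p: rmse(*p))
```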

These are our initial reactions after only a quick look at the paper. It will take a while to examine a couple of the criticisms in more detail. For now, the only one we can see which might change our conclusions in a significant way is our assumption that surface temperature changes have the same magnitude as the average temperature change in the top (50 m) layer of the model. In reality, surface changes should be a little larger, which will change the feedback strength. It will take time to address such issues, and we are now under a new DOE contract to do climate model validation.

Solar Eclipse Today and the Largest Sunspot in 18 Years

October 23rd, 2014

Just a reminder of the partial solar eclipse today, Thursday, October 23, which will give eastern U.S. watchers the best display near sunset. Do not view the sun without eye protection! (Even multiple pairs of sunglasses are unsafe.)

Giant sunspot group 2192, the largest since 1996, will also be pointed toward Earth. This spot has been “crackling” with flares, and it is a little mystifying that a large coronal mass ejection (CME) event has not yet occurred. Here’s a self-updating movie of the solar disk through today, as sunspot 2192 rotates into an Earth-pointing position:

A major CME event in the next few days from Sunspot 2192 could produce auroral displays into the middle latitudes a few days after the CME.

Over the eastern U.S. the eclipse will peak near sunset, while over the western U.S. it will occur during the afternoon and end before sunset. Skies will allow viewing over much of the country, but clouds and rain are expected at eclipse time over the Pacific Northwest, Wisconsin, and New England. Here’s a cloud forecast movie for the U.S.

I’ll be doing a time lapse video of the setting sun, weather permitting, when the partial eclipse will peak at about 40% at sunset at my location. I hope to also catch the sunspot group, which currently looks like this in visible light:

Sunspot group 2192 on October 23, 2014, as seen by the Solar Dynamics Observatory.

Here’s an eclipse calculator simulation for your location.

DO NOT view the sun with the naked eye! Advice on methods for safely viewing the sun is provided by Astro Bob at UniverseToday.com.

Why 2014 Won’t Be the Warmest Year on Record

October 21st, 2014

Much is being made of the “global” surface thermometer data, which, three-quarters of the way through 2014, now suggest the global average this year will be the warmest in the modern instrumental record.

I claim 2014 won’t be the warmest global-average year on record.

…if for no other reason than this: thermometers cannot measure global averages — only satellites can. The satellite instruments measure nearly every cubic kilometer — hell, every cubic inch — of the lower atmosphere on a daily basis. You can travel hundreds if not thousands of kilometers without finding a thermometer nearby.

(And even if 2014 or 2015 turns out to be the warmest, this is not a cause for concern…more about that later).

The two main research groups tracking global lower-tropospheric temperatures (our UAH group, and the Remote Sensing Systems [RSS] group) show 2014 lagging significantly behind 2010 and especially 1998:

[Figure: yearly global lower-tropospheric temperature anomalies from UAH and RSS, through September 2014.]

With only 3 months left in the year, there is no realistic way for 2014 to set a record in the satellite data.

Granted, the satellites are less good at sampling right near the poles, but compared to the very sparse data from the thermometer network we are in fat city coverage-wise with the satellite data.

In my opinion, though, a bigger problem than the spotty sampling of the thermometer data is the endless adjustment game applied to it. The thermometer network is a patchwork of non-research-quality instruments that were never designed to monitor long-term temperature changes to tenths or hundredths of a degree, and the huge data voids around the world are either ignored or in-filled with fictitious data.

Furthermore, land-based thermometers are placed where people live, and people build stuff, often replacing cooling vegetation with manmade structures that cause an artificial warming (urban heat island, UHI) effect right around the thermometer. The data adjustment processes in place cannot reliably remove the UHI effect because it can’t be distinguished from real global warming.

Satellite microwave radiometers, however, are equipped with laboratory-calibrated platinum resistance thermometers, which have demonstrated stability to thousandths of a degree over many years, and which are used to continuously calibrate the satellite instruments once every 8 seconds. The satellite measurements still have residual calibration effects that must be adjusted for, but these are usually on the order of hundredths of a degree, rather than tenths or whole degrees in the case of ground-based thermometers.

And, it is of continuing amusement to us that the global warming skeptic community now tracks the RSS satellite product rather than our UAH dataset. RSS was originally supposed to provide a quality check on our product (a worthy and necessary goal) and was heralded by the global warming alarmist community. But since RSS shows a slight cooling trend since the 1998 super El Nino, and the UAH dataset doesn’t, it is more referenced by the skeptic community now. Too funny.

In the meantime, the alarmists will continue to use the outdated, spotty, and heavily-massaged thermometer data to support their case. For a group that trumpets the high-tech climate modeling effort used to guide energy policy — models which have failed to forecast (or even hindcast!) the lack of warming in recent years — they sure do cling bitterly to whatever will support their case.

As British economist Ronald Coase once said, “If you torture the data long enough, it will confess to anything.”

So, why are the surface thermometer data used to the exclusion of our best technology — satellites — when tracking global temperatures? Because they better support the narrative of a dangerously warming planet.

Except, as far as the public can tell, the changes in global temperature aren’t even on their radar screen (sorry for the metaphor).

Of course, 2015 could still set a record if the current El Nino ever gets its act together. But I’m predicting it won’t.

Which brings me to my second point. If global temperatures were slowly rising at, say, a hundredth of a degree per year, and we didn’t have cool La Nina or warm El Nino years, then every year would be a new record warm year.

But so what?

It’s the amount of temperature rise that matters. And for a planet where all forms of life experience much wider swings in temperature than “global warming” is producing, which might be 1 deg. C so far, those life forms — including the ones who vote — really don’t care that much. We are arguing over the significance of hundredths of a degree, which no one can actually feel.

Not surprisingly, the effects on severe weather are also unmeasurable… despite what some creative-writing “journalists” are trying to get you to believe. Severe weather varies tremendously, especially on a local basis, and to worry that the average (whatever that means) might change slightly is a total misplacement of emphasis.

Besides, once you consider that there’s nothing substantial we can do about the global warming “problem” in the near term, short of plunging humanity into a new economic Dark Age and killing millions of people in the process, it’s a wonder that climate is even on the list of the public’s concerns, let alone at the bottom of the list.

Ode to Misinterpretations of the Second Law

October 21st, 2014

Inspired by a couple comments from my solar eclipse post.

He said an object that was cold
Could not make something warm still warmer
So he donned his coat, went out the door
To prove the truth of former.

“See?” he said, “the sky is cold”
“and so it cannot warm”
Then back inside he merrily went,
Removing the cold coat he’d worn.

-Burma-Shave

Solar Thursday USA: An Eclipse AND a Massive Sunspot Group

October 20th, 2014

Residents of the eastern U.S. will be in a particularly good location to see a partial solar eclipse which will peak near sunset on Thursday, Oct. 23, and as a bonus the giant sunspot group 2192 should also be visible.

Here’s what sunspot 2192 looks like in recent days as it slowly rotates toward the central portion of the solar disk:

Sunspot 2192 has been pumping out solar flares on a daily basis, and has a good chance of producing an Earth-directed coronal mass ejection (CME) over the next week, which would lead to auroral displays.

As of right now, the best viewing of the eclipse looks like it will be in a general swath from the Upper Plains and Great Lakes (~60% solar disk coverage) through the Midwest and Ohio Valley toward the southeast U.S. The northeast U.S. viewing will depend on how much cloud cover remains from a slowly retreating low pressure system…some breaks in the clouds will allow at least scattered viewing there. Over the western U.S. the eclipse will occur during the afternoon and end before sunset.

Here in north Alabama I’ll be doing a time lapse video of the setting sun, weather permitting, when the partial eclipse will peak at about 40% at sunset.

Here’s an eclipse calculator simulation for your location.

DO NOT view the sun with the naked eye! Advice on methods for safely viewing the sun is provided by Astro Bob at UniverseToday.com.

Dr. Roy’s Earth Today #12: Central Siberian Plateau

October 20th, 2014

Lying mostly north of the Arctic Circle, the Central Siberian Plateau is enjoying sunshine today, but in several weeks the sun will fall below the horizon (click for full-size):

Central Siberian Plateau as seen on 20 October 2014 by the MODIS instrument on NASA’s Aqua satellite.


Winter has gotten off to an early start in Russia, and many forecasters are calling for an unusually cold winter in the Northern Hemisphere. In the above image, lake-effect cloud streets can be seen to be streaming off a few of the larger lakes. According to the GFS forecast model, mid-day temperatures here are running below 0 deg. F.

From Russia, With Cold

October 19th, 2014

Winter has gotten an early start in Russia, with much of the expansive country already covered in snow (even though it’s only mid-October) and temperatures running well below normal.

The immediate future looks worse. The GFS model forecast from last night shows temperatures over the next 7 days running 10 to 20 deg. F below normal, and a rapid buildup of the snowpack (click image for full size, based upon graphics from WeatherBell.com):

GFS 7-day forecast of average temperature departures from normal (deg. C), and snow depth (inches) by the end of the 7 days, ending Sunday, Oct. 26, 2014.

Individual days and locations are forecast to be 40 deg. F below normal, with some places reaching 40 deg. below zero, more typical of mid-winter.

The very warm spots over the Arctic Ocean are where there is less sea ice cover compared to the 30-year mean (1981-2010).

As reported by The Moscow Times, Russian forecasters like those elsewhere are projecting an unusually cold and snowy winter. Whenever the “Siberian Express” kicks in this winter, it could mean some bitterly cold outbreaks for North America and even the U.S.

Gonzalo: 144 mph Gust Measured on Bermuda

October 18th, 2014

Hurricane Gonzalo moving northeast of Bermuda at 9:15 am ET, Oct. 18, 2014.


As predicted, the eye of Hurricane Gonzalo passed directly over Bermuda last night. Most of the island was without power this morning, but crews are already out restoring power…although they said progress will be slow due to widespread damage.

After the calm of the eye passed during the night and hurricane force winds resumed, a wind gust to 144 mph was measured near the airport. The official weather equipment at the airport is reported to be damaged. Most of the home weather stations went offline several hours before the worst of the storm hit the island.

Despite some injuries, Bermuda police report there has been no loss of life. The island was well warned, and building codes in Bermuda are quite strict for withstanding hurricane-force winds.

Here’s an interesting short video taken from the International Space Station as it passed near Gonzalo:

Target, Bermuda: Will Hurricane Gonzalo Rival Fabian?

October 16th, 2014
Color MODIS image of Hurricane Gonzalo, late morning, Oct. 17, 2014 (click for full size).

UPDATE 7 am ET Oct. 18: As was forecast, the eye of Gonzalo passed over Bermuda last night. The highest wind gust I saw reported was 144 mph. Most of the island is without power this morning.

Before the October 1926 arrival of the catastrophic Hurricane Ten (nicknamed Valerian), it was reported that it had been over a hundred years since a hurricane had hit Bermuda in October.

That hurricane arrived so quickly (moving at 40 mph) and from such an unexpected direction (almost due west) that the island was unprepared. HMS Valerian was just returning from Cuba and was within sight of Bermuda, with mild weather reported, yet never made it to port because the storm descended on the island so quickly. Most of the crew was lost as the ship sank. An anemometer in the Royal Navy Dockyard measured a 138 mph wind gust before it broke.

The second-worst hurricane to hit Bermuda in the last couple of centuries was Fabian in 2003. Taking a path similar to the one Gonzalo is forecast to take as it arrives at the island tomorrow, Fabian produced sustained winds of 120 mph and a measured peak wind gust of 164 mph at an elevated measurement site on a radio tower. Damages were estimated at $300 million, the worst since 1926.

It would take another direct hit like Fabian, and a somewhat higher intensity than is currently forecast (125 mph sustained winds now, 115 mph sustained winds forecast as the eyewall reaches Bermuda this evening), for Gonzalo to rival either of these historic storms. But hurricane intensity fluctuations are notoriously difficult to forecast, and the current forecast path of Gonzalo remains very close to a “direct hit,” which would place the most intense eyewall winds over land.

We will know in the next 8 hours or so. In the meantime, here’s our Gonzalo tracking page, and check out the slideshow of preparations from the Royal Gazette.

The eyewall of Gonzalo is now showing up on the Bermuda radar loop.

Live Updates from BerNews.com

(I’m now deleting off-topic comments…I’ll let the snarky global warming ones pass for now. –RWS)

How Safe is the Air You Breathe in Planes?

October 15th, 2014

With increasing concerns that Ebola apparently spreads more readily than we were told, I thought it might be useful to mention a little experiment I performed a few months ago. I was on my way to Las Vegas to give climate talks, and I wanted to show how much greater the CO2 content of the air is in the confined spaces we share than in the ambient atmosphere.

Let me say up front I am not a germaphobe. I have flown hundreds of times over the years. I wash my hands a few times a day, but consider some limited exposure to germs to be healthy and necessary to keep our immune systems strong.

But I would wager no one — medical experts, doctors, nurses, or the CDC Director — is willing to expose themselves to tiny amounts of sputum from Ebola patients to prove that point.

I’ve always wanted to know just how much of the air in planes is recycled, versus fresh. It’s hard to find an answer to this question. This Wall Street Journal article from a few years ago summarizes several studies that did indeed document increased incidence of illness contracted by recent airline passengers.

I have a good handheld air quality meter that measures the carbon dioxide content of the atmosphere. Humans exhale CO2 at high concentrations, so I used the meter on a recent trip to see just how high the CO2 concentration gets on airplanes. The higher the CO2 content, the more you are breathing air that other passengers have exhaled.

Yes, I know the official word is that, unless you are swapping spit with someone who has contracted Ebola, you don’t have anything to worry about. But in the very confined space of an aircraft, there is some inadvertent spit-swapping going on, anyway.

And think of all of the surfaces inside the plane that MANY people are touching with unclean hands: Seat headrests along the aisles, overhead luggage compartment latches, air flow nozzles, trays, passing cups and trash, etc.

Oh, and those minuscule restrooms.

Anyway, to the answer. On two flights — one a large plane, the other small — I measured CO2 concentrations of 1,600 ppm or more (coming out of the nozzles), which is 4 times the ambient value (400 ppm). In my office building I might measure 700 ppm late in the afternoon. In small offices with several people confined, I’ve measured 1,000 ppm, the level at which some people consider air quality to become “reduced.”

So it is true: a greater proportion of the air you breathe on an airplane has been exhaled by others than in most other environments you are likely to encounter (see the sketch below). It’s still hard to say from my measurement of 1,600+ ppm just how much fresh air is mixed in with the air recycled by the aircraft ventilation system, but I think the bigger concern is this: you are in such close proximity to other people in a confined space that you are breathing other people’s air — including tiny aerosols — even before the exhaled air gets sucked back into the ventilation system and filtered.
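
For the curious, a standard ventilation back-of-envelope converts a CO2 reading into the fraction of each breath that was previously exhaled by someone, assuming exhaled breath carries roughly 38,000 ppm more CO2 than what was inhaled (an assumed textbook value, not something I measured):

```python
# Rebreathed-fraction estimate from CO2 (a standard ventilation
# back-of-envelope; the 38,000 ppm exhaled-breath CO2 excess is an
# assumed typical value, not a measurement from this post).
def rebreathed_fraction(cabin_ppm, ambient_ppm=400.0, exhaled_excess_ppm=38000.0):
    """Approximate fraction of each breath previously exhaled by others."""
    return (cabin_ppm - ambient_ppm) / exhaled_excess_ppm

print(f"{rebreathed_fraction(1600):.1%}")   # ~3.2% at my 1,600 ppm reading
```

So at my 1,600 ppm reading, roughly 3% of every breath had already been through someone else’s lungs, which is small but far higher than outdoors.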

Now, this doesn’t mean I’m going to forgo flying…unless many more Ebola cases start showing up which might have occurred through casual contact. I suspect we should know much more in the coming weeks and months.