Archive for the ‘Blog Article’ Category

Additional Comments on the Frank (2019) “Propagation of Error” Paper

Thursday, September 12th, 2019

NOTE: This post has undergone a few revisions as I try to be more precise in my wording. The latest revision was at 0900 CDT Sept. 12, 2019.

If this post is re-posted elsewhere, I ask that the above time stamp be included.

Yesterday I posted an extended and critical analysis of Dr. Pat Frank’s recent publication entitled Propagation of Error and the Reliability of Global Air Temperature Projections. Dr. Frank graciously provided rebuttals to my points, none of which have changed my mind on the matter. I have made it clear that I don’t trust climate models’ long-term forecasts, but that is for different reasons than Pat provides in his paper.

What follows is the crux of my main problem with the paper, which I have distilled to its essence, below. I have avoided my previous mistake of paraphrasing Pat, and instead I will quote his conclusions verbatim.

In his Conclusions section, Pat states, “As noted above, a GCM simulation can be in perfect external energy balance at the TOA while still expressing an incorrect internal climate energy-state.”

This I agree with, and I believe climate modelers have admitted to this as well.

But, he then further states, “LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!

Why?

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”. Data available from https://climexp.knmi.nl/

Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives. Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.

The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.

Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).

That’s partly why different modeling groups around the world build their own climate models: so they can test the impact of different assumptions on the models’ temperature forecasts.

Statistical modelling assumptions and error analysis do not change this fact. A climate model (like a weather forecast model) has time-dependent differential equations covering dynamics, thermodynamics, radiation, and energy conversion processes. There are physical constraints in these models that lead to internally compensating behaviors. There is no way to represent this behavior with a simple statistical analysis.

Again, I am not defending current climate models’ projections of future temperatures. I’m saying that errors in those projections are not due to what Dr. Frank has presented. They are primarily due to the processes controlling climate sensitivity (and the rate of ocean heat uptake). And climate sensitivity, in turn, is a function of (for example) how clouds change with warming, and apparently not a function of errors in a particular model’s average cloud amount, as Dr. Frank claims.

The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.

The above represents the crux of my main objection to Dr. Frank’s paper. I have quoted his conclusions, and explained why I disagree. If he wishes to dispute my reasoning, I would request that he, in turn, quote what I have said above and why he disagrees with me.

Critique of “Propagation of Error and the Reliability of Global Air Temperature Predictions”

Wednesday, September 11th, 2019

UPDATE: (1300CDT, Sept. 11, 2019). I’ve added a plot of ten CMIP5 models’ global top-of-atmosphere longwave IR variations in the first 100 years of their control runs.

UPDATE #2: (0800 CDT, Sept. 12, 2019) After comments from Dr. Frank and a number of commenters here and at WUWT, I have posted Additional Comments on the Frank (2019) Propagation of Error Paper, where I have corrected my mistake of paraphrasing Dr. Frank’s conclusions, when I should have been quoting them verbatim.

I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.

I’ve spent a couple of days reading the paper and programming his Eq. 1 (a simple “emulation model” of climate model output), including his error propagation term (Eq. 6), to make sure I understand his calculations.

Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.

Some reviewers don’t even read the paper, they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.

Readers here know I am critical of climate models in the sense they are being used to produce biased results for energy policy and financial reasons, and their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank’s methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projection of future global temperatures.

A Very Brief Summary of What Causes a Global-Average Temperature Change

Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.

So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with the absorbed solar energy.
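For reference, the S-B balance just described can be computed directly. This is my own illustration, not a calculation from the post; it shows the equilibrium emission temperature implied by a globally averaged absorbed flux near the ~240 W/m2 figure quoted elsewhere in these articles:

```python
# Illustrative sketch: Stefan-Boltzmann equilibrium emission temperature
# for a given globally averaged absorbed solar flux. The 240 W/m^2 input
# is the commonly cited round number, not a result derived in this post.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(absorbed_flux_wm2):
    """Temperature at which emitted IR (sigma * T^4) balances the absorbed flux."""
    return (absorbed_flux_wm2 / SIGMA) ** 0.25

# ~240 W/m^2 absorbed -> ~255 K effective emission temperature
print(round(equilibrium_temperature(240.0), 1))
```

This is the planetary effective emission temperature, which is well below the surface temperature because most IR emission to space occurs from the atmosphere, not the surface.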

While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.

The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.

What Frank’s Paper Claims

Frank’s paper takes a known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) as an example and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).

He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.

First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all “skeptics” papers submitted for publication were as professionally prepared.

He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his “emulation model”, and his result is unsurprising, and even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.
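A minimal sketch of such an emulation, assuming the simplest possible form (warming proportional to accumulated forcing). The coefficient and the forcing series below are illustrative placeholders of my own, not the actual values or functional form of Dr. Frank’s Eq. 1:

```python
# Minimal sketch of an "emulation model" of GCM output: warming at time t
# approximated as a constant times the accumulated radiative forcing.
# The coefficient k and the forcing series are illustrative placeholders,
# not values from Dr. Frank's paper.
def emulate_warming(forcings_wm2, k=0.42):
    """Return the cumulative temperature change (deg C) for a yearly forcing series."""
    temps, total_forcing = [], 0.0
    for f in forcings_wm2:
        total_forcing += f
        temps.append(k * total_forcing)
    return temps

# e.g., a steady +0.04 W/m^2 per year of GHG forcing for 100 years
projection = emulate_warming([0.04] * 100)
print(round(projection[-1], 2))  # 0.42 * 4.0 = 1.68 deg C after a century
```

The point is only that a one-parameter fit to accumulated forcing reproduces the shape of GCM temperature projections, which is why the emulation result is unsurprising.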

Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.

But there are two fundamental problems with Dr. Frank’s methodology.

Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.

Why?

Because each of these models is already energy-balanced before being run with increasing greenhouse gases (GHGs); it has no inherent bias error to propagate.

For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.

Figure 1. Output of Dr. Frank’s emulation model of global average surface air temperature change (his Eq. 1) with a +/- 2 W/m2 global radiative imbalance propagated forward in time (using his Eq. 6) (blue lines), versus the yearly temperature variations in the first 100 years of integration of the first 10 models archived at
https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere .

If what Dr. Frank is claiming were true, the 10 climate model runs in Fig. 1 would show large temperature departures, as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.

Why don’t the climate models show such behavior?

The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. To demonstrate this, here are the corresponding TOA net longwave IR fluxes for the same 10 models shown in Fig. 1. Clearly, there is nothing like 4 W/m2 imbalances occurring.

Figure 2. Same as in Fig. 1, but for TOA longwave (IR) fluxes.

The average yearly standard deviation of the LW flux variations is only 0.16 W/m2, and these vary randomly.

And it doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to nearly zero in the long term, which is why the climate model trends in Fig. 1 are only +/- 0.10 deg. C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.
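The contrast between near-cancelling random flux variations and quadrature-accumulated uncertainty can be sketched numerically. The 0.16 W/m2 standard deviation is the figure quoted above for the Fig. 2 fluxes, and the +/-4 W/m2 per-step figure is the one from the paper; everything else is illustrative:

```python
import random

# Contrast between (a) zero-mean random yearly flux wiggles, which largely
# cancel over a long run, and (b) treating a fixed +/-4 W/m^2 figure as an
# uncertainty that compounds in quadrature every year.
random.seed(0)
years = 100

# (a) random wiggles with a 0.16 W/m^2 standard deviation (as in Fig. 2)
wiggles = [random.gauss(0.0, 0.16) for _ in range(years)]
mean_wiggle = sum(wiggles) / years  # near zero: the errors average out

# (b) quadrature accumulation: total = sqrt(N) * per-step error
per_step = 4.0
quadrature_total = (years * per_step**2) ** 0.5  # 40 W/m^2 after 100 steps

print(round(mean_wiggle, 3), quadrature_total)
```

A long-term-balanced model behaves like case (a); the paper’s uncertainty bound behaves like case (b), growing without limit even though the model’s actual TOA fluxes do not.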

This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.

The Error Propagation Model is Not Appropriate for Climate Models

The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,

“Equation 6 shows that projection uncertainty must increase in every simulation (time) step, as is expected from the impact of a systematic error in the deployed theory.”

While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).

Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1-month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1-month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100-year integration even more. This makes no physical sense.
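The time-step sensitivity is easy to demonstrate with the quadrature rule itself. This is a sketch; for simplicity it reuses the same +/-4 W/m2 per-step figure for both step lengths, whereas, as noted above, a monthly error would actually be even larger:

```python
# Illustration of the time-step sensitivity: propagating a fixed per-step
# flux uncertainty in quadrature gives a total that depends on how many
# steps the century is chopped into. The +/-4 W/m^2 figure is the one
# discussed in the post; the monthly case reuses it only for comparison.
def quadrature_growth(per_step_error, n_steps):
    """Square root of the summed squared per-step errors."""
    return (n_steps * per_step_error**2) ** 0.5

annual = quadrature_growth(4.0, 100)    # 100 yearly steps   -> 40.0
monthly = quadrature_growth(4.0, 1200)  # 1200 monthly steps -> ~138.6
print(annual, round(monthly, 1), round(monthly / annual, 2))
```

Simply subdividing the same century into twelve times as many steps inflates the accumulated "uncertainty" by a factor of sqrt(12), about 3.5x, even though nothing physical has changed.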

I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.

Concluding Comments

There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.

Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).

But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature-dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor “feedback”) that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. Having multiple modeling centers around the world, each performing multiple experiments with its climate model while making different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.

My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.

There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.

I’d be glad to be proved wrong.

The Faith Component of Global Warming Predictions

Sunday, September 8th, 2019
Credit: NBC News.

It’s been ten years since I addressed this issue in a specific blog post, so I thought it would be useful to revisit it. I mention it from time to time, but it is so important, it bears repeating and remembering.

Over and over again.

I continue to strive to simplify these concepts, so here goes another try. What follows is as concise as I can make it.

  1. The temperature change in anything, including the climate system, is the result of an imbalance between the rates of energy gain and energy loss. This comes from the First Law of Thermodynamics. Basic stuff.
  2. Global warming is assumed to be due to the small (~1%) imbalance, averaged over the Earth, between absorbed sunlight and infrared energy lost to outer space, caused by increasing atmospheric CO2 from fossil fuel burning.
  3. But we don’t know whether the climate system, without human influence, is in a natural state of energy balance anyway. We do not know the quantitative average amounts of absorbed sunlight and emitted infrared energy across the Earth, either observationally or from first physical principles, to the accuracy necessary to blame most recent warming on humans rather than nature. Current best estimates, based upon a variety of datasets, are around 239-240 Watts per sq. meter for these energy flows. But we really don’t know.

When computer climate models are first constructed, these global-average energy flows in and out of the climate system do not balance. So, modelers adjust any number of uncertain processes in the models (for example, cloud parameterizations) until they do balance. They run the model for, say, 100 years and make sure there is little or no long-term temperature trend to verify balance exists.

Then, they add the infrared radiative effect of increasing CO2, which does cause an energy imbalance. Warming occurs. They then say something like, “See? The model proves that CO2 is responsible for warming we’ve seen since the 1950s.”

But they have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

I’m not saying that increasing CO2 doesn’t cause warming. I’m saying we have no idea how much warming it causes because we have no idea what natural energy imbalances exist in the climate system over, say, the last 50 years. Those are simply assumed to not exist.

(And, no, there is no fingerprint of human-caused warming. All global warming, whether natural or human-caused, looks about the same. If a natural decrease in marine cloudiness was responsible, or a decrease in ocean overturning [either possible in a chaotic system], warming would still be larger over land than ocean, greater in the upper ocean than deep ocean, and greatest at high northern latitudes and least at high southern latitudes).

Thus, global warming projections have a large element of faith programmed into them.

Florida Major Hurricane Strikes: No Significant Increase in Intensity from Sea Surface Warming

Wednesday, September 4th, 2019

Summary: Twenty-two major hurricanes have struck the east coast of Florida (including the Keys) since 1871. It is shown that the observed increase in intensity of these storms at landfall due to SST warming over the years has been a statistically insignificant 0.43 knots per decade (0.5 mph per decade). Thus, there has been no observed increase in landfalling east coast Florida major hurricane strength with warming.

In the news reporting of major Hurricane Dorian which devastated the NW Bahamas, it is commonly assumed that hurricanes in this region have become stronger due to warming sea surface temperatures (SSTs), which in turn are assumed to be caused by human-caused greenhouse gas emissions.

Here I will use observational data since the 1870s to address the question: Have landfalling major hurricanes on the east coast of Florida increased in intensity from warming sea surface temperatures?

The reason I am only addressing landfalling hurricanes on the east coast of Florida is three-fold: (1) this area is a hotbed of major hurricane activity; (2) the record is much longer for landfalling hurricanes, since before the early 1970s the intensity of major hurricanes well offshore was much more uncertain; and (3) the coastal population there is now several million people, the region south of West Palm Beach is historically prone to major hurricane strikes, and so the question of whether hurricane intensity there has increased due to ocean warming is of great practical significance to many people.

First let’s start with the record of major hurricane strikes on the east coast of Florida, including the Keys. There have been 22 such storms since 1871, occurring quite irregularly over time.

While there has been a slight increase in the intensity of these storms over time, amounting to +0.8 knots per decade, the correlation is quite low (0.21) and the quantitative relationship is only barely significant at the 1-sigma level.

But this doesn’t tell us the role of sea surface temperatures (SSTs). So, next let’s examine how SSTs have changed over the same period of time. Since all of these major hurricanes made landfall in the southern half of Florida, I chose the following boxed region (22N-28N, 75W-82W) to compute area-averaged SST anomalies for all months from 1870 through 2018 (HadSST data available here).

Since 18 of the 22 major hurricane strikes occurred in either August (4) or September (14), (and 4 were in October), I focused on the average SST anomaly for the 2-month periods August-September. Here’s the 2-month average SST anomalies for 1870-2018.

Note that the years with major hurricane strikes are marked in red. What surprised me is that the SST warming in this region during peak hurricane season (August/September) has been very weak: +0.02 C/decade since 1871, and +0.03 C/decade since 1950.

If we then compare SST anomaly with storm intensity at landfall, we get the following plot. Here I took into account which month the hurricane occurred in for the purposes of computing a 2-month SST anomaly. For example, if the storm hit in October, I used the September/October average. If landfall was in August, I used the July/August average.

There is a weak relationship between SST and storm intensity (correlation = 0.19), but the regression coefficient (+13.5 kts/deg. C warming) is not statistically significant at the 1-sigma level.

Now, if we just ignore the lack of statistical significance and assume these quantitative relationships are mostly signal rather than noise, we can multiply the 0.03 C/decade SST warming trend since 1950 by the 13.5 kts/deg C “warming sensitivity parameter”, and get +0.43 kts/decade of storm intensity increase due to SST warming, which is almost exactly 0.5 mph per decade.
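The arithmetic above can be checked directly. Note that the post rounds the SST trend to 0.03 C/decade; reproducing the quoted 0.43 kts/decade figure requires a slightly less-rounded trend value, so the 0.032 below is my assumption:

```python
# Back-of-envelope product described above: SST warming trend times the
# regression ("warming sensitivity") coefficient, with unit conversion.
sst_trend_c_per_decade = 0.032  # assumed unrounded Aug/Sep trend (post rounds to 0.03)
kts_per_deg_c = 13.5            # regression coefficient from the scatter plot

intensity_trend_kts = sst_trend_c_per_decade * kts_per_deg_c  # ~0.43 kts/decade
intensity_trend_mph = intensity_trend_kts * 1.15078           # ~0.5 mph/decade
print(round(intensity_trend_kts, 2), round(intensity_trend_mph, 2))
```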

This is an exceedingly small number. That would be 5 mph per century.

So, based upon the observed SST data from the Hadley Centre, and hurricane data from the National Hurricane Center, we conclude that warming SSTs have caused a tiny increase in the intensity of landfalling major hurricanes of about 0.5 mph per decade.

I suspect a statistician (which I am not) would say that this is in the noise level.

In other words, there is no observational evidence that warming SSTs have made landfalling hurricanes on the east coast of Florida any stronger.

UAH Global Temperature Update for August, 2019: +0.38 deg. C

Tuesday, September 3rd, 2019

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for August, 2019 was +0.38 deg. C, unchanged from July, 2019:

The linear warming trend since January, 1979 remains at +0.13 C/decade.

Various regional LT departures from the 30-year (1981-2010) average for the last 20 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2018 01 +0.29 +0.51 +0.06 -0.10 +0.70 +1.39 +0.52
2018 02 +0.24 +0.28 +0.21 +0.05 +0.99 +1.22 +0.35
2018 03 +0.28 +0.43 +0.12 +0.08 -0.19 -0.32 +0.76
2018 04 +0.21 +0.32 +0.09 -0.14 +0.06 +1.02 +0.84
2018 05 +0.16 +0.38 -0.05 +0.01 +1.90 +0.14 -0.24
2018 06 +0.20 +0.33 +0.06 +0.11 +1.11 +0.76 -0.42
2018 07 +0.30 +0.38 +0.22 +0.28 +0.41 +0.24 +1.48
2018 08 +0.18 +0.21 +0.16 +0.11 +0.02 +0.11 +0.37
2018 09 +0.13 +0.14 +0.13 +0.22 +0.89 +0.23 +0.27
2018 10 +0.19 +0.27 +0.12 +0.30 +0.20 +1.08 +0.43
2018 11 +0.26 +0.24 +0.27 +0.45 -1.16 +0.68 +0.55
2018 12 +0.25 +0.35 +0.14 +0.30 +0.25 +0.69 +1.20
2019 01 +0.38 +0.35 +0.41 +0.35 +0.53 -0.15 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.04 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.96 +0.58
2019 04 +0.44 +0.38 +0.51 +0.53 +0.50 +0.92 +0.91
2019 05 +0.32 +0.29 +0.35 +0.39 -0.61 +0.98 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.90 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.33 +0.87
2019 08 +0.38 +0.38 +0.39 +0.42 +0.17 +0.44 +0.23

This makes August, 2019 the 4th warmest August in the 41 year satellite record, behind 1998 (+0.52), 2016 (+0.44), and 2017 (+0.42).

The UAH LT global anomaly image for August, 2019 should be available in the next few days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Weather.com’s Contrived “Record Cat5 Hurricanes” Statistic

Tuesday, September 3rd, 2019

About an hour ago I posted an objection to a Weather.com article entitled: Hurricane Dorian Becomes the 5th Atlantic Category 5 in 4 Years. Then I deleted it. When I first read the Weather.com article it appeared that the headline was what they were claiming was a record. If so, then it was wrong because the 1930s also had a stretch of 4 years with 5 Category 5 Atlantic hurricanes.

I had not heard about the claim until my interview on Tucker Carlson last evening (hosted by Martha MacCallum):

But it turns out that (reading carefully) what they claim is a record (which appears so) is that we have now had a stretch of 4 consecutive years with at least one Cat5 hurricane.

I claim that is a contrived statistic.

Which is more significant in a “climate change” context: that in 1933-34 there were two Cat5 storms (both in 1933), or that in 2018-2019 there were also two Cat5 storms, but one in each year? Because that’s what this boils down to.

I think those would be considered equal in a climate context. In statistics you can always find some insignificant way of slicing and dicing the data to make a certain time period look “unique”. The recent 11+ year period (2006-2016) with no major hurricane landfalls in the U.S. (an unprecedented event) was, in my opinion, a less contrived statistic, but since it didn’t fit the global warming narrative, few people are aware of it.

If you think the Weather.com claim is legitimate and related to climate change, let me ask you: Is global warming really spreading out Cat5 hurricanes across the years, so multiple ones don’t occur in the same year? Because that’s the only difference between the 1930s “record” and the current “record”.

The important thing is that the main conclusion as represented by the title of their article (Hurricane Dorian Becomes the 5th Atlantic Category 5 in 4 Years) does not represent a record. It also happened in the 1930s, as shown by the chart in their article.

If Dorian Hits as a Cat4, Still No Long-term Trends in Florida Major Hurricanes

Thursday, August 29th, 2019

NOTE: This post was updated on 30 August to include Hurricane Michael (2018).

Atlantic hurricane activity is notoriously variable, not only from year to year, but decade to decade.

In fact, based upon studies of overwash sediments in coastal lakes stretching from the Florida panhandle to eastern Louisiana, it appears that the period from 1,000 to 3,800 years ago had a considerably higher incidence of Category 4 & 5 hurricanes than in the last 1,000 years. These are admittedly indirect, proxy estimates, but if you read this American Scientist article, it sounds like the researchers have pretty strong evidence.

Why would major hurricane activity vary so much? No one knows. Our climate is a nonlinear dynamical system, capable of undergoing unforced changes both locally and globally. Atmospheric steering currents, wind shear, and African easterly wave activity all play a role in hurricane formation. Tropical Atlantic sea surface temperatures (SSTs) in the late summer are always sufficiently warm to support a major hurricane and are, in my opinion, overrated as a controlling factor. Factors other than SST tend to largely determine hurricane activity and strength.

More direct measurements of hurricane landfalls in Florida have only been possible in the last 120 years or so, since prior to 1900 very few people lived there. Before 1900, the intensities of these storms at landfall were quite uncertain. Some storms may even have gone unreported.

If we examine the record of major (Category 3 or greater) hurricanes at landfall in Florida since 1900, and assume that Hurricane Dorian strikes Florida as a 115 kt Category 4 storm, we see that there will still be no long-term trends in either the intensity or number of major landfalling hurricanes.


If Hurricane Dorian makes landfall in Florida as a 115 kt Category 4 storm, there will still be no long term trend in Florida major hurricane landfalls since 1900.

This is not to say there won’t be potentially catastrophic damage. For example, the population of Miami in 1900 was less than 1,700 people. It is now 2.74 million. Needless to say, vast expanses of storm-vulnerable infrastructure have been built over the last 120 years across the Miami-Ft. Lauderdale-West Palm Beach metroplex, and northward along most of the Florida coastline.

But increasing storm damage does not mean increasing storminess.

Selective and Misplaced Outrage at Brazil’s President Bolsonaro over Amazonian Fires

Wednesday, August 28th, 2019

No, I’m not in favor of burning down all of the rainforest in Brazil (or neighboring countries, which are being given a pass for some reason). But the recent outrage over increased fire activity this year in Brazil during the annual burn season seems pretty manufactured to me. And it’s largely political, placing blame at the feet of Brazil’s President Bolsonaro, who took office at the beginning of 2019.

The widespread reporting on this makes it sound like fires in Amazonia this time of year are a new thing. With 50 million Brazilians living below the poverty line, many take up farming, which involves clearing land to grow grass to feed cattle, pigs, chickens, etc. They make about US$5.50 a day.

Here’s just one of hundreds of headlines making the rounds lately: The Amazon rainforest is on fire. Climate scientists fear a tipping point is near.

This then gets everyone whipped into a frenzy. For example, here’s what noted environmental expert and Toto guitarist Steve Lukather tweeted:

THIS IS THE MOST IMPORTANT ISSUE IN THE WORLD RIGHT NOW !
We must stop EVERYTHING and deal with this NOW!

So, just how bad is it this year compared to previous years for rainforest destruction in Brazil? Well, here’s the official data:

Graphic from Brazil farmers deforesting Amazon ‘to survive’

Now, tell me exactly what about that graph suggests that things have suddenly gotten worse in terms of rainforest destruction?

If you say, “Well, that’s only through July of this year. Maybe August is much worse!”, then I will point out that the original news article from The Guardian about the “88% rise” in rainforest destruction “under Bolsonaro” was way back on July 3!!

In that article they were comparing June of 2019 to June of 2018, which sounds like cherry-picking to me, whereas the much more extensive and complete history in the above graph suggests 2019 will not be exceptional for rainforest destruction compared to previous years.

This year’s dry season (June-August) has indeed been exceptionally dry, though. Brazil’s rainfall is tied to sea surface temperature patterns in both the Pacific and Atlantic, especially related to El Nino and La Nina activity. NASA satellite data show that the fires there, mainly set for agricultural purposes, are burning exceptionally hot, probably due to a lack of moisture in the fuel. Anyone who has a wood burning fireplace, or has tried making a campfire with wood that is not thoroughly dry, is familiar with this effect. The fires are burning hotter and “cleaner” than usual. If you look at NASA’s daily satellite imagery of smoke you will see that many previous years were smokier in Amazonia than this year is.

This is just one more example of the media controlling the narrative and selectively and hypocritically placing blame on a particular (and almost always right-leaning) political party.

To be clear: I’m not supporting President Bolsonaro’s policies. I’m pointing out the hypocrisy of the media in its environmental reporting.

Nuking Hurricanes

Monday, August 26th, 2019

There is a story going around that President Trump once suggested using nuclear weapons to weaken hurricanes before they hit land. While he has denied it, the idea has actually been batted around for years.

A less radioactive idea, called Project Stormfury, was carried out by the U.S. Government for about twenty years starting in the early 1960s. Aircraft seeded hurricane clouds with silver iodide in an attempt to strengthen the outer portions of the storm in hopes of weakening the intense storm core.

The project was a failure because it was learned that hurricanes already efficiently convert the available cloud water to precipitation anyway, throughout the storm. The hurricane doesn’t respond to seeding with silver iodide.

What Fuels a Hurricane?

I’ve found that there is a general lack of appreciation of just how much energy nature uses in weather systems. Hurricanes are, of course, an example of an accumulation of a lot of energy that is organized into a single weather system with dramatic effects.

That energy was accumulated over many sunny days and weeks as huge expanses of ocean soaked up tropical sunshine and warmed. The hurricane circulation then draws upon that pent-up energy. The tropical oceans nearly everywhere have the energy required to fuel a hurricane; what is usually missing is an atmospheric disturbance with low wind shear throughout the depth of the troposphere, so that the heat released by rain clouds is concentrated into a small geographic area rather than simply blown away.

How About Nuking that Hurricane?

Let’s use the example of the B83 nuclear weapon, which is considered “the most modern nuclear bomb in the U.S. arsenal”. The bomb has an energy yield of 1.2 megatons of TNT.

The average hurricane releases that much energy every 10 seconds.

So, the hurricane probably wouldn’t care that much about a brief nuclear kick in the shins. (The idea of spreading all of that radioactivity would not go over very well with the public, either.)
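The “every 10 seconds” comparison is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes NOAA’s commonly cited estimate that an average hurricane releases roughly 5.2 × 10^19 joules per day of latent heat; that heat-release figure is an assumption supplied here, not a number from the text above.

```python
# Back-of-envelope check: hurricane heat release vs. a 1.2-megaton B83 yield.
# Assumes (not from the article) NOAA's oft-cited ~5.2e19 J/day of latent
# heat released by an average hurricane.

TNT_J_PER_MEGATON = 4.184e15                 # 1 megaton of TNT in joules
bomb_yield_j = 1.2 * TNT_J_PER_MEGATON       # B83 yield: ~5.0e15 J

hurricane_j_per_day = 5.2e19                 # assumed latent heat release
hurricane_watts = hurricane_j_per_day / 86400   # ~6.0e14 W, continuously

seconds_per_bomb = bomb_yield_j / hurricane_watts
print(f"The storm matches one B83 yield every {seconds_per_bomb:.0f} seconds")
```

With those assumed numbers the storm matches a B83’s entire yield roughly every 8 seconds, consistent with the “every 10 seconds” figure to within the roughness of the heat-release estimate.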

But let’s say we had hundreds or even thousands of megaton-class weapons that were cheap and did not produce dangerous radiation. What could be done to weaken a hurricane?

The most fundamental problem with trying to weaken a hurricane is that hurricanes are driven by heat release, and these bombs would just add more heat to the storm, potentially making it worse. As mentioned above, in a hurricane, water vapor condenses into clouds and rain, releasing latent heat, which warms the troposphere and causes intense low pressure at the surface, leading to strong surface winds.

I suspect the idea would be to release the bomb energy in portions of the storm that could — theoretically — disrupt the inner core (the eyewall) where most of the hurricane damage occurs. But adding large amounts of heat energy could result in unforeseen strengthening of the core hours later. Who knows? It’s not nice to fool Mother Nature.

How the Media Help to Destroy Rational Climate Debate

Sunday, August 25th, 2019

An old mantra of the news business is, “if it bleeds, it leads”. If someone is murdered, it is news. That virtually no one gets murdered is not news. That, by itself, should tell you that the mainstream media cannot be relied upon as an unbiased source of climate change information.

There are lots of self-proclaimed climate experts now. They don’t need a degree in physics or atmospheric science. For credentials, they only need to care and tell others they care. They believe the Earth is being murdered by humans and want the media to spread the word.

Most people do not have the time or educational background to understand the global warming debate, and so defer to the consensus of experts on the subject. The trouble is that no one ever says exactly what the experts agree upon.

When you dig into the details, what the experts agree upon in their official pronouncements is rather unremarkable. The Earth has warmed a little since the 1950s, a date chosen because before that humans had not produced enough CO2 to really matter. Not enough warming for most people to actually feel, but enough for thermometers to pick up the signal buried in the noise of natural weather swings of many tens of degrees and spurious warming from urbanization effects. The UN consensus is that most of that warming is probably due to increasing atmospheric CO2 from fossil fuel use (but we really don’t know for sure).

For now, I tend to agree with this consensus.

And still I am widely considered a climate denier.

Why? Because I am not willing to exaggerate and make claims that cannot be supported by data.

Take researcher Roger Pielke, Jr. as another example. Roger considers himself an environmentalist. He generally agrees with the predictions of the UN Intergovernmental Panel on Climate Change (IPCC) regarding future warming. But as an expert in severe weather damages, he isn’t willing to support the lie that severe weather has gotten worse. Yes, storm damages have increased, but that’s because we keep building more infrastructure to get damaged.

So he, too, is considered a climate denier.

What gets reported by the media about global warming (aka climate change, the climate crisis, and now the climate emergency) is usually greatly exaggerated, half-truths, or just plain nonsense. Just like the economy and economists, it is not difficult to find an expert willing to provide a prediction of gloom and doom. That makes interesting news. But it distorts the public perception of the dangers of climate change. And because it is reported as “science”, it is equated with truth.

In the case of climate change news, the predicted effects are almost universally biased toward Armageddon-like outcomes. Severe weather events that have always occurred (tornadoes, hurricanes, floods, droughts) are now reported with at least some blame placed on your SUV.

The major media outlets have so convinced themselves of the justness, righteousness, and truthfulness of their cause that they have banded together to make sure the climate emergency is not ignored. As reported by The Guardian, “More than 60 news outlets worldwide have signed on to Covering Climate Now, a project to improve coverage of the emergency”.

The exaggerations are not limited to just science. The reporting on engineering related to proposed alternative sources of energy (e.g. wind and solar) is also biased. The reported economics are biased. Unlimited “free” energy is claimed to be all around us, just waiting to be plucked from the unicorn tree.

And for most of America (and the world), the reporting is not making us smarter, but dumber.

Why does it matter? Who cares if the science (or engineering or economics) is exaggerated, if the result is that we stop polluting?

Besides the fact that there is no such thing as a non-polluting energy source, it matters because humanity depends upon abundant, affordable energy to prosper. Just Google life expectancy and per capita energy use. Prosperous societies are healthier and enjoy longer lives. Expensive sources of energy forced upon the masses by governmental fiat kill poor people simply because expensive energy exacerbates poverty, and poverty leads to premature death. As philosopher Alex Epstein writes in his book, The Moral Case for Fossil Fuels, if you believe humans have a right to thrive, then you should be supportive of fossil fuels.

We don’t use wind and solar energy because it is economically competitive. We use it because governments have decided to force taxpayers to pay the extra costs involved and allowed utilities to pass on the higher costs to consumers. Wind and solar use continue to grow, but global energy demand grows even faster. Barring some new energy technology (or a renewed embrace of nuclear power), wind and solar are unlikely to supply more than 10% of global energy demand in the coming decades. And as some European countries have learned, mandated use of solar and wind comes at a high cost to society.

Not only the media, but the public education system is complicit in this era of sloppy science reporting. I suppose most teachers and journalists believe what they are teaching and reporting on. But they still bear some responsibility for making sure what they report is relatively unbiased and factual.

I would much rather have teachers spending more time teaching students how to think and less time teaching them what to think.

Climate scientists are not without blame. They, like everyone else, are biased. Virtually all Earth scientists I know view the Earth as “fragile”. Their biases affect their analysis of uncertain data that can be interpreted in multiple ways. Most are relatively clueless about engineering and economics. I’ve had discussions with climate scientists who tell me, “Well, we need to get away from fossil fuels, anyway”.

And maybe we do, eventually. But exaggerating the threat can do more harm than good. The late Stephen Schneider infamously admitted to biased reporting by scientists. You can read his entire quote and decide for yourself whether scientists like Dr. Schneider let their worldview, politics, etc., color how they present their science to the public. The unauthorized release of the ‘ClimateGate’ emails between IPCC scientists showed how the alarmist narrative was maintained by undermining alternative views and even pressuring the editors of scientific journals. Even The Guardian seemed shocked by the misbehavior.

It’s fine to present the possibility that human-caused global warming could be very damaging, which is indeed theoretically possible. But to claim that large and damaging changes have already occurred due to increasing CO2 in the atmosphere is shoddy journalism. Some reporters get around the problem by saying that the latest hurricane might not be blamed on global warming directly, but it represents what we can expect more of in a warming world. Except that, even the UN IPCC is equivocal on the subject.

Sea level rise stories in the media, as far as I can tell, never mention that sea level has been rising naturally for as long as we have had global tide gauge measurements (since the 1850s). Maybe humans are responsible for a portion of the recent rise, but as is the case for essentially all climate reporting, the role of nature is seldom mentioned, and the size of the problem is almost always exaggerated. That worsening periodic tidal flooding in Miami Beach is about 50% due to sinking of reclaimed swampland is never mentioned.

There are no human fingerprints of global warming. None. Climate change is simply assumed to be mostly human-caused (which is indeed possible), while our knowledge of natural climate change is almost non-existent.

Computerized climate models are programmed based upon the assumption of human causation. The models produce human-caused climate change because they are forced to produce no warming (be in a state of ‘energy balance’) unless CO2 is added to them.

As far as we know, no one has ever been killed by human-caused climate change. Weather-related deaths have fallen dramatically — by over 90% — in the last 100 years.

Whose child has been taught that in school? What journalist has been brave enough to report that good news?

In recent years I’ve had more and more people tell me that their children, grandchildren, or young acquaintances are now thoroughly convinced we are destroying the planet with our carbon dioxide emissions from burning of fossil fuels. They’ve had this message drilled into their brains through news reporting, movies, their teachers and professors, their favorite celebrities, and a handful of outspoken scientists and politicians whose knowledge of the subject is a mile wide but only inches deep.

In contrast, few people are aware of the science papers showing satellite observations that reveal a global greening phenomenon is occurring as a result of more atmospheric CO2.

Again I ask, whose child has been taught this in school? What journalist dares to report any positive benefits of CO2, without which life on Earth would not exist?

No, if it’s climate news, it’s all bad news, all the time.

More Examples of Media Bias

Here are just a few recent (and not-so-recent) examples of media reporting which only make matters worse and degrade the public debate on the subject of climate change. Very often what is reported is actually weather-related events that have always occurred, with no good evidence that they have worsened or become more frequent over the last 60+ years during which humans could be at least partly blamed.

The Amazon is burning

A few days ago, The Guardian announced Large swathes of the Amazon rainforest are burning. I don’t know how this has suddenly entered the public’s consciousness, but for those of us who keep track of such things, farmland and some rainforest in Amazonia and adjacent lands have been burned by farmers for many decades during this time of year so they can plant crops. This year is not exceptional in this regard, yet someone decided to make an issue of it this year. In fact, it looks like 2019 might be one of the lowest years for biomass burning. Deforestation there has gone down dramatically in the last 20 years.

The rainforest itself does not burn in response to global warming, and in fact warming in the tropics has been so slow that it is unlikely that any tropical resident would perceive it in their lifetime. This is not a climate change issue; it’s a farming and land use issue.

Greenland is rapidly melting

The Greenland ice sheet gains new snow every year, and gravity causes the sheet to slowly flow to the sea where ice is lost by calving of icebergs. How much ice resides in the sheet at any given time is based upon the balance between gains and losses.

During the summer months of June, July, and August there is more melting of the surface than snow accumulation. The recent (weather-related) episode of a Saharan air mass traveling through western Europe and reaching Greenland led to a few days of exceptional melt. This was widely reported as having grave consequences.

Forbes decided to push the limits of responsible journalism with a story title, Greenland’s Massive Ice Melt Wasn’t Supposed to Happen Until 2070. But the actual data show that after this very brief period (a few days) of strong melt, conditions then returned to normal.

The widely reported Greenland surface melt event around 1 August 2019 (green oval) was then followed by a recovery to normal in the following weeks (purple oval), which was not reported by the media.

Of course, only the brief period of melt was reported by the media, further feeding the steady diet of biased climate information we have all become accustomed to.

Furthermore, after all of the reports of record warmth at the summit of the ice cap, it was found that the temperature sensor readings were biased too warm, and the temperature never actually went above freezing.

Was this reported with the same fanfare as the original story? Of course not. The damage has been done, and the thousands of alarmist news stories will live on in perpetuity.

This isn’t to say that Greenland isn’t losing more ice than it is gaining, but most of that loss is due to calving of icebergs around the edge of the sheet being fed by ice flowing downhill. Not from blast-furnace heating of the surface. It could be the loss in recent decades is a delayed response to excess snow accumulation tens or hundreds of years ago (I took glaciology as a minor while working on my Ph.D. in meteorology). No one really knows because ice sheet dynamics is complicated with much uncertainty.

My point is that the public only hears about these brief weather events which are almost always used to promote an alarmist narrative.

July 2019 was the hottest month on record

The yearly, area-averaged surface temperature of the Earth is about 60 deg. F. It has been slowly and irregularly rising in recent decades at a rate of about 0.3 or 0.4 deg. F per decade.

So, let’s say the average temperature reaches 60.4 deg. F rather than a more normal 60 deg. F. Is “hottest” really the best adjective to use to inform the public about what is going on?

Here’s a geographic plot of the July 2019 departures from normal from NOAA’s Climate Forecast System model.

July 2019 surface temperature departures from normal. The global average is only 0.3 deg. C (0.5 deg. F) above the 1981-2010 average, and many areas were below normal in temperature. (Graphic courtesy WeatherBell.com).

Some areas were above normal, some below, yet the headlines of “hottest month ever” would make you think the whole Earth had become an oven of unbearable heat.

Of course, the temperature changes involved in new record warm months are so small that they are usually less than the uncertainty level of the measurements. And different global datasets give different results. Monitoring global warming is like searching for a climate needle in a haystack of weather variability.

Bait and Switch: Models replacing observations

There is an increasing trend toward passing off climate model projections as actual observations in news reports. This came up just a few days ago when I was alerted to a news story that claimed Tuscaloosa, Alabama is experiencing twice as many 100+ deg. F days as it used to. To his credit, the reporter corrected the story when it was pointed out to him that no such thing has happened, and it was a climate model projection that (erroneously) made such a “prediction”.

Another example happened last year with a news report that the 100th Meridian climate boundary in the U.S. was moving east, with gradual drying starting to invade the U.S. Midwest agricultural belt. But, once again, the truth is that no such thing has happened. It was a climate model projection, being passed off as reality. Having worked with grain-growing interests for nearly 10 years, I addressed this bit of fake climate news with actual precipitation measurements here.

Al Gore and Bill Nye’s global warming in a jar experiment

This is one of my favorites.

As part of Al Gore’s Climate Reality Project, Bill Nye produced a Climate 101 video of an experiment where two glass jars with thermometers in them were illuminated by lamps. One jar had air in it, the other had pure CO2. The video allegedly shows the jar with CO2 in it experiencing a larger temperature rise than the jar with just air in it.

Of course, this was meant to demonstrate how easy it is to show more CO2 causes warming. I’m sure it has inspired many school science experiments. The video has had over 500,000 views.

The problem is that this experiment cannot show such an effect. Any expert in atmospheric radiative transfer can tell you this. The jars are totally opaque to infrared radiation anyway, the amount of CO2 involved is far too small, the thermometers were cheap and inaccurate, the lamps cannot be exactly identical, the jars are not identical, and the “cold” of outer space was not included in the experiment. TV meteorologist Anthony Watts demonstrated that Bill Nye had to fake the results through post-production video editing.

The warming effect of increasing atmospheric CO2 is surprisingly difficult to demonstrate. The demonstration is largely a theoretical exercise involving radiative absorption calculations and a radiative transfer model. I believe the effect exists; I’m just saying that there is no easy way to demonstrate it.

The trouble is that this fraudulent video still exists, and many thousands of people are being misled into believing that the experiment demonstrates how easy it is to show that more CO2 causes warming.

Greta Thunberg’s sailboat trip

The new spokesperson for the world’s youth regarding concerns over global warming is 16-year-old Swede Greta Thunberg. Greta is travelling across the Atlantic on what CNN describes as a “zero-emissions yacht” to attend the UN Climate Action Summit on September 23 in New York City.

To begin with, there is no such thing as a zero-emissions yacht. A huge amount of energy was required to manufacture the yacht, and it transports so few people so few miles over its lifetime the yacht is a wonderful example of the energy waste typical of the lifestyles of the wealthy elite. Four (!) people will need to fly from Europe to the U.S. to support the return of the yacht to Europe after Greta is delivered there.

The trip is nothing more than a publicity stunt, and it leads to further disinformation regarding global energy use. In fact, it works much better as satire. Imagine if everyone who traveled across the ocean used yachts rather than jet airplanes. More energy would be required, not less, due to the manufacture of tens of thousands of extra yachts which inefficiently carry few passengers on relatively few, very slow trips. In contrast, the average jet aircraft will travel 50 million miles in its lifetime. Most people don’t realize that travel by jet is now more fuel efficient than travel by car.

The Greta boat trip story is in so many ways the absolute worst way to raise awareness of climate issues, unless you know nothing of science, engineering, or economics. It’s like someone who is against eating meat consuming three McDonald’s cheeseburgers to show how we should change our diets. It makes zero sense.

I could give many more examples of the media helping to destroy the public’s ability to have a rational discussion about climate change, how much is caused by humans, and what can or should be done about it.

Instead, the media chooses to publish only the most headline-grabbing stories, and the climate change issue is then cast as two extremes: either you believe the “real scientists” who all agree we are destroying the planet, or you are a knuckle-dragging 8th-grade educated climate denier with guns and racist tendencies.