Radiative Changes Over the Global Oceans During Warm and Cool Events

February 9th, 2011 by Roy W. Spencer, Ph.D.

In my continuing efforts to use satellite observations to test climate models that predict global warming, I keep trying different ways to analyze the data.

Here I’ll show how the global oceanic radiative budget changes during warm and cool events, which are mostly due to El Niño and La Niña (respectively). By ‘radiative budget’ I am talking about top-of-atmosphere absorbed sunlight and emitted infrared radiation.

I’ve condensed the results down to a single plot, which is actually a pretty good learning tool. It shows how radiative energy accumulates in the ocean-atmosphere system during warming, and how it is then lost again during cooling.

[If you are wondering how radiative ‘feedback’ fits into all this — oh, and I KNOW you are — imbalances in the net radiative flux at the top of the atmosphere can be thought of as some combination of forcing and feedback, which always act to oppose each other. A radiative imbalance of 2 Watts per sq. meter could be due to 3 Watts of forcing and -1 Watt of feedback, or 7 Watts of forcing and -5 Watts of feedback (where ‘feedback’ here includes the direct Planck temperature response of infrared radiation to temperature). Unfortunately, we have no good way of knowing the proportions of forcing and feedback, and it is feedback that will determine how much global warming we can expect from forcing agents like more atmospheric carbon dioxide.]
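The arithmetic in the bracketed note can be made concrete with a trivial sketch (the numbers are the hypothetical ones from the text above; this is nobody's actual analysis code):

```python
# Illustrative arithmetic only: two different forcing/feedback splits
# produce the same observed 2 W/m^2 imbalance, which is why the split
# cannot be recovered from the imbalance alone.
def net_imbalance(forcing_w, feedback_w):
    """Net TOA radiative imbalance (W/m^2) as forcing plus feedback."""
    return forcing_w + feedback_w

case_a = net_imbalance(3.0, -1.0)  # 3 W forcing, -1 W feedback
case_b = net_imbalance(7.0, -5.0)  # 7 W forcing, -5 W feedback
```

Both cases give 2.0 W/m², so the imbalance alone cannot tell them apart.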

But for now let’s ignore that conceptual distinction, and just talk about radiative imbalances. This simplifies things since more energy input should be accompanied by a temperature rise, and more energy loss should be accompanied by a temperature fall. Conservation of energy.

And, as we will see from the data, that is exactly what happens.

We analyzed the 20th Century runs from a total of 14 IPCC climate models for which Forster & Taylor (2006, J. Climate) also provided diagnosed long-term climate sensitivities. In order to isolate the variability in the models on time scales less than ten years or so, I removed the low-frequency variations with a 6th-order polynomial fit to the surface temperature and radiative flux anomalies. It’s the short-term variability we can test with short-term satellite datasets.
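As a rough illustration of that detrending step, a sketch on synthetic data might look like this (the polynomial order matches the text, but the data and code are invented for illustration):

```python
import numpy as np

def remove_low_freq(t, y, order=6):
    """Subtract a 6th-order polynomial fit, leaving the sub-decadal
    variability that short satellite records can actually test."""
    coeffs = np.polyfit(t, y, order)
    return y - np.polyval(coeffs, t)

# Synthetic 20-year monthly series: a slow cubic 'trend' plus a
# 10-month oscillation standing in for ENSO-like variability.
t = np.linspace(0.0, 1.0, 240)
slow = 0.5 * t**3
fast = 0.1 * np.sin(2 * np.pi * 24 * t)
resid = remove_low_freq(t, slow + fast)
# resid now tracks the fast oscillation; the cubic 'trend' is gone.
```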

I’ve already averaged the results for the 5 models that had below-average climate sensitivity, and the 9 models that had above-average climate sensitivity.

The curves in the following plot are lag regression coefficients, which can be interpreted as the rate of radiative energy gain (or loss) per degree C of temperature change, at various time lags. A time lag of zero months can be thought of as the month of temperature maximum (or minimum). I actually verified this interpretation by examining composite warm and cold events from the CNRM-CM3 climate model run, which exhibits strong El Niño and La Niña activity.
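For readers who want the mechanics, a minimal lag-regression calculation on synthetic data might look like the following (my own sketch, not the code behind the plot; the sign convention for the lags is an assumption):

```python
import numpy as np

def lag_regress(temp, flux, lags):
    """Regression coefficient of flux on temperature at each lead/lag
    (in months).  Negative lag: flux precedes the temperature extreme;
    positive lag: flux follows it."""
    n = len(temp)
    coeffs = []
    for lag in lags:
        if lag >= 0:
            t_seg, f_seg = temp[: n - lag], flux[lag:]
        else:
            t_seg, f_seg = temp[-lag:], flux[: n + lag]
        t_seg = t_seg - t_seg.mean()
        f_seg = f_seg - f_seg.mean()
        coeffs.append(float(np.dot(t_seg, f_seg) / np.dot(t_seg, t_seg)))
    return coeffs

# Synthetic check: flux responds to temperature 3 months later with a
# coefficient of 2 W/m^2 per deg C, so the curve should peak at lag +3.
rng = np.random.default_rng(0)
temp = rng.normal(size=600)
flux = np.zeros(600)
flux[3:] = 2.0 * temp[:-3]
curve = lag_regress(temp, flux, [0, 3])
```

On this synthetic series the lag-0 coefficient is near zero while the lag +3 coefficient recovers the built-in value of 2, which is the basic point about zero-lag regressions carrying little signal.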

Also shown are satellite-based results, from detrended HadSST2 global sea surface temperature anomalies and satellite-measured anomalies in radiative fluxes from the Terra CERES instrument, for the 10-year period from March 2000 through June 2010.

The most obvious thing to note is that in the months running up to a temperature maximum (minimum), the global oceans are gaining (losing) extra radiative energy. This is true of all of the climate models, and in the satellite observations.

The above plot is possibly a more intuitive way to look at the data than the ‘phase space’ plots I’ve been pushing the last few years. One of the KEY things it shows is that doing these regressions only at ZERO time lag (as Dessler did in his 2010 cloud feedback paper, and as all previous researchers have done) really has very little meaning. Because of the time lags involved in the temperature response to radiative imbalances, one MUST take the time lag behavior into account if one is to have any hope of diagnosing feedback. At zero time lag, there is very little signal at all to analyze.

So, What Does This Tell Us About the Climate Models Used to Predict Global Warming?

Of course, what I am ultimately interested in is whether the satellite data can tell us anything that might allow us to determine which of the climate models are closer to reality in terms of their global warming predictions.

And, as usual, the results shown above do not provide a clear answer to that question.

Now, the satellite observations DO suggest that there are larger radiative imbalances associated with a given surface temperature change than the climate models exhibit. But the physical reason why this is the case cannot be determined without other information.

It could be due to a greater depth of water being involved in temperature changes in the real climate system, versus in climate models, on these time scales. Or, maybe the extra radiative input seen in the satellite data during warming is being offset by greater surface evaporation rates than the models produce.

But remember, conceptually these radiative changes are some combination of forcing and feedback, in unknown amounts. What I call forcing is what some people call “unforced internal variability” – radiative changes not due to feedback (feedback being, by definition, the direct or indirect radiative result of surface temperature changes). They are probably dominated by quasi-chaotic, circulation-induced variations in cloud cover, but could also be due to changes in free-tropospheric humidity.

Now, if we assume that the radiative changes AFTER the temperature maximum (or minimum) are mostly a feedback response, then one might argue that the satellite data shows more negative feedback (lower climate sensitivity) than the models do. The only trouble with that is that I am showing averages across models in the above plot. One of the MORE sensitive models actually had larger excursions than the satellite data exhibit!

So, while the conclusion might be true…the evidence is not exactly ironclad.

Also, while I won’t show the results here, there are other analyses that can be done. For instance: How much total energy do the models (versus observations) accumulate over time during the warming episodes? During the cooling episodes? And does that tell us anything? So far, based upon the analysis I’ve done, there is no clear answer. But I will keep looking.

In the meantime, you are free to interpret the above graph in any way you want. Maybe you will see something I missed.

27 Responses to “Radiative Changes Over the Global Oceans During Warm and Cool Events”


  1. Scott Thomas says:

    Dr. Roy,

    I am wondering about the right-hand tail of your feedback, after +20 mo. It swings back negative, as if a warming spell is about to begin, or as if the positive (0 to +20 mo) portion over-corrected. Do you think that’s real? Or an artifact of how you removed the long wavelengths?

  2. Christopher Game says:

    Dear Dr Spencer,

    You write: “[ … imbalances in the net radiative flux at the top of the atmosphere can be thought of as some combination of forcing and feedback, which always act to oppose each other. …. Unfortunately, we have no good way of knowing the proportions of forcing and feedback, and it is feedback that will determine how much global warming we can expect from forcing agents like more atmospheric carbon dioxide.]”

    I agree.

    But I would add that we could make good progress towards finding out the proportions by combining the top of atmosphere imbalance data with data about external drivers, such as the distance of the earth from the sun, and even the time of day if there is enough time resolution in the imbalance data, and various cycles of tides determined by sun and moon, as well as sunspot data. There are of course many other external drivers which must be investigated in order to let us identify the internal dynamics of the system. Many of these external drivers are governed by laws of deterministic chaos following Newton’s laws of motion and gravity. The autoregressive model is not as efficient a tool of investigation as is the use of external driver data.

    Yours sincerely,

    Christopher Game

  3. maxwell says:

    Two questions.

    First, is there any physical significance in the fact that the satellite data seem to show a radiative balance at zero time lag? I’m thinking that the temperature will hit an extremum as the radiative imbalance reaches an extremum (radiative balance being such an extremum), but is there any reason to believe a priori that a radiative balance is necessary for a temperature extreme?

    Second, knowing what we know about the relationship between radiative im/balance and temperature extremes, are there hypotheses as to why the satellite data pass through radiative balance at zero time lag, but the climate models do not? Are they the same reasons you highlight above?

    I do agree that this plot is somewhat more understandable than the phase space plots I’ve seen in some of your other posts.

    Thanks in advance.

  4. Mike says:

    Dr Spencer,
    What it looks like to me is that the below-avg models are fairly close on radiating energy out into space. But both model groups underestimate radiative heating by a large amount.
    And if they just correct that, the models will all overestimate temps.

  5. Jens says:

    Dr. Spencer,

    I think that there is perhaps less uncertainty in the interpretation than you suggest, at least if the simple model is a fair representation of the system. An important strength of your analysis compared with that of Schwartz, based on autocorrelation of temperature fluctuations, is that your result for the sensitivity parameter lambda (i.e. the height of the curves in your plots extrapolated to zero lag without the low-pass-filter smoothing of the ‘wiggle’) is independent of the assumed heat capacity. (This can be inferred simply from the dimension on the y-axis, which is the same as for lambda, but is also confirmed by analytical evaluation of the model.)

    Also, I cannot see how evaporative heat loss changes the interpretation of the curves. Eventually, the heat is lost by radiation into space, and the mechanism of vertical transport in the atmosphere does not matter for the analysis (except perhaps for a delay).

    Of course the uncertainty of extrapolation to forcing by greenhouse gases and longer time scales remains. As you indicate, your analysis is mainly for the ENSO-induced T variations, and the system dynamics could be very different – as indicated by the model example you mention.

  6. Terry says:

    It seems to me that the entire modeling business of radiative budgets is fraught. The problem as I see it is that one is trying to establish the difference between two large numbers, and there is considerable uncertainty in the measured and modeled estimates of both of these, i.e. the incoming and outgoing fluxes. Small errors in either will give large variations in the estimated dQ. I can’t see how this can possibly ever be resolved using the traditional techniques, as the uncertainty is just too great. What is needed is a fundamental switch to measuring a different metric that depends only on the actual difference, not on the two measured/modeled fluxes. Intuitively I would like to think that there is a spectroscopic method that would fit the bill, but that is just blue-sky thinking.

  7. HR says:

    In your January 28th post you compare your simple model with observations with two different climate sensitivities (3.3 and 1.44).

    What would happen if you repeat the comparison of observations and IPCC models in the above post (Feb 9th) but with a much lower climate sensitivity? Is it possible to do that experiment? What would a good match between models and observations prove (and still fail to prove)?

  8. You are talking about top-of-atmosphere absorbed sunlight and emitted infrared radiation. But:

    I know that Trenberth’s incoming solar radiation is equal to the “Solar Constant” divided by 4, which in my opinion makes a mockery out of the whole energy flow budget. But do climate models also use that number, say 341.3 W/m²?
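For what it’s worth, the divide-by-four is pure geometry: the Earth intercepts sunlight over its cross-section (πR²) but radiates over its whole surface (4πR²). A two-line check reproduces the quoted figure (the solar-constant value here is an assumed round number chosen to match 341.3; actual measured values vary slightly):

```python
# Geometry behind the divide-by-four: intercepted over pi*R^2,
# spread (on average) over the full surface 4*pi*R^2.
S0 = 1365.2                 # assumed solar-constant value, W/m^2
mean_insolation = S0 / 4.0  # global, diurnal/annual average
```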

    And if so, what about the outgoing radiation, say 238.5 W/m² – is that a measured fact or is that also some kind of “guesstimate”?
    Can anybody ever hope to make any sense out of a climate- or any other kind of model that contains estimated numbers or values?

    Plenty of assumptions in, etcetera, etcetera.

  9. Mistakes are all too easy to make when trying to do science on the fly, on the internet, but: not knowing just what is in the models, this just shows the derivative of an extremum (a local maximum or minimum in T) with inflection points on either side of that extremum. Since d(LW+SW)/dT varies to either side of zero, either d(LW)/dT or d(SW)/dT is doing so; can you even say which, or both?

    The model curves indicate a single swing and back of temperature — their slopes going to zero as they return to zero — while the satellite data show three extrema (every zero crossing in the green curve), indicating that none of the models are tracking the real satellite data properly (which is also indicated by their smaller maximum excursions versus the satellite data, which you mentioned — they are apparently missing a significant portion of the real energy in the process, and just what that portion means for the surface temperature). I don’t know whether this is a flaw in your method of “removing low-frequency variations”, but it sure seems to be a basic flaw somewhere (even if you are just interested in comparing the maximum excursions of the models with those of the satellite data).

    Basically, I would say the models all fail the process you have put together, and this tells us nothing about forcings or feedbacks in the real world. This is for me just another sign that radiation transfer theory is substantially disconnected from the thermodynamics that actually controls the surface temperature, and that disconnect is fatal for the theory, and for the credibility of all who now swear (all too passionately, and arrogantly) by that theory.

  10. Kevin says:

    Dr. Spencer,

    With respect, I am disappointed by your “single plot”. I was hoping for something solid and substantial that would back up the credibility of climate science.

    Many years ago in my early engineering education I was taught to “always reconcile your units”. This may have been one of the most useful lessons of all. When stumped by why my actual system does not behave as I expected one of my first steps is to check my units, then I check my units once again.

    So in your text you explain how you analyzed “energy” changes over time, a fine attempt to reconcile modeled data with empirical measurements. You then plot the results as “energy” vs. time, except the units on your vertical axis are (power/(unit area * temperature increment)). Sorry, but these are not units of energy that I recognize.

    You then title the data as “Energy loss (gain) after Temperature max (min)”.

    I suggest you;

    1) Separate the “power/temperature increment” variable into perhaps “incremental power”, independent of the temperature increment. I’m sure you realize that the “power” involved with radiative energy transfer is dependent on the temperature.
    2) Rename the “Energy loss etc.” data as “Incremental Power” or some other more descriptive name.

    Unfortunately, I think the climate scientists have the cart before the horse. The whole Sun / Earth / Atmosphere “energy budget” has to be redone from scratch by following the amount of energy arriving at “TOA” and then following this all the way through the system while performing the calculations across the entire spectrum (UV to far IR).

    Once an approximation of the energy present in each portion of the system is known it will be possible to use the material properties (i.e. the thermal capacity) of that portion of the system to predict the temperature at that location. This would of course only apply to the “average” state and would not consider the 24 hour/ 365 day cyclical nature of the energy input to the system.

    I must admit the following doubts about the alleged ”energy budget” of the Earth as currently accepted by the field of climate science;

    • An energy budget prepared with units of power is suspect
    • The application of the Stefan-Boltzmann equation in this situation needs to be remained to see if it has been used correctly
    • Since the energy input is cyclical (24 hrs etc.) it is necessary to consider the “speed of heat” through the system.

    I know I do not speak “climate science” but as an engineer I have prepared/reviewed many power budgets and also many energy budgets. While they have similarities there are differences that can make an analysis useless.

    Cheers, Kevin.

  11. Kevin says:

    Sorry, please replace “remained to see if it has been used correctly”

    with: “reexamined to see if it has been used correctly”

    Cheers, Kevin.

  12. peter_ga says:

    I feel there is something wrong mathematically with plotting it like that.

    In effect, the radiative imbalance is being cross-correlated with the inverse of sea surface temperature, in order to determine the effect one variable has on the other, and vice versa.

    However, system identification algorithms, in my limited experience, do not work like this. For example, if there is a system
    y(t) = h(t) * x(t)
    then the algorithm does not involve cross-correlating y(t) with 1/x(t).
    Instead, H is found by dividing the cross-spectrum of Y and X,
    that is Y*X, by the auto-spectrum of X, that is X*X, giving H(w), where H is in the frequency domain.
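A toy version of the estimate peter_ga describes might look like this (single-segment and un-windowed, so purely illustrative; a real analysis would average over segments, Welch-style, to beat down the noise):

```python
import numpy as np

def transfer_estimate(x, y):
    """Estimate H(w) = S_yx(w) / S_xx(w): the cross-spectrum of y and
    x divided by the auto-spectrum of x, as in classical system ID."""
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    return (Y * np.conj(X)) / (X * np.conj(X))

# Known system: y is x delayed 5 samples and doubled, so |H| ~ 2.
rng = np.random.default_rng(1)
x = rng.normal(size=1024)
y = 2.0 * np.roll(x, 5)      # circular delay keeps the check exact
H = transfer_estimate(x, y)
gain = np.abs(H[1:])         # skip the (demeaned, ~0/0) DC bin
```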

  13. Christopher Game says:

    Dr Spencer writes:

    “we have no good way of knowing the proportions of forcing and feedback” and

    “as usual, the results shown above do not provide a clear answer to that question” and

    “conceptually these radiative changes are some combination of forcing and feedback, in unknown amounts” and

    “One of the MORE sensitive models actually had larger excursions than the satellite data exhibit!” and

    “while the conclusion might be true…the evidence is not exactly ironclad” and

    “So far, based upon the analysis I’ve done, there is no clear answer. But I will keep looking.”

    peter_ga writes:

    “I feel there is something wrong mathematically with plotting it like that” and “However, system identification algorithms, in my limited experience, do not work like this.”

    Dr Spencer so far has not appeared to have replied to Kevin’s post of February 11, 2011 at 8:58 PM, which reads:

    “Unfortunately, I think the climate scientists have the cart before the horse. The whole Sun / Earth / Atmosphere “energy budget” has to be redone from scratch by following the amount of energy arriving at “TOA” and then following this all the way through the system”.

    As Christopher reads Kevin’s post, Kevin is recommending the usual scientific method of investigation of cause and effect in a dynamical system, as against the autoregressive model method the difficulties of which Dr Spencer is telling us about in the above quotes in this present post. Christopher is not commenting on the rest of Kevin’s comments.

    Christopher wonders how Dr Spencer thinks about peter_ga’s and about Kevin’s comments.

    Christopher thinks that (translating peter_ga’s and Kevin’s comments into other terms in the ordinary language of science) a better way to examine the present data is to combine them with data about external drivers and see how the external drivers affect the internal state of the system under investigation.

    Christopher thinks that the difficulties that Dr Spencer is telling us about are the result of the arbitrary nature of the IPCC concept of “feedback”, as opposed to the ordinary objective and systematic scientific understanding of feedback as used in the ordinary theory of dynamical systems (as opposed also to the engineers’ theory of control systems). The IPCC distinction between “forcing” and “feedback” is an arbitrary conceptual fiction, and is not an invariant feature of the natural dynamics of the system. Christopher thinks that Dr Spencer knows this when Dr Spencer writes on January 28th, 2011: “Keep in mind that “feedback” in the climate system is more of a conceptual construct.”

  14. Kevin ONeill says:

    As a non-scientist I can still understand the need to determine whether global feedback systems have a net negative or positive effect. One point in your recent writings leaves me a bit confused – why 70 meters?

    Obviously there is a great difference between ocean surface temperatures at tropical, temperate, and polar latitudes. Regardless of the latitude, the temperature measured at the surface is relatively stable to a depth of around 30 meters. At 30 meters the temperature begins to drop in tropical and temperate waters — and the two converge at a depth of 200 meters. Only at 1000 meters or more are they indistinguishable from polar waters.

    For temperate and tropical waters there appear to be two possible depths of significance: the stable layer near the surface (of approximately 30 meters) and the depth where the temperature of tropical and temperate waters converge (approximately 200 meters).

    Polar oceans are an entirely different proposition – since their temperature essentially remains unchanged year round. The only way out of this is to somehow include the volume of ice and treat it as part of the ocean. The polar ocean temperatures can’t rise above zero degrees until the existing ice has all disappeared. Warming in the arctic leads to a decrease in ice volume – but not an increase in water temperature.

    Tropical, temperate, and polar oceans have vastly different thermal profiles – and water temperature alone would give no indication of the vast changes that have taken place in the Arctic.

    So why 70 meters? It seems (to me) that you simply searched until you found a number that provided the results you desired. If the model only produces these results when configured for that depth, then wouldn’t it seem more reasonable that the model is insufficient for the analysis, and *any* results it produces have to be taken with a *very* large grain of salt?

  15. NLB says:

    Without getting into the details of anything, it seems that if the IPCC models vary from peaks of around 1 W/m2 to 4 W/m2 per degree of temperature change, that wouldn’t be settled science. Not knowing, or not being able to model, the behavior of a system to within a factor of 4 would lead me to believe that the models are not precise.
    Of course those of us interested enough to look at data on the topic on the web know this (whichever side you are on), and certainly the scientists know this, but people who want to spend trillions of dollars on AGW really don’t understand it.

    I am very interested in what you find about the total energy budgets.

  16. Terry says:

    Kevin, you raise a very good point in my opinion, in that the diurnal variation is not accounted for. As I understand it, none of the models include it. Radiation budgets are based on the average difference between an assumed solar input and the average grey-body emission from the earth/atmosphere. At night there is still a significant incoming radiation flux from the atmosphere even though there is a higher amount leaving. There is a short description of this on WUWT http://wattsupwiththat.com/2011/02/13/a-conversation-with-an-infrared-radiation-expert/#more-33954. I expect the difference is different at night cf. day. Going back to my comments above re errors accrued by the difference between two large numbers, I wonder: is it actually valid to use an average, since the flux depends on T**4, which varies diurnally? That means an average will definitely give a biased result.
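Terry’s T**4 point is essentially Jensen’s inequality, which made-up day/night temperatures illustrate (the numbers are hypothetical, not a real budget):

```python
# Because emission scales as T^4, the flux computed from an averaged
# temperature is less than the averaged flux -- so a diurnal-average
# temperature biases the outgoing-radiation estimate low.
SIGMA = 5.670e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
t_day, t_night = 300.0, 280.0     # hypothetical surface temperatures, K

flux_of_mean_t = SIGMA * ((t_day + t_night) / 2.0) ** 4
mean_of_fluxes = (SIGMA * t_day**4 + SIGMA * t_night**4) / 2.0
bias = mean_of_fluxes - flux_of_mean_t   # ~3 W/m^2 for this 20 K swing
```

For this 20 K day/night swing the low bias is roughly 3 W/m², comparable in size to the radiative signals under discussion.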

  17. Kevin Klees says:

    This is Kevin Klees, I have been posting as just “Kevin”,

    Regarding the additional comments from Mr. Game, Mr. ONeill and Terry, thank you for the insights.

    I do respect Dr. Spencer’s contributions and insights. But I am concerned about the cavalier use of certain terms in the climate science field, specifically “Net Energy Gain” and “Higher Equilibrium Temperature”. As someone who has been exposed to complex man-made systems, I have learned that humans have not yet learned how to create (or even observe) any form of “energy gain”. I have also learned that no cyclical system ever really reaches “equilibrium”.

    I will contribute this small bit of empirical evidence about climate science; I reside right on the south shore of Lake Ontario. The portion of my yard that abuts the lake shore demonstrates a clear “micro climate”. In the fall our lakeside temperatures are a few weeks behind the falling temperatures further inland. In the spring our lakeside temperatures are also a few weeks behind the rising temperatures further inland. This is not a likely result of “climate change” since people have been harvesting fruit (apples, cherries, grapes, peaches, very delicious indeed) from this area for at least 150 years. In fact if you research old and new reference documents you will see that this area has a “growing climate” about equal to areas much further south (i.e. southern Pennsylvania). Also, it may surprise people but along the northern shore of Lake Erie they have been growing Tobacco for about one hundred years as well. I posit that this observable difference is due to the slower “speed of heat” passing through the water within the Great Lakes. This causes the weather (i.e. energy content in the atmosphere) to be somewhat delayed along the lake shore. This delay just happens to cause the soil and air temperatures to align better with the available sunlight to make the plants “happier”. As evidence I submit that in April and May I can usually walk around my “front yard” (away from the lake) in shirt sleeves, but at the same time I find a light jacket preferable along the lake shore. And indeed the opposite is true in the Fall.

    I do admit that it is entirely possible to accumulate energy (note the careful selection of the word “accumulate” which implies a “temporary” situation as opposed to “store” which implies a “permanent” situation). I know this becomes very semantic very quickly, but word selection does matter and I am certainly open to the use of other words. Unfortunately energy that has been accumulated (or “stored” if you will) will also dissipate faster (i.e. more quickly) as determined by the laws of thermodynamics.

    Cheers, Kevin.

  18. Simple says:

    “If you are wondering how radiative ‘feedback’ fits into all this — oh, and I KNOW you are — imbalances in the net radiative flux at the top of the atmosphere can be thought of as some combination of forcing and feedback, which always act to oppose each other.”

    I am confused here, for it strikes me that there can be a radiative imbalance that leads to a positive feedback on the imbalance, like decreasing albedo.

    Or are you actually stating that the radiative imbalance is due to the input being either smaller or larger than the output?

    It is just that the first leads to the wrong conclusion that climate sensitivity cannot be amplified by internal factors, whereas the second is just claiming energy balance over time.

    Also your graph is a tad confusing for me.

    Basically, is (0,0) a temperature max or a temperature min?

    “The most obvious thing to note is that in the months running up to a temperature maximum (minimum), the global oceans are gaining (losing) extra radiative energy. This is true of all of the climate models, and in the satellite observations.”

    How are you sure? For if either LGW increases or SW decreases (presuming LGW to be a negative in the addition, to get negative values), will d(LGW+SW)/dt fall or become more positive?

    Also, if the oceans gain heat internally, ocean LGW increases and world temperature increases to a possible max as LGW+SW approaches zero, presuming the oceans had been cooled by an internal factor before this time. Now world global temperatures lag Niña events by about 5-6 months, as your time lags suggest. Now the most negative time would be a La Niña event, with the oceanic radiative imbalance at a peak 6 months before the full cooling. As the La Niña fades, the radiative imbalance approaches zero as global temperature approaches a min, and vice versa for an El Niño, all in keeping with your 2000-2010 graphic. However, although this says absolutely nothing about the radiative imbalance caused by CO2, it does imply, as does May’s paper in 2009, that the El Niño/La Niña cycle predominates in natural variation as compared to the sunspot cycle, otherwise the predominant lag would be 2-3 years.

    As for the lag from CO2 warming, which due to deep ocean mixing is on the order of decades, this analysis says absolutely nothing. Also, in CO2-induced warming the top-of-atmosphere budget will be slowly becoming more and more negative as LGW increases; however, this increase in LGW can only occur at a higher temperature (due to CO2 blocking infrared) and is also a reaction to an increase in atmospheric LGW radiation to the earth, which isn’t included in TOA. Therefore you could get a large LGW increase but still have an overall positive radiative budget at the surface, despite LGW+SW at TOA becoming negative while the world is heating. Therefore, to get the radiative imbalance that is necessary to attribute warming, isn’t it the surface balance of LGW (out) plus LGW (in) and surface SW (in) that is crucial? This imbalance will be affected by an internal surface cooling force like a La Niña, and thus swing with it; however, as CO2 increases there will be a general trend for the imbalance to rise (this could be detected in your analysis) and for world temperatures to follow, lagged by several years as said. A proxy for this imbalance, therefore, although lagged by 20 years or so, is the global world temp. This means that the acute heating from the energy imbalance 20 years ago is showing its result now, which means the rate of temperature rise from 1990-2010 represents the response to the CO2 concentration from 1970-1990 ish, or a peak of 325-350 ppm, or 25 ppm, or 1.25 ppm a year, and that has given a lagged rate of rise of 0.2C a decade. In the last 20 years CO2 has risen from 350-390 ppm, or 40 ppm, or 2 ppm a year, which means, given the climate sensitivity is presumably static (which it isn’t, as ice is melting), then all things being equal we should be in for ~0.32C for the next few decades.
    Now that isn’t the whole story of course, as it is only the first 40-50% of the total warming that results over the first 40-50 years, so there will be additions to be made to account for this deeper lag in temperature increases.

    Anyway, sure, time will tell, but it could all mean that we are due another 0.65C (or what we’ve had in the last 100 years or so) in 20 years, which will be interesting considering the effects already occurring and the positive feedbacks in the system.

    From paleo records it is more and more clear that the full CS is in the range of 6-12C for a doubling, depending on initial conditions, with 50-60% of that realised in the first 100 years. Initial conditions are important here in CS, as CS has got to be higher when a polar ocean is losing its summer (albedo-relevant) ice than when it isn’t.

    Also, your analysis suggests that El Niño/La Niña transitions in the last 10 years have been 2-yearly ish, and the most recent changes seem to be going El Niño, La Niña, El Niño (predicted July 2011) in years now; makes me wonder about a chaotic change sequence.

    Also, all things being equal, 2010 should have been cold due to low sun activity (lagged 2 years) and overall probably equal La Niña/El Niño influences, yet it was hot, and Jan 2011 was also the 11th warmest on record (this is not apparent on your satellite temp. plot as you are shifting the average point as you include more years; if you kept the baseline at 1979-2000, Jan 2011 would be well above the zero line!!). This is surprising and means possibly that a year that should have been below average was at a 1:100-years extreme above average, and don’t chaotic systems start to jump after a while!!

    Also, as you know, the computer models are all underestimating the apparent secondary changes of global warming (ice melt, Hadley cell expansion, species shifts, etc.) while not doing too badly on the temperature rise, which either means these changes happen faster at lower temperatures than expected, or the excess energy in the system isn’t being fully represented as temperature changes yet, despite the very large changes we’ve already experienced.

    Interesting times ahead; I wonder what sort of changes we can expect if the 100-year CS per doubling is actually about 6C (average from paleo reckoning), that is 3.6C by 2100.

  19. RW says:

    Simple says:

    “actually about 6C (average from paleo reckoning)”

    How do you figure?

  20. Kevin Klees says:

    Ok, I realize now that I forgot to explain my main observation about “climate change” while living on the shore of Lake Ontario. My point is:

    The large (relative to the gases in the atmosphere) thermal capacity of the water in Lake Ontario seems to only be capable of DELAYING the max/min temperatures on the nearby landmass. We do not have any evidence of any new “Higher Equilibrium” temperature along the lakeshore. Just a better alignment (time domain wise) of temperature peaks with the lifecycle of the plants along the shoreline. We still get just about the same average values of hot/cold/wet/dry/calm/storm as the areas that are many miles inland from the shore. This is pretty much determined by how far North we are from the Equator.

    So, if water (liquid in this case) in large quantities (the Great Lakes represent about 20% of the world’s fresh water) CANNOT cause a “Higher Equilibrium” temperature, why does anybody believe that a TRACE gas with minimal thermal capacity can cause a “Higher Equilibrium” temperature to permanently occur ???

    If any climate scientists with a grant would like to rent
    my backyard and place thermometers there I would certainly be open to discussions.

    Cheers, Kevin.

  21. Kevin, you say: “If any climate scientists with a grant would like to rent my backyard and place thermometers there I would certainly be open to discussions.”
    Sorry, but grants are only given to climate scientists by politicians for as long as those “scientists” keep churning out science that makes political spin believable. So tough luck, Kev; you’ll probably be better off renting your backyard out to fishermen.

  22. Harold Pierce Jr says:

    ATTN: Kevin

    Do you have any ideas or comments on how the “atmospheric tides” might alter energy fluxes throughout the earth-atmosphere system by changing the distribution of the mass of the gases and of the clouds?

  23. Kevin Klees says:

    Mr. Dahlsveen, yes I know that I probably can’t get any “free” money to study the temperatures in my backyard.

    But it does no harm to offer my private property for the greater good of solving the great climate disaster that we supposedly face. I will in fact make this more generous offer: any climate scientist who wants to provide thermometers at their own expense is free to place them in my yard at no cost.

    Funny thing about the fishermen: according to the current laws of New York State, I own the land under the water along the lakeshore, but the State of New York owns the water. So technically anybody can swim above my land without trespassing, but if their feet touch the bottom (my property) they have violated my property rights.

    Cheers, Kevin.

  24. Kevin Klees says:

    Mr. Pierce, honestly I have not thought about the “atmospheric tides” theory at all. I am still shaking my head about the “net energy gain” comments from folks.

    My first observation would be that gases do indeed behave according to Boyle’s Gas Law. They do move/expand/compress in response to the conditions they are exposed to. All of these changes include some exchange of energy; it would of course require some more detailed analysis to determine the final energy balance…

    Cheers, Kevin.

  25. simple says:

    “actually about 6C (average from paleo reckoning)”

    How do you figure?

    From the Schneider article “Global warmth with little extra CO2” in Nature Geoscience: the Pliocene was the last time CO2 concentrations were like today, at 325-450 ppm. The tropics were 3-4C warmer and the poles were 10C warmer, although I have seen more recent estimates of polar temperatures being 19C higher, with a much reduced polar-tropical gradient; this would result in weakening trade winds and possibly different weather patterns and teleconnections. The vegetation back then occupied a different spatial distribution compared to today. Anyway, overall the Pliocene had a 3-5C higher global temperature than today, which is 0.7C higher than when CO2 was 280 ppm, and therefore 3.7-5.7C for the calculation of CS per CO2 doubling.

    There was less ice back then and a weaker sun, so it is not unreasonable to expect that the CS might be lower due to there being less albedo effect, or higher to compensate for the weaker sun. This countering effect makes it probable that the estimate is not far off, and presuming the ice loss on warming is about the same as the ice gain on a cooling of the same degree, it seems reasonable to estimate present-day CS to be of a similar magnitude.

    Therefore, starting the baseline at 280 ppm and taking an average for the Pliocene to be 350 ppm, that equals a difference of 70 ppm, or a 1/4 of a doubling rise from 280 ppm. The temperature rise for a doubling of CO2 is therefore the Pliocene temperature difference times 4, which is scarily 14.8C to 22.8C; taking Pliocene CO2 to be 400 ppm instead gives 8.6C to 13.3C, and even taking the lowest Pliocene estimate of 2C higher, 350 ppm gives a CS of 8C to 10C and 400 ppm gives a CS range of 4.6C to 11.5C.

    These CS ranges basically represent the spread of the 95% possibility for the CS, and 4.6C to 22.8C is a large range to consider. However, this range is the long-term CS to equilibrium, of which only about 60% will be realised in the first 100 years due to the nature of warming; 60% of the long-term CS therefore gives the CS per 100 years, as per the IPCC models. This comes out to 2.76C-5C at the low end; however, the range of global temperature in the Pliocene was 3-5C, so a more realistic CS per 100 years is 4.4C to 6.9C, taking Pliocene CO2 levels to be 400 ppm. More recent evidence suggests 350 ppm is much more likely; using 350 ppm, the CS per 100 years is 7.2C to 12C, so 6 might have been generous. Maybe melting the surface ice of a polar ocean accelerates global warming more than anticipated; it has just been reported that albedo effects are likely to be 30% underestimated in models.

    What does this mean in terms of temperature rise? Taking 350 ppm for the Pliocene and a low temperature of 3C to give the lowest CS means that if atmospheric CO2 remains at 390 ppm (0.4 of a doubling) with no further rise, we can expect a 2.88C minimum rise by 2100; if CO2 can be drawn down to 350 ppm, that reduces to a 1.8C to 3C rise by 2100, with 2.4C most likely.

    Adaptation to a 2.4C rise is not an easy thing to consider; anything more than that and the disruptions are likely to be too severe to cope with well.

    Planning for 1-in-1000-year or worse weather events becoming common takes serious consideration. The weather can be very extreme.

    Let’s hope the lower estimates are the right ones and somehow the atmospheric concentration can be brought down to 350 ppm, although with natural warming feedbacks and our common addiction to CO2-producing energy that seems unlikely.

    Funny thing: this changing climate could be an opportunity to come together, give up fossil fuels and create a sustainable human-ecosystem relationship for the future; however, at present this seems too implausible to even consider, and thus I really do hope that the Pliocene records and others are wrong.

    The longer we are above 350 ppm the more likely that >2.4C will be the result; 400 ppm by 2100 would give a ~3C global rise, which means ~10C at the poles. What is the upper limit, and how on earth can 50 ppm equivalent of CO2 be drawn out of the atmosphere?

  26. RW says:

    Sorry, I don’t understand. Nice try though.

  27. simple says:

    I have reconsidered my calculations, as it has cooled since the Pliocene and this changes the numbers (the nature of halving and doubling depends on the initial value), and I have tried to make them easier to follow.

    When the CO2 concentration in the atmosphere is halved or doubled, the earth changes temperature by a set amount if everything else remains the same. This change in temperature is termed the climate sensitivity (CS) and is dependent on initial conditions. If the CS is 3C and CO2 drops from 350 ppm to 175 ppm (is halved), then the earth’s temperature will fall by 3C.

    Temperature rise = CS x CO2 rise in doubling equivalents

    Temperature fall = CS x CO2 fall in halving equivalents

    For example, going from 280 ppm to 350 ppm with a CS of 3C:

    The rise is 70 ppm; a doubling from 280 ppm would take a 280 ppm rise, therefore 70 ppm is 1/4 of a doubling.

    Temperature rise = 3C x 0.25 = 0.75C.

    The CO2 in the Pliocene was 350 ppm, so this is the initial value; the temperature was 3-5C hotter than now and 3.7-5.7C hotter than the pre-industrial world, when CO2 was 280 ppm, or 70 ppm lower than in the Pliocene.

    To halve the Pliocene CO2 concentration would take a fall of 175 ppm (350 ppm to 175 ppm), so a fall of 70 ppm represents 70/175 = 0.4, or 40%, of a halving in the atmospheric CO2 concentration, and therefore would lead to only 40% of the temperature change that a full halving would induce. If the CS were 10C, then falling from 350 ppm to 280 ppm would result in a 4C temperature fall.

    In the Pliocene, temperatures were 3.7-5.7C higher; i.e. there has been a fall of 3.7-5.7C in global temperature for a CO2 drop of 70 ppm, which represents 40% of a halving as above. Therefore 3.7-5.7C represents only 40% of the full CS, implying a CS range of 9.25C to 14.25C.

    This is the equilibrium CS, which is higher than the 100-year CS, as only ~60% of the overall temperature change resulting from a change in CO2 levels occurs in the first 100 years. The IPCC CS is a 100-year CS, and thus an IPCC CS of 3C is equivalent to an equilibrium CS of 5C. Therefore the Pliocene data indicates a 100-year (IPCC-model) CS of 5.55C to 8.55C (60% of 9.25C-14.25C).

    As 350 ppm is 70 ppm above the pre-industrial level of 280 ppm, and thus represents a rise of 1/4 of a doubling above 280 ppm, this means that if CO2 remains at 350 ppm for 100 years the earth will warm by 1.4C to 2.1C, which would keep changes adaptable; however, 400 ppm for 100 years means a 2.3C to 3.66C rise, and anything higher even more.
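    The halving/doubling arithmetic in this comment can be sketched in a few lines. This is a minimal sketch of the commenter's own linear "fraction of a doubling" rule (standard treatments use a logarithmic CO2 forcing instead); the helper function name and the 60%-in-100-years factor are the commenter's assumptions:

```python
# Commenter's linear doubling/halving rule: a doubling needs a rise equal to
# the starting concentration; a halving needs a fall of half of it.
def fraction_of_doubling(c_start, c_end):
    """Linear fraction of a doubling (or of a halving, if CO2 falls)."""
    if c_end >= c_start:
        return (c_end - c_start) / c_start        # doubling needs +c_start
    return (c_start - c_end) / (c_start / 2.0)    # halving needs -c_start/2

# Pliocene: 350 ppm and 3.7-5.7C warmer than the 280 ppm pre-industrial world.
frac = fraction_of_doubling(350.0, 280.0)               # 0.4 of a halving
cs_eq = [dt / frac for dt in (3.7, 5.7)]                # equilibrium CS, ~9.25-14.25C
cs_100 = [0.6 * cs for cs in cs_eq]                     # ~60% realised in 100 yr

# Warming if CO2 holds at 350 ppm (a quarter-doubling above 280 ppm):
warming_350 = [cs * fraction_of_doubling(280.0, 350.0) for cs in cs_100]
print([round(x, 2) for x in cs_100])       # ~[5.55, 8.55]
print([round(x, 1) for x in warming_350])  # ~[1.4, 2.1]
```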

    If the Pliocene CO2 was higher, at 400 ppm, then by the same calculations an atmospheric concentration of 350 ppm for 100 years would mean a 0.925C to 1.4C temperature rise by 2100, and 400 ppm would mean 1.6C to 2.44C. Considering the CO2 concentration has only been above 300 ppm for 80 years and above 350 ppm for 20 years, and yet temperatures have already risen by 0.7C, this is more suggestive that 350 ppm will induce 1.4C to 2.1C by 2100, and that the Pliocene CO2 was 350 ppm, as the more recent evidence suggests.

    Therefore the upshot of the Pliocene data is that 350 ppm is probably safe-ish; however, 400 ppm isn’t, and we are already at 390 ppm, which means that to get to safe levels (and that is debatable considering changes already occurring!) 40 ppm already have to be withdrawn from the atmosphere; that is 20 years of present emissions, or lots and lots of trees!!

    I am not sure this is possible, as CO2 levels seem destined to go well over 400 ppm and during warming episodes the earth seems to release CO2, so 2C by 2100 seems basically inevitable unless some sort of sweeping change occurs.

Leave a Reply