The Rest of the Cherries: 140 decades of Climate Models vs. Observations

September 22nd, 2011 by Roy W. Spencer, Ph. D.

Since one of the criticisms of our recent Remote Sensing paper was that we cherry-picked the climate models we compared the satellite observations of climate variations against, here are all 140 ten-year periods from the 20th Century runs of all 14 climate models we analyzed (click to see the full-res version):

As you can see, the observations of the Earth (in blue, CERES radiative energy budget versus HadCRUT3 surface temperature variations) are outside the range of climate model behavior, at least over the span of time lags we believe are most related to feedbacks, which in turn determine the sensitivity of the climate system to increasing greenhouse gas concentrations. (See Lindzen & Choi, 2011 for more about time lags).
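
For readers who want to see the mechanics, here is a minimal sketch (in Python) of the kind of lag regression plotted above. It is illustrative only, not our exact analysis code; it assumes you already have monthly anomaly time series for the radiative flux and the surface temperature:

```python
import numpy as np

def lag_regression_slopes(flux, temp, max_lag=17):
    """Regression slope of radiative flux on surface temperature at each
    lead/lag in months; positive lag means flux comes after temperature."""
    lags = np.arange(-max_lag, max_lag + 1)
    slopes = []
    for k in lags:
        if k >= 0:
            f, t = flux[k:], temp[:len(temp) - k]   # flux after temperature
        else:
            f, t = flux[:k], temp[-k:]              # flux before temperature
        slopes.append(np.polyfit(t, f, 1)[0])       # OLS slope, W m^-2 per deg C
    return lags, np.array(slopes)
```

Each curve in the figure is just this kind of slope-versus-lag calculation run on a different 120-month segment of model output, or on the 10 years of CERES/HadCRUT3 data.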

Now, at ZERO time lag, there are a few decades from a few models (less than 10% of them) which exceed the satellite measurements. So, would you then say that the satellite measurements are “not inconsistent” with the models? I wouldn’t.

Especially since the IPCC’s best estimate of future warming (about 3 deg C.) from a doubling of atmospheric CO2 is almost exactly the AVERAGE response of ALL of the climate models. Note that the average of all 140 model decades (dashed black line in the above graph) is pretty darn far from the satellite data.

So, even with all 140 cherries picked, we still see evidence that there is something wrong with the IPCC models in general. And I believe the problem is that they are too sensitive, and thus are predicting too much future global warming.


54 Responses to “The Rest of the Cherries: 140 decades of Climate Models vs. Observations”


  1. Svend Ferdinandsen says:

    I have asked before, and do so now again.
    What is the reason to have and use so many models?
    One model should be sufficient, unless the different models deal with different parts of the climate.
    On the other hand, I have not seen any information regarding specialties in the different models, and they are often used in making some average.
    This is to me a sign that even the climate science community does not really believe them.
    Average of garbage is still garbage.

  2. Fred says:

    1) The results are not just from CERES. They are a combination of CERES and the HadCRUT3 temperature data.
    ROY: Yes, and HadCRUT3 is the primary surface temperature dataset used by the IPCC…the supposed gold standard.

    2) Dessler showed that the different data sets have a different result.
    ROY: Yes, he used more obscure and unproven datasets for climate work like this.

    3) This is almost certainly because the other temperature datasets have a higher spatial coverage (including the poles), which implies that they are less dominated by tropical variability.
    4) the model temperatures are global and thus have more polar variability in the mix. I predict that using model temperatures from 60S-60N you will see a stronger regression (and ones that are more similar to your CERES/HadCRUT result).
    ROY: No, all datasets are area-weighted. You are grasping at straws. I get about the same result over the global oceans if I use the Levitus (ARGO float-dominated) dataset for surface temperature, or HadSST2.

    5) Almost all of the power in the regression comes from the LW variability, not SW.
    ROY: Doesn’t matter where it comes from. It’s an apples-to-apples comparison, and it’s the sum of both that determines total feedback.

    6) Plotting the regression at lag 4 months against each model’s climate sensitivity shows no correlation. So why do you insist that CS is the determining factor in this result?
    ROY: You are correct, it is not *necessarily* the determining factor…just the most obvious factor. But what I am demonstrating is that Dessler’s 2010 claim that the satellite data support the climate models is not correct. He had to ignore the clear evidence of negative cloud feedback directly from the CERES observations, as I have mentioned earlier, and now verified by Steve McIntyre and by Troy Masters:
    http://climateaudit.org/2011/09/21/troy-dessler2010-artifact-of-combining-two-flux-calculations/

    • Hi Roy,

      So you admit there’s no correlation between the goodness of fit and climate sensitivity? If there is no correlation, then why is it the most obvious factor? Or if it was once the most obvious thing to look at, isn’t it clear by now that it didn’t pan out?

      It seems to me that the “most obvious factors” would have something to do with short-term variability. El Niño has been suggested, but other things could be involved.

  3. Ray says:

    Svend,
    I agree with you entirely, and it seems that this fact is the “Achilles Heel” in the case put forward by the IPCC.
    My own view is that there are so many models in order that the positive and negative errors can balance each other out.
    It is clear to me that if there were certainty over the science, far fewer models would be required, although I can accept that there might be many scenarios, taking into account a range of input assumptions. In order to predict the motion of the planets, do we run 20 or 30 models and take the average? Another point is that there is such divergence among the individual models within each scenario that it would be IMPOSSIBLE for all of them to be accurate to within reasonable limits.
    There are clearly problems with the models used in AR4 and I think that the owners of the models are aware of that, but obviously not advertising the fact.
    I recently did some evaluation of the models in scenario A1B and found that the Canadian models, i.e. cccma_cgcm3_1_t63 and cccma_cgcm3_1_t47, were amongst the least accurate, largely because they underestimated temperatures during the early 20th century and overestimated them during the early 21st century. When I wrote to Environment Canada to point this out, their reply included the following:
    “It is the case that the Canadian model contributed to the IPCC AR4 tended to overestimate warming in the 20th century, and this is well known. Since then, considerable effort has been devoted to improvements that have led to a new model whose results will be contributed to the upcoming IPCC Fifth Assessment.”
    Note that they say that the overestimated warming in the 20th century was “well known”. Amongst climate scientists no doubt, but amongst the public? I suspect that the new models to be contributed to AR5 will be “improvements” in the sense that they are more accurate in “hindcasts”, which presumably will include 2000-2010, but they may even show a faster rate of warming for the remainder of the 21st century. The possibility that the “improvements” will show a lower rate of warming in the future is, of course, out of the question, since that would be an unacceptable embarrassment for the IPCC.

    • Doug Proctor says:

      Svend’s question and your response are both good and to the point. I’ve been involved in Monte Carlo simulations in the oil patch, and I know that you don’t lump them all together for the average. Multiple scenarios reflect different set-ups; your job as a “working” scientist is to figure out which parameters best reflect your situation, and then go with the simulation that uses them.

      The problem here is that in real-world usage you have observations that are reasonably unique. As time and study go forward, in part from the desire to weed out inappropriate scenarios, you focus on studies that give you the information you need to narrow the options. In this climate science fiasco, it appears that as time goes by there is no additional information gathered to do so. Your uncertainty tomorrow is the same as today, regardless of what you do.

      23 years into this foolishness we appear no more certain of which of Hansen’s A, B or C (WHICH ELIMINATES CO2 PRODUCTION) we are following, although “C” certainly appears the closest. But all the options remain open.

      Weird. Work, work, work, spend, spend, spend, and yet get no closer to the “truth”.

      Is this what “truthiness” is all about?

  4. HAS says:

    I had been looking at this issue as time permitted, and was interested to see that which GCMs actually model ENSO well is controversial, and even that one of the models claimed by Dessler (2011) to do a good job on ENSO (MRI CGCM 2.3.2A) is not even supported by his reference to Lin (2007).

    What would interest me, if you have it easily to hand, is a graph of how the models that “ENSO Feedbacks and Associated Time Scales of Variability in a Multimodel Ensemble” (Belmadani et al., 2010) identified as the best fit to ENSO (INM-CM3.0, IPSL CM4, UKMO HadCM3, and UKMO HadGEM1) fare in this comparison.

  5. Robert says:

    “ROY: Yes, and HadCRUT3 is the primary surface temperature dataset used by the IPCC…the supposed gold standard.”

    It may be used by the IPCC but it is still the worst available method… undersampling the warming, as was demonstrated by ECMWF when comparing the last decade. Anecdotally, I have a paper submitted which examines some of the challenges in dealing with temperature datasets in sparse locations, and Hadley’s method will ALWAYS result in the least amount of data being used. Frankly, it has been demonstrated on the blogosphere that Hadley’s method will always undersample warming. You know this yourself. I don’t believe in picking the dataset which includes the least amount of data (fewest stations, due to restrictions in the CAM method) and has the worst high-latitude coverage for your comparison. It seems to me you’re using the IPCC’s reliance on Hadley as cover for using a dataset that we all know is lacking. Please explain why you would select the dataset that has been shown to undersample the warming (particularly in the Eastern Canadian Arctic)?

    • Sundance says:

      I’m sure you also wrote to Mike Mann and Keith Briffa to say that, because there was so much more tree proxy data in existence that they chose to ignore in constructing their hockey stick, you consider it bogus.

  6. Eric Worrall says:

    Can you recalibrate the sensitivity of the models, to produce a better fit for the satellite observations? That would be a difficult observation to explain away.

  7. Dr. Doom says:

    Dr. Spencer,

    What is your best scientific guess on climate sensitivity for doubling of CO2? What is the basis?

    • RW says:

      I think Dr. Spencer’s best estimate is about 0.6 C, given a 1.1-1.2 C ‘zero-feedback’ value. Or a negative feedback reduction of about 40-50%.

  8. Christopher Game says:

    Dr Spencer writes in his response to the post of Fred of September 22, 2011 at 2:04 PM:
    “It’s an apples-to-apples comparison”

    Dr Spencer is on an outright winner there. Home and hosed.

    If Robert of September 22, 2011 at 6:37 PM doesn’t like the dataset used by Drs Spencer and Braswell, rather than demand an explanation from Drs Spencer and Braswell, perhaps he can specify another of the same kind that he prefers? Christopher Game

  9. ChrisH says:

    With tongue only slightly in cheek:

    If Robert of September 22, 2011 at 6:37 PM doesn’t like the dataset used by the IPCC (as well as Drs Spencer and Braswell), why doesn’t he demand an explanation from the IPCC?

    It is amazing how the IPCC & friends are held to one scientific standard, while Drs Spencer and Braswell are routinely held to an entirely different standard (which may not even be possible to achieve!). If they pick the datasets they think are best, then they are accused of cherry picking, and if they pick the datasets used by the IPCC & friends, then they are accused of picking the worst datasets. The double standard is palpable 🙁 . Presumably the only acceptable datasets are the ones which give an outcome that confirms AGW…

  10. william says:

    I set my sat nav at the beginning of my journey; it estimates my time to arrive, and it knows the route, distance, tolls, everything.

    It’s always wrong; well, it suddenly catches up when I actually am near my front door.

    And that is how we are being sold a pup: as soon as the temps started to look like they were going down, we were told oops, yep, all consistent with AGW. That’s how they are working it now, always playing catch-up and saying it is consistent.

    Well, the makers of the sat nav could say their machine is right when I get near to my front door.

  11. Stephen Wilde says:

    I think that the simplest reason for the discrepancy is that more evaporation and condensation from a faster water cycle results in a component of upward energy transfer that has to be ADDED to the energy transfer occurring from radiative processes alone.

    The thing is that more evaporation keeps the surface as a whole cool, despite the warming of individual molecules prior to their evaporating earlier than would otherwise have occurred.

    Once the energy needed by the evaporative process has been converted to latent form it no longer affects surface temperature and so is not included in the upward radiation anticipated from that surface temperature.

    It then reappears at a higher level when condensation occurs and departs to space faster than it would have done if radiated from the surface.

    I suspect that the models fail to treat a faster upward transfer of energy from more evaporation as an ADDITION to the radiative energy loss caused by the surface temperature.

    As I understand AGW theory, warmer surfaces are supposed to result in more evaporation and water vapour, but somehow without a corresponding increase in condensation, hence a positive rather than negative feedback.

    If condensation increases too then the feedback must turn negative. Relatively stable global humidity seems to support a corresponding increase in condensation when more energy is added to water at the surface whether it be in oceans or soil moisture on the continents.

    Radiative physics alone is not sufficient, because a highly efficient non-radiative process bypasses the densest lower levels of the atmosphere to supplement the basic processes of radiative energy transfer.

  12. londo says:

    For a science that is settled, 14 models can safely be considered excessive. In physics, unless the models show essentially the same results and the differences are well understood, this type of modelling would be considered science in progress at best. But then again, this is climate science. It’s settled, but with humongous error bars to make room for any deviation between reality and the so-called science.

  13. P. Solar says:

    Good response to the rather tenuous criticisms, Dr Spencer. The graph makes the point well.

    It also underlines the differences more strongly than before. It is not just a case of amplitude: there is a fundamental difference between the models and reality that goes beyond the magnitude of a feedback.

    Almost none of the modelled responses cross the axis except near zero lag. All variations of real radiation and temperature measurements do, like your plot here: they show a more oscillatory behaviour and clearly cross back at around +/- 18 months.

    This is also a defect of the “simple model” you proposed which, whatever parameter values are used, will not consistently cross the axis more than once.

    Now the naive model has only random inputs and its response is coherent with that basis. It was never supposed to be complete, just to show climate-like variations based on random inputs. It thus demonstrates the need for something other than short-term random forcing.

    In this respect the supercomputer models show the same defect as your trivial random model.

    One thing that would produce this kind of swing would be heat change due to ocean currents with a cyclic component. Indeed, using R’s stl() to remove the seasonal component from the data, one does see a strong 18-month and 3-year cyclic trend. This trend is, in part, very similar to ENSO trends.

    Isn’t this graph really showing that what the models are producing is not only wrong in scale but fundamentally wrong in FORM because of their failure to model for ocean currents?
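
    For context, here is a minimal sketch of the general kind of “simple model” P. Solar is discussing: a zero-dimensional energy balance driven by random radiative forcing. The parameter values are illustrative guesses, not the published ones:

```python
import numpy as np

# C dT/dt = N(t) - lam*T : mixed-layer energy balance with random forcing N.
rng = np.random.default_rng(0)
n, dt = 1200, 1.0 / 12.0      # 100 years of monthly steps (dt in years)
C, lam = 7.0, 3.0             # heat capacity (W yr m^-2 K^-1, ~50 m mixed layer);
                              # net feedback parameter (W m^-2 K^-1)
N = rng.normal(0.0, 1.0, n)   # random radiative forcing (e.g. cloud noise), W m^-2
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + dt * (N[i - 1] - lam * T[i - 1]) / C
net_flux = lam * T - N        # net TOA energy loss (one sign convention)
```

    Because this is a first-order system driven by white noise, its lag-regression curve decays away from zero lag without oscillating, consistent with P. Solar's observation that such a model will not cross the axis again at +/- 18 months.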

  14. This is just the start of the models predicting too much warmth. Just wait a few more years.

    As I have said many times, until the models have complete and accurate data (which will be never), they will never be able to predict the climate going forward.

    In the meantime, past history is a much better indicator, and the PHASE IN THEORY is the only theory that even attempts to address all of the many past abrupt climatic changes.

    If abrupt climate change had been a once-or-twice occurrence in earth’s past, other explanations would warrant more consideration, but it has happened too many times.

    Climate can, and will in the future, change rapidly once thresholds are met, due to the level of solar activity and the items that control the climate phasing into a cold or warm mode as a consequence of that solar activity.

    I would say thus far everything the PHASE IN THEORY has been saying has come to be, from a more -AO, to an increase in geological activity, to a greater +SOI INDEX, to a cold PDO.

    Until someone can come up with something better, this is what I believe is the correct explanation.

    If the PROLONGED SOLAR MINIMUM persists, then expect colder temperatures to come once the OCEAN HEAT CONTENT lag is in.

    The ROOT cause of all climatic change, in my opinion, is magnetic field strength, or lack of strength, as a result of solar activity, the earth’s magnetic field strength, and the moon’s modulation of the solar magnetic field.

    The weaker the magnetic field, the slower the weather systems move, and the more staying power they will exhibit once established. Hence more extremes in the climate.

    All the items I have mentioned, such as:

    SOLAR
    VOLCANIC
    SOI INDEX
    AAO, AO, NAO, ETC
    PDO/AMO
    EARTH’S MAGNETIC FIELD
    MILANKOVICH CYCLES IN THE LARGER SENSE
    COSMIC RAYS

    All of those in turn influence cloud cover, snow cover, and precipitation, which in turn influence earth’s albedo.

    Extreme climatic conditions and increased geological activity are much quicker to respond to a PROLONGED SOLAR MINIMUM than temperature change, which is held back by OCEAN HEAT CONTENT.

  15. P. Solar says:

    One quick but important question: your label for positive lag says “radiation after Tsfc”. Is this correct?

    It would seem that this part of the graph shows the regime where radiation is inducing a rate of change of temperature. One would expect a lagged response to such a forcing similar to that shown in the graph, but in this case it would be radiation leading temperature change on the right-hand side.

  16. One main reason the models are off, and will continue to be off, is that the POSITIVE feedbacks the models keep saying will happen are NOT happening, and WON’T be.

    This in turn lends support to Dr. Spencer’s negative cloud feedback.
    Another example of the many false positive feedbacks global warmers have tried to come up with to support their asinine theory.

  17. WillyW says:

    It is encouraging that Dr. Spencer and several other like-minded scientists are still looking for the answers. Too many so-called scientists have sold out to politics, particularly the IPCC, and allowed the answer that brings in dollars to become their answer. Thomas Edison, while working on finding a filament for the electric light bulb, was asked why he didn’t get discouraged after so many failures. Mr. Edison replied, “I have not failed. I’ve just found 10,000 ways that won’t work.” Unfortunately the AGW crowd seems to have concluded that since they have 140 models they must have the right answer. To that I say, 140 down, 9,860+ to go! Don’t be discouraged Dr. Spencer, just keep successfully identifying what doesn’t work and the right answer will be found.

  18. DCA says:

    Dr.Spencer,

    Forgive me for not reading your whole paper, but can you tell me how the certainty in your analysis compares to the other “different lines of evidence” I’ve heard many AGW advocates claim? I’ve heard those lines are paleo records, ice cores, ocean heat uptake and solar cycles.

    I would be grateful for anyone else to answer that.

  19. P. Solar says:

    “Forgive me for not reading your whole paper”

    I suspect if you want an answer to your question you are at least going to have to read what people publish. No one is going to do you a personal summary.

  20. “salvatore del prete says:
    September 23, 2011 at 8:18 AM

    One main reason the models are off, and will continue to be off, is that the POSITIVE feedbacks the models keep saying will happen are NOT happening, and WON’T be.”

    I agree. Le Chatelier’s Principle obviously applies here as well. If it didn’t, Earth would no longer have life after all this time. There are no tipping points in climate.

  21. Christopher Game says:

    Responding to the post of Werner Brozek of September 23, 2011 at 4:37 PM.

    Werner Brozek writes: “Le Chatelier’s Principle obviously applies here as well.”

    Christopher replies. I am glad you mention this principle. I think it is to some degree relevant.

    But one has to be aware that it was written for static thermodynamic equilibria, not non-thermodynamic-equilibrium steady states such as are contemplated for the climate system. Things are much different for non-equilibrium systems. And the principle needs to be stated very carefully indeed, with particular attention to choosing the proper variables of interest. The principle has clear exceptions in some variable choices, as explained by Prigogine and Defay 1954.

    I think that the climate system could well be an exception of the kind that Prigogine and Defay point to. I think it an interesting project to find variables for the climate system that make the principle apply. But I think it not obvious that such variables exist. Christopher Game

  22. Patrick says:

    In Medicine we were taught that if a condition has 20 different treatments then we can infer that none of them are effective. So what does the number of computer models say about climate science a la IPCC?

  23. Bill says:

    Why “so many” models? Because it’s needed, and best. The answer has to do with mice and oxen.

    First, mice. Everybody thinks they can build a better mousetrap, and until one proves consistently best in almost all applications, several will stay in the market. That is good. No climate modeler I know, save perhaps Dr. Spencer, would make the argument that their model is “best” in all cases or even “right”. Rather, they claim that their models “perform well” within uncertainty bounds that are understood, with reasonable sensitivities to reasonable factors – some with more justification for doing so than others. Until climate modeling is perfected (and no one claims it has been), people will tinker with different algorithms and input parameters. Good.

    Now for oxen. In 1907, Galton published in Nature his finding that people at a country fair predicted with amazing accuracy the weight of an ox. Individually, they varied widely, but on average their various “models” for predicting the weight of the ox were within 1 pound of correct. Since that study, much work has been done on the power of diversity in heuristics and modeling, giving rise to such things as amazingly accurate “prediction markets” – based on the theory that the diversity of people’s ideas of how to predict or “model” an outcome will yield very accurate results. So, Dr. Spencer is right about “averaging or cancelling out the positive and negative errors”, but assigns the wrong motive. In assuming sinister motive, he ignores the power of the “diversity prediction theorem.” That many competent people are performing this diversity of modeling in a decentralized fashion is a good thing, with a likely good result (as opposed to one person running different models until they get a result they like).
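
    For what it’s worth, the “diversity prediction theorem” Bill invokes is an algebraic identity, easy to check numerically. The guesses below are made up for illustration; only the 1,198 lb figure is from Galton’s account:

```python
import numpy as np

truth = 1198.0                                               # dressed weight of Galton's ox, lbs
guesses = np.array([900.0, 1020.0, 1150.0, 1310.0, 1400.0])  # hypothetical crowd
crowd = guesses.mean()

collective_sq_err = (crowd - truth) ** 2                 # error of the average guess
avg_individual_sq_err = ((guesses - truth) ** 2).mean()  # average error of the guessers
diversity = ((guesses - crowd) ** 2).mean()              # spread of the guesses

# Identity: collective error = average individual error - diversity
assert np.isclose(collective_sq_err, avg_individual_sq_err - diversity)
```

    Note the catch, which Ray and Steve Fitzpatrick raise elsewhere in this thread: the identity guarantees the crowd beats its average member, not that the crowd is close to the truth. A shared bias moves every guess, and the average, together.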

  24. Thank you for your comments
    “Christopher Game says:
    September 23, 2011 at 4:58 PM”

    Obviously climate is much more complex than the systems Le Chatelier originally wrote about, but I believe similar things could be written about climate. There are just many more pieces to the puzzle. And the SB11 paper adds a few more pieces to this puzzle.

    I believe Lubos’ article on Le Chatelier’s principle and climate would be an interesting read: http://motls.blogspot.com/2007/11/le-chateliers-principle-and-natures.html

    Below are some paragraphs from this article.

    “One possible way to describe Le Chatelier’s principle is to say that feedback mechanisms in stable systems are negative. When you add CO2, various processes that consume it (such as photosynthesis) become more frequent. So the ultimate increase in CO2 will be lower than if those processes didn’t exist. For the concentration of CO2, it is essentially a standard example of the principle in action. No one doubts it.

    What about the clouds? Well, I think that the observations make it rather likely that the clouds are a stable system. For example, the temperature during the glaciation cycles never started to run out of control. It had the tendency to stay in a certain interval. If it is true that the responses of clouds on the external temperature and CO2 concentration were a good description of the effective laws of physics governing these processes, it follows that we deal with a stable system. Le Chatelier’s principle must apply and feedbacks must be negative.

    But the idea that positive feedbacks dominate or that they are the ones who win at the end simply contradicts basic laws of thermodynamics.”

    • Dr. Roy actually explains very well in his book why Lubos Motl is dead wrong. The blackbody or “Planck” response (which can be thought of as a negative feedback, and which just means that the hotter an object gets, the more energy it radiates back out) dominates the system, so the net feedback INCLUDING the Planck response is negative. But climatologists usually take the Planck response as a zero point, and talk about positive or negative feedbacks RELATIVE TO THAT. In other words, even when climatologists say “positive feedbacks dominate the system,” they do not mean the system is unstable. They just mean the feedback is less negative than it would be with just the Planck response.

      Motl’s argument is the sort of thing that impresses electrical engineers, who know all about “feedback,” but next to nothing about climate.
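
      To put rough numbers on that (standard textbook values, nothing specific to Spencer and Braswell): with a CO2-doubling forcing near 3.7 W m^-2 and a Planck response near 3.2 W m^-2 per deg C,

```python
F_2x = 3.7        # radiative forcing from doubled CO2, W m^-2 (approximate)
lam_planck = 3.2  # blackbody (Planck) response, W m^-2 per deg C (approximate)

for lam_fb in (0.0, 2.0, -1.0):  # feedbacks measured relative to the Planck baseline
    dT = F_2x / (lam_planck - lam_fb)
    print(f"feedback {lam_fb:+.1f} W/m^2/K -> sensitivity {dT:.1f} deg C")

# 0.0 gives ~1.2 deg C (the "no feedback" value); +2.0 gives ~3.1 deg C, i.e.
# "positive feedbacks dominate" yet the system stays stable because the total
# response (3.2 - 2.0) is still positive; -1.0 gives ~0.9 deg C.
```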

  25. Jason says:

    That’s the most confusing graph I’ve ever seen. But I think it’s more for effect than actual study.

  26. HAS says:

    As an idle curiosity I tried to replicate using the data you had provided to Steve McIntyre, and got similar results (but not exactly) for your fig 3a sans models (the raw data after smoothing looked about right).

    Anyway, because I’d been thinking about the issues around what causes some GCMs to do better than others in modeling this effect (and the discussion about ENSO), I ran the SOI (observed, but scaled down by a factor of 10 to give values comparable to temp anomalies) against flux (rather than HadCRUT3).

    I need to check the statistical significance, but in both cases the higher R^2s are in the 0 – 8 month energy lag period, and with the scaling of the SOI index I serendipitously picked, the two curves follow one another pretty well.

    So much to do so little time.
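
    A minimal sketch of the kind of lagged-R^2 scan HAS describes, assuming monthly SOI and flux series (note that R^2 is scale-invariant, so the factor-of-10 SOI scaling matters for overlaying the curves, not for the R^2 values):

```python
import numpy as np

def lagged_r2(soi, flux, max_lag=8):
    """R^2 of flux against SOI, with flux lagging SOI by 0..max_lag months."""
    out = {}
    for k in range(max_lag + 1):
        r = np.corrcoef(soi[:len(soi) - k], flux[k:])[0, 1]
        out[k] = r ** 2
    return out
```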

  27. Ray says:

    Bill,
    “So, Dr. Spencer is right about averaging or cancelling out the positive and negative errors, but assigns the wrong motive.”
    You seem to be, at least partially, quoting me there, rather than Dr. Spencer (my apologies if Dr. Spencer has said the same thing).
    So what you are saying is that climate modelling is little more than informed guesswork, and in that, I would agree with you.
    The problem is that at the moment, the diversity in the model outcomes is so large that it is almost a CERTAINTY that the majority of the models will be incorrect, within reasonable error limits.
    How can we place faith in the predictions of model ensembles when the majority of the models are not correct?
    It would probably be possible to produce equally accurate results with a series of random predictions, within reasonable specified bounds.

  28. S Basinger says:

    I still don’t understand why the modellers aren’t using the satellite data (which is representative of reality) to improve their models.

  29. Fred says:

    Your responses to 1) and 2) are non-responsive. Firstly, you don’t seem to take anything else from the IPCC on trust, so why do you do so on this specific issue? This is even worse since nowhere has the IPCC ever said that the HadCRUT3 data is a ‘gold standard’. I have done the same calculation with the GISTEMP record (neither obscure nor unknown) and it shows similar behaviour to the metrics seen in Dessler’s paper. This therefore cries out for an explanation, no?

    On 3) and 4), you misunderstand the point. Of course the temperatures and radiative fluxes are area-weighted. But given a dominant tropical variability, the integral over lower latitudes will give a higher regression than the integral over the globe (since Arctic variability is effectively uncorrelated with the tropics). Given that the main difference between HadCRUT and other temperature indices is related predominantly to Arctic coverage, a better apples-to-apples comparison with the HadCRUT/CERES data would be to take the model temperatures averaged from 60S-60N and compare them to the global radiative fluxes (a sketch of this band averaging follows this comment). Your results from using Levitus or HadSST reinforce this point. Higher regressions will result from using extra-polar model output, reducing the mismatch between models and obs.

    On 5), I’m not sure what you are implying. Surely it matters what is causing this correlation? The results for the LW and SW anomalies look very different, and that should surely have implications for the source of this phenomenon. LW dominance fingers water vapour and/or high clouds, but the lack of equally significant SW impacts would count against clouds on their own.

    On 6) you concede that CS does not correlate to your metric, yet insist that this only implies that CS is not ‘necessarily’ related. This is perverse – what evidence do you have that CS is in any way related? Until you find a metric in the models that does correlate better and actually explains the result you see, how can you conclude anything, let alone something about CS?
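
    A sketch of the band average Fred is proposing (illustrative; it assumes zonal-mean model temperatures on a latitude grid, with area weights proportional to the cosine of latitude):

```python
import numpy as np

def band_mean(zonal_mean_temp, lats, lat_min=-60.0, lat_max=60.0):
    """Area-weighted average of a zonal-mean field over a latitude band."""
    sel = (lats >= lat_min) & (lats <= lat_max)
    weights = np.cos(np.deg2rad(lats[sel]))
    return np.average(zonal_mean_temp[sel], weights=weights)
```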

  30. Jens says:

    Dr. Spencer,

    excellent rejoinder to two (valid, I think) points of criticism: possible cherry-picking of models for comparison, and lack of estimates of uncertainty. As indicated by your question mark in the graph, it is still an open question whether this relates to climate sensitivity or is special to variations caused mainly by ENSO. Is it not possible to make similar analyses of variations during a day in the tropics, as discussed at wattsupwiththat by Willis Eschenbach, and of yearly variations due to the changing angle of the sun? Perhaps a more general picture will emerge.

    I agree with Stephen Wilde that there appears to be too much focus on (changes of) radiative heat transport through the atmosphere. The transport of latent heat in water vapor is very important and is, as discussed by Willis, crucial for the stabilization of surface temperatures during the day in the tropics.

  31. THE CURRENT SITUATION: This spurt of solar activity is among the strongest of solar cycle 24 so far. I still say it is a spurt within the prolonged solar minimum.

    However, if this spurt were to last, then unlike the global warmers I will make another evaluation, which might modify the climatic extreme forecast going forward, along with the extent of the temperature decline and increased geological activity, all in response to a prolonged solar minimum with some active spurts.

    The simplest way to put it is: a QUIET SUN EQUATES TO A MORE RESTLESS EARTH (climate and geologically speaking).

    AN ACTIVE SUN EQUATES TO A MORE QUIET EARTH (climate and geologically speaking).

    Year 2011 reflects that very fact so well.

    Sunspot 1302 will soon be in a position relative to the earth where any strong flare activity will probably produce a major geomagnetic storm, of K5 or higher.

    If this should come about, everyone watch the geological activity 1 to 25 days following this possible geomagnetic storm.

    I say this is the process of how the prolonged solar minimum works in the cooling of the temperatures of earth. If one looks at the temperatures during any prior prolonged solar minimum, one will see they are up and down constantly, with many counter-trends within the overall trend. I think part of it is the constant increase/decrease of volcanic activity within the prolonged solar minimum: more geological activity during active spurts, less when the sun is constantly quiet.

    SOI INDEX, PDO, AMO, AO, NAO, to name some others, all play roles, as they also have counter-trends within their main trends.

    The fact that the temperature response always zigs and zags within its main trend lends much support to the PHASE IN THEORY, and eliminates other theories: if the causes they claim were to change the climate, those causes would be reflected in much more straight up or down moves in temperature, without all the zigs and zags.

    In closing, I will keep saying it: the PHASE IN THEORY is the only theory that attempts to explain the MANY abrupt climatic changes of the past. Many abrupt climatic changes, not just one or two.

    The global man-made CO2 theory is a waste of time to continue to study, and is taking so much away from the real reasons why the climate of the earth has changed, and will continue to change going forward. What a waste of time the CO2 study is, along with the dumb climatic models, which have already been proven WRONG, over and over again.

  32. Matter says:

    Thank you for displaying the graph. Is there any chance that the data could be made easily available, in an obvious way, so one could get the results from all of the runs labelled by model?

  33. Christopher Game says:

    Responding to the post of Jens of September 25, 2011 at 4:19 AM.

    Jens writes: “too much focus on (changes of) radiative heat transport through the atmosphere”

    It depends what you mean by “through”. Radiative transfer from the land-sea body direct to space is “radiative transport through the atmosphere”. So is radiative transfer from one part of the atmosphere to another. But they are quite different things.

    Radiative transfer from the land-sea body direct to space is on the order of 65 W m^-2, while radiative transfer from the lower parts of the atmosphere to the upper is at least an order of magnitude less. Radiative transfer from the atmosphere direct to space is on the order of 175 W m^-2. These quantities are essential for the questions studied by Drs Spencer and Braswell, and it is not right to say they focus too much on them.

    As you say, most heat transport within the atmosphere is by convection of internal energy. Christopher Game

  34. Rob says:

    Bill
    September 23, 2011 at 8:49 PM

    re: oxen

    The climate follows the laws of physics, not the average opinion of “experts”. Like the ones who didn’t believe in plate tectonics? Or the ones who put the earth at the centre of the universe?

    Or like the stock market predicting when the next crash will happen, or lemmings predicting where the cliff face is?

  35. Rob, that post makes so much sense.

    The climate is in such a delicate balance, and yet at the same time it is hard to change the balance; but we know the balance changes, from past historical data.

    This decade is going to be very interesting.

    Again, the strength of MAGNETIC FIELDS is the ROOT cause of climate change, in my opinion. That is where the research should be going.

    The CO2 nonsense is just what it is, nonsense, but the one good thing about it is that it opens the door in this field, which for lack of a better word is ruled by mainstream scientists who are essentially CLUELESS when it comes to earth’s climatic system.

    It opens up many opportunities, which would otherwise be closed if the study of climatology were on the ball, which of course it isn’t. I have never seen a scientific field so clueless about the field they are involved in as this one.

  36. Steve Fitzpatrick says:

    bill,

    re: oxen

    The problem is that the large majority of the models are on the high side of the trend, not normally distributed… people guessing the weight of oxen probably are pretty uniformly scattered around the ‘data’ (in that case, the true weight). So the best evidence is that the models do not, on average, presently represent a ‘best estimate’ of real behavior. Could they be on average “right” in the longer term? Sure, but the odds are not looking good. Unless there is a significant increase in warming rate in the next few years, it will be clear that the average of the models is almost certainly NOT a good representation of the truth.

    This should surprise nobody; the models all use the same basic approach, and most likely suffer from the same types of deficiencies, including obvious problems like overstated estimates of ocean heat uptake rates, among others.

  37. pochas says:

    Dr Spencer,

    I notice that your figure shows a 3-month lag to peak flux from peak temperature, and a 4-month lag as per the arrow. Three months is the quarter-cycle lag for an annual cycle, and four months is the time lag from peak sea surface temperature (mid-March) to peak TLT temperature (mid-July). This suggests that you are looking at an annual cycle, perhaps caused by the sun moving above and below the plane of the equator throughout the year. If this is the case, then these time lags would not be of interest as relates to cloud feedbacks. Is it possible that the amplitude ratio a is related to the cloud feedback factor f simply by

    a = 1/(1-f)

    ?
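
    For concreteness, with that form a feedback factor of f = 0.3 would give a = 1/(1 - 0.3), about 1.4; f = 0 gives a = 1 (no amplification); and f = -0.5 gives a = 1/1.5, about 0.67. So, if the annual-cycle interpretation is right, an observed amplitude ratio below 1 would imply net-negative feedback.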

  38. P. Solar says:

    I looked at matching UAH TLT to hadSST and ersst a while back.

    http://tinypic.com/view.php?pic=immr7b&s=7

    The two surface datasets seemed offset by very close to 0.25 years relative to UAH.

  39. Joe Born says:

    Steve McIntyre just posted http://climateaudit.org/2011/09/28/monthly-centering-and-climate-sensitivity/#comments an excellent discussion pointing out that basing anomalies on overall means rather than monthly means yields a much lower sensitivity result than Dessler’s. To get a much higher correlation between temperature and radiation than Dessler, he used temperature at 600 mb rather than 1000 mb. He noted the lag of the former with respect to the latter (what P. Solar just referred to, I believe), which Troy ably discusses at http://troyca.wordpress.com/2011/08/25/relationship-between-sst-and-atmospheric-temperatures-and-how-this-affects-feedback-estimates/.

    Bottom line: even without considering the confounding effect that Dr. Spencer has shown the different causality directions produce, simple regression can, with much higher correlation, arrive at a lower sensitivity than folks like Dessler do.

    By the way, Troy shows his work. If all the disputants had done so as well as Troy does, the number of people who really understood what Spencer, Lindzen, and others are doing would have been at least an order of magnitude higher several years ago. I commend his work to the attention of lurkers like me who have come near to giving up in frustration with the ambiguities in the major disputants’ discussions – ambiguities that would readily have been dispelled if they had shown their work.
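
    For anyone who wants to see the distinction McIntyre is drawing concretely, it is just a question of which baseline gets subtracted before regressing. A sketch, with illustrative names:

```python
import numpy as np

def anomalies_overall(x):
    """Anomaly relative to the single overall mean (leaves the seasonal cycle in)."""
    return x - x.mean()

def anomalies_monthly(x, month):
    """Anomaly relative to each calendar month's own mean (removes the seasonal
    cycle); `month` holds the calendar month 1..12 of each sample."""
    out = np.empty_like(x, dtype=float)
    for m in range(1, 13):
        sel = (month == m)
        out[sel] = x[sel] - x[sel].mean()
    return out
```

    With the first treatment, the large, lagged annual cycle stays in both series and feeds into the regression slope; with the second, only the interannual variability is left.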

  40. As this decade goes on, Dessler’s work will become even more irrelevant, not that it isn’t already.

  41. P. Solar says:

    Joe Born says:
    “Steve McIntyre just posted http://climateaudit.org/2011/09/28/monthly-centering-and-climate-sensitivity/#comments an excellent discussion pointing out that basing anomalies on overall means rather than monthly means yields a much lower sensitivity result than Dessler’s.”

    Much of that discussion confused the climate response to the annual solar cycle with “feedback”.

    The discussion that you linked at Troy’s blog is much more to the point.

    The elephant in the room here is that all this talk of slopes and regression is TOTALLY INVALID as it is being done. Some account must be taken of regression attenuation.

    With the awful R^2 and correlation values of this data, the true linear relation could be TWO OR THREE times larger than the attenuated OLS “slope”.
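
    P. Solar’s point about regression attenuation (also called regression dilution) is easy to demonstrate with synthetic numbers, all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 120, 2.0                   # ten years of monthly data
t_true = rng.normal(0.0, 1.0, n)           # "true" temperature signal
flux = true_slope * t_true + rng.normal(0.0, 1.0, n)
t_obs = t_true + rng.normal(0.0, 1.0, n)   # temperature measured with noise
fitted_slope = np.polyfit(t_obs, flux, 1)[0]

# Expected attenuation factor: var(t_true) / (var(t_true) + var(noise)) = 0.5,
# so fitted_slope comes out near 1.0 even though the true slope is 2.0.
```

    Noise in the regressor always biases the OLS slope toward zero; the factor of two here reflects the chosen noise level, but the direction of the bias is general.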

  42. Dr Spencer

    If your hypothesis is that climate sensitivity is 0.6*C for 2xCO2, what happens if you apply that figure to the calculation of the climate’s reaction to Pinatubo?

    Thanks for your trouble in answering this question.

  43. MikeN says:

    Does the Pacific Thermostat theory of Cane relate to your work at all?

  44. Andrew says:

    Richard Lawson – I believe that Roy has analyzed the Pinatubo eruption before; it might help to search the archives of this blog. But others have made models with relatively low sensitivities that can fit the eruption well, for instance:

    http://www.pas.rochester.edu/~douglass/papers/thermocline_pub_2006GL026355.pdf

    However, it is pretty easy to construct a model that fits the eruption with almost any sensitivity by adjusting the response time, since the magnitude of the response varies little with sensitivity. The main difference is the rate of recovery, which can be arbitrarily “corrected” for by changing the growth of net anthropogenic forcing to get a faster recovery. So Pinatubo is a weak test.

  45. coldlynx says:

    The time lag approach to calculating feedback might be easy to validate. The average time lag between the solstices and the minimum and maximum temperatures is typically about 30-45 days here in northern Europe. Typically August is the warmest period of the year and January/February the coldest.
    But the time lag relative to the seasonal solar angles changes between years.
    During recent years that time lag between climate and solstice has been shorter where I live, 60 deg north.
    Both the summer and winter solstice time lags have been shorter.
    Last year the coldest month was December and the warmest was June. December was even record cold: the coldest December ever measured.

    This seems to me to indicate that less delay, meaning less water and clouds in the system, changes the climate at these latitudes substantially.

    Time lag differences between north and south high latitudes may be a way to calculate the cloud feedback in the atmosphere.

  46. Dr. Doom says:

    @RW
    Why assume negative feedback? If you extrapolate the 0.5C warming in the 20th century from the 75 ppm increase in CO2, you get a sensitivity of 1.8C for a doubling of CO2. That’s positive feedback – more than the 1.2C no-feedback value.
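
    As a back-of-envelope check on that extrapolation (assumed round numbers: roughly 0.5 C of warming while CO2 rose from about 295 to 370 ppm):

```python
import math

dT, c0, c1 = 0.5, 295.0, 370.0  # assumed 20th-century warming and CO2 change

S_log = dT * math.log(2) / math.log(c1 / c0)  # logarithmic forcing: ~1.5 C per doubling
S_lin = dT * c0 / (c1 - c0)                   # naive linear scaling: ~2.0 C per doubling
```

    The answer depends on the assumed baseline and on whether forcing is treated as linear or logarithmic in concentration, and it attributes all of the observed warming to CO2 with no lag; 1.8 C sits inside that range but is not uniquely implied.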

  47. Andrew, thank you. The Douglass & Knox (DK) paper you link to has been refuted here by Robock:
    http://climate.envsci.rutgers.edu/pdf/DouglassKnoxComment2005GL023287.pdf
    and here by Wigley:
    https://e-reports-ext.llnl.gov/pdf/319378.pdf

    It seems that DK overlooked the thermal inertia of the oceans, and made other errors, including not understanding radiative forcing.

    To my untutored eye, it seems that the refutation is pretty robust, and a low climate sensitivity of ~0.5*C cannot explain the atmospheric response to Pinatubo.

    A search on this site for Pinatubo gives six posts, which I will look at later.

  48. Jan Perlwitz says:

    In the figure presented above, 10 realizations of the lagged regression for each model are compared with only one realization of the lagged regression, averaged over 10 years, derived from only one observational dataset. It is just assumed that the regression curve for the observations is representative of each decade of the 100 years (additionally, the assumption is made without any evidence that HadCRUT3 is the best representation of reality compared to other datasets). But in reality, the 10-year average regression curve derived from observations will vary from decade to decade, and how do we know in what percentile of the statistical distribution the curve from the observations used in the analysis is located, and what the variance is?

    The claim, “As you can see, the observations of the Earth (in blue, CERES radiative energy budget versus HadCRUT3 surface temperature variations) are outside the range of climate model behavior,” does not have scientific validity, if it is not shown that the differences between models and observation data lie outside the range of statistical uncertainty. No, what is claimed can’t be seen from the figure above w/o any statistical analysis.

    One could show the average together with the 2-sigma standard deviation for each model instead, and then compare to the regression curve derived from the observations. Or, since there are only 10 years of satellite data, one could show what the uncertainty is in the observations due to year-to-year variability within the 10 years at least.
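
    The comparison Perlwitz asks for would be straightforward to compute from the same regression curves. A sketch, with illustrative array shapes:

```python
import numpy as np

def model_envelope(decade_curves):
    """decade_curves: (n_decades, n_lags) array holding one model's
    lag-regression curve for each decade. Returns the mean curve and
    a 2-sigma band at each lag."""
    mean = decade_curves.mean(axis=0)
    sd = decade_curves.std(axis=0, ddof=1)
    return mean, mean - 2.0 * sd, mean + 2.0 * sd
```

    The observed curve would then be judged against each model’s band, rather than against individual model decades.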
