UPDATE (1300 CDT, Sept. 11, 2019): I’ve added a plot of ten CMIP5 models’ global top-of-atmosphere longwave IR variations in the first 100 years of their control runs.
UPDATE #2 (0800 CDT, Sept. 12, 2019): After comments from Dr. Frank and a number of commenters here and at WUWT, I have posted Additional Comments on the Frank (2019) Propagation of Error Paper, where I have corrected my mistake of paraphrasing Dr. Frank’s conclusions when I should have been quoting them verbatim.
I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.
I’ve spent a couple of days reading the paper, programming his Eq. 1 (a simple “emulation model” of climate model output), and including his error propagation term (Eq. 6) to make sure I understand his calculations.
Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.
Some reviewers don’t even read the paper, they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.
Readers here know I am critical of climate models in the sense they are being used to produce biased results for energy policy and financial reasons, and their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank’s methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projection of future global temperatures.
A Very Brief Summary of What Causes a Global-Average Temperature Change
Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.
So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.
While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.
The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.
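To make the energy-balance bookkeeping concrete, here is a minimal sketch in Python. The absorbed-flux values are simply the rough 235-245 W/m2 range mentioned above, and the result is the broadband emission temperature implied by the S-B equation, not a surface temperature.

```python
# Minimal sketch of the Stefan-Boltzmann balance described above.
# Illustrative only: the fluxes are the rough 235-245 W/m2 range mentioned
# in the text, and the result is an effective emission temperature.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(absorbed_flux_wm2):
    """Temperature at which outgoing IR (sigma*T^4) balances the absorbed flux."""
    return (absorbed_flux_wm2 / SIGMA) ** 0.25

for flux in (235.0, 240.0, 245.0):
    print(f"{flux:5.1f} W/m2 -> equilibrium emission temperature "
          f"{equilibrium_temperature(flux):.1f} K")
```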
What Frank’s Paper Claims
Frank’s paper takes an example of a known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in the projected future global average surface air temperature (GASAT).
He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.
First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all “skeptics” papers submitted for publication were as professionally prepared.
He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his “emulation model”, and his result is unsurprising, and even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.
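As an illustration of what such an emulation looks like, here is a minimal sketch: the emulated warming is just a constant times the accumulated radiative forcing. This is not Dr. Frank’s actual Eq. 1; the sensitivity coefficient and the forcing series are placeholders chosen only to show the proportionality.

```python
# Sketch of the "emulation model" idea: emulated temperature change is a
# constant times the accumulated radiative forcing. The coefficient below
# is a placeholder, not a value fitted to any climate model.

def emulate_warming(annual_forcing_increments_wm2, sensitivity_c_per_wm2=0.4):
    """Return the cumulative warming (deg C) implied by annual forcing increments."""
    total_forcing = 0.0
    warming = []
    for df in annual_forcing_increments_wm2:
        total_forcing += df
        warming.append(sensitivity_c_per_wm2 * total_forcing)
    return warming

# Example: a steady 0.04 W/m2 per year increase in forcing over 100 years.
trajectory = emulate_warming([0.04] * 100)
print(f"Emulated warming after 100 years: {trajectory[-1]:.2f} deg C")
```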
Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles; otherwise, all models would get very nearly the same cloud amounts.
But there are two fundamental problems with Dr. Frank’s methodology.
Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.
Why?
Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.
For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.
[Fig. 1: Yearly global average surface air temperature variations during the first 100 years of ten CMIP5 models’ pre-industrial control runs.]
https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere .
If what Dr. Frank is claiming were true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.
Why don’t the climate models show such behavior?
The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. To demonstrate this, here are the corresponding TOA net longwave IR fluxes for the same 10 models shown in Fig. 1. Clearly, there is nothing like 4 W/m2 imbalances occurring.
[Fig. 2: Corresponding TOA net longwave IR flux variations for the same ten models.]
The average yearly standard deviation of the LW flux variations is only 0.16 W/m2, and these vary randomly.
And it doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to nearly zero in the long term, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.
This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.
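To illustrate the distinction being drawn here, consider a toy zero-dimensional energy-balance model: a genuinely constant flux bias in a model that is allowed to respond settles to a fixed temperature offset set by the feedback, rather than accumulating without bound. The heat capacity and feedback parameter below are assumed round numbers, not values taken from any CMIP5 model.

```python
# Toy zero-dimensional energy balance: C*dT/dt = bias - LAMBDA*T.
# A constant flux bias produces a fixed offset, not a growing drift.
# LAMBDA and HEAT_CAPACITY are assumed round numbers, not CMIP5 values.

LAMBDA = 3.2            # net feedback parameter, W m^-2 K^-1 (assumed)
HEAT_CAPACITY = 3.2e8   # effective heat capacity, J m^-2 (assumed, ~100 m of ocean)
SECONDS_PER_YEAR = 3.15e7

def run_with_bias(bias_wm2, years=100):
    """Integrate the toy model with annual Euler steps; return the temperature history."""
    t_anom, history = 0.0, []
    for _ in range(years):
        t_anom += (bias_wm2 - LAMBDA * t_anom) * SECONDS_PER_YEAR / HEAT_CAPACITY
        history.append(t_anom)
    return history

offset = run_with_bias(4.0)[-1]
print(f"A constant +4 W/m2 bias settles at a fixed offset near {offset:.2f} deg C; "
      f"it does not keep growing.")
```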
The Error Propagation Model is Not Appropriate for Climate Models
The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,
“Equation 6 shows that projection uncertainty must increase in every simulation (time) step, as is expected from the impact of a systematic error in the deployed theory“.
While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).
Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100-year integration even more. This makes no physical sense.
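A small numerical illustration of the time-step point: if the same per-step uncertainty is accumulated in quadrature over more, shorter steps, the centennial total grows. The per-step temperature uncertainty below is illustrative, chosen so that 100 annual steps give roughly the +/-16-17 deg. C centennial figure discussed later in the comments; Dr. Frank argues in his reply that the per-step statistic must itself be rescaled with step length, which is exactly the point in dispute.

```python
import math

# Quadrature (root-sum-square) accumulation of a constant per-step uncertainty.
# The per-step value is illustrative; holding it fixed while shortening the
# time step inflates the centennial total, which is the concern raised above.

def accumulated_uncertainty(per_step_u, n_steps):
    """Root-sum-square accumulation of a constant per-step uncertainty."""
    return math.sqrt(n_steps) * per_step_u

PER_STEP_U = 1.66  # deg C per annual step (illustrative placeholder)
print(f"100 annual steps  : +/-{accumulated_uncertainty(PER_STEP_U, 100):.1f} deg C")
print(f"1200 monthly steps: +/-{accumulated_uncertainty(PER_STEP_U, 1200):.1f} deg C")
```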
I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.
Concluding Comments
There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.
Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).
But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature-dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor “feedback”) that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. Having multiple modeling centers around the world, each performing multiple experiments with its climate model under different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.
My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.
There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.
I’d be glad to be proved wrong.
Dr. Spencer:
Fundamentally, errors don’t cancel. They add (linearly, as the variance)… you can’t cancel error. Only reduce it by changing the ‘process’.
Bias, itself, can be dealt with, as it is part of a recurring signal, but not bias error. Error never sums to zero… it always adds across process steps.
The “errors” we are talking about are biases in individual components of the models’ radiative energy budget. They cancel because they were FORCED to cancel by the modelers. Do you see the distinction between this versus some general statistical theory of “errors”?
Seems like all Brute Force would do is hide errors somewhere else.
If you’ve got a broken ruler, and you keep adding to the same pool with that ruler, your uncertainty in how big that pool is will grow.
It doesn’t matter that they correct any straying in the model with other forces. The ruler is still broken.
It seems that you’re suggesting that just because a model doesn’t stray too much, because of brute force in its design, somehow that makes the model more certain.
The ruler is broken. End of story.
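For what it is worth, a neutral numerical sketch of the ruler analogy, with made-up numbers: when many measurements are summed, a fixed bias in the ruler accumulates linearly, while purely random reading errors grow only as the square root of the number of measurements. Which of the two better describes the +/-4 W/m2 statistic is exactly what is being argued here.

```python
import random
import statistics

# Sketch of the "broken ruler" analogy with made-up numbers: summing many
# measurements accumulates a fixed bias linearly, while purely random
# reading errors grow only as sqrt(N).

random.seed(0)
TRUE_LENGTH = 1.0   # metres per segment (assumed)
BIAS = 0.004        # fixed ruler bias, metres (assumed)
RANDOM_SD = 0.004   # random reading error, metres (assumed)
N = 100             # number of segments summed

def total_error():
    measured = sum(TRUE_LENGTH + BIAS + random.gauss(0, RANDOM_SD) for _ in range(N))
    return measured - N * TRUE_LENGTH

errors = [total_error() for _ in range(1000)]
print(f"mean accumulated error: {statistics.mean(errors):.3f} m (compare N*bias = {N * BIAS:.3f} m)")
print(f"spread (std dev)      : {statistics.stdev(errors):.3f} m (compare sqrt(N)*sd = {N ** 0.5 * RANDOM_SD:.3f} m)")
```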
Well, as Pat mentioned, the variability between models in longwave cloud forcing is only 12%. How little do you think is enough for the models to be useful for climate projections? 5%? 1%? 0%? Remember, we hardly know any of the global average radiative components to better than 5-10 W/m2 in a global average sense. Our observations aren’t that accurate.
It’s the total cloud fraction that is off by (+/-)12%, Roy, not the longwave cloud forcing.
Roy,
The errors cannot be FORCED to cancel, because they are not yet known until the data for the given iteration step are collected. Basically, at every step the plaintive whine from the adolescent in the back seat of “Are we there yet?” gets the answer “No…”
Therefore, each step in the future has an uncertainty in its prediction that comes from (1) model insufficiency, and (2) uncertainty in the input coming from the previous step. It is not an error until we have collected the data, but this uncertainty must be carried forward and added to the uncertainty of the application of the model to the current step.
I have posted the following in other discussions on this subject and I offer it here as additional clarification.
Thank You,
Bill Haag
There seems to be a misunderstanding afoot in the interpretation of the description of uncertainty in iterative climate models. I offer the following examples in the hopes that they clear up some of the mistaken notions apparently driving these erroneous interpretations.
Uncertainty: Describing uncertainty for human understanding is fraught with difficulties, the evidence being the lavish casinos that persuade a significant fraction of the population that you can get something from nothing. There are many other examples, some clearer than others, but one successful description of uncertainty is that of the forecast of rain. We know that a 40% chance of rain does not mean it will rain everywhere 40% of the time, nor does it mean that it will rain all of the time in 40% of the places. We do, however, intuitively understand the consequences of comparing such a forecast with a 10% or a 90% chance of rain.
Iterative Models: Let’s assume we have a collection of historical daily high temperature data for a single location, and we wish to develop a model to predict the daily high temperature at that location on some date in the future. One of the simplest, yet effective, models that one can use to predict tomorrow’s high temperature is to use today’s high temperature. This is the simplest of models, but adequate for our discussion of model uncertainty. Note that at no time will we consider instrument issues such as accuracy, precision and resolution. For our purposes, those issues do not confound the discussion below.
We begin by predicting each high temperature in the historical data from the previous day’s value. (The model is, after all, merely a single-day offset.) We then measure model uncertainty, beginning by calculating each deviation, or residual (observed minus predicted). From these residuals, we can calculate model adequacy statistics, and estimate the average historical uncertainty that exists in this model. Then, we can use that statistic to estimate the uncertainty in a single-day forward prediction.
Now, in order to predict tomorrow’s high temperature, we apply the model to today’s high temperature. From this, we have an exact predicted value (today’s high temperature). However, we know from applying our model to historical data that, while this prediction is numerically exact, the actual measured high temperature tomorrow will be a value that contains both deterministic and random components of climate. The above calculated model (in)adequacy statistic will be used to create an uncertainty range around this prediction of the future. So we have a range of ignorance around the prediction of tomorrow’s high temperature. At no time is this range an actual statement of the expected temperature. This range is similar to a % chance of rain. It is a method to convey how well our model predicts based on historical data.
Now, in order to predict out two days, we use the predicted value for tomorrow (which we know is the same numerical value as today, but now containing uncertainty) and apply our model to the uncertain predicted value for tomorrow. The uncertainty in the input for the second iteration of the model cannot be canceled out before the number is used as input to the second application of the model. We are, therefore, somewhat ignorant of what the actual input temperature will be for the second round. And that second application of the model adds its ignorance factor to the uncertainty of the predicted value for two days out, lessening the utility of the prediction as an estimate of the day-after-tomorrow’s high temperature. This repeats, so that for predictions several days out, our model is useless in predicting what the high temperature actually will be.
This goes on for each step, ever increasing the ignorance and lessening the utility of each successive prediction as an estimate of that day’s high temperature, due to the growing uncertainty.
This is an unfortunate consequence of the iterative nature of such models. The uncertainties accumulate. They are not biases, which are signal offsets. We do not know what the random error will be until we collect the actual data for that step, so we are uncertain of the value to use in that step when predicting.
Bill Haag
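To make Bill’s persistence-forecast example concrete, here is a minimal sketch using synthetic daily highs in place of real historical data. The one-step residual spread stands in for the model-adequacy statistic he describes, and the multi-day uncertainty is accumulated in quadrature, as described.

```python
import math
import random

# Minimal sketch of the persistence-forecast example: tomorrow's predicted
# high equals today's high. Synthetic data stand in for historical records.

random.seed(1)
# Synthetic "historical" daily highs: a seasonal cycle plus day-to-day noise.
history = [20 + 8 * math.sin(2 * math.pi * d / 365) + random.gauss(0, 2)
           for d in range(3 * 365)]

# One-step residuals of the persistence model (observed minus predicted).
residuals = [obs - pred for pred, obs in zip(history[:-1], history[1:])]
one_step_u = math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Uncertainty accumulated in quadrature for predictions further ahead.
for days_ahead in (1, 2, 7, 30):
    u = one_step_u * math.sqrt(days_ahead)
    print(f"{days_ahead:3d} day(s) ahead: prediction = today's high, uncertainty ~ +/-{u:.1f} deg C")
```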
Doesn’t the fact that they are forced to be canceled by the modeler set up a situation where the errors are artificially and inaccurately low? And if these constraints are removed, then the errors would inflate as the article suggests. It seems that in real life, errors wouldn’t cancel if they were truly independent. You do state that many of the systems are not well understood.
I greatly appreciate your effort on this. Unlike you, I will consult a critique by a previous reviewer (you) as I slog through the paper.
In attempting to understand many such papers I find it hard to accept that the writer is really saying what he seems to be saying. It helps to have someone serious say that, yes, that’s really what the writer meant.
Thanks for the objective analysis. If only we could get this from the other side.
In gist, “the other side” said essentially the same thing, only more directly.
I’m reasonably sure he implied that the ‘other side’ should as thoroughly and objectively analyze/review articles from the ‘other side’…
Barry
The ‘other side’ does. Nick Stokes did so at his blog, too, some years ago when the idea first came out, and again more recently. I believe ATTP did also.
The other Barry,
Maybe so… I wasn’t necessarily agreeing with the original poster, just pointing out what I thought was obvious about his intentions.
But, I do know, that with everything in life when there is bias (sports, politics, news media, climate change, etc) there is a natural inclination to work harder to find fault in ‘the other side’. It’s refreshing if there are at least some folks on a given side of the debate, who, even though biased, work hard to validate things from a pure mathematical point of view, etc.
Barry
Thanks for the objective review. To me, what you say makes sense (this time)!
Hi Roy,
Let me start by saying that I’ve admired your work, and John’s, for a long time. You and he have been forthright and honest in presenting your work in the face of relentless criticism. Not to mention the occasional bullet. 🙂
Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”
You start by stating that I take “an example of a known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”
If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The (+/-)4 W/m^2 is a model calibration error statistic.
I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption.
It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.
Entry of the (+/-)4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.
You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the (+/-)4 W/m^2 is propagated forward. It’s not.
It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.
Then you write, “The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in the projected future global average surface air temperature (GASAT).”
I must say I was really sorry to read that. It’s such a basic mistake. The 20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.
And consider this: the 20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.
One of my reviewers incredibly saw the 20 C as implying the model to be wildly oscillating between hothouse and ice-house states, not realizing that the vertical bars mean his interpretation of 20 C as a temperature would require both states to be occupied simultaneously.
In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.
The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.
You wrote that, “ The modelers are well aware of these biases, which can be positive or negative depending upon the model.”
The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.
You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”
You’ve mistaken the calibration error statistic for an energy. It is not one. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t one. It’s (+/-)4 W/m^2. Recognizing this is critical to understanding.
And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s (+/-)4 W/m^2. If that were a TOA energy flux, as you have it, it would be self-cancelling.
You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”
And each of those models simulates cloud fraction incorrectly, producing an average calibration error of (+/-)4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong, even though the overall energy balance is correct.
That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.
You wrote, “If what Dr. Frank is claiming were true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”
No, they would not.
I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.
Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.
You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”
I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.
Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as an uncertainty in the reported result.
There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.
You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.
All you’re doing is hiding the uncertainty by tuning your models.
Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. ”
Once again, you have imposed a positive sign on an uncertainty statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.
Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.
I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.
You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”
No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.
The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.
And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed.
You should have looked at eqns. 5 and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”
Eqn. 6 is a generalization of eqns. 5.
I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.
I get it, Pat, and I’m certainly no statistician.
Pat, thanks for taking the time to do a point-by-point rebuttal.
I see nothing in what you have stated that alters my conclusions. Much of what you say is a matter of semantics (when I say you make an “assumption”, I mean (for example) you are taking someone else’s estimate for the purpose of making further calculations. It’s not meant to minimize the validity of the assumption).
Your point about the radiative flux errors varying geographically is irrelevant to the issue at hand. Your model is for global average conditions.
Your point of semantics regarding these being *temperature uncertainty* estimates rather than *temperature* seems like a dodge; you are plotting huge error bars on future temperature projections. These DO NOT OCCUR in any of the climate models. If these error bars represent something else, and other reviewers have brought up similar objections, then the onus is on you to explain in global energy budget terms what you are claiming.
Because I still do not understand.
Maybe one problem is that your Eq. 6 always gives a positive number, no matter what is input for +/-ui(T), because it gets squared. This then amplifies the error over time. Is this what you really intended? It’s not clear from your discussion, and it is not the way energy flux errors behave in the models. They have a time averaged value, relatively constant, with random variations at shorter time scales and smaller space scales (for a variety of reasons).
Your claim of the error propagation equation output not depending on time step would only be true (as I said) if the error actually DECREASED with time step, when in fact in reality it would increase. I’ve programmed the equation (Eq. 6) and this is trivially obvious. I have no specific knowledge of exactly what you convinced some reviewer of, and don’t have time to do a forensic investigation.
Since there is such widespread misunderstanding of what you have done, it seems like you should spend some effort speaking in global energy balance terms, not statistical theory, to explain what you think you have demonstrated. NONE of the models behave in the way you suggest they should if they had such huge radiative flux errors affecting the GLOBAL TOA net energy flux.
YES, I agree that the modelers have simply adjusted other model errors to offset each other, which shows they are playing games.
That’s not the issue.
The issue is that you think the models have potentially huge errors in the temperature projections due to uncertainties in various processes (clouds, water vapor, etc.) that affect global average radiative fluxes. Yet none of the models behave that way. You need to explain why that is the case.
Pat – “All you’re doing is hiding the uncertainty by tuning your models.”
Roy – “YES, I agree that the modelers have simply adjusted other model errors to offset each other, which shows they are playing games.
That’s not the issue.
The issue is that you think the models have potentially huge errors in the temperature projections due to uncertainties in various processes (clouds, water vapor, etc.) that affect global average radiative fluxes. Yet none of the models behave that way. You need to explain why that is the case.”
Isn’t what Pat is saying that they are playing games to make the models come out so it appears they are somewhat accurate and reasonable, but it is only because of the game playing? And there is no reason to assume that they won’t continue to tweak and play games going forward to maintain that illusion even though there is actually something really wrong?
Yes, Glacierman, it is true that they play games to make model errors cancel out. Even the modelers admit this! But Pat seems to be claiming that because of the uncertain nature of all of the component energy fluxes, it can lead to huge uncertainty in future temperature projections in the models.
That would only be true if the modelers did not adjust things so that all of the fluxes balance at the top of atmosphere.
Yes, it is legitimate to fault the reliability of the models. But to claim that it means there can be huge warming or cooling errors in the models 50 years in the future as a result of these uncertainties, then why don’t the models behave that way?
Maybe Pat can find a different way to explain this all in global energy balance terms, and the components of that global energy balance. Because that is what his model plots represent.
“But Pat seems to be claiming that because of the uncertain nature of all of the component energy fluxes, it can lead to huge uncertainty in future temperature projections in the models.”
No. He’s not saying this. As you admit, they brute force the model to converge on more reasonable solutions. So what if they converge more narrowly? It doesn’t solve the underlying measurement error problem.
He’s saying the cone of uncertainty is so large, because of the underlying measurement error, that the output is meaningless.
Except that the ~20 different models from around the world cover a WIDE variety of errors in the component fluxes, and they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake. Therefore, the projections don’t depend on model biases in the component energy fluxes, they only depend upon feedbacks and the rate of ocean heat uptake. -Roy
If they tune the models to give an output that is acceptable and reasonable based on observations, they can tune them to output anything at all, including a slight warming trend…..which is what probably 90% in the business believe is the reality.
So are we seeing predictive skill from these models, or just using them to backup what the climate community believes is going on?
Roy
Do the modelers check the output of each time step to verify physicality and then do ‘on the fly’ adjustments to maintain reasonable values? If so, that might explain why the models don’t drift towards the bounds of the uncertainty. If this process does take place, as I’ve read somewhere, then the models are, at best, being guided by First Principles, but over-ridden by subjective fudge factors that make the models behave as the modelers would like.
When you say the errors “cancel” you really mean they cancel under the specific “no CO2” control runs. It’s inevitable there are structural errors in the models that will bias the results once the components that were “balanced” begin to change relative to one another.
The combination of those biases and any CO2 signal is what the GCMs are projecting.
Later Roy says “Yes, it is legitimate to fault the reliability of the models. But to claim that it means there can be huge warming or cooling errors in the models 50 years in the future as a result of these uncertainties, then why don’t the models behave that way?”
I don’t think that’s the argument? The CO2 forcing is tiny by comparison to the errors that are inherent in the calculation. That leaves the result being unjustifiable.
“Yes, it is legitimate to fault the reliability of the models. But to claim that it means there can be huge warming or cooling errors in the models 50 years in the future as a result of these uncertainties, then why don’t the models behave that way?”
How can we possibly know whether the models “behave that way” or not? One would have to know what the temperature anomaly 50 years from now will actually be in order to know whether any or all of the models are or are not erroneous. That is, unless all of the models predict a different temperature anomaly 50 years from now. Then you know that at best all but one is wrong…and more likely all of them are.
I lean to the latter conclusion, but for far more fundamental reasons than Dr. Frank gives.
Michael,
Frank’s argument is not about how well models compare to observations, but how the uncertainty is amplified over time, so that after 50 or a hundred years there is a huge (statistical) uncertainty – so large as to make the projection meaningless.
In Frank’s view, even if today’s model run matched the obs 50 years down the road, that would be dumb luck, and the uncertainty envelope would still cover many degrees C.
Roy is arguing that the models are bound, and that the errors do not propagate at each time step, or we would see the chaotic results that Frank’s statistical view implies.
To answer Roy, Frank has to explain why energy would not be conserved in the models.
barry, please stop trolling.
Why don’t you start with some basics? “An Introduction to Error Analysis” by John R. Taylor or “Measurements and their Uncertainties” by Hughes and Hase. I had error analysis at the very beginning of my physics studies at Uraler State University, and I understand and agree with everything Dr. Frank said.
I’ll put your comments in italics, Roy, and my replies in standard font.
Pat, thanks for taking the time to do a point-by-point rebuttal.
Thanks, Roy. Cordial discussion is important.
I see nothing in what you have stated that alters my conclusions. Much of what you say is a matter of semantics
One of my reviewers wrote that the distinction between accuracy and precision is just philosophy.
Now I find you supposing that the distinction between an energy flux and a statistic is a matter of semantics. How is that conceivable?
(when I say you make an “assumption”, I mean (for example) you are taking someone else’s estimate for the purpose of making further calculations. It’s not meant to minimize the validity of the assumption).
OK, but I don’t see the significance of that explanation, yet.
Your point about the radiative flux errors varying geographically is irrelevant to the issue at hand. Your model is for global average conditions.
The LWCF error is a global simulation model calibration error, Roy. It’s directly relevant.
Your point of semantics regarding these being *temperature uncertainty* estimates rather than *temperature* seems like a dodge;
It seems like a dodge to you that an uncertainty is not a physical magnitude?
Please. It’s merely an obvious point that you have invariably missed.
you are plotting huge error bars on future temperature projections.
No, I’m not. I’m plotting uncertainty bounds. They’re not physical error bars.
In my post I suggested you read sections 7.1 and 10 in the SI. You clearly do not recognize the distinction between uncertainty and physical error.
These DO NOT OCCUR in any of the climate models.
An irrelevant point building on your lack of understanding of uncertainty versus error.
If these error bars represent something else, and other reviewers have brought up similar objections, then the onus is on you to explain in global energy budget terms what you are claiming.
I have already pointed out that uncertainty bounds do not impact the energy budget. Uncertainty bounds are not energies.
Uncertainties are calculated from calibration experiments that take into account the difference between observations and the model expectation values. They are calculated after the fact of the simulation.
They do not impact the simulation or its outcomes. They inform us of the reliability of the results. They do not impact the results.
The calibration error statistic conveys the accuracy of the instrument or the model. When multiple instrumental measurements are combined, or when models are used in step-wise simulations, the calibration error recurs with each measurement or simulation step.
The uncertainty that comes into the result grows with each iteration.
But you’re right: virtually no climate scientist reviewer, except the most recent ones, appeared to understand any of that.
Because I still do not understand.
Agreed. I’m having a real problem understanding how it is that so many climate scientists don’t understand basic physical error analysis. As I pointed out to ATTP, not one chemist or physicist, in seminars or in conversation, has failed to immediately understand the analysis. And agree with it.
Maybe one problem is that your Eq. 6 always gives a positive number, no matter what is input for +/-ui(T), because it gets squared.
It’s an uncertainty variance. They’re always squared.
This then amplifies the error over time. Is this what you really intended?
It does not amplify the error. It propagates the uncertainty. See paper equations 3, 4, 5 and 6.
It’s not clear from your discussion, and it is not the way energy flux errors behave in the models. They have a time averaged value, relatively constant, with random variations at shorter time scales and smaller space scales (for a variety of reasons).
Roy, the (+/-)4 W/m^2 is a calibration error statistic derived from cloud fraction error. It is not a measure of error imbalance. It is not a flux. It’s an uncertainty in the simulated flux, derived from the difference between simulated cloud fraction and observed cloud fraction.
Your claim of the error propagation equation output not depending on time step would only be true (as I said) if the error actually DECREASED with time step, when in fact in reality it would increase.
The value of the calibration error statistic varies with the size of the time step. Look at the inferred derivation in SI Section 6.2.
I’ve programmed the equation (Eq. 6) and this is trivially obvious.
Clearly, it isn’t. Because you’re not correct.
I have no specific knowledge of exactly what you convinced some reviewer of, and don’t have time to do a forensic investigation.
There’s no mystery or secret about that. Here’s what I wrote, all enquoted.
“Let’s first approach the shorter time interval. Suppose a one-day (86,400 seconds) GCM time step is assumed. The (+/-)4 Wm^-2 calibration error is a root-mean-square annual mean of 27 models across 20 years. Call that an rms average of 540 model-years.
“The model cloud simulation error across one day will be much smaller than the average simulation error across one year, because the change in both simulated and real global average cloud cover will be small over short times.
“We can estimate the average per-day calibration error as ((+/-)4 Wm^-2)^2 = (sum of error over 365 days)^2. Working through this, the error per day = (+/-)0.21 Wm^-2. If we put that into the right side of equation 5.2 and set F_0 = 33.30 Wm^-2 then the one-day per-step uncertainty is (+/-)0.087 C. The total uncertainty after 100 years is sqrt[(0.087)^2x365x100] = (+/-)16.6 C.
“Likewise, the estimated 25-year mean model calibration uncertainty is sqrt(16×25) = (+/-)20 Wm^-2. Following from eqn. 5.2, the 25-year per-step uncertainty is (+/-)8.3 C. After 100 years the uncertainty is sqrt[(8.3)^2×4] = (+/-)16.6 C.
“These are average uncertainties following from the 540 simulation years and the assumption of a linearly uniform error across the average simulation year. Individual models will vary.
“Unfortunately, neither the observational resolution nor the model resolution is able to provide a per-day simulation error. However, the 25-year mean is relatively indicative because the time-frame is only modestly extended beyond the 20-year mean uncertainty calculated in Lauer and Hamilton, 2013.”
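For readers following the arithmetic, the scaling in the quoted passage can be checked directly. The conversion factor from flux uncertainty to per-step temperature uncertainty (roughly 0.41 C per W/m2) is inferred here from the quoted numbers themselves (0.21 W/m2 giving 0.087 C), not taken independently from the paper; whether the square-root rescaling of the flux statistic with step length is physically justified is the point Roy disputes above.

```python
import math

# Check of the arithmetic in the quoted passage. The flux-to-temperature
# conversion (~0.41 C per W/m2) is inferred from the quoted 0.21 W/m2 ->
# 0.087 C pair, not taken independently from the paper.

ANNUAL_U_FLUX = 4.0            # +/- W/m2, annual-mean calibration statistic
CONVERSION = 0.087 / 0.21      # deg C per (W/m2), inferred from the quote

def centennial_uncertainty(step_years):
    """Rescale the annual flux statistic to the step length, then accumulate in quadrature."""
    n_steps = 100.0 / step_years
    step_u_flux = ANNUAL_U_FLUX * math.sqrt(step_years)   # scaling used in the quote
    step_u_temp = step_u_flux * CONVERSION
    return math.sqrt(n_steps) * step_u_temp

for step in (1.0 / 365.0, 1.0, 25.0):
    print(f"step = {step:8.4f} yr -> centennial uncertainty ~ +/-{centennial_uncertainty(step):.1f} deg C")
```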
Since there is such widespread misunderstanding of what you have done, it seems like you should spend some effort speaking in global energy balance terms, not statistical theory, to explain what you think you have demonstrated.
There’s no widespread misunderstanding among physical scientists; or among engineers for that matter.
The misunderstanding seems confined to the climate community, and perhaps to mathematics-trained numerical specialists who think that square roots have only positive roots and who know nothing at all of science or of the systematic error that can and does arise due to uncontrolled variables.
I’ve pointed out already that global energy balance is irrelevant. Global energy balance is possible even though the internal climate energy-states are in error. This exactly describes the condition of cloud fraction error. The energy balance is maintained all the while the simulated cloud structure is wrong.
As an example of this, look at Figure 1 in Rowlands et al. (2012), “Broad range of 2050 warming from an observationally constrained large climate model ensemble,” Nature Geosci. 5(4), 256-260.
It shows a gazillion climate states from “perturbed physics,” all of which are in global energy balance, but all but one — just possibly — have an erroneous internal energy-state. All of them will have an incorrect total cloud fraction.
All those projections will have huge uncertainty bounds. Were anyone to calculate them.
NONE of the models behave in the way you suggest they should if they had such huge radiative flux errors affecting the GLOBAL TOA net energy flux.
And I’ve pointed out over, and yet over again, that the (+/-)4Wm^-2 is not an energy flux. Why is that so difficult to understand?
It’s a calibration error statistic. It does not affect model behavior at all. Not at all, Roy.
YES, I agree that the modelers have simply adjusted other model errors to offset each other, which shows they are playing games. That’s not the issue.
Not the issue here. Just the larger issue.
The issue is that you think the models have potentially huge errors in the temperature projections due to uncertainties in various processes (clouds, water vapor, etc.) that affect global average radiative fluxes.
No, I think that models have huge uncertainties in the temperature projections because of theory error.
Yet none of the models behave that way. You need to explain why that is the case.
I have explained that, Roy. In my first post to you. In this, my second. In my presentation available on youtube. And in the paper.
The (+/-)4W/m^2 is not an energy flux. The (+/-)15 C uncertainty is neither a physical temperature nor a physical error interval about a temperature.
Neither of them describes the behavior of the models. Nor do they impact the simulations.
Here, page 9: “Thus, the LWCF calibration error of (+/-)4 Wm^-2 year^-1 is an average CMIP5 lower limit of resolution for atmospheric forcing. This means the uncertainty in simulated LWCF defines a lower limit of ignorance concerning the annual average thermal energy flux in a simulated troposphere (cf. Supporting Information Section 10.2).”
Lower limit of resolution. Not an energy flux. Lower limit of ignorance. Not an energy flux.
You really need to read Section 10 of the Supporting Information.
Pat, if our host will indulge us, I want to check with you that the rather torturous discussion I’ve had with Nick Stokes at WUWT accurately represents the first step in the argument, and speaks to our host’s first point.
The underlying argument is that the current crop of GCMs don’t deal with cloud uncertainty (and it seems to be common ground here that this is compensated for elsewhere in the models).
What I understand you to be saying is that if they did more properly represent this (correct the instrument bias), their projections could lie anywhere in a certain (large) range based on our current knowledge.
Now I haven’t looked at the subsequent issue addressed by our host of the size of that range, and what I’ve said is no doubt a gross simplification (I’m a simple man), but I’d find my characterisation hard to disagree with if this is what you are saying. If our host also does (apart from the quantification issue), then that would be a useful place to go forward from.
I’d really like to see how Spencer & Christy arrive at a 0.01 K uncertainty calculating the mean Earth temperature. After reading this post I’m even more skeptical than before about the handling of uncertainties in the UAH dataset.
I think what Dr. Patrick Frank is saying is that those models are accurate under unrealistic controlled conditions, since they agree with each other, but they are very imprecise under realistic conditions (noisy data).
I try to mimic this behavior in this figure:
https://www.ime.usp.br/~patriota/USP/Simulated-Anomaly-Frank.jpg
My figure does not intend to represent the evolution of the temperature anomaly over time. It just shows that it is possible to observe small variations under unrealistic controlled conditions, but that does not seem to be the case under real noisy data.
Thanks for taking the time to provide a much needed critique of this paper. I suspect the author wore the reviewers down by sheer persistence.
You’d be wrong, pochas94.
And you’ve insulted the reviewers.
Insulting the reviewers, now there is a piece of comedy.
I have been a reviewer for decades and am currently a co-editor of a journal.
My field is a different one, but the process is identical everywhere, and if there is one thing that can be said unequivocally about peer review, it is that it is an embarrassment to all involved. Naturally, exceptions exist, but they are that, exceptions.
And so, it took little for me to realize in the 90s that the climate science literature had been thoroughly compromised despite being a field far removed from my own.
Nothing new in that regard, mind you. It happens all the time.
Chomsky and his crew, to name a popular example, did something quite similar in linguistics (it’s the kind of people, not the kind of work). Decades of intellectual torture ensued. The most inane, convoluted, and absurd notions had to be accepted unquestioningly, worshiped even, while any criticism, no matter how limited in scope and how pertinent, was violently crushed. In short, toe the line or your work will be destroyed, and you too if you get fussy.
Anyway.
It’s been now three full decades of climate-Armageddon-by-next-Monday and these fanatics will go the same way other fanatics have gone before.
And go they will.
That too I have seen happen over and over.
Thomas, I investigated Chomsky’s work across 35 years, using the resources of the Stanford University library.
I discovered that he is serially dishonest. He misrepresents his sources, making people appear to write and intend things they did not mean or intend.
I also read of his entry into Linguistics and the political turmoil and antipathy he created.
He is all you described.
“And you’ve insulted the reviewers.”
A bit rich. Here is how reviewers are insulted properly. Or here.
Nick Stokes, please stop trolling.
Hmm… it seems that none of the corrective plus/minus signs I included have come through.
Everyone should please read, wherever a 4W/m^2 statistic occurs in my comments, it should be (+/-)4W/m^2.
I could tell from your comments and by a blank space that the plus/minus was missing.
Showing that a GCM is energy-balanced doesn’t mean it is without error. That should mean that the calculations are all done with regard to problems like round-off errors and the like. This ensures that numerical issues in the computation are not causing erroneous results. It does not ensure that the physics being modeled are themselves correct for all processes. The referenced papers are where Pat went to see what the folks using the models thought about their accuracy. A variety of numbers for a variety of processes are given. But the end result is that the GCMs are known to be imperfect. Which means that each stepwise calculation of the next state has some imperfection in it. How you quantify that depends on what units the imperfection is given in and the fraction of that unit that the model uses for each step.
If one knows that a model (although internally consistent and energy balanced) is not completely correct about something with a known bound on that correctness, you should be able to express how that affects the uncertainty of the result after a number of iterations. That is all that the paper is trying to show. In comments over at WUWT he refers to it as analogous to instrument calibration error.
The lower bound on the uncertainty was given as +/- 4 Watts/square meter. That was obtained from papers by others evaluating the GCMs. In the comments, Pat assumes that this is the uncertainty over the course of a single year. Maybe yes, maybe no. It doesn’t matter that much since you can always find out what a GCM step size is and identify the uncertainty applicable to that step size. You would then have some idea about how that uncertainty will propagate from step to step.
The more steps you have, the larger the uncertainty will get. It is as simple as that.
I’ve already shown that the year-to-year variations in the models’ energy balance are nowhere near that large. Yes, I know the models are full of errors and biases, bad parameterizations, and who knows what kind of energy “leakage” from imperfect numerics. That’s not the point. It’s that none of the models exhibit temperature OR radiative flux variations anywhere near what Pat is claiming. If he needs to re-word what he is claiming, fine, but I’m not a dummy in these matters and I still don’t believe what I think Pat is claiming. I do not have this problem with any other climate papers. Either Pat is wrong, or his wording is making it look like he is claiming something he is not. It’s his responsibility to clear this up, not mine.
And it won’t be cleared up by throwing around statistical buzzwords or new terminology which has no clear definition related to physical reality.
Correct me if I’m wrong, but Pat was not trying to quantify the physical errors of the models, but the magnitude of uncertainty in the modeled results due to our ignorance of cloud physics as they impact climate processes. He absolutely admits that the models are bounded such that they can’t produce a result of +/-20 C. The point is that any result less than that simply can’t be resolved by the models because of the known LWCF errors. It would be like trying to take a picture of something that is out of focus. If it’s too far out of focus, you won’t be able to tell what the object is – you can guess, but that’s all. What Pat is doing here is determining how far out of focus the models are at a certain distance (time) by saying something analogous to “an object would have to be this big in order for you to recognize it”. If the object clearly can never be that big (a 20 foot tall mouse, for example), then clearly the camera (model) is of no practical use.
It should be mentioned that the satellite measurements probably don’t have the absolute accuracy to determine global cloud cover. It depends too much on spatial resolution of the satellite, thresholds, the definition of a cloud, etc.
Even the global LW energy loss isn’t known to 10 W/m2 absolute accuracy from CERES measurements. The instrument isn’t good enough.
Does this mean we can’t model the global climate system and make projections? Since we don’t know any of the component parts of the global energy flux to better than, say, 3-15 W/m2?
Well, the different models put in all different combinations for these uncertain energy-affecting processes, and they still get the same result (for global temperature trends) depending only on (1) climate sensitivity, which isn’t from (say) cloud biases in the models, but how the average clouds (even with their biases) *change with warming*; and (2) how fast the models mix excess heat into the deep ocean.
There is no model behavior to support Pat’s claims. I still think he’s making one or more invalid assumptions.
With all due respect, Dr. Spencer, the argument that “they all get the same result” is not very convincing to me. I think if I had tried that line of reasoning on my college physics professor in the day, I would have been given a failing grade. I can think of many reasons why they would all give approximately the same answer. Fundamentally, modeling even relatively simple systems with any fidelity is not easy. You have to have a solid understanding of the entire system. Anything wrong or missing throws the entire thing into question. Something like the global atmosphere is a truly herculean task to model, and as you point out, there are still important things that we just don’t know. Faced with that uncertainty, how can we put any weight into the results of multi-year runs? It seems absurd on the face of it.
As a Software Engineer of almost 40 years, I have worked on a number of modeling efforts, all much simpler than GCMs. In the end, none of them were useful for understanding the system better, but none produced predictions that were very close to physical reality. For example, one was a model of a natural gas combustion engine with some, at the time, novel features. The models helped improved the design, but they predicted much better efficiencies than were ever achieved in the real world. That project failed and I found a new job, but we can’t afford failure when the entire energy infrastructure of the world is at stake.
Oops, I meant to say “…some of them were useful…”
Roy
You said, “Well, the different models put in all different combinations for these uncertain energy-affecting processes, and they still get the same result (for global temperature trends)” It sounds like you are saying that the models are insensitive to these parameters. That should raise a red flag for the modelers!
Roy, there is a common core kernel that is passed to every GCM, which has CO2-to-flux conversion code and boundary code (boundary code being code that prevents the model from blowing up). That is why CMIP version 6 is out now and, like the 5 previous versions, makes sure that all models enforce a positive flux forcing. If there wasn’t common code, why would they need to have version numbers? Every GCM could have new version numbers every day. I have confirmed this with Greg Flato, head modeller with the Canadian GCM. All Pat is saying is that if the models didn’t have boundary code, they all would blow up with nonsensical projections. The huge uncertainty is still there even though the boundary code prevents absolutely silly unrealistic projections.
So let me get this right: we’re talking about models that have huge uncertainty factors in them that are forced, by hand, to cancel out. And yet people think these models can predict anything? At best these are toy models. They might be used to see the effect of certain parameters tweaked in one direction or the other, but actual predictions? C’mon…
Maybe so. But I think that is a different, more philosophical discussion. See my comment above about how, even with uncertain model components, they all behave basically the same.
Ptolemaic models of the solar system all worked the same way too. It didn’t make them any less wrong.
Roy
You said, “… they all behave basically the same.” All except the Russian model. Might it be that the reason MOST of the models behave similarly is that they have made similar assumptions and use legacy code with some minor tweaks?
Another way of looking at this is to consider an inertial navigation system (INS). These take readings from accelerometers in multiple axes to estimate the current position. As a jet flies along, the estimate is updated. The estimate is composed of prior state + acceleration over a time period + uncertainty. There is uncertainty in all three elements of the calculation. If a jet were unable to correct its position periodically, the actual position would be off by miles! Achieving long-range flight doesn’t just require a lot of fuel; it also requires the means to navigate accurately across large distances, so that the fuel you have left at the end of the flight is enough to get you down at the airport you were trying to reach.
The GCMs are kind of like an old INS. They don’t have any means of applying a correction to their state to fix the uncertainty in their position, so that uncertainty just keeps growing and growing.
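To make the analogy a bit more concrete, here is a minimal, purely illustrative Python sketch (not drawn from any real INS or GCM code; the step count, velocity, and error size are made up) of dead reckoning with a small random velocity error and no position fixes. The spread of the accumulated position error keeps growing with the number of steps:

```python
import random

def dead_reckon(steps, velocity=1.0, vel_error_sd=0.01, seed=0):
    """Integrate position from a velocity reading that carries a small random error.
    With no external position fix, the accumulated error is never corrected."""
    rng = random.Random(seed)
    true_pos = est_pos = 0.0
    for _ in range(steps):
        true_pos += velocity
        est_pos += velocity + rng.gauss(0.0, vel_error_sd)  # imperfect sensor reading
    return est_pos - true_pos  # accumulated navigation error

# For purely random per-step errors the spread grows roughly like sqrt(steps);
# a systematic sensor bias would make it grow linearly instead.
for n in (100, 1_000, 10_000):
    errors = [dead_reckon(n, seed=s) for s in range(200)]
    rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
    print(f"{n:6d} steps: RMS position error ~ {rms:.2f}")
```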
Not a good analogy. If there is a burst of energy release that increases the global average temperature, the system will later relax to its previous average temperature state.
“If there is a burst of energy release that increases the global average temperature, the system will later relax to its previous average temperature state.”
What is the time constant for that relaxation? Every 182.5 days, the solar “constant” changes from 1,315 W/m^2 to 1,409 W/m^2 due to the fact that the earth’s orbit is elliptical. The difference between the energy delivered to the TOA at perihelion and that at aphelion over a period of one day is 1e21 Joules, equivalent to 239,000 megatons of TNT (about 11 times the energy in all of the nuclear weapons stockpiles at the height of the Cold War). That “blast” happens every year.
How long does it take for the climate system to damp that out? Or does it make more sense to admit that the climate can never reach an equilibrium state?
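For what it’s worth, the arithmetic in that comment roughly checks out. A quick back-of-the-envelope sketch (the irradiance figures are the ones quoted above; the Earth radius and the 4.184e15 J per megaton of TNT are standard round values I’ve assumed):

```python
import math

S_perihelion = 1409.0    # W/m^2, TOA irradiance near perihelion (figure quoted above)
S_aphelion   = 1315.0    # W/m^2, TOA irradiance near aphelion (figure quoted above)
R_earth      = 6.371e6   # m, mean Earth radius
MEGATON_TNT  = 4.184e15  # J per megaton of TNT

cross_section = math.pi * R_earth**2                         # sunlit disk area, m^2
extra_power   = (S_perihelion - S_aphelion) * cross_section  # W
extra_per_day = extra_power * 86400                          # J over one day

print(f"{extra_per_day:.1e} J/day")                     # ~1e21 J
print(f"~{extra_per_day / MEGATON_TNT:,.0f} megatons")  # a couple hundred thousand Mt
```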
Dr. Spencer,
I’ve been looking forward to your comments on Dr. Frank’s paper since it came out on WUWT. I don’t have any expertise (obviously) with GCMs, but it makes sense to me that if the models are tuned to obtain known conditions in order to zero-out unknown and/or mis-specified physics, their output would likely be stationary (your Figure 1) going forward absent additional forcings. In addition, given tuning there would be no error estimates within the training period, as all errors are offset by definition. However, once added forcings are incorporated beyond the training period, there should be errors around the forward estimates as these are no longer offset by previous tuning.
Well, a better way of phrasing your concern is whether a system that has been calibrated to a past climate state will behave the same as one which is, say 1.5 deg. C warmer.
But, see, that’s what feedbacks are: changes in system components (such as clouds) as the system warms. That’s where the real model uncertainty is, which is why many of us harp on climate sensitivity (feedbacks).
The 20-something models, despite a wide variety of error-prone components, give basically the same predictions *depending upon* (1) the feedbacks, and (2) the rate of deep-ocean heat storage. The biases in the various components don’t seem to matter much.
Thank you Dr. Spencer. Looking at composite graphs of the GCMs (e.g. Dr. Christy’s), I’m not sure they “give basically the same predictions”. Are they all not working off the same set of forcing inputs? Having thought about this for a couple of days, I think there are some basic issues re. how modeling errors / uncertainty should be scaled with time, and so agree with you that someone like Dr. McKitrick would be most welcome to weigh in here.
I immediately had the same thought.
And are the models not only diverging more and more from what is observed as time passes (with an occasional excursion that overlaps briefly with one or two of the models), but also diverging more and more from each other?
Discrediting the climate models this way might be fun, but it is just an exercise in redundancy.
We have enough of a track record, and it speaks for itself:
https://i.postimg.cc/Zn156rPG/christy-dec8.jpg
Eben
Are you able to fill KNMI’s Climate Explorer with all that data and to plot the actual results?
Unfortunately, I have been too lazy for years to get into that, so I can’t.
Maybe you can, Eben?
Bindidong just wouldn’t quit, would he? Yeah, I have been keeping my chart up to date:
https://i.postimg.cc/CSM3Ljt5/predictions.png
Thanks, Roy,
I think those points are well made. It seems to me that there is one clear error around Eq 6, which is units. Pat insists that if you average data that happens to have been calculated from annual averages, then its units change; in particular, the 4 W/m2 of RMSE acquires the units W/m2/year. I think that is intended to allow him to subsequently integrate his steps over time to get back to W/m2 over long periods.
But because of the addition in quadrature, it doesn’t work. The year^-1 of the uncertainty becomes year^-2 on squaring, and then year^-1 on summation, if that summation is treated as integration over time. That leaves year^-(1/2) after taking the square root, a very odd unit. If you don’t regard the summing of squares as an integration, you are left with year^-1, also very odd in an accumulated error.
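To spell out the dimensional bookkeeping being described (a sketch of the argument as stated here, assuming the calibration statistic is read as u = ±4 W m⁻² yr⁻¹ rather than ±4 W m⁻²):

$$[u^2] = \mathrm{W^2\,m^{-4}\,yr^{-2}}, \qquad \Big[\int u^2\,dt\Big] = \mathrm{W^2\,m^{-4}\,yr^{-1}}, \qquad \Big[\sqrt{\textstyle\int u^2\,dt}\Big] = \mathrm{W\,m^{-2}\,yr^{-1/2}},$$

whereas a plain sum $\sqrt{\sum_i u_i^2}$ keeps units of W m⁻² yr⁻¹; neither gets you back to a plain W m⁻² (or a temperature) without a further unit-bearing conversion.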
I think the basic failing is that errors in a differential equation just don’t, and can’t, propagate in the simplistic way Pat has it. An error just shifts the solution to one of the other possible solutions of the de, and what then happens depends on the different behaviour of that solution. The point is that both solutions are subject to the requirements of conservation of mass, momentum and energy, which are nowhere present in Pat’s Eq 1. I’ve written a blog post about this aspect here.
Stokes
You said, “I think the basic failing is that errors in a differential equation just don’t, and can’t, propagate in the simplistic way Pat has it.” But Frank is NOT using any differential equations. He has made it quite clear that he is emulating GCMs with a linear projection. Unless you can demonstrate that his ‘linearization’ does not faithfully reproduce the GCMs, your criticism is invalid.
“He has made it quite clear that he is emulating GCMs with a linear projection.”
Well, you can emulate them with a sharpie if you want. The question is whether you are emulating the process by which they propagate error. Without that, whatever you do is irrelevant.
The emulator shows the same response to forcings the models do. That makes it relevant, and your comment the opposite, Nick.
“The emulator shows the same response to forcings the models do. “
A sharpie can do that too. The question is whether it emulates the error propagation.
Nick Stokes, please stop trolling.
I’m anything but a specialist in the domain debated here.
But… having had a look at Frank’s WUWT guest post, I noticed that he replied to numerous comments, but not to this one, written by Dr Antero Ollila:
https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790406
Why?
Bindidon,
Given the large number of posts, I thought Dr. Frank’s effort to respond to comments was commendable. What are you trying to imply?
Thanks, Frank. There are now 829 comments. I’ll certainly not get to reply to all.
But I’m grateful for all the interest.
But for Bindidon’s sake, I’ve replied to Antero.
Thanks for that, in the meantime.
Ollila’s comment sounds so very pertinent to me (as a zero-level layman in the field) that I was wondering why it had not been replied to.
That’s all!
I get a kick out of the computer modelers. I work in aviation; the things I work on and produce have to comply with the laws of fizzix 100% at all times.
Once in a while a newcomer “computer modeler” appears who thinks that because he can model pretty pictures of airplanes on the computer, he too can be an airplane designer.
This is what the result looks like when one of them actually tries to fly his creation.
https://www.youtube.com/watch?v=ccRddyavMKQ
Umm.. did he not realize that once the tires left the ground, he no longer had propulsion?
Well, it has a pusher propeller in the back, not very visible because of the bad resolution. It’s the aerodynamics part that is missing.
What is called “Pat’s Eq 1” here actually expresses the way GCMs seem to treat the so-called “CO2 forcing.” This LW back-radiation powered unicorn prances across all published descriptions of the greenhouse effect, as if it were an important source of power quite independent of terrestrial radiation. In fact, its source is inseparable from terrestrial radiation and constitutes not any dynamic forcing but merely a state of matter, which varies primarily with cloud-modulated insolation and only marginally with CO2.
What indeed may be the controversial aspect of Pat Frank’s derivation of error propagation is the supposition that GCM projections of surface temperature rely totally upon a recursive estimation process. My understanding is that only the spatio-temporal variation on the globe is projected recursively, “subject to the requirements of conservation of mass, momentum and energy.” The base global-average temperature established during the calibration process remains effectively unchanged. Consequently, when only the temperature anomalies are considered, the actual modeling error is very much smaller, as shown here by Roy Spencer.
I have a piece of wood that I’m going to use as a ruler, but I don’t know exactly how long it is. It is somewhere between 11 and 13 inches in length.
I measure the length of my living room. I get exactly 20 ruler lengths. I run this 9 more times. Each time I get exactly 20 ruler lengths.
The fact that each run gives the same answer, doesn’t mean that my ruler is exactly 12 inches long. It just means that my measurement technique is very precise. My accuracy still contains a lot of uncertainty because I don’t know how big my ruler really is. If I measured my living room 10,000 more times, I’m still left with the exact same uncertainty. The length of my ruler doesn’t become closer to 12 inches the more times I use it.
Would you be comfortable ordering carpeting and furniture based on that measurement, or would you rather get a better ruler first?
This is a much simplified explanation, and I’m no scientist, but I believe that this idea is what Pat is conveying in his paper. The model runs, like measuring my living room, are precise, but the accuracy is in doubt because of the uncertainties.
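A minimal Python sketch of that point (purely illustrative; the 11–13 inch range and the 20-ruler-length room are just the numbers from the comment above). Repeated measurements with the same uncertain ruler are perfectly repeatable, but the uncertainty in the room’s true length never shrinks:

```python
import random

rng = random.Random(42)
true_ruler = rng.uniform(11.0, 13.0)  # the ruler's actual length, unknown to the measurer
true_room = 20 * true_ruler           # the room really is 20 ruler-lengths long here

# Measure the room over and over with the same ruler: perfectly precise...
readings = {round(true_room / true_ruler) for _ in range(10_000)}
print(readings)  # {20} every single time

# ...but accuracy is limited by what we know about the ruler itself, and no
# number of repeat measurements changes that:
print(f"room length: somewhere between {20 * 11.0:.0f} and {20 * 13.0:.0f} inches")
```

More repeats tighten precision, not accuracy; only calibrating the ruler against a known standard narrows that 220–260 inch spread.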
But, you can still tell if your living room is bigger than your kitchen. The ruler is still valuable.
captain droll, please stop drolling.
Dr Franks postulates that cumulative errors can spuriously increase GCM global temperatures by 20K.
That is a temperature increase of about 7%, perhaps from 288 K to 308 K. It would lead to a (308/288)^4 increase, roughly 31%, in outward S-B radiation.
This would lead to a planet supposedly receiving 240 W/m^2 and radiating roughly 314 W/m^2.
GCMs conserve mass, momentum and energy. An imbalance on that scale could not be produced for lack of sufficient energy. It could not be sustained because, even if the energy were magically added, the ~74 W/m^2 imbalance would produce rapid cooling.
Entropic,
“Dr Franks [sic.] postulates that cumulative errors can spuriously increase GCM global temperatures by 20K.”
This is not correct. Dr. Frank postulates that the uncertainty in the model results is +/- 20K.
He doesn’t say that a correct model would yield any particular value for future temperature. He only shows that the current models are useless because they have such a large uncertainty range.
That the models don’t produce results that are +/- 20K different is proof that they are doing something else to counter the error that creates the uncertainty, or that the errors are symmetrical and tend to cancel out. But fiddling with parameters, and error canceling, do not remove uncertainty.
EM,
That’s not what Dr. Frank postulates. The GCMs output the estimates they are programmed to output. The issue is that these estimates are insignificant relative to the errors implicit in the models’ unknown or mis-specified physics (e.g., LWCF).
Entropic man, Dr Franks postulates that cumulative errors can spuriously increase GCM global temperatures by 20K.
No I don’t.
Dr. Frank’s analysis seems valid to me.
For simplicity’s sake, let’s assume the models calculate in time increments of one year, and that errors in cloud forcing cause a +/- 1 C uncertainty in the calculated temperature.
For the first year, the uncertainty is +/- 1C.
For the second year the uncertainty is +/- 2 F, because we don’t know if the first year was 1 C cooler or 1 C warmer than the “real” temperature.
The actual error for the second year could be anywhere between – 2 C and + 2 C, including zero, but the uncertainty is +/- 2 C. In actual model runs, some of the errors could cancel out errors of an opposite sign, but the uncertainty remains high because there is no way to know the sign of the previous error.
It seems to me that models must be damped to remove this behavior, because while the odds of the forcing error being plus or minus in any given year might be 50/50, the odds of it being positive, or negative, for many years in a row are high. If you flip a coin 20 times, you will probably get at least four heads or tails in a row (if memory serves), and that would cause large fluctuations in the models.
Dr. Spencer objects that models don’t exhibit behavior like he thinks Dr. Frank’s analysis predicts. Such errors could be (are?) removed by balancing energy flows, or by adjusting parameters for processes that are not directly modeled, but the uncertainty would still exist.
Dr. Frank shows that the models are not fit for purpose because they have an error in cloud forcing that propagates into a huge uncertainty. He’s not talking about what the models actually do. He’s talking about the uncertainty of the result.
Dr. Frank’s statistics, and the language he uses, are well above my pay grade, but I think I understand what he did, and I think it makes sense.
For me, complex statistical analysis is not required. It is enough to know that annual cloud forcing calculations are accurate to only +/- 4 W/m2. That error is so large that the models can’t possibly be correct. In fact, they don’t even seem to be modeling the thing they claim to be modeling. Probably because cloud is too complex.
You’re exactly right, Thomas, thanks.
Thomas,
When you said “For the second year the uncertainty is +/- 2 F…”
You meant to say C, not F, correct?
Nicholas,
Yes, I meant C not F.
It’s quite simple. If Frank’s error propagation is occurring, one would see it in the wildly chaotic outputs of the models. But the outputs are not chaotic. They are stable – relative to what Frank is proposing.
So then one counters (sans understanding, it seems) that the ‘modelers’ apply some ‘damping’ technique to prevent the error propagation from producing wildly chaotic results.
Now one can say that without these ‘dampers’, the model results would be chaotic, and therefore the uncertainty is extreme.
But without clinically identifying the ‘dampers’, one is fishing in the dark, and not saying much at all. Without full comprehension of what it is EXACTLY that keeps the models from resulting in wildly different end points, it remains quite possible that more valid techniques are in use, such as equations that successfully preserve the balance of energy, even when the system is perturbed.
Frank may be onto something, but he has not yet demonstrated it. If the models are far more stable than his uncertainty estimate would lead one to believe, then he has to explain exactly, not vaguely, why that is. Until then, we are left with little better than a detailed assertion, rather than an argument.
barry, please stop trolling.
@ Jim Allison: Your ruler explanation is exactly how I understand what Dr. Frank is trying to explain. He should add your example to his article.
From IPCC AR5 –
” . . . Such chaotic behaviour limits the predictability of the state of a nonlinear dynamical system at specific future times, although changes in its statistics may still be predictable given changes in the system parameters or boundary conditions. ”
In the latest report the IPCC attempts to dismiss or ignore the chaotic nature of the atmosphere.
Note the weasel words ” . . . changes in its statistics may still be predictable . . .”, – and pigs may still fly! Who can prove they may not, at some time in the future?
Nope. No predictability until demonstrated reproducibly and usefully.
There is not much point in arguing about Dr Frank’s paper (with which I generally agree). Modelling outcomes of chaos is doomed to failure. Naive persistence forecasts are as good as anything. Dr Frank merely points out the uselessness of current climate models, even if the bizarre assumptions on which they are based, are accepted as fact.
For example, the supposed “energy balance” is nonsensical. Any location on the Earth’s surface experiences “balance” twice a day, in general. When the temperature is unchanging, as at the maximum and minimum, incoming energy is equal to outgoing energy. At all other times, the temperature is changing.
What about averaging? Pointless. Daily, night falls. Yearly, seasons come and go. Over the maximum possible period, since the creation of the Earth, the surface has demonstrably cooled. More energy has left the Earth than has been absorbed externally, or generated internally. No balance.
Oh well, enough, I suppose.
Cheers.
You people are reading way too much into this computer modeling thing.
Just look at what is hidden on the back side of that million-dollar climate computer, where nobody looks:
https://i.postimg.cc/RZ81D8Sf/Climate-Control-Knob.jpg
Eben,
Some have upgraded from the million-dollar model to the ultra-new and improved bajillion-dollar CEX-1000.
See here:
https://twitter.com/NickMcGinley1/status/1172052720511004673?s=20
Eben: read and critique –
Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature, Lacis et al., Science (15 October 2010) Vol. 330, no. 6002, pp. 356-359.
http://pubs.giss.nasa.gov/abs/la09300d.html
The pseudoscientific GHE true believer in action.
Critique finished. Gavin can return to playing with his knob. No happy ending, I fear.
Cheers,
Let me start by saying that I’ve admired your work, and John’s, for a long time. You and he have been forthright and honest in presenting your work in the face of relentless criticism. Not to mention the occasional bullet. 🙂
Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”
You start by stating that I take “an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”
If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The (+/-)4 W/m^2 is a model calibration error statistic.
I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption.
It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.
Entry of the (+/-)4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.
You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the (+/-)4 W/m^2 is propagated forward. It’s not.
It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.
Then you write, “ The result is a huge (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT).”
I must say I was really sorry to read that. It’s such a basic mistake. The (+/-)20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.
And consider this: the (+/-)20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.
One of my reviewers incredibly saw the (+/-)20 C as implying that the model wildly oscillates between hothouse and ice-house states. He also did not realize that the vertical bars mean his interpretation of (+/-)20 C as a temperature would require both states to be occupied simultaneously.
In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.
The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.
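For readers trying to follow the distinction being drawn here, a minimal Python sketch (this is not Dr. Frank’s actual Eq. 6, and the numbers are made up; it only illustrates the generic idea): the projected temperature series is one thing, and an uncertainty envelope built from a per-step calibration statistic combined in quadrature is separate bookkeeping that never feeds back into the projection.

```python
import math

u_step = 1.0  # illustrative per-step uncertainty, deg C (an assumed number, not the paper's)
years = range(101)

projection = [0.02 * y for y in years]               # some smooth model projection, deg C
envelope   = [u_step * math.sqrt(y) for y in years]  # quadrature growth: +/- u*sqrt(n)

# The envelope is reported alongside the projection; it never alters it.
for y in (1, 10, 50, 100):
    print(f"year {y:3d}: projection {projection[y]:4.1f} C, uncertainty +/- {envelope[y]:4.1f} C")
```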
You wrote that, “ The modelers are well aware of these biases, which can be positive or negative depending upon the model.”
The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.
You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”
You’ve mistaken the calibration error statistic for an energy. It is not. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t. It’s (+/-)4 W/m^2. Recognizing the (+/-) is critical to understanding.
And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s (+/-)4 W/m^2. If that was a TOA energy flux, as you have it, it would be self-cancelling.
You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”
And each of those models simulates cloud fraction incorrectly, producing an average calibration error of (+/-)4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong even though the overall energy balance is correct.
That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.
You wrote, “If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”
No, they would not.
I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.
Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.
You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”
I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.
Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as a (+/-) uncertainty in the reported result.
There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.
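For reference, the textbook quadrature combination being referred to (a standard formula, not a quotation from the paper):

$$u_{\text{total}} = \pm\sqrt{u_1^2 + u_2^2 + \cdots + u_n^2},$$

which is why signed errors can offset in the mean while the uncertainties they carry still accumulate.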
You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.
All you’re doing is hiding the uncertainty by tuning your models.
Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. ”
Once again, you imposed a positive sign on a (+/-) uncertainty error statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.
Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.
I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.
You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”
No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.
The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.
And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed.
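One way such an invariance can arise (a sketch of the general scaling argument, not necessarily the paper’s own derivation): if the calibration uncertainty appraised over a step of length $\Delta t$ scales as $u(\Delta t) = u_{1\,\mathrm{yr}}\sqrt{\Delta t / 1\,\mathrm{yr}}$, then propagating it in quadrature over $N = T/\Delta t$ steps gives

$$U(T) = \sqrt{N}\,u(\Delta t) = \sqrt{\frac{T}{\Delta t}}\;u_{1\,\mathrm{yr}}\sqrt{\frac{\Delta t}{1\,\mathrm{yr}}} = u_{1\,\mathrm{yr}}\sqrt{\frac{T}{1\,\mathrm{yr}}},$$

which is independent of the chosen step size.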
You should have looked at eqns. 5, and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”
Eqn. 6 is a generalization of eqns 5.
I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.
Pat Frank…”One of my reviewers incredibly saw the (+/-)20 C as implying the model to be wildly oscillating between hothouse and ice-house states”.
Sorry to hear that. In the future, please presume that reviewers are as stupid as the day is long. As someone else suggested, dumb it down for them.
Peer review has outlived its usefulness and has now become a paradigm reinforcer.
Because all of us here know how bright Gordon’s intelligence shines….
Do we? Really?
Cheers.
Pat,
You must be exhausted. But I think you need to dumb down the message. People keep thinking you’re talking about model behavior, not uncertainty.
See my post above for a (not so good) example.
That the models don’t exhibit the behavior your work implies is proof that they are doing something other than modeling the climate.
“With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.”
I see this kind of statement a lot, but it is misleading. There is more to increasing temperature than just a net energy increase. With the example of a “pot of water warming on a stove”, once the water temperature has stabilized for that flame setting, an equal amount of water at the same temperature could be added, but the system (pot) temperature would not increase. Energy was added to the system, but the temperature did not increase.
The same result would occur if ice blocks were brought into a room with the same temperature as the ice. Energy is added to the room, but the temperature does not increase.
In Earth’s climate system, it doesn’t matter how much infrared from the surface is “trapped” by the atmosphere, that infrared cannot then raise the temperature of the surface.
JDH,
Exactly.
Surrounding a human with as much ice emitting 300 W/m2 as you like is adding energy. It doesn’t help the poor shivering fool who thought that adding heat would make him warmer.
You need temperature as well as W/m2.
Cheers.
The human and the ice come to an equilibrium temperature that is greater than the equilibrium temperature if the ice isn’t there.
DA,
Rubbish. The human at 37 C or so, eventually dies, and the body drops to the temperature of the ice. All that heat energy from the ice reduced the temperature of the body, you idiot.
It didn’t make it warmer. I hope you don’t think that GHGs make the surface hotter, do you?
That would be as stupid as claiming you could make water hotter using ice!
Cheers.
I will not claim to be on top of this topic yet. However, an obvious question that needs to be answered is based on Dr Roy’s Figure 1 above.
Imagine simulating the climate 100 years ahead in time assuming no changes to greenhouse gases or any other possible external forcings. The (trivial) answer is that global average temperatures will not change (allowing for natural variability), yet we are expected to believe that this correct answer must be accompanied by +-20 deg uncertainty? Why?
It seems to me that the expansion of uncertainty (shown in Fig.1) can be started anywhere on the timeline. There is no obvious reason to start it at the point where you adjust the forcings.
e.g. would the same uncertainty exist if I changed prescribed CO2 by only .00000000001% ?
I would have to say that the main criticisms of Dr. Frank’s method are valid.
I think climatological modelling makes some simplifying assumptions (about long term averaging) that are not unreasonable, but which Dr. Frank is unwilling to acknowledge as valid scientific method…and this is the source of their disagreement.
Example: If you fill a barrel of known diameter with water from a hose flowing at a known rate, you can calculate the rate the barrel will fill quite precisely (and even define an average instantaneous level) even though you cannot predict the chaotic sloshing of the surface at any instant.
The errors of your knowledge of the exact chaotic surface position do not propagate significantly, and thus do not become important, in your calculation of the rate of filling.
The damping factor ultimately “subdues” the addition of chaotic signals over a long enough period, keeping the system stable!
Or, in other words, physical conservation laws trump chaotic variability… as long as the system is “stable”.
mdm,
As Lorenz said –
“Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”
No stability. No reversion to the mean. No return to an average. Physical conservation laws do not trump chaos. This is just wishful thinking by pseudoscientific GHE true believers, and, unfortunately, many physicists and others who just cannot accept that the future is unknowable. These people refuse to believe that the uncertainty principle exists, or that quantum electrodynamics has held up against all experimental scrutiny!
Irrelevant analogies might make people feel better, but do not represent the reality of chaos.
Even the IPCC accepts chaos, and the non-predictability of future climate states, albeit grudgingly, in the latest IPCC report. The modellers have unbreakable religious faith that they can eventually predict the future in some useful way. Fools, frauds, charlatans, or delusional bumblers – one and all.
As you have probably figured out, I disagree with your view. Others may decide for themselves.
Cheers.
If the sun’s output were to increase by 50%, do you think the earth would get hotter on average…or that the future temperature could not be at all predicted because of the chaotic nature of climatic response?
mdm,
The future temperature can not be predicted. You don’t even know what the present average temperature is, do you?
The Earth has cooled from its initial molten state. Did changes in the Sun’s output (if any) change this occurrence? No.
The atmosphere behaves chaotically. Claiming that the future will be different is fine, but of course completely irrelevant and pointless.
Climate is merely the average of past weather. I would be surprised if you could even accurately define the climate of California, let alone measure it in any meaningful way. When will it change? For better or worse? Will some parts cool, and other parts become hotter?
So much for reverting to an average, or to some mythically stable condition which has never existed and never will, unless assumptions about the heat death of the universe come to pass and entropy maximises.
I won’t challenge your faith, as long as you keep it private. It’s not science though, just stridently proclaimed pseudoscience!
Cheers.
If the sun’s output were to increase by 50%, do you think the earth would get hotter on average…or that the future temperature could not be at all predicted because of the chaotic nature of climatic response?
Where is the chaos here?
https://en.wikipedia.org/wiki/Interglacial#/media/File:Ice_Age_Temperature.png
What makes you think chaos is not evident?
Cheers.
Studentb
“The (trivial) answer is that global average temperatures will not change (allowing for natural variability), yet we are expected to believe that this correct answer must be accompanied by +-20 deg uncertainty? Why?”
Because the models have an inherent error in cloud forcing, which creates an inherent uncertainty that propagates into a very large uncertainty over long model runs.
“It seems to me that the expansion of uncertainty (shown in Fig.1) can be started anywhere on the timeline. There is no obvious reason to start it at the point where you adjust the forcings.
e.g. would the same uncertainty exist if I changed prescribed CO2 by only .00000000001% ?”
The uncertainty starts accumulating wherever the model starts modeling.
Yes, the same uncertainty would exist even if you didn’t change CO2 at all. The uncertainty comes from the mis-modeled (likely un-modeled but parameterized) cloud forcing.
The models get cloud forcing so wrong that they cannot possibly be right. I think Dr. Spencer has made the same point before. If you can’t model cloud, you can’t model climate. And cloud is too complex to model.
But now Dr. Spencer is stuck thinking that the models don’t exhibit the same variability as the uncertainty. They don’t, because they are not modeling cloud. If they did model cloud, results would vary +/- 20 C over a 100-year projection. The modelers adjust parameters so the result looks reasonable.
Dr. Frank’s analysis only shows what any reasonable person already knew; the models are “not even wrong.”
They’re just useless.
That’s my take on it anyway. If I’m wrong, I’m wrong. It won’t be the first time.
: )
I am afraid you haven’t convinced me.
You appear to be saying that, even if the models correctly predict zero change in temperature, they are of no value because of the +-20 deg uncertainty. That does not make sense.
“The uncertainty starts accumulating wherever the model starts modeling.”
This also doesn’t make sense. In fact, most experiments start from models that have been spun up for many hundreds of years.
What then is the uncertainty associated at the start of a control run?
Is it zero or +-20deg?
What is it at the end of (say) another 100 years?
Zero or +-20 deg?
Imagine that at the some point the forcing is changed.
Is the uncertainty at this point zero or +-20 deg?
Frank implies it is zero (despite the model having been run for many hundreds of years ).
After 100 years with the changed forcing, Frank implies the uncertainty has grown from zero to +-20 deg.
Can you not see that Frank’s growth in uncertainty is arbitrary, because there is no way to identify the starting point? As I suggested, it cannot be when the forcings change, since I could just as easily change the forcing by a near-to but not quite zero amount. The model results would not change, yet somehow we have to accept that the uncertainty suddenly starts amplifying when the forcing changes by any amount. This defies common sense.
Why do you think a prediction of zero change in temp is, or will be, correct?
You seem to be making an assumption about something for which no evidence exists, and then using your belief that this is a correct assumption to make a point about the correctness of the prediction.
Or perhaps I misunderstand.
You miss the point. If there is no change in the forcing, why would the temperature change?
Answer: it doesn’t – unless some magic is involved!
Therefore zero change is what will happen.
Studentb,
“You miss the point. If there is no change in the forcing, why would the temperature change?”
Your implication that only forcings cause temperature change is demonstrably incorrect. An El Niño occurs over a very short period of time, so the changes in CO2 forcing or solar forcing are infinitesimal. Yet the global surface temperature changes dramatically. The total heat content of the system didn’t change, but warm water that was buried in the Pacific warm pool got released to the sea surface and atmosphere. No change in TOA forcing, but a big change in temperature.
The multi-decadal oscillations are likely similar phenomena but with a longer period. If models are initiated to ignore such phenomena, and Dr. Spencer says they are, then they are probably going to get future temperature very wrong. Most of the recent warming could be due to oscillations with long periods, which the models don’t even model.
You wrote,
“In fact, most experiments start from models that have been spun up for many hundreds of years. What then is the uncertainty associated at the start of a control run?”
Uncertainty starts to propagate (and accumulate) whenever you initiate a model run. See my example at 5:20 PM above.
I’ll repeat the relevant part here, with some minor corrections.
* * *
For the sake of simplicity, let’s assume the models calculate in time increments of one year and cloud forcing errors cause a +/- 1 C uncertainty in the calculated temperature.
For the first year, the uncertainty is +/- 1 C.
For the second year the uncertainty is +/- 2 C, because we don’t know if the first year was 1 C cooler or 1 C warmer than the “real” temperature.
The actual model result for the second year could be anywhere between -2 C and +2 C, including zero, but the uncertainty is +/- 2 C. In actual model runs, some of the errors could cancel out errors of an opposite sign, but the uncertainty remains high because there is no way to know the sign of the previous error.
* * *
You and Dr. Spencer are stuck thinking about what the models actually do. Dr. Frank says nothing about what the models actually do, except that they get long wave cloud forcing wrong at the rate of about +/- 4 W/m2 per year. He uses that as a metric to calculate what the accumulated uncertainty is. That the models don’t actually exhibit large swings due to the incorrect cloud forcing is evidence that the models do something to damp it out (something that is almost certainly non-physical).
Even if the cloud error is symmetrical, so that each error cancels out the previous error, one would still expect large temperature swings because of statistical fluctuation. Over many flips of a perfect coin, we expect to see 50% heads and 50% tails. But if you flip a coin 100 times, you will likely find instances where there are eight heads, or eight tails, in a row.
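A quick, purely illustrative Python check of that coin-flip intuition (the flip count and run-length threshold are just the numbers used in the comment):

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

rng = random.Random(1)
trials = 10_000
hits = sum(
    longest_run([rng.random() < 0.5 for _ in range(100)]) >= 8
    for _ in range(trials)
)
print(f"P(run of 8+ heads or tails in 100 fair flips) ~ {hits / trials:.2f}")  # roughly 0.5
```

Long same-sign runs are common even for a perfectly symmetric coin, which is why symmetric errors can still produce sizeable excursions.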
Your points about El Niño and multi-decadal fluctuations are redundant:
(1) Models are free to simulate these phenomena – and do (look up ENSO forecasting)
(2) Their effects on global average temperature are relatively small anyway
(3) In my first post I stated “that global average temperatures will not change (allowing for natural variability),..” where natural variability exists in both observations and model results.
Secondly, you have not defined when a model run starts. Any model simulation begins from a set of initial conditions. The initial conditions are usually those generated from a previous simulation, which was initialised from some other previous simulation, etc. The “age” of the initial conditions is irrelevant (and often unknown). Therefore the question “When does the uncertainty begin to amplify?” remains unanswered, and “accumulated error” carries no meaning.
“Your implication that only forcings cause temperature change is demonstrably incorrect.”
Thomas 7:07am – You are not precise enough here to be demonstrably wrong or right. You haven’t specified the thermodynamic control volume of interest. Later you change “forcing” to “TOA forcing”.
Even later, note that W/m^2 is already a rate: joules per second in or out of the control volume per unit area. It makes no sense to then add a “per year” and get joules per second per year over unit area.
Think that through and see if you can clarify your writing.
“Your implication that only forcings cause temperature change is demonstrably incorrect.”
How does an internal oscillation cause a net warming?
Where does the new energy come from?
DA,
Don’t be stupid. Energy can neither be created nor destroyed. Only pseudoscientific GHE true believers claim such a stupid thing.
Cheers.
Md Mil,
“Example: If you fill a barrel of known diameter with water from a hose flowing at a known rate, you can calculate the rate the barrel will fill quite precisely.”
What if your flow meter has an error of +/- 4 gallons per minute?
Dr. Frank used the known (at least approximately) error of the erroneous flow meter to calculate the uncertainty of the final result.
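A minimal sketch of the distinction being drawn (the numbers are invented for illustration, not taken from either comment): chaotic sloshing averages out over the fill, but an uncertain meter calibration puts a band on the accumulated volume that grows with fill time, no matter how many times you repeat the fill.

```python
minutes = 60           # length of the fill
nominal_rate = 100.0   # gal/min, what the meter reports (invented number)
meter_uncert = 4.0     # gal/min, calibration uncertainty of the meter, sign unknown

reported_total = nominal_rate * minutes
# A fixed calibration offset of unknown sign does not average out with time,
# so the uncertainty on the accumulated total grows linearly with fill time.
total_uncert = meter_uncert * minutes

print(f"reported: {reported_total:.0f} gal, uncertainty: +/- {total_uncert:.0f} gal")
```

Repeating the fill with the same meter gives the same reading each time (precision), but the +/- 240 gallon band on the true total (accuracy) stays.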
No… Frank’s errors can grow without bound (and are dependent on the arbitrary time-step size used, as Roy mentioned). The errors add up… propagate.
If your flow rate is wrong by about 4% (for example), then sure… your final result will be wrong by about 4%. But that is not the issue. Dr. Frank’s results are really inappropriate… IMO. I think my analogy is a good one.
mdm,
The oceans have been filled. The atmosphere has been filled. The hoses have been turned off. Now try and figure out why the atmosphere, the oceans, and indeed the solid and not-so-solid parts of the Earth are not sitting quietly – awaiting further disturbance. They are all in constant, chaotic, motion.
Your analogy is both pointless and irrelevant. It is typical of those proposed by pseudoscientific GHE true believers. What is wrong with trying to address the science (or lack of it)? If you don’t understand, just say so. If you can’t defend your position with fact rather than faith, you possibly don’t understand science and the scientific method.
Try usefully describing the GHE, for example. Nobody else has managed to, because any description would involve breaking the laws of thermodynamics and general physical law. If you don’t agree – give it a try. Good luck.
Cheers.
If the sun’s output were to increase by 50%, do you think the earth would get hotter on average… or that the future temperature could not be at all predicted because of the chaotic nature of climatic response?
The molecules in a bag of gas are all in constant chaotic motion, yet the gas laws work well. The earth is not in thermodynamic equilibrium: the pole temperature differs from the equator; there are day/night and seasonal cycles; the sun constantly warms and radiation constantly cools (which is not an equilibrium state)… yet the earth’s climate is amazingly repetitive and constant over appropriate periods.
[What is GHE?]
And I repeat: if the sun’s output were to increase by 50%, do you think the earth would get hotter on average, or that the future temperature could not be at all predicted because of the chaotic nature of climatic response?
mdm: MF has defined the GHE many times. Don’t get suckered in by his mindless claims.
Mike Flynn says:
May 18, 2017 at 5:57 PM
Of course they can’t admit that CO2 also absorbs incoming solar energy in the 15 um band.
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-247260
Mike Flynn says:
CO2, like any other matter, can absorb and emit energy of any wavelength.
http://www.drroyspencer.com/2017/06/a-global-warming-red-team-warning-do-not-strive-for-consensus-with-the-blue-team/#comment-251611
The atmosphere is an insulator. CO2 is around 90 to 750 times as opaque to some wavelengths of light as N2 or O2.
Mike Flynn, June 18, 2017 at 3:34 AM
http://www.drroyspencer.com/2017/06/a-global-warming-red-team-warning-do-not-strive-for-consensus-with-the-blue-team/#comment-251624
the transmittance of the atmosphere increases as the amount of GHGs in it drops.
Mike Flynn, May 23, 2017 at 5:16 PM
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-247988
Less GHGs, less impediment to radiation reaching the surface from the Sun, or being emitted by the surface to outer space.
Mike Flynn, May 5, 2017 at 9:22 PM
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-245860
Thanks for the support, David.
Just remember, the GHE doesn’t exist, so don’t lie and attribute a non-existent explanation to me, eh?
Cheers.
mdm,
The future temperature could not be predicted, any more than you could state the Earth’s temperature right now. Nice attempt at diversion, though.
Any other questions?
Cheers.
The sloshing of the surface of a barrel of water is a good analogy for the climate state of the Earth?
Further, Frank’s errors can grow and are dependent on the arbitrary time-step size used, as Roy mentioned. This really is unacceptable in any consistent theory… it is simply a killer to Frank’s development.
NO, it is meant as an analogy for the “propagation” of error idea… IMO
Why do you need an analogy? Don’t you understand error propagation, or do you think others don’t?
Cheers.
m d mill.
Your analogy only filled one barrel. Climate models fill a barrel, then fill another barrel with that barrel, and the flow from the hose, then another barrel, etc. It’s a pour (ha!) analogy, but I trust you get the point. See my post upstream for a better explanation.
The analogy is only for the “propagation” of error principle,
not for the climate.
With respect, I am in disagreement with Dr Spencer. For practical reasons I am purposely keeping my explanation short:
It is an error to assume the models are without bias because they correctly describe the HISTORICAL energy balance. In effect this assumption is a circular-reasoning error, where the GHG hypothesis is used in support of itself. In point of fact, the historical energy balance is much more likely due to model tuning.
This is similar to driving a car down a road. The car remains on the road so long as you can see the road, (historical model tuning). However, if you blindfold the driver (the future) the car will quickly run off the road (fail to predict). The longer you drive blindfolded, the more likely it is that the car will wander away from the road.
Importantly, you cannot rely on the Law of Large Numbers to balance out your errors and bring you back on course. Driving errors when blindfolded do not balance out.
The energy imbalance represents how far off the road you are. The +/-4 W/m2 is not an energy imbalance. Rather, it is a measure of the sloppiness of the steering.
Fred Berple,
An elegant analogy!
But Dr. Spencer will say that the car (the models) stay on the road so they can’t be that wrong.
Let’s pretend the car is a Tesla with auto-steering. If the road is straight, and the auto-steering has an inherent error of +/- 4 angular degrees, it will quickly run off the road and probably end up, 100 years later, very far from the road. If the car does not run off the road, that is evidence that the inherent error was misestimated, or that the auto-steering algorithm is doing something else that keeps it on the road.
The cloud forcing error (if correct, or even close to correct) is physical. If the models don’t run off the road, it is because they are doing something non-physical to keep on the road. Like the driver lifting up the blindfold every now and then and making small corrections.
“Driving errors when blindfolded do not balance out.”
This describes a random walk. Models are bounded. If they weren’t, we would see the chaotic results Frank regards as propagated errors. But we don’t.
You’re confusing error with predictive uncertainty, barry.
Error may be bounded, but uncertainty is not.
Uncertainty does not impact model expectation values. High predictive uncertainty does not imply chaos in model outputs.
One physical bound is the energy balance for the whole system.
b,
The lower temperature bound for the Earth is the temperature which will prevail at the heat death of the universe. Around 4 K.
The Earth has cooled down by a few thousand K. Cooling continues. Still a fair way to go, wouldn’t you agree?
Cheers.
Dumb. The Earth is warmer than it was 23,000 years ago.
Dumb. Cherry picking is a sin, according to you. Sinner! Sinner!
Over the last four and a half billion years or so, the Earth has cooled.
Cheers.
Disclaimer to Roy: As you stated at the close of your article…”I’d be glad to be proved wrong”.
With regard to my reply here, I feel the same. ☺
****
Roy…”With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.
So, if energy loss is less than energy gain, warming will occur”.
***
Roy…this is one of the few areas where I have to disagree with you. You insist on referring to the 1st law as energy in = energy out. It’s the 1st law of thermodynamics and it is about heat and work, not generic energy per se.
I would appreciate you clarifying that. By energy, wrt the 1st law, do you mean heat? If so, what do you mean by ‘net’ energy?
What you claim is technically correct but it lacks specificity. That lack of specificity leads to a gross misunderstanding of heat transfer.
The only reason the 1st law works is because heat and work are equivalent. They don’t use the same units but one can be expressed directly as the other. The scientist Joule was the first to describe the relationship where 1 gram calorie of heat = 4.18 joules of work.
Therefore if you do work on a system, you can express the work in calories. However, the 1st law is usually expressed in terms of work, wherein heat is converted to its mechanical equivalent. That’s the only way heat and work can exist together in the 1st law: by converting one to its equivalent in the other.
As Clausius explained, internal energy is the work done by atoms in a lattice as they vibrate. We know that adding heat to that lattice causes the atoms to vibrate harder hence they produce more work. If they do less work, the substance cools.
That’s the derivation of your temperature. Temperature is the average kinetic energy of atoms and that is also the definition of heat. Temperature is a relative measure of heat, but it is not a measure of energy per se. Temperature is not a measure of mechanical energy, electrical energy, chemical energy or any other energy but heat.
The only way to increase temperature is to add more heat. You can increase temperature by doing work on the system, but that’s because internally, the work is converted to heat.
Work itself, has no direct relationship to temperature, it is heat that is related to temperature.
I think this point is very important and not merely semantics. I have no particular desire to disagree with you since I agree with most things you say.
I can figure it out for myself that heat is energy as is work. However, you and others have stated elsewhere that the 2nd law, also about heat, is satisfied by a net balance of energy. That is true only if you replace the word energy with heat. It is not true for any other form of energy.
There is no point talking about energy in certain cases unless you specify the energy. Thermodynamics is about heat, not generic energy per se. The 1st law is about the external energies, work and heat, and the internal energy made up of work and heat. That comes from Clausius, who introduced the U term to the 1st law for internal energy.
Clausius also wrote the 2nd law and in doing so invented entropy as a mathematical statement of the 2nd law. The law of entropy and the 2nd law are about heat and they say the same thing: heat can never be transferred by its own means from a colder to a warmer body.
That statement has no amendments or qualifications; it is absolute. Heat can never, by its own means, be transferred from a colder body to a warmer body. Net energy has nothing to do with it.
If you allow the use of a generic energy, then electromagnetic energy can become the basis of the 1st and 2nd law and that is where the misunderstanding of the 2nd law begins. It also opens the door for the illegal transfer of heat from a colder body to a warmer body, without compensation.
Neither the 1st law nor the 2nd law cover electromagnetic energy and that was a weak point in the Clausius explanation of radiative heat transfer. He insisted that heat transfer by radiation (heat rays), which he and others thought was a transfer of heat through space as heat, must obey the 2nd law. Then he confused the issue by vaguely referring to a two-way heat transfer.
He obviously did not mean that; he was referring to a two-way flow of electromagnetic energy, of which he had no understanding. It was not till 1913 that Bohr figured it out and hypothesized that electrons in atoms could absorb and emit EM under strict conditions. Electrons were not discovered till after Clausius died.
When electrons emit EM, the kinetic energy of the emitters drop and that translates to a loss of heat. Therefore, EM does not transfer heat physically, it is its own unique form of energy that has no heat associated with it.
If that EM is absorbed by a cooler body, the electrons in the body gain kinetic energy which means the body warms. The heating is local, it is produced in the absorbing body and does not come from the emitting body.
The reverse process is not possible. If a cooler body emits EM and a warmer body encounters the EM, it cannot be absorbed.
Roy, it’s not me claiming that, it came from Bohr. An electron will not absorb EM unless it meets certain stringent and quantum conditions.
Ever gotten a sunburn?
EM is certainly heat.
The 1LOT refers to all types of energy, summed.
You seem to really not grasp the difference between EM, work, heat and kinetic energy.
EM is NOT heat. It can be converted into heat, but it doesn’t have to be.
BTW, the biological mechanisms of a sunburn have nothing to do with heat at all.
EM is heat. It warms you when it strikes you. It carries energy that warms you when it strikes you.
DA…”EM is heat. It warms you when it strikes you. It carries energy that warms you when it strikes you”.
I think you once agreed that EM is an electric field with a magnetic field perpendicular to it. It has no mass and heat requires mass.
The warming when EM strikes you is due to electrons in the atoms of your skin absorbing the EM and vibrating harder. That raises the kinetic energy of the atoms in your skin, which is heat.
If those electrons absorb energy in EM UV band, they get really agitated and will burn your skin.
There are EM fields around electric motors and transformers but they don’t warm you. Also, the air is full of EM waves used in communication…don’t warm you. Mind you, if you stand in front of a radar sail emitting EM at 10,000 watts, it will warm you, most likely fatally.
DA,
Don’t be thick. Ice emits EM. Try and get warm using ice. Use as much as you like.
Cheers.
something lost in format.
Apostrophes, as in don't, there's, and it's, seem to have been corrupted.
‘net’ energy came out corrupted. I was asking what Roy meant by net energy with regard to heat transfer.
It is funny to watch the desperate attempts to find comfort in Pat Frank’s paper. Reminds me of drowning rats climbing aboard a lifeboat that is about to sink.
As Dr Roy (and some clear thinkers here) have indicated, it is complete nonsense. Apparently Frank’s ideas have been around for ages and nobody took them seriously. His paper was rejected by 13 (yes THIRTEEN) journals before somehow getting a tick for a minor (some would say dubious) journal. Sometimes b.s. does get printed!
Ad hominem and appeal to authority are not valid scientific arguments, so there is nothing in Dr Myki's post that can be logically responded to.
I did not expect a response. Just letting people know that they could save themselves a lot of time by simply letting the paper die.
The dastardly shenanigans you are employing here might have had a snowball’s chance in Hell of swaying an opinion or two, had not the climategate emails revealed that this was the plan all along.
Revealed how?
David, please stop trolling.
Dr. Myki (Mouse?)
Thank you for sharing the wisdom from your superior intellect. Those of us not of your caliber certainly need and appreciate the guiding hand that saves us from the embarrassment of asking stupid questions in our vain attempt to understand things that only the likes of you can truly comprehend.
Thank you again, your Highness.
Arise, Sir Clyde Spencer !
Clyde…”Dr. Myki (Mouse?)”
That’s why I call her, him, it ‘Mickey’.
Thomas, what was ad hom?
Being rejected by 13 journals, if true, is a serious issue.
Rejected by climate journals. I doubt the paper would have been rejected that many times by physics journals. But Dr. Frank deliberately tried to publish in climate journals again and again in order to force this debate into the climate science community.
Some of the papers that diverge most from the mainstream in atmospheric physics were published by submitting to physics journals rather than climate journals, and among the biggest skeptics are people with traditional physics and chemistry backgrounds. I don't think that is a coincidence.
You’re already making excuses.
Journals publish good science. It’s their raison d’etre. They actually compete to do so.
Physics journals publish physics, not climate science. Climate journals publish climate science.
13 rejections is a serious sign and suggests the author had to settle well below the usual standards.
Few scientists or policymakers will pay attention to a paper published in a Frontiers journal. There is too much published in the best and good journals to worry about the mediocre or bad ones.
I am fully aware of the flaws of the peer-review process, whereas you obviously aren't.
Ever heard of the replication crisis? In the life sciences it is well established that more than 50% of publications in high-impact journals cannot be reproduced. So much for "good science" and appeals to authority.
“Physics journals publish physics, not climate science. Climate journals publish climate science.”
That is a really stupid statement. The atmosphere follows physical laws. Therefore people with an education in physics can comprehend climate science, whereas people with an education in climate science do not necessarily have the basic training to fully understand the physics underlying climate.
Read the reviews, David Appell. They’re incompetent.
They’re here, for anyone who wants to check for themselves: https://uploadfiles.io/vyu9e78n
It’s a 60 MB zip file that Webroot has scanned virus-free. Choose “free download.”
Rejection on the grounds of those reviews is a shame on the journal, not on my work.
And if those reviews convince you, David, you’re incompetent, too.
So first warmistas hijack the peer review process, and then use the fact they have done so as evidence that anyone who is not a warmista is wrong?
That is pretty much exactly what we have come to expect from you lot.
What is the evidence anyone has hijacked the peer review process?
Judith Curry recently (cannot find it now) told how one reviewer dismissed her paper because (words to the effect) “it would not be helpful to messaging the importance of AGW in the society at large”.
One whole reviewer….? Wow. And he (or she?) has hijacked the entire peer review process?
The ENTIRE process?
Of course, climate deniers rarely get asked to review, because….
m d mill…”Judith Curry recently (cannot find it now) told how one reviewer dismissed her paper because (words to the effect) it would not be helpful to messaging the importance of AGW in the society at large”.
Typical.
It goes further back, however. Einstein had a paper rejected because the journal editor did not think it made sense. The problem with peer review is that it’s not a peer review. It’s a stacked process where a journal editor farms out a paper to an anonymous reviewer.
More recently, the Australian Barry Marshall submitted a paper in which he claimed that stomach ulcers were caused by a bacterium, Helicobacter pylori. The journal editor not only rejected his paper, he listed it as the worst paper he'd read in 10 years.
Marshall had to drink a potion containing Helicobacter pylori to convince the medical fraternity. He became very ill but managed to heal himself with penicillin.
Peer review is a bad joke that interferes with science.
DA…”Of course, climate deniers rarely get asked to review, because.”
Because the climate review process is stacked with climate alarmists. The Journal of Climate was run for years by Andrew Weaver, a dyed-in-the-wool alarmist. He is now head of the Green Party here in BC, Canada.
On his board of editors he had Michael Mann and Gavin Schmidt. What kind of chance would a skeptic have of getting a paper past the evil climate twins who run realclimate?
Mann was caught in the Climategate emails working overtime trying to interfere with peer review to have skeptics blocked. When Judith Curry showed an interest in climate skepticism he insulted her using sexist remarks.
Half-remembered hearsay?
barry, please stop trolling.
mickey…”It is funny to watch the desperate attempts to find comfort in Pat Frank’s paper”.
Actually, it’s funny watching you trying to be funny, and especially trying to be intelligent. I am still waiting for you to add something to Roy’s blog that is based on science and free of ad homs and insults.
You reveal the aptitude of a petulant child, frustrated and angry because he/she cannot understand his/her homework.
Gordon chastises for ad hom attacks, by employing an ad hom attack.
Priceless.
It is not ad hom. It is ad fatuus.
Priceless.
Cheers.
So, Dr. Myki, you too think a calibration statistic is an energy flux, and that model error analysis impacts an expectation value.
And apparently you think you’re a scientist, too.
A tropical storm is developing over the Bahamas.
http://tropic.ssec.wisc.edu/real-time/mtpw2/webAnims/tpw_nrl_colors/conus/mimictpw_conus_latest.gif
Are modelers claiming the uncertainty in forcing is 0 in the absence of changing CO2 levels?
If there is no forcing, what is the uncertainty in that forcing?
I will type this very slowly for your benefit, David – there is no forcing, and no uncertainty in that which does not exist.
Got it? Good.
Cheers.
0 +- what?
Dr. Spencer,
You wrote,
“Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives. Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.”
How is this forced-balancing actually accomplished?
Forced-balancing does not change the fact that the models don't model cloud accurately, something both skeptics and alarmists have long agreed on. You have said, I think, that a small change in cloud would be enough to explain the recent warming. The fact that the models can't capture that detail means they can't detect any negative cloud feedback, or any cloud feedback at all. So they aren't really modeling the climate.
If the models are forced to be constant before CO2 is added, then any positive number for CO2 forcing will cause warming. Anyone can pick a number they want and see the result. But Pat’s emulation equation does the same thing, so why bother with the model at all?
Your explanation sounds like the models are programmed to show no warming trend unless a forcing like CO2 is added. Then the IPCC says they know that most of the warming is human-caused because their models don't show any warming unless CO2 is added. That's painfully circular reasoning.
Even if the models are balanced, the cloud error still remains. It’s like the models are programmed to ignore that uncertainty, but ignoring something doesn’t make it go away. To put it another way, if the models did not ignore the changes in cloud that they predict, they would give results that are +/- 20 C.
Am I missing something?
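As a rough illustration of the emulation-equation point above, here is a minimal sketch (in Python) of a generic linear emulator in which the temperature anomaly simply scales with the accumulated forcing. It is not necessarily Dr. Frank's exact Eq. 1, and the sensitivity and forcing ramp below are made-up illustrative numbers, not values taken from any climate model.

# Minimal sketch of a generic linear emulator (illustrative numbers only).
lam = 0.8                    # assumed sensitivity, K per (W/m^2)
years = 100
dF_per_year = 4.0 / years    # assumed forcing ramp reaching ~4 W/m^2 after 100 yr

T_anom = 0.0
cumulative_F = 0.0
for yr in range(1, years + 1):
    cumulative_F += dF_per_year      # forcing accumulates each step
    T_anom = lam * cumulative_F      # anomaly responds linearly to total forcing

print(f"After {years} years: forcing = {cumulative_F:.1f} W/m^2, warming = {T_anom:.2f} K")

Whatever positive forcing number is fed in, warming comes out, scaled by the chosen sensitivity, which is exactly the commenter's complaint.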
“If the models are forced to be constant before CO2 is added, then any positive number for CO2 forcing will cause warming.”
1) Why?
2) Are models forced in this way?
Why not and who cares?
DA
If the models are written in such a way that increasing CO2 will cause warming, then it would only be a surprise if adding CO2 to a ‘stable’ climate didn’t cause warming!
However, we know that the climate has not been static in the past, so any model that stabilizes isn’t doing a good job of representing internal variability. It is a further indication that the models do a poor job of simulating reality. If they only respond to CO2, that suggests they are doing what the modelers want — ‘proving’ that CO2 is the only parameter causing warming. The models ‘prove’ it by being written to only respond to CO2 forcing.
I don’t understand the focus on climate model errors when the models are programmed incompletely and incorrectly in the first place. There is no way for any modeler to understand the complexities of the surface-atmosphere interface well enough to write a program to describe it, even remotely.
I have run this by Roy before on positive feedback and he took it to a colleague. I cannot remember the response but my point was that positive feedbacks in models are wrong. A positive feedback of the type that can lead to a tipping point is only one part of an amplified system. Without the amplifier, there can be no positive feedback, all that would be available is negative feedback.
Here’s a paper by an engineer, Jeffrey Glassman, who took Gavin Schmidt of GISS to task for stating positive feedback theory incorrectly:
http://rocketscientistsjournal.com/2006/11/gavin_schmidt_on_the_acquittal.html
part 2…
About 1/4 way down the page there is a title, “GAVIN SCHMIDT ON POSITIVE FEEDBACK”
Schmidt describes positive feedback as such: “A positive feedback occurs when a change in one component of the climate occurs, leading to other changes that eventually ‘feeds back’ on the original change to amplify it”.
Glassman takes him apart on that but as someone with expertise with positive feedback I can add my own two-bits worth.
part 3…
Schmidt claims a feedback can amplify an original change, which is sheer nonsense. Feedbacks in feedback systems do not amplify; they are one part of an amplifier circuit. Since there are no amplifiers in the atmosphere, that is not possible.
A positive feedback in an electronic amplifier is a small sample of the output signal fed back to the input IN PHASE with the input signal. Both add. The more common type of feedback in such an amplifier is frequency-sensitive negative feedback, which varies across the amplifier bandwidth, controlling the amplifier gain as it subtracts from the input signal.
part 4…
Reading through the Net, I have come across several descriptions of positive feedback as an amplifier. Not possible.
Example. An amplifier in a public address system is set up in an auditorium. A microphone is partly in front of a speaker. The mic picks up signal from the speaker, which is passed through an amplifier and amplified.
An input from a performer is not even required if the amplifier gain is turned up enough. A simple hiss from the speakers can be picked up, amplified, broadcast through the speaker, then fed back to the mic. The positive feedback is heard as a squeal that becomes more intense per unit time.
All that is required to stop the feedback squeal is to turn off the amplifier. Of course, you can turn down the gain control, move the mic behind the speakers, or use a graphic equalizer (filter) to attenuate the signal frequency at which the feedback occurs.
If Schmidt is using this incorrect definition of positive feedback at GISS, I would imagine all modelers are using it. If so, all models are programmed to show an amplified warming that does not exist and cannot exist.
That’s not to mention the warming effect attributed to CO2, which the same Schmidt has declared as 9% to 25%. That number contradicts the Ideal Gas Law for a trace gas at 0.04% of the entire gas mix. It was obviously pulled from a hat.
The error was in part 3 but I’m not sure what it was. I got a major site error page and added dots between letters till it posted.
You still are so stubborn you pretend radiative physics doesn’t exist.
That makes your claims completely worthless, and not even worth considering.
DA…”You still are so stubborn you pretend radiative physics doesn’t exist”.
Don’t know what you mean by radiative physics. I have never claimed radiation does not exist via electromagnetic radiation. In fact, I have described it in detail.
I described how heat is transferred from the Sun to Earth via radiation, even though the Sun is 93 million miles away, and you kept asking how heat gets here from the Sun.
Seems you’re the one who doesn’t understand radiation.
Anyway, there is no positive feedback in the atmosphere, via radiation or any other mechanism.
Radiative physics = interactions between IR and molecules that make up the atmosphere.
There are very obvious positive feedbacks in the climate system:
1) the water vapor feedback
2) the ice-albedo feedback
There is already evidence for both.
DA…”There are very obvious positive feedbacks in the climate system:
1) the water vapor feedback
2) the ice-albedo feedback”
They are both negative feedbacks, there is no amplification in either.
I want to add to this since it seems to be related to the misunderstanding of positive feedback related to amplification.
“A positive feedback in an electronic amplifier is a small sample of the output signal fed back to the input IN PHASE with the input signal. Both add”.
I should have added that the summed feedback and input signal must then be amplified. That's the point of positive feedback. Through each gain cycle, the sampled output signal increases, and that increase is fed back to the input where it is added again. Each gain cycle forces the output signal higher and higher.
There is nothing in the atmosphere that can do the same.
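For readers unfamiliar with the electronics analogy, here is a small sketch of the loop-gain behaviour described above. It is standard amplifier-feedback arithmetic, not a climate calculation, and the gain and feedback-fraction values are made up purely for illustration.

# Sketch of amplifier feedback: y = A * (x_in + beta * y_prev), iterated.
def run_feedback(A, beta, x_in=1.0, cycles=20):
    y = 0.0
    history = []
    for _ in range(cycles):
        y = A * (x_in + beta * y)   # fed-back sample added to the input, then amplified
        history.append(y)
    return history

# Loop gain A*beta < 1: output settles near the closed-loop value A / (1 - A*beta).
print(round(run_feedback(A=10.0, beta=0.05)[-1], 3))   # -> 20.0 (stable)

# Loop gain A*beta >= 1: output grows every cycle (the feedback-squeal case).
print(run_feedback(A=10.0, beta=0.2)[:5])              # -> roughly 10, 30, 70, 150, 310

Note that the runaway case requires the active gain element A; the feedback path on its own only samples and sums.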
The atmosphere isn’t a cycling circuit.
DA…”The atmosphere isn’t a cycling circuit”.
Well done, DA, no positive feedback is possible in the atmosphere. Glad you agree.
Why is no positive feedback available in an atmospheric system?
Do you think there is?
Why?
Rocketscientistsjournal looks like nothing more than someone publishing a term paper.
It’s junk and not science and deserves to be ignored.
Why is that?
MF has defined the GHE many times. Don’t get wrapped up in his brainless games.
“The GHE is atmospheric insulation.”
Mike Flynn says:
CO2, like any other matter, can absorb and emit energy of any wavelength.
http://www.drroyspencer.com/2017/06/a-global-warming-red-team-warning-do-not-strive-for-consensus-with-the-blue-team/#comment-251611
The atmosphere is an insulator. CO2 is around 90 to 750 times as opaque to some wavelengths of light as N2 or O2.
– Mike Flynn, June 18, 2017 at 3:34 AM
http://www.drroyspencer.com/2017/06/a-global-warming-red-team-warning-do-not-strive-for-consensus-with-the-blue-team/#comment-251624
…the transmittance of the atmosphere increases as the amount of GHGs in it drops.
– Mike Flynn, May 23, 2017 at 5:16 PM
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-247988
Less GHGs less impediment to radiation reaching the surface from the Sun, or being emitted by the surface to outer space.
– Mike Flynn, May 5, 2017 at 9:22 PM
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-245860
DA,
I didn't say the GHE is atmospheric insulation, but lie away if it suits you. The GHE doesn't exist.
As to the other quotes, repeat them more often. Facts might as well be repeated until accepted.
Nobody has yet managed to usefully describe the GHE. Certainly not me, because no description that does not involve magic is possible.
You can lie, evade, complain, and generally carry on like a brainless idiot, but it still won’t make people believe your fantasy. You can’t describe the GHE, can you?
Cheers.
“Less GHGs less impediment to radiation reaching the surface from the Sun, or being emitted by the surface to outer space.”
– Mike Flynn, May 5, 2017 at 9:22 PM
http://www.drroyspencer.com/2017/05/uah-global-temperature-update-for-april-2017-0-27-deg-c/#comment-245860
DA,
Thanks for quoting me. Don’t you agree? Why?
Cheers.
Mike Flynn says:
September 12, 2019 at 9:15 PM
I didn’t say the GHE is atmospheric insulation
Liar.
“The atmosphere is an insulator. CO2 is around 90 to 750 times as opaque to some wavelengths of light as N2 or O2.”
– Mike Flynn, June 18, 2017 at 3:34 AM
http://www.drroyspencer.com/2017/06/a-global-warming-red-team-warning-do-not-strive-for-consensus-with-the-blue-team/#comment-251624
DA,
Liar, liar, pants on fire!
I didn’t say the GHE is atmospheric insulation, you fool. You just keep making that up, hoping nobody will notice you are lying. There is no GHE! That’s why you have to keep lying, and pretending that someone has described it.
Fantasy is not reality, but the fantasist is in the grip of his delusional thinking. Facts make no impression.
Carry on lying and fabricating.
Cheers.
DA…”The atmosphere is an insulator. CO2 is around 90 to 750 times as opaque to some wavelengths of light as N2 or O2.
Mike Flynn, June 18, 2017 at 3:34 AM”
****
Mike’s right. The atmosphere is a very poor conductor of heat because the atoms/molecules are so far apart compared to a good conductor. It is a good absorber of heat (directly from the surface), however, and it tends to retain the heat because it is a poor conductor and poor radiator. That explains the +33C (coupled with the oceans) credited to the GHE, which is a bogus theory.
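For reference, the +33 C figure quoted above is simply the difference between the observed mean surface temperature and the effective radiating temperature given by the Stefan-Boltzmann relation. A quick check, using the commonly quoted round numbers (roughly 240 W/m2 of absorbed solar energy and a 288 K mean surface temperature), which are approximations rather than precise measurements:

# Where the oft-quoted "+33 C" comes from (standard round numbers, approximate).
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed_solar = 240.0      # W/m^2, approximate global mean absorbed solar flux
T_surface = 288.0           # K, approximate global mean surface temperature

T_effective = (absorbed_solar / SIGMA) ** 0.25   # effective radiating temperature
print(f"T_eff = {T_effective:.1f} K, difference = {T_surface - T_effective:.1f} K")
# -> about 255 K, i.e. a ~33 K difference, the number attributed to the greenhouse effect.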
So what did you mean when you wrote, “The atmosphere is an insulator?”
DA,
Exactly what I wrote. What part do you not understand?
Cheers.
Insulators trap heat, right?
DA,
Don’t be stupid. How does a cold room trap heat? Negative heat, perhaps?
Cheers.
The insulator would trap the heat. Insulators work in cold rooms too.
bdgwx, please stop trolling.
BTW, Will Happer quit.
https://www.eenews.net/stories/1061113085
The red team/blue team debate will not happen. It was anti-scientific anyway.
You seem to be very confused about how the scientific method works.
You are confused enough to think the debate didn’t already happen long ago.
You are confused enough to think that science is the outcome of a debating society.
Ho ho.
Cheers.
DA…”Will Happer quit”.
Unfortunately, Trump is too thick to understand the ramifications of failing to confront climate alarm. He’s not as stupid as the media make him out to be but he is not the brightest either. I think Reagan was far more stupid, and Nixon too.
Let’s not forget Bush, Jr.
Not defending Dems. Lyin’ Willie turned out to be pretty damned stupid, and his sidekick Gore.
Of course, we in Canada can't talk, with Trudeau as PM. I am sure he was a good school teacher, but as an intellect, especially in science, the lights are on but nobody is home.
Ah, the “science is settled” argument.
That really shows that you are confused about how science works. The debate is never over, and one falsification is enough to show that a theory needs to be discussed again.
You don’t know science. Scientists aren’t forever debating the accuracy of F=ma.
If you are waiting for CO2 to someday be found not to be a greenhouse gas that causes warming, you are a complete fool.
DA…”If you are waiting for CO2 to someday be found not to be a greenhouse gas that causes warming, you are a complete fool”.
The term greenhouse gas is a misnomer. They are actually infrared active molecules. Initially, it was mistakenly thought that the glass in a greenhouse trapped infrared energy causing the greenhouse to warm. That was disproved by R. W. Wood, an expert and authority on gases like CO2.
BTW…no one ever explained how trapped IR warmed anything.
It was thought for the longest time that such gases in the atmosphere trapped IR, causing the atmosphere to warm. However, Wood showed that a greenhouse warms due to a lack of convection. The glass actually traps molecules of air, which is 99% nitrogen and oxygen, and prevents them from rising naturally, hence the heating in a greenhouse.
Therefore, infrared active gases cannot trap molecules of air and cannot trap heat as does a greenhouse. The atmosphere is rife with convection, it’s not possible to replicate a greenhouse in the atmosphere.
Greenhouses are built to do what the atmosphere cannot do.
“You don’t know science. Scientists aren’t forever debating the accuracy of F=ma.”
You really have no idea, do you?
If somebody proves that F=ma is not accurate, then it loses its value. The reason it is not debated is that nobody has come up with a falsification, not that people are tired of debating it.
“If you are waiting for CO2 to someday be found not to be a greenhouse gas that causes warming, you are a complete fool.”
The other way around. I am waiting for proof that CO2 can cause warming in an open convective environment. That evidence is completely lacking.
CO2 can absorb and re-emit some IR wavelengths but that is also true for a mixture of O2 and N2 or O2 alone:
https://www.nature.com/articles/2091341a0
https://www.nature.com/articles/s41557-018-0015-x
Collision is key.
ron fald…”CO2 can absorb and re-emit some IR wavelengths but that is also true for a mixture of O2 and N2 or O2 alone:”
Thanks for the links. It stands to reason that N2/O2 absorb and emit EM just like any other mass. The focus till now has been on the IR band, and it has occurred to me that scientists are overlooking the absorption/emission of N2/O2 in other bands.
The reference to the difference between CO2, N2, and O2 as molecules refers only to their molecular rotational states. The vibrational state should not be an issue in gases, but the translational state should.
Even though those molecules are classified according to their molecular rotational state, that does not mean individual electrons in the atoms making up the molecules of N2/O2 cannot absorb/emit EM.
The notion is that only CO2 and water vapour are responsible for controlling the temperature of the atmosphere and for radiating energy to space. That strikes me as absurd. If you have a gas made up of 99% N2/O2, with one boundary near the 0 K of space, it strikes me as absurd that the 99% cannot lose heat due to that proximity.
I think something is wrong in the theory and needs to be studied further and more seriously.
GR said…”The notion is that only CO2 and water vapour are responsible for controlling the temperature of the atmosphere and for radiating energy to space.”
No one is claiming that only CO2 and WV control the temperature.
No one is claiming that only CO2 and WV control the radiation to space.
GR said…”That strikes me as being absurd.”
That’s because it is absurd. The claim that is being made is that CO2 and WV plus a bunch of other stuff are contributing factors.
GR said…”I think something is wrong in the theory and needs to be studied further and more seriously.”
Based on the misrepresentation above it is more likely that your understanding of the theory is wrong. At the very least it is essential that you understand what the theory says before you critique it.
bdg…”No one is claiming that only CO2 and WV control the temperature”.
It is certainly being claimed by many that GHGs are the only gases able to radiate EM to space because N2/O2 lack the molecular rotational means to do so. Based on that argument, it would seem N2/O2 must retain their heat forever or channel it through GHGs that make up no more than 0.31% of the overall atmosphere.
Gavin Schmidt of NASA GISS has claimed CO2 alone has a warming factor of 9% to 25%. Where is the evidence for these claims?
Is DA trying to convince himself he understands physics, again?
He’s always fun to watch.
DA once tried to claim ice on either side of a blackbody plate could radiatively warm the plate to 48 C, 118.3 F!
He must enjoy being a clown….
RF,
The pseudoscientific GHE true believers can't even describe the GHE, let alone propose a testable GHE hypothesis. No theory.
No science – just cultist delusion.
Cheers.
Then these no-GHG or weak-GHG theories need a lot of discussing, because they've been falsified ad nauseam. If you disagree, then present one that survives falsification over the entire paleoclimate and instrumental record, including the PETM, other ETMx eras, the magnitude of glacial cycles, the faint young Sun problem, the modern warming era, etc. As it stands today, these various no-GHG or weak-GHG theories are quite terrible; some are so astonishingly bad they can't even survive the most basic falsification attempts.
bdgwx, what are you talking about? Are you playing with straw men again?
What “no-GHG or weak-GHG” theory has been “falsified ad nauseam”?
Here’s why I hope Trump wins again in 2020. According to dumb Dems, Miami will soon be gone due to climate change.
https://www.foxnews.com/media/aoc-miami-few-years
How can anyone possibly be that stupid?
Roy, I am a novice at posting comments, my first one ever being the one above (my naivety leading to the use of the pseudonym billygodzilly; my apologies).
As a result I placed my post in the incorrect location below. It can be removed, as this is the correct location for:
Roy,
The errors cannot be FORCED to cancel, because they are not known until the data for the given iteration step is collected. Basically, for every step the plaintive whine from the adolescent in the back seat, "Are we there yet?", has the answer "No."
Therefore, each step in the future has an uncertainty in its prediction that comes from 1. model insufficiency, and 2. uncertainty in the input coming from the previous step. It is not an error until we have collected the data, but this uncertainty must be carried forward and added to the uncertainty of the application of the model to the current step.
I have posted the following in other discussions on this subject and I offer it here as additional clarification.
Thank You,
Bill Haag
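A minimal sketch of the bookkeeping Bill describes, using the usual root-sum-square rule for combining independent uncertainties at each step. The per-step figure below is illustrative, and whether this rule is the right one to apply to climate model projections is precisely what is in dispute between Dr. Frank and Dr. Spencer.

# Carry a per-step uncertainty forward, combining in quadrature at each step.
import math

sigma_step = 4.0      # assumed per-step uncertainty (illustrative units)
steps = 100

sigma_total = 0.0
for _ in range(steps):
    # uncertainty inherited from the previous step's output, combined with the
    # uncertainty added by applying the model once more
    sigma_total = math.sqrt(sigma_total**2 + sigma_step**2)

print(f"After {steps} steps: +/-{sigma_total:.1f}  (sqrt(N) growth: {sigma_step * math.sqrt(steps):.1f})")

The total grows as the square root of the number of steps, which is why the propagated envelope becomes so large over a century of steps.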
bill…”I have posted the following in other discussions on this subject and I offer it here as additional clarification”.
Makes sense, Bill. We are discussing errors in unvalidated models which are not even programmed with valid scientific facts.
The IPCC have been focused on models because the first co-chair, John Houghton, was a climate modeler.
Validated models can be extremely useful. In electronics, I can model a circuit then build the circuit and validate the model. In climate science, models are essentially useless and they are misleading many people into believing pseudo-science. They are also misleading politicians into implementing Draconian taxes while fueling idiotic climate weenies.
Thank you Roy, your large intellect casts warm rays of knowledge over my body, overpowering my senses.