Additional Comments on the Frank (2019) “Propagation of Error” Paper

September 12th, 2019 by Roy W. Spencer, Ph. D.

NOTE: This post has undergone a few revisions as I try to be more precise in my wording. The latest revision was at 0900 CDT Sept. 12, 2019.

If this post is re-posted elsewhere, I ask that the above time stamp be included.

Yesterday I posted an extended and critical analysis of Dr. Pat Frank’s recent publication entitled Propagation of Error and the Reliability of Global Air Temperature Projections. Dr. Frank graciously provided rebuttals to my points, none of which have changed my mind on the matter. I have made it clear that I don’t trust climate models’ long-term forecasts, but that is for different reasons than Pat provides in his paper.

What follows is the crux of my main problem with the paper, which I have distilled to its essence, below. I have avoided my previous mistake of paraphrasing Pat, and instead I will quote his conclusions verbatim.

In his Conclusions section, Pat states, “As noted above, a GCM simulation can be in perfect external energy balance at the TOA while still expressing an incorrect internal climate energy-state.”

This I agree with, and I believe climate modelers have admitted to this as well.

But he then further states, “LWCF [longwave cloud forcing] calibration error is ±114 × larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!

Why?

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”. Data available from https://climexp.knmi.nl/

Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives. Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.
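
Spencer’s point about one-time calibration can be put in numbers with a toy zero-dimensional energy-balance model: a single tuning offset, fit once against the control-run average, removes a component flux bias from the baseline, and the warming response to an added forcing is then independent of that bias. This is only an illustrative sketch; the feedback parameter, bias values, and 2xCO2 forcing below are stand-ins, not taken from any GCM.

```python
# Toy zero-dimensional energy balance: C dT/dt = F + b + k - lam*T.
# b is a fixed cloud-flux bias (W/m^2); k is a tuning offset calibrated ONCE
# against the pre-industrial control run; lam is the feedback parameter (W/m^2/K).

def equilibrium_warming(forcing, bias, lam=1.2):
    tune = -bias  # one-time calibration: control run (F = 0) balances at T = 0
    control_T = (0.0 + bias + tune) / lam      # equilibrium of the control run
    forced_T = (forcing + bias + tune) / lam   # equilibrium of the forced run
    return control_T, forced_T

# Wildly different flux biases give the same warming for the same sensitivity.
for b in (-4.0, 0.0, 4.0):
    ctrl, warm = equilibrium_warming(3.7, b)   # 3.7 W/m^2 ~ 2xCO2 forcing
    print(f"bias {b:+.1f} W/m^2 -> control {ctrl:.2f} K, warming {warm:.2f} K")
```

Every bias value gives the same equilibrium warming, forcing/lam, which is the sense in which a tuned-out bias “cancels” in the projection.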

The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.

Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).

That’s partly why different modeling groups around the world build their own climate models: so they can test the impact of different assumptions on the models’ temperature forecasts.

Statistical modelling assumptions and error analysis do not change this fact. A climate model (like a weather forecast model) has time-dependent differential equations covering dynamics, thermodynamics, radiation, and energy conversion processes. There are physical constraints in these models that lead to internally compensating behaviors. There is no way to represent this behavior with a simple statistical analysis.

Again, I am not defending current climate models’ projections of future temperatures. I’m saying that errors in those projections are not due to what Dr. Frank has presented. They are primarily due to the processes controlling climate sensitivity (and the rate of ocean heat uptake). And climate sensitivity, in turn, is a function of (for example) how clouds change with warming, and apparently not a function of errors in a particular model’s average cloud amount, as Dr. Frank claims.

The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.

The above represents the crux of my main objection to Dr. Frank’s paper. I have quoted his conclusions, and explained why I disagree. If he wishes to dispute my reasoning, I would request that he, in turn, quote what I have said above and why he disagrees with me.


157 Responses to “Additional Comments on the Frank (2019) “Propagation of Error” Paper”


  1. Grant F says:

    Dr Roy.
    I’ve been watching criticism of this paper from various angles for the past week. I approached it by working backwards from this fundamental point Pat Frank made:

    “The unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.”

    I’m not sure how people can have legitimate criticism of that conclusion, considering the fact that cloud forcing has a ±4 W/m² range of error. It seems most people are hung up on the end temperature numbers. This is irrelevant, as the main point of the paper, IMHO, is the error uncertainty from this year, next year, and the subsequent years. I’m by no means trained enough to dive deep into statistical mathematics, but looking at the argument with layman’s eyes, I definitely understand the underlying unknown in climate models.

    Regardless, your critique was professionally considered, which is very refreshing considering the other side of the climate debate tends to go straight for the ad hominem route.

    Cheers
    Grant

    Grant:
    I have substantially modified the post you are responding to. You might want to re-read it, although it might not change the thrust of your comments. -Roy

  2. Scott R says:

    Dr Spencer,

    Why are all the models assuming we are in energy equilibrium as a start point? Clearly, the earth is never in equilibrium. Not only that, but our measurement of the system at any snapshot in time at particular locations is completely dependent on the distribution of energy in the earth’s system, which is constantly changing. Get a lot of El Ninos, our energy state looks high. La Ninas? Our energy state looks low. Despite that, the total energy remains the same. How can we expect our models to perform if we don’t include the natural forcers and distribution of energy properly?

    Some natural forcers in order of period:
    Day / Night
    Seasonal
    El Nino / La Nina (2.2 year, 3.6 year solar harmonics)
    11 year solar cycle / long term ENSO
    AMO / PDO
    400 year solar cycle
    magnetic field / cosmic rays
    orbital changes (getting more circular)

    How all of these things impact the clouds, jet streams is absolutely critical. Perhaps it isn’t CO2 that is controlling our temperature trends, but the amount of snow falling on Antarctica. More snow, more ice flows, more cold air and ocean, more La ninas. Cosmic rays have been increasing along with precipitation. Records indicate outflows in Antarctica have been increasing in the last 40 years up to the current average precipitation rates. Greenland outflows are increasing to match increased snow falls there. These increased outflows will cool the planet in theory.

    Looking at the total energy budget for the planet, sure the oceans are slightly warmer than in 1980 (except southern ocean), but you have more ice in Greenland and Antarctica. So the total energy for the system hasn’t necessarily increased, the distribution of energy has changed.

    Thank you for your work on the UAH dataset, which treats the heat island effect fairly, unlike NOAA and NASA, which use heat islands to draw some very red maps again and again to scare everyone.

    • Without going into the details you list, I completely agree that it is a silly assumption that the climate system is always in energy balance unless humans interfere. I have said this for many years, blogged on it repeatedly, and discussed it in my books.

      But it is a separate subject from what I am blogging on.

    • Scott R says:

      One last point… the linear tide gauge trends at every location around the world back up the fact that the heat content of the globe has not changed due to anything we are doing. Ice at the poles is creating pops and drops along the trend, but isn’t making the trend. The sea level rise looks geological to me… or perhaps it is reacting to a very long term natural trend related to the earth’s orbit.

      • Yes, what you say could be true. I’ve blogged on this before. Even if there has been a recent acceleration of sea level rise that is due to humans, that component amounts to only 1 inch every 30 years.

        One interesting aspect of SLR is that the ocean heat content can remain constant, yet we can get sea level rise from thermal expansion. The reason why is because the thermal expansion coefficient of seawater is temperature-dependent (it goes up with increasing temperature). So, if there was a slight reduction of the ocean’s overturning circulation, the warm surface layers would warm considerably, the deep cold layers would cool very slightly, and the net result would be sea level rise.
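
        The mechanism can be sketched with a toy two-layer ocean: hold total heat content fixed, warm the thin surface layer, cool the thick deep layer to compensate, and the temperature dependence of the expansion coefficient still yields net sea level rise. The layer depths, temperatures, and the linear alpha(T) fit below are illustrative assumptions, not a real seawater equation of state.

```python
# Two-layer ocean sketch: total heat content held fixed while a circulation
# slowdown warms the thin surface layer and slightly cools the thick deep layer.

def alpha(T):
    """Rough thermal expansion coefficient (1/K), increasing with T (deg C)."""
    return 5e-5 + 1.0e-5 * T   # crude linear stand-in, not a seawater EOS

h_surf, h_deep = 500.0, 3500.0        # layer thicknesses (m)
T_surf, T_deep = 18.0, 2.0            # layer temperatures (deg C)

dT_surf = 0.5                         # surface warming (K)
dT_deep = -dT_surf * h_surf / h_deep  # deep cooling keeping heat content fixed
# (equal heat capacity per metre assumed: h_surf*dT_surf + h_deep*dT_deep = 0)

rise = h_surf * alpha(T_surf) * dT_surf + h_deep * alpha(T_deep) * dT_deep
print(f"deep cooling: {dT_deep:.4f} K, sea level change: {rise * 1000:.1f} mm")
```

        Because alpha is larger for the warm surface water than for the cold deep water, the surface expansion outweighs the deep contraction even though no heat was added.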

        • Scott R says:

          Dr Spencer… why isn’t the media telling the truth about sea level rise, that the majority of it is natural? No matter what we do, we need to regulate new construction on the coast.

          For a more accurate measurement of the man-made component of sea level rise, we would need to set up a standard deviation from the linear trend and see if there are more data points falling outside of that standard deviation now than before, for all stations around the world. If it is determined that we have more data points north of 1 sd (also considering we have more stations reporting now), we can then separate sea level into two components, natural and man-made. The natural component is sure to be MUCH larger than man-made, by a factor of at least 10:1, just eyeballing these charts, in my opinion. I haven’t done the work, but that is the impression I have from reviewing the NOAA data.

          Very interesting. I wonder… the earth’s magnetic field has recently been weakening. Could this be slowing down the ocean circulation? That would cause ice to build at the poles, ocean surface warming, and obviously land surfaces warming especially at night. During the day, the temperatures are not making new highs as the combination of warmer surface waters and cosmic rays are causing cloud formations to reflect the daytime solar energy. Do you know if the day / night split is a global phenomenon or is it just happening in the US?

          I definitely think that coming off the modern solar maximum has reduced the trade winds and also contributed to less upwelling. We see that strong La Nina usually accompanies the start of a solar cycle… also right after the sun reverses polarity.

  3. Math says:

    This is probably not scientifically correct but I take the chance: If I understand Dr. Frank correctly, he tries to estimate the error the IPCC models have because they exclude all sources for temperature changes except anthropogenic greenhouse gases. This error will not show up in the models since they have all removed the influence of other sources. Therefore the size of the error is not of any other interest than as a comparison with the predicted temperature change. Since the error is larger than the predicted temperature change, the models are useless. Am I wrong?

  4. Math says:

    Ok, is this what he should claim? 🙂

    • Pat Frank says:

      Math, I just show that the theory error manifest in longwave cloud forcing error propagates through the sequence of step-wise calculations that comprise a climate simulation.

      The annual per-model average error, (+/-)4 W/m^2, is a model calibration error. There are two elements to it. One is that it is (+/-)114 times larger than the annual change in CO2 forcing.

      Second is that it is a theory error that enters every single calculational step of a step-wise simulation. When it is propagated through those steps, it grows so large that the projected air temperature is physically meaningless.
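
      The root-sum-square propagation described here can be sketched in a few lines: a constant per-step calibration uncertainty, compounded through N sequential steps, grows as sqrt(N) times the per-step value. The per-step temperature uncertainty used below is purely illustrative, not a figure from the paper.

```python
import math

# Root-sum-square (quadrature) propagation of a constant per-step uncertainty
# through a step-wise calculation: u(N) = sqrt(sum of sigma_i^2) = sigma*sqrt(N).

def propagated_uncertainty(sigma_step, n_steps):
    return math.sqrt(sum(sigma_step ** 2 for _ in range(n_steps)))

sigma_step = 1.8  # illustrative per-step uncertainty (K), not the paper's value
for years in (1, 25, 100):
    u = propagated_uncertainty(sigma_step, years)
    print(f"after {years:3d} steps: +/- {u:.1f} K")
```

      The envelope grows without bound as steps accumulate, which is the heart of the disagreement: Spencer argues in the post above that physical constraints in the models keep errors from compounding this way.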

  5. There are no REAL climate models.

    There are only computer games that some people call “climate models”.

    REAL CLIMATE MODELS MAKE RIGHT PREDICTIONS.

    COMPUTER GAMES MAKE WRONG PREDICTIONS.

    A real climate model that makes right predictions must be constructed from an accurate physics model, of exactly what causes climate change ( with global average temperature as a questionable proxy for the “climate” of our planet).

    There is no accurate physics model of climate change.

    Therefore, a real climate model that makes right predictions is impossible to construct.

    The above logic is simple.

    Understanding climate change physics is very complicated.

    The Russian “climate model”, that SEEMS TO make decent predictions, appears to be just a null hypothesis “model”, predicting “more of the same” … and also can’t be trusted because the scientists involved are obviously colluding with comrade Donald Trump.

    My climate science blog, for non-scientists:
    http://www.elOnionBloggle.Blogspot.com

    • Richard, I assume you are joking about Russian climate scientists colluding with Trump?

      • Richard Greene says:

        Yes

        I’ll have you know that 56.7% of my jokes are immediately recognized as jokes,
        and 14.9% of the time people ask: “Was that a joke?”

        As I wrote in my politics blog in 2016, Trump has never colluded with anyone !

        He barely colluded with the Republican Party, and he was supposed to collude with them!

        The claim that Trump colluded with Russians, or anyone else, was ridiculous in 2016, and even more ridiculous after four investigations, but many Dumbocrats I know still believe it.

        Leftist beliefs require no evidence.

        For them, the alleged “coming climate change crisis” requires no evidence !

        • raygun says:

          Ya gotta love it.

        • jorgekafkazar says:

          “Barely colluded with the Republican Party…”

          Yeah, and that’s mostly a good thing. The GOP was starting to slide too far Leftward.

        • Gordon Robertson says:

          richard…”Leftist beliefs require no evidence”.

          I am a leftist, actually a Socialist, and I don’t think Trump colluded with the Russians. In fact, I don’t think there is any proof that the Russians interfered at all. It was strictly sour grapes from the Democrats, especially Hillary Clinton’s camp.

          With regard to anthropogenic global warming, I don’t think humans have anything to do with it. I think AGW is a politically-correct load of nonsense. Ironically, it was dreamed up by an uber-right-winger, former UK PM Margaret Thatcher. She was trying to control the UK coal mining unions and an advisor urged her to use her degree in chemistry to baffle the minions at the UN re the effect of coal emissions on climate.

          It would be nice if people would stop jumping to conclusions about the left. It’s as bad as a left-leaner presuming all right-wingers are like Hitler, Mussolini, and Stalin. And please don’t tell me they were leftists. Every one of them was a fascist, the epitome of the extreme right-wing.

          If you took the time to understand who left-wingers really are you’d find a broad spectrum of views. The politically-correct who run around these days lording it over everyone are neither leftists nor socialists. They are some kind of freak sideshow who the late Michael Crichton described as urban atheists with a new religion.

          • Stephen P Anderson says:

            Gordon,
            Mussolini, Stalin and Hitler were not on the right. They were all leftists. To say they were on the right is a complete denial of who the left really is. Mussolini started the fascist movement. Before proclaiming himself a fascist he was a member of the communist party and corresponded with Lenin. But like Hitler he was a national fascist and not a global fascist. Like Hitler he did not want to take his orders from Moscow, and so when he became a fascist he denounced communism. But make no mistake: read his platform and the Nazi Party platform, and there is no mistake that they were from the left.

          • Stephen P Anderson says:

            Gordon, Nazi Party platform. Is it of the right or of the left?

            Unification of Greater Germany (Austria + Germany)
            Land + expansion
            Anti-Versailles – abrogation of the Treaty.
            Land and territory – lebensraum.
            Only a “member of the race” can be a citizen.
            Anti-semitism – No Jew can be a member of the race.
            Anti-foreigner – only citizens can live in Germany.
            No immigration – ref. to Jews fleeing pogroms.
            Everyone must work.
            Abolition of unearned income – “no rent-slavery”.
            Nationalisation of industry
            Division of profits
            Extension of old age welfare.
            Land reform
            Death to all criminals
            German law, not Roman law (anti- French Rev.)
            Education to teach “the German Way”
            Education of gifted children
            Protection of mother and child by outlawing child labour.
            Encouraging gymnastics and swimming
            Formation of a national army.
            Duty of the state to provide for its volk.
            Duty of individuals to the state

          • Lou Maytrees says:

            Stephen P Anderson,

            Fascism and Nazism are authoritarian rules of government, and Communism, the communal sharing of most goods, is on the opposite end of the political spectrum.

            Fascism in its true form is an extreme right-wing ideology; communism in its pure form is far more liberal.

            But Stalin’s form of ‘communism’ was not liberal, his was an authoritarian form of gov’t.

            If your premise is correct and Stalin and Hitler were “leftist’s” as you call them and of the same ideology, why did Hitler attack Russia in 1941? Why didn’t Russia join the Axis Powers w Germany and Italy? Why was Russia a member of the Allies?

            Liberal is tolerant of new behaviors, Fascism is authoritarian and not tolerant.

          • Lou Maytrees says:

            Stephen P Anderson,

            Your ‘Nazi Platform’ is textbook extreme right wing here in America.

          • I’m sorry my lame joke about the Russian climate model started a political discussion, but I’ll add to it anyway.

            The Russians did influence the 2016 election, harming Trump, but not enough to change the results.

            Their primary influence was Russian disinformation about Trump fed to Christopher Steele (he stated his sources were mainly Russians), who was working for Fusion GPS; it was included in the “Trump Dossier” that was purchased by the Hillary Clinton campaign.

            The fake (at least 90%) information in that “Dossier” was used by the Clinton campaign to attack Trump as colluding with Russians (not true) and used by the FBI to spy on the Trump campaign and open a counterintelligence operation investigating Trump — both in July 2016, well before the 2016 election.

            Dumbocrats pushed the Russian collusion delusion for three years and many still believe it.

            The resulting Mueller Investigation divided the country for almost two years.

            Mueller knew he had no evidence of collusion in mid-2018, but kept quiet, allowing voters in the 2018 election to assume anything bad about Trump that they heard from the mainstream media, and that was lots of speculation, certainly enough to hurt Republicans in the 2018 elections.

            Leftists are fascists — they think they are always “right” — they “debate” with character attacks — if you say the “wrong” thing about climate change, they attack — you must think and speak as they say, or face consequences. That’s fascism.

          • Lou Maytrees says:

            Richard Greene,

            Fascism is an extreme right wing authoritarian political ideology, it’s not “leftist’ – it’s rightist.

            ‘Leftists attacking’ in words about climate change is NOT Fascism, it’s simple free speech. Your metaphor is literally not applicable to Fascism.

          • Lou Maytrees says:

            Stephen P Anderson, Richard Greene, et al.,

            It’s too easy to prove you wrong about your ‘fascism’.

            The only political party in the US to have Fascists and Nazis in it is the conservative right wing Republican Party.

            The only political party in the US that embraces Fascists and Nazis is the conservative right wing Republican Party.

            The Libertarian Party, another right wing arm in the US, also embraces Fascists and Nazis.

            That’s bc Fascism is right wing ideology, like it or not.

          • Nate says:

            Ugghh.

            We had the exact same conversation before. Stephen’s erroneous views were thoroughly debunked.

            He had no answers.

            Stephen is willfully ignorant. He confuses Authoritarianism with Leftism.

            So he waits a month, brings it up again as if it were never discussed before.

            Again:

            Hitler PRIVATIZED several govt owned businesses- opposite of what Leftists do.

            Hitler destroyed LABOR UNIONS, opposite of LEFTISTS, who support labor unions.

            Hitler was a NATIONALIST, opposite of LEFTISTs who are INTERNATIONALISTS.

            Hitler had a deep hatred of ethnic diversity. LEFTISTS embrace diversity.

          • Nate says:

            You guys simply eat-up right-wing historical revisionism.

            Antifa – anti-fascists. So they must be right-wingers?

            “Only a ‘member of the race’ can be a citizen.
            Anti-semitism – No Jew can be a member of the race.
            Anti-foreigner – only citizens can live in Germany.
            No immigration – ref. to Jews fleeing pogroms.”

            Is it the RIGHT or the LEFT that has these policy goals TODAY?

            “Nationalisation of industry”

            False, the opposite happened under Hitler.

            “Extension of old age welfare.”

            So what? Many countries at the time were adopting such progressive policies, including the US.

            “Land reform”

            Communists yes, Nazis No.

            “German law, not Roman law (anti- French Rev.)
            Education to teach ‘the German Way’
            Encouraging gymnastics and swimming
            Formation of a national army.
            Duty of the state to provide for its volk.
            Duty of individuals to the state”

            All examples of Nationalism and Authoritarianism, not LEFTISM.

        • Thomas says:

          Two points, Mr Greene.

          1. For them, the alleged coming climate change crisis requires no evidence !

          Well, to be precise, for them neither “coming”, “climate”, “change”, “crisis”, nor any combination of these requires any evidence.

          2. Leftist beliefs require no evidence.

          I wish it was only “leftist beliefs” or, even, that these beliefs were leftist at all.

        • gbaikie says:

          –As I wrote in my politics blog in 2016, Trump has never colluded with anyone !

          He barely colluded with the Republican Party, and he was supposed to collude with them!–

          But, maybe the Russians colluded with Trump.

          One could say without any doubt that CNN colluded with Trump.
          Or CNN handed Trump the election by only talking about Trump.
          This is an accepted fact.
          And they still can’t talk about anything else.

          The Republican party colluded with Trump by failing to take him seriously; they conceded Trump an open field, and Trump simply took it.
          Trump told the Reps, their problem, low energy.
          And everything Trump did was related to reading polls [and understanding the polls] and taking a position that aligned to what his voters wanted.
          Dead simple; even a non-politician can do that.

          Trump makes people collude with him.

          What Rep wouldn’t want the dems endless talking about totalitarian socialism?
          Complete with bonkers stuff like eliminating airplanes, getting rid of cows, and government spending tens of trillions of dollars on things the public doesn’t even want.
          Dems are doing everything that Trump could wish them to do- isn’t that collusion?

          China is also colluding with Trump.
          What are The Chinese leadership thinking about regarding what they are doing with Hong Kong?
          Obviously they are shooting themselves in the foot and showing the world they are unable to follow an agreement.

          And at same time, they are in trade negotiations with Trump {the Destructor}.
          Could Trump ask them for anything more than this?

          Trump: I want to destroy China, what can you do for me, China?
          China: Well we can start riots in Hong Kong that goes on for months.
          Trump: That should get us to the point of destroying China as soon as possible.
          Meanwhile I am going to get a supermajority of Congress supporting me in this trade war with China.
          And the drugs China is smuggling through our southern border, causing tens of thousands of Americans to die per year, are helping a lot in gaining strong and solid support from Congress.

          • gbaikie
            You use the word “collusion” in ways that don’t resemble my use of “collusion”, or any definition of the word in a dictionary, so I don’t know how to respond.

          • gbaikie says:

            collusion:
            noun
            secret or illegal cooperation or conspiracy, especially in order to cheat or deceive others.
            “the armed forces were working in collusion with drug traffickers”

            LAW:
            illegal cooperation or conspiracy, especially between ostensible opponents in a lawsuit.

            Why the Dems, Russians, and Chinese would want to cooperate with Trump is a bit of a mystery.
            But it is also true that the Dems, Russians, and Chinese are actually and actively cooperating with Trump.

            But why they do this in a manner that appears to help Trump and be quite suicidal for themselves, is a mystery to grapple with.
            I tend to pick the simple answer that they are just dead stupid and therefore they don’t have much input into the “conspiracy”.

            But it’s possible the Dems, Russians, and Chinese have an interesting plan which involves only appearing to be as stupid as bricks.

        • Richard Greene says:

          Reply to Lou “Forrest Gump” Maytrees

          You wrote:
          “The Libertarian Party, another right wing arm in the US, also embraces Fascists and Nazis”

          Mr. Gump, you have no idea what you are saying !

          Libertarians believe in minimum government and maximum personal freedom.

            Fascists, Nazis, communists, socialists, and Dumbocrats believe in strong to very strong governments, and less liberty.

          We’ll call on you again, Mr. Gump, when we want wrong answers !

          • Michael blazewicz says:

            Political parties are like religion… they pick and choose what suits them in the environment they are given. And a democracy embraces the changes in the populace or becomes irrelevant. The problem comes when the country is divided down the middle on an issue, so you either get a totalitarian government to solve the issue or somebody like Trump. Trump has never needed to collude with anybody; he just asks them to do his dirty work on his behalf. It costs nothing, and he can wipe his own hands. Just read his conversation with the Ukrainian President: he makes his offer of aid conditional on cleaning up corruption. Then he suggests that Biden’s son was involved in dodgy dealings, as was Joe, and that he should look into that. What person would not believe that the aid and Biden were linked? Only an idiot.

  6. Ted Gilles says:

    Speaking of books, I think the fourth National Climate Assessment of November 2018 should be nominated for the Pulitzer fiction award. Their claim of a 40% increase in the atmospheric concentration of CO2 is a bad joke.

    • 40% increase in the atmospheric concentration of CO2 since the Industrial Revolution is true and few are disputing it without resorting to things that can easily be countered, such as the paper by Beck. The results of this 40% increase of CO2 are what the debate should be about, including how much or how little the natural feedbacks to the direct effect of this CO2 reinforce (or arguably counter) the direct effects of this CO2 increase.

      • Ted Gilles says:

        The 40% increase in the atmospheric concentration of carbon dioxide (CO2) as calculated on page 81 of Volume 1 of the fourth National Climate Assessment (NCA4) is wrong. It is derived as 390/278 = 1.40 = 40% increase. The units are actually parts of CO2 per 1,000,000 parts of air. The correct calculation is (390 − 278)/1,000,000 = 112/1,000,000 = 0.000112 = 0.0112%, or rounded to 0.01%. To verify, please see the NASA Mauna Loa website AND the NASA Composition of Air website, which currently shows 0.039% CO2 (the 390 ppm in NCA4), up from 290 ppm (0.029%) 100 years ago, for an increase of 0.01%.

      • barry says:

        Ted,

        “The 40% increase in the atmospheric concentration of carbon dioxide (CO2)… is wrong”

        No, it’s correct.

        CO2 concentration has increased by 40% of its concentration just prior to the industrial revolution.

        You are comparing its increase to the sum total of all gases in the atmosphere.

        You are changing the frame of reference and pretending that it’s the same frame of reference – changing the denominator and claiming it is the same denominator.

        My height has increased by 286% since the day I was born (50 cm –> 193 cm)

        An analogy here would be to compare my height change to the height of the troposphere, and claim my height change from 50cm to 193cm is actually a change of 0.013%. That’s exactly what you’re doing WRT CO2 change, above.
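
        The two calculations being argued over differ only in their denominator, which a few lines of arithmetic make explicit (using the figures quoted in this thread):

```python
# Two ways of expressing the same CO2 change, differing only in the denominator.
co2_1750 = 278.0  # ppm, pre-industrial value quoted from NCA4
co2_2011 = 390.0  # ppm

# Relative to CO2's own earlier concentration: the 40% figure.
rel_change = (co2_2011 - co2_1750) / co2_1750 * 100
print(f"relative increase: {rel_change:.0f}%")

# Relative to the whole atmosphere: the 0.01%-of-air figure.
share_change = (co2_2011 - co2_1750) / 1_000_000 * 100
print(f"change as a share of all air: {share_change:.4f}%")
```

        Both numbers are arithmetically correct; the dispute is over which denominator the word “concentration” refers to.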

        • Ted Gilles says:

          As I said, check these references:

          http://www.esrl.noaa.gov/gmd/ccgg/trends/
          Data are reported as a dry air mole fraction defined as the number of molecules of carbon dioxide divided by the number of all molecules in air, including CO2 itself, after water vapor has been removed. The mole fraction is expressed as parts per million (ppm). Example: 0.000400 is expressed as 400 ppm. The Mauna Loa data are being obtained at an altitude of 3400 m in the northern subtropics.

          http://www.nesdis.noaa.gov/content/peeling-back-layers-atmosphere
          “Air” is the common name given to the combination of gases used by organisms for breathing and photosynthesis. By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide and smaller amounts of various other gases as well as varying amounts of water vapor.

        • barry says:

          “Data are reported as a dry air mole fraction defined as the number of molecules of carbon dioxide divided by the number of all molecules in air”

          Yes, that is what that website is reporting.

          But we are comparing CO2(a) to CO2(b).

          The difference is a 40% increase in atmospheric CO2 between 1750 and now. That is a factual statement.

          Your comparison uses the entire atmospheric mass as the denominator. You are free to look at it that way, but it doesn’t change the fact of the above statement one bit.

          We could, if we wished, estimate the amount of change in percentage terms, using all the atoms in the universe as our denominator, and get a super tiny fraction of a percent change. Perhaps you would like to work that one out? I’d be curious just how small that percentage would be.
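[Editorial note: the two framings in this exchange can be put side by side in a few lines of Python. Both numbers are arithmetically sound; they simply divide the same 112 ppm change by different denominators. The helper names are mine, for illustration only.]

```python
# Two framings of the same CO2 change (278 ppm in 1750 -> 390 ppm in 2011).
# The dispute is purely about which denominator is used.

def relative_increase(old_ppm, new_ppm):
    """Change relative to CO2's own pre-industrial concentration (NCA4's framing)."""
    return (new_ppm - old_ppm) / old_ppm

def share_of_atmosphere(old_ppm, new_ppm):
    """Change as a fraction of ALL air molecules (Ted's framing)."""
    return (new_ppm - old_ppm) / 1_000_000  # ppm -> fraction of whole atmosphere

print(f"{relative_increase(278, 390):.1%}")    # 40.3% -- the "40% increase"
print(f"{share_of_atmosphere(278, 390):.4%}")  # 0.0112% of the whole atmosphere
```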

          • Ted Gilles says:

            This is an important debate and is probably indicative of the understanding, and misunderstanding, of the fourth National Climate Assessment (NCA4) November 2018 report. The current posting of Volume 1 page 82 states:

            The global average CO2 concentration has increased by 40% over the industrial era, increasing from 278 parts per million (ppm) in 1750 to 390 ppm in 2011; it now exceeds 400 ppm (as of 2016) (http://www.esrl.noaa.gov/gmd/ccgg/trends/).

            The key emphasized word is “concentration.”

            From Wikipedia, the free encyclopedia:
            In chemistry, concentration is the abundance of a constituent divided by the total volume of a mixture.

            I don’t question that the finite amount of CO2 increased 40%, but the concentration change, as stated in NCA4, and which is the important issue for trapping heat, is 0.01%; the NOAA Composition of Air website currently shows 0.039%, up from 0.029% (290 ppm) 100 years ago. The effective amount of change in CO2 becomes the basic premise of the entire document. Since the math is really so fundamental, I believe the authors deliberately made this distortion in an attempt to enhance their anthropogenic climate change (ACC) hypothesis.

          • Nate says:

            “deliberately made this distortion in an attempt to enhance their anthrop”

            Skeptics say the darndest things!

            Ted, it is written in plain English what they mean. No distortion whatsoever.

    • barry says:

      The 40% increase is pretty much indisputable, and attempts to dispute it fail under even cursory scrutiny.

    • bdgwx says:

      Not even the most hardened contrarians dispute the increase in CO2. And among AGW skeptics, most fully acknowledge that this increase is primarily the result of anthropogenic activity.

  7. Steven Mosher says:

    Roy.

    you might want to remind folks what happens when models do not have cancelling biases.

    The control runs do what Pat thinks.
    Basically, model spin-up proves Pat wrong.

    • Steven:
      I think I have already stated that, with somewhat different words.

    • Pat Frank says:

      Right, Steven Mosher: model spin-up solves everything.

      Well, everything apart from an erroneous equilibrated base-state climate.

      Apart from that, everything is solved by spin-up.

      Well, except for the theory-error injected into every simulation step.

      So, except for an erroneous base-state, and continual initial values errors in every single simulation step, spin-up solves everything.

      Except there is the problem that every projected climate state has accumulated error of unknown magnitude.

      So, except for an erroneous base-state, continual step-wise simulation error, and an unknown and unknowable error in projection phase-space trajectory, spin-up solves everything.

      You’ve got it figured out, Steve.

      • Pat:

        There is no “accumulated error” in the bias sense. This only exists in your imagination, not in the models, as I have clearly demonstrated from their output.

        Now, there *IS* an accumulated random error (random walk) component, but even that is constrained to not affect temperature trends very much by net negative feedback from the Planck Effect (sigma T^4).

        Your claims are demonstrably false, from the models themselves. You can assert all of the “accumulated error” arguments you want, but the model output proves you are wrong.

        You should have recognized this before you started your years-long journey down this very deep rabbit hole.

        -Roy
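[Editorial note: Roy's Planck-constraint argument can be illustrated with a toy energy-balance model. This is a sketch, not a GCM: the effective Planck feedback (~3.3 W/m^2/K) and the ~0.15 K per W/m^2 per year mixed-layer response are round-number assumptions, and `simulate` is a hypothetical helper.]

```python
import random

# Toy illustration (not a GCM): a random annual flux error of a few W/m^2
# random-walks the temperature only when nothing restores it. A Planck-like
# restoring feedback (outgoing LW rising with T) keeps excursions bounded.
# Assumed round numbers: lam ~ 3.3 W/m^2/K effective Planck feedback,
# gain ~ 0.15 K per (W/m^2) per year for a ~50 m ocean mixed layer.

def simulate(years, lam, gain=0.15, noise_sd=2.0, seed=42):
    rng = random.Random(seed)
    dT, path = 0.0, []
    for _ in range(years):
        flux_error = rng.gauss(0.0, noise_sd)   # random W/m^2 flux error
        dT += gain * (flux_error - lam * dT)    # restoring term -lam*dT
        path.append(dT)
    return path

unconstrained = simulate(1000, lam=0.0)  # pure random walk in T
constrained = simulate(1000, lam=3.3)    # Planck-damped

print(max(abs(t) for t in unconstrained))  # typically wanders to several K
print(max(abs(t) for t in constrained))    # stays bounded, order 1 K or less
```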

        • Pat Frank says:

          All you’ve shown, Roy, is that the models are all energy balanced at the TOA and all give similar responses to CO2.

          That’s not the same as the models giving a correct answer. Nor is it the same as the answer having no physical uncertainty.

        • Nate says:

          Dr Frank,

          ‘That’s not the same as the models giving a correct answer. Nor is it the same as the answer having no physical uncertainty.’

          I just don’t understand how your uncertainty is manifested?

          If it doesn’t show up in the actual predicted temperatures of the models, how can you justify drawing the huge uncertainty envelope on the temperatures?

          I agree with Roy that feedbacks that are real and physics-based (not artificial) will keep any random-walk tendency of the temperature under control.

          If so, then what’s the measurable effect of your propagated error?

          • angech says:

            “I just don’t understand how your uncertainty is manifested?
            If it doesn’t show up in the actual predicted temperatures of the models, how can you justify drawing the huge uncertainty envelope on the temperatures?”
            Think of doing millions of coin tosses and plotting where you will end up.
            Heads positive, tails negative.
            Then run some models emulating it.
            You might toss millions of heads in a row at the start.
            That fits in the huge uncertainty envelope and is valid.
            Or you could toss millions of tails in a row.
            In the huge envelope equally valid.
            Or you could toss head tails alternately and end up even with a tail.
            Equally valid, it is a huge envelope.
            The envelope gets twice as big each throw.
            That is what uncertainty is.
            Now as to the predicted models, they should be all over the shop but fit inside the envelope. The longer you go, the less likely you will be to stop exactly even.
            Unless of course you put a constraint in your model.
            I.e after 10 heads or tails in a row go back to zero.
            Climate models do this two ways, rebalancing everything a little bit, no big fiddles, to get the TOA back to regulation.
            Except for CO2. This gives a little trend upwards, positive heads accommodated by tweaking the TOA up a little as well.
            After all it does have to increase if everything else is equal.
            Pat Frank is saying in a computer model mathematically this spread of uncertainty must exist.
            Roy is using real world physics to say there are constraints preventing this, theoretically, from happening in the actual world.
            They are talking past each other.
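[Editorial note: angech's coin-toss picture can be checked directly. One detail worth correcting: the statistical envelope of a fair random walk grows as the square root of the number of tosses, not by doubling each throw. A quick simulation, with hypothetical helper names:]

```python
import math, random

# Coin-toss random walk: +1 for heads, -1 for tails. The *possible* range
# after n tosses is +/-n, but the 1-sigma statistical envelope of likely
# end points grows as sqrt(n) -- it does not double with each throw.

def final_positions(n_tosses, n_runs, seed=0):
    rng = random.Random(seed)
    return [sum(rng.choice((1, -1)) for _ in range(n_tosses))
            for _ in range(n_runs)]

n = 400
ends = final_positions(n, 2000)
rms_end = (sum(e * e for e in ends) / len(ends)) ** 0.5
print(rms_end, math.sqrt(n))  # empirical RMS end point vs sqrt(400) = 20
```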

          • Nate says:

            “Millions of heads in a row”? No.

            “Unless of course you put a constraint in your model.
            I.e. after 10 heads or tails in a row go back to zero.”

            The constraints in the model are the same ones as on the real Earth.

            “Climate models do this two ways rebalancing everything a little bit, no big fiddles, to get the TOA back to regulation. Except for CO2.”

            Nope. ANY Evidence for this?

          • angech says:

            Millions of heads in a row possible? Given enough throws, yes. That’s statistics; that’s the border of uncertainty. Likely? Somehow, somewhere, sometime.
            Constraints in a model are put in by the modellers. One way mentioned is to require a specific TOA range.
            The real world has physics which flows.

  8. Thomas says:

    Dr. Spencer,

    (I posted this on the wrong thread.)

    You wrote,

    “Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives. Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.”

    How is this forced-balancing actually accomplished?

    Forced-balancing does not change the fact that the models don’t model clouds accurately, which is a fact that both skeptics and alarmists have long agreed on. You have said, I think, that a small change in cloud would be enough to explain the recent warming. The fact that the models can’t capture that detail means they can’t detect any negative cloud feedback, or any feedback at all. So they aren’t really modeling the climate.

    If the models are forced to be constant before CO2 is added, then any positive number for CO2 forcing will cause warming. Anyone can pick a number they want and see the result. But Pat’s emulation equation does the same thing, so why bother with the model at all?

    Your explanation sounds like the models are programmed to show no warming trend unless a forcing like CO2 is added. Then the IPCC says they know that most of the warming is human-caused because their models don’t show any warming unless CO2 is added. That’s painfully circular reasoning.

    Even if the models are balanced, the cloud error still remains. It’s like the models are programmed to ignore that uncertainty, but ignoring something doesn’t make it go away. To put it another way, if the models did not ignore the changes in cloud that they predict, they would give results that are +/- 20 C.

    Am I missing something?

    • Some of what you say I agree with, and have said myself.

      But read what I said and think about it: Models with different known errors still produce no long-term temperature change without being forced with GHGs and volcanoes. So, apparently the AVERAGE energy flux biases of e.g. LWCF (+/-12% among models) don’t really matter to the models’ response to forcing. It’s the CHANGE in model clouds with warming that matters (cloud feedback), not whether the model has a + or – 5 W/m2 bias error.

      I don’t know how else I can express it.

      • Math says:

        The models may not change without being forced with GHGs and volcanoes, but reality can of course do that. Isn’t that the point Dr. Frank is making?

        • No. Read his conclusions near the top of this post. It has nothing to do with naturally-caused climate change.

          • Math says:

            But the natural courses ARE what causes his estimation of the uncertainty.

          • Math says:

            … causes

            No, errors in the models’ representation of various physical processes that impact global radiative balance (and thus global temperature) are what Pat is claiming cause huge uncertainties in future model projections. -Roy

          • Math says:

            Exactly, that is what I mean. To me natural causes and non-human physical processes are the same thing. Uncertainties in these causes/processes give the uncertainties in the models. Anyway, I will not disturb you anymore on this topic. You have been more than generous in answering my stupid questions and comments. Thank you very much Dr. Spencer for your enormous patience with us who are not trained in climate science. It is really interesting to follow your blog.

      • Thomas says:

        Dr. Spencer,

        The following could be construed as sarcastic. It is not.

        You wrote,

        “Models with different known errors still produce no long-term temperature change without being forced with GHGs and volcanoes.”

        Oh my goodness. I finally get it!

        I’ve been studying climate, as a hobby, for years. My father was a cloud physicist, so I knew how to operate a net radiometer before most (probably all) of your students were born. But in all those years of studying climate I never understood this simple fact:

        Climate models don’t model climate.

        The models only model what might happen if CO2 is increased in an otherwise unchanging climate. The models would not show the Medieval Climate Optimum, or the Little Ice Age, or the warming in the early part of the 20th century, or the pause, or any other longish term climate event.

        I realize that your Fig. 1 should have clued me in, but deep-seated beliefs are hard to shake off.

        (By the way, how long term is long term? Ten years? A hundred years?)

        I stupidly assumed that a thing that is called a “climate model” would actually model climate. I was wrong. But that’s okay with me. Learning is the process of understanding where you had it wrong.

        I suspect Pat Frank also didn’t understand the above simple fact. He showed that, *if* (italics) the models were modeling climate, they were useless. The fact that they *do not* model climate makes them utterly and completely useless, at least for the purpose of ferreting out what is natural warming/cooling and what is not, or what the future temperature of the planet might actually be.

        A model will not model what it is not modeling. (Can we call that Thomas’ 0th law of modeling?)

        The models presume an unchanging climate, then calculate what effect CO2 might have. But the preponderance of the evidence suggests that Earth’s climate is not unchanging. Nearly all the recent warming could be natural, but the models would be completely unable to detect it.

        Pardon my childish exuberance, but that revelation seems huge to me.

        The IPCC says they know that the recent warming is caused by CO2 because the climate models do not show warming unless they add CO2. I thought that was circular reasoning. But it is not. It is misdirection, if not an outright lie.

        Now I do realize that a model that presumes an unchanging climate, then adds CO2 to see what the result might be, could have some utility. But it has no predictive value for the future average temperature of the earth. Unless there are no long-term climate oscillations, which seems highly unlikely.

        I have always enjoyed your blog and your excellent way of explaining things. It’s the first place I go when there is a disagreement amongst skeptics because you always have a well-explained answer. I have to admit I was discouraged when you didn’t seem to get what Dr. Frank had done. Now I know why. Dr. Frank was right in every way, except that he assumed that climate models model the climate. They don’t.

        My faith in you as a most-excellent teacher is renewed. That makes me feel good.

        Thank you for taking the time to maintain your blog. It means a lot to me.

        • Thomas says:

          “I stupidly assumed that a thing that is called a ‘climate model’ would actually model climate.”

          A model is an elaborate version of a personal opinion.

          If it makes wrong predictions, it’s not a model at all.

          It’s a computer game.

          I was lucky when I started reading climate science in 1997: I had already learned never to trust predictions of the future, so when I read about 100-year climate predictions, I automatically did not believe them.

          Where is an accurate climate physics model that shows EXACTLY what causes climate change, that could be the foundation of a global climate model?

          Such a climate physics model does not exist, so constructing a General Circulation Model that makes accurate predictions is impossible — a lucky guess is possible, but nothing else.

      • Pat Frank says:

        The (+/-)4W/m^2 is not a bias error, Roy.

        It’s an uncertainty interval around simulated tropospheric LWCF.

        • Pat, you can’t have it both ways.

          It’s either a BIAS error or a RANDOM error (or even a combination).

          But if it’s a bias error, then I have shown it does not exist in the models.

          And if it’s a RANDOM error, then it averages to near-zero in the long term because even a random walk in future temperatures is not possible since model temperatures are constrained by sigma T^4.

          Quit hiding behind statistical hand-waving (“it’s not a bias….it’s an uncertainty”) which makes no physical sense.

          No matter what you call it, it’s wrong.

          -Roy

          • Pat Frank says:

            It’s a systematic error, stemming from an erroneous theory, Roy.

            It’s a deterministic error. Neither random, nor a bias offset.

            You’re imposing an ad hoc limit on the sorts of error that can accrue.

          • Layperson says:

            Can I ask a question?

            Does anyone think they can measure a nanometer with an off-the-shelf ruler?

            It seems like trying to detect something that is a nanometer wide with a ruler that has 1 mm increments is a fool’s errand.

            Your (+/-)0.5mm is going to propagate throughout whatever extended equations you are going to do. Your math might tell you you found something that’s a nanometer wide, but your measurement tool does not permit that. Hence, uncertainty.
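[Editorial note: the ruler analogy can be put in numbers with the standard quadrature propagation rule. The +/-0.5 mm reading uncertainty is from the comment; the rectangle-area example and function name are mine, for illustration.]

```python
import math

# A ruler marked in 1 mm increments carries roughly +/-0.5 mm of reading
# uncertainty. Propagated through even a simple derived quantity (here, a
# rectangle's area), that uncertainty dwarfs any nanometer-scale "signal"
# the arithmetic might appear to resolve.

def rect_area_with_uncertainty(length_mm, width_mm, u_read=0.5):
    area = length_mm * width_mm
    # quadrature propagation: u_A^2 = (dA/dL * u)^2 + (dA/dW * u)^2
    u_area = math.sqrt((width_mm * u_read) ** 2 + (length_mm * u_read) ** 2)
    return area, u_area

area, u = rect_area_with_uncertainty(120.0, 80.0)
print(f"{area:.0f} +/- {u:.0f} mm^2")  # prints "9600 +/- 72 mm^2"
```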

  9. Clyde Spencer says:

    Roy
    You stated, “There are physical constraints in these models that lead to internally compensating behaviors.” Do these “constraints” over-ride the output of the sequential steps and result in dampening the potential deviation from the expected (desired) behavior? That is, do the model runs not flirt with the uncertainty boundaries because there is dampening that over-rides First Principles plagued by propagating uncertainties?

    • I do suspect that the models are overly-damped. If the modelers can’t get the models to produce at least 100-200 years with no temperature trend, they adjust the model further until it does. This is actually opposite to the behavior that Dr. Frank is claiming.

  10. I suspect I have spent enough time on this issue. I asked Dr. Frank to answer my specific objections, and all he has posted at WUWT is generalities and obscure, terse questions which do not make physical sense. Since he’s a chemist, I will assume he has not spent enough time to understand the components of global energy balance and how climate models (or weather forecast models) work. I know I’ve spent enough time addressing his paper.

    • Pat Frank says:

      None of your analyses are pertinent, Roy. -Are you seriously saying this, Pat? -Roy

      A statistical flux calibration uncertainty does not affect global energy balance. -Are you seriously saying this, Pat? -Roy

      Your entire argument is a claim that it does. You’re claiming that an error statistic is an energy. -Pat, your error has to have physical units. What are they? And in thermodynamics and physics, there isn’t “an energy”. That is an ambiguous, meaningless term by itself. -Roy

      This comment is neither a generality nor a terse question.

      It’s a specific criticism of what you’re doing.

      Other people get the distinction, even though you do not. -No, they have fallen prey to your hand-waving, which I have shown to be false from the models themselves. -Roy

      Here’s a comment from Crispin in Waterloo, as an example:

      “Roy is claiming that a propagated error, which is an attribute of the system, has to affect the calculated values, and further claims that because the calculated values are generally repeatable, therefore they are certain.” -I have no idea what was just said. -Roy

      “This is simply not how results and propagated uncertainty are reported. The result is the result and the uncertainty is appended, depending on the uncertainty of various inputs: X ± n.

      “The uncertainty n is independent of the output value X of the model.

      “If every model in the world gave the same answer 1000 times, it doesn’t reduce the propagated uncertainty one iota. Roy claims it does. It doesn’t. Look it up on Wikipedia, even. That is simply not how error propagation works.” -Just because “propagation of error” is a valid concept does not mean you can apply it to climate model integrations, Pat. The models themselves show your conclusion is wrong. You have purposely avoided addressing my criticism of your main conclusion: that the models can’t show the effect of increasing CO2 because it is swamped by accumulated error. YET EVERY ONE OF THEM DOES SHOW THE EFFECT OF INCREASING CO2. How you cannot (or will not) recognize this baffles me. You are either being purposely deceptive, or you really do not understand how the Earth’s energy budget works or how climate models work. -Roy

      • Pat Frank says:

        Roy, I have not purposely avoided answering your question. I have answered it every single time.

        Here, again. You wrote, “You have purposely avoided addressing my criticism of your main conclusion: that the models can’t show the effect of increasing CO2 because it is swamped by accumulated error. YET EVERY ONE OF THEM DOES SHOW THE EFFECT OF INCREASING CO2.”

        Over and over again, I have pointed out that uncertainty is not error. Model expectation values are not impacted by uncertainty.

        I have avoided nothing.

        You continually mistake uncertainty for error. You ignore all the times I have written uncertainty is not error. And now you suggest I am being deceptive. All because you plain do not grasp the difference between error and uncertainty.

        And because you don’t understand, you suggest I am being deceptive.

        Let me try to illustrate a different way. Please take a look at eqn. 5.2 in the paper.

        Let me actually reproduce 5.2 here so everyone can see it:

        eqn. 5.2: (delta)T(+/-)u = 0.42 x 33K x [(F_0+(delta)F_i)/F_0](+/-)[0.42 x 33K x 4W/m^2/F_0]

        The equation to the left of the (+/-) provides the emulator expectation value.

        The equation to the right of the (+/-) provides the uncertainty.

        The uncertainty is calculated entirely separately from the emulator expectation values.

        The two parts of 5.2 on the opposite sides of the (+/-) are entirely independent.

        Look at Figures 1, 2, 3, 7, and 8. The emulation lines are perfectly in conformance with the model points. They show no impact from the uncertainty. They show a coherent response to the CO2 forcing put into the equation, just as GCMs do.

        This coherence would be impossible, given your logic. But your logic does not apply. Uncertainty is not error.

        The uncertainty does not imply error scatter in the emulator output, just as it does not imply error scatter in model output.

        The same independence of the two parts of eqn. 5.2 applies just as well to the GCMs and the uncertainty calculation pertaining to them. They are entirely independent.

        Uncertainty is calculated after the simulation is completed.

        It has no effect on the simulation and does not imply simulation scatter or a lack of coherence of model response to CO2 forcings.

        It does not imply a lack of balance at the TOA.

        Uncertainty is not error.

        It does not imply that model outputs will jump around, or that models will not respond coherently to CO2 forcing.

        Uncertainty tells us about the reliability of the output.

        It says nothing about the coherence of the model output.

        It says nothing about how or whether the models respond to CO2 forcing.

        It says nothing about whether different models show the same response.

        It only says whether we can have any confidence in the physical meaning of the model output.

        Those points fully engage your criticism. I have been consistent in answer to you every single time.

        Your view is not correct, Roy. Uncertainty is not error.
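[Editorial note: the two independent parts of eqn. 5.2 can be sketched numerically, assuming F_0 = 33.30 W/m^2 (the total GHG forcing value I understand the paper to use) and the +/-4 W/m^2 annual LWCF calibration statistic quoted in the thread. The anomaly form and helper names are mine; this is a sketch, not the paper's code.]

```python
import math

# Left of the +/- in eqn. 5.2: the emulator expectation value.
# Right of the +/-: the uncertainty, computed entirely separately and
# (as Pat argues above) having no effect on the expectation value itself.

F0 = 33.30      # W/m^2, assumed baseline GHG forcing
U_LWCF = 4.0    # W/m^2, assumed annual LWCF calibration uncertainty

def emulated_dT(total_dF):
    """Emulator expectation value in anomaly form: 0.42 * 33K * dF/F0."""
    return 0.42 * 33.0 * total_dF / F0

def propagated_uncertainty(n_steps):
    """Per-step uncertainty 0.42*33K*4/F0, summed in quadrature over n steps."""
    per_step = 0.42 * 33.0 * U_LWCF / F0
    return math.sqrt(n_steps) * per_step

print(round(emulated_dT(3.5), 2))             # 1.46 K for ~3.5 W/m^2 of forcing
print(round(propagated_uncertainty(100), 1))  # 16.6 K after 100 annual steps
```

Note how the second number grows as sqrt(n) regardless of what the first number does, which is the independence Pat describes and the accumulation Roy disputes.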

        • curious says:

          Dear Pat and Roy

          As a reader on climate blogs I often see things deteriorate when people start assigning and making allegations of motive in dialogue. This is a plea to persevere with your cordial debate, as I think this important topic, now aired, needs resolving.

          Thank you for all the time and energy you each put into communicating and explaining your points of view. If necessary, perhaps you could participate in a mediated debate where conversation could be more flexible than these slightly scattered back and forth written blog comments.

          And perhaps you may just end with an agreement to disagree, but whatever the outcome I hope you can maintain respect for each other’s sincere, significant and substantial efforts.

          • Eli Rabett says:

            In physical science the simplest example is often the best for analysing whether a complex argument has any value

            Here is a simple example. Let us say you take 1000 measurements one time and get a distribution of values with a mean of 15 and a std deviation of 1. Let’s say you do this for 25 years, and each time you get an average of 15 with a std deviation of 1. So, what is the std deviation for the time series of measurements: 25 (Pat Frank) or 1 (Eli)? (BTW, as several have pointed out, the time interval is arbitrary.)
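[Editorial note: Eli's example can be simulated directly. The values are from his comment; the function names are mine. The simulation also surfaces a third number neither answer mentions: the spread of the annual means themselves.]

```python
import random, statistics

# 1000 measurements per "year", true mean 15, sd 1, repeated for 25
# independent years. The sd within any one year stays ~1; the sd of the
# 25 annual means is ~1/sqrt(1000); nothing compounds toward 25.

def annual_means(n_years=25, n_meas=1000, seed=7):
    rng = random.Random(seed)
    return [statistics.fmean(rng.gauss(15.0, 1.0) for _ in range(n_meas))
            for _ in range(n_years)]

means = annual_means()
print(statistics.stdev(means))  # ~0.03, i.e. about 1/sqrt(1000)
```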

          • Dr Roys Emergency Moderation Team says:

            Eli, please stop trolling.

  11. Jerry MacIntyne says:

    This author Dr. Frank appears to have no training or expertise in climate science, so it is not surprising that his paper is junk. The world is full of self-appointed experts who really don’t have a clue. Many of them post here with astonishing frequency.

    He also appears on the “who we are” page of the Heartland Institute, so the conclusion in anything he writes is predetermined by his politics, not by science.

    • coturnix says:

      >>Dr. Frank appears to have no training or expertise in climate science

      Apparently, there is science and there is climate science. Having no expertise in the latter should be viewed as a good thing!

      The problem of local denizens/deniers (me included, though I come here rarely enough to only qualify as an occasional visitor) is that they are not qualified in either, hence produce tonnes of GIGO just like the climate models do.

      >> anything he writes is predetermined by his politics

      I think, after Climategate, that can be said about the whole entirety of ‘climate scientists’ everywhere. This ‘science’ has been wholly captured by the political discourse, and basically transformed into nothing but political advocacy. On all sides.

    • Pat Frank says:

      My work is about error analysis, not about the climate, Jerry. That distinction was emphasized in the paper.

      Let’s see you show one politically motivated step in the analysis I presented.

      On the other hand, your cavalier dismissal stinks of political partisanship.

  12. Brent Auvermann says:

    Jerry, you wrote:

    “This author Dr. Frank appears to have no training or expertise in climate science, so it is not surprising that his paper is junk. The world is full of self-appointed experts who really don’t have a clue. Many of them post here with astonishing frequency.

    “He also appears on the ‘who we are’ page of the Heartland Institute, so the conclusion in anything he writes is predetermined by his politics, not by science.”

    That is a parade of accusations that are breathtakingly unfair individually, to say nothing of their aggregate value. Gracious.

    To the extent that a climate scientist is justified in using a hyper-simplified climate model to bolster his/her argument, non-specialists who can understand the theoretical basis (and the consequent sensitivity and canonical behavior) of that model should be permitted to weigh in on the ensuing discussion without being subjected to such fallacious accusations. Likewise, information theorists have a great deal to offer evolutionary biology, for example.

    I expect that were Dr. Frank and Dr. Spencer to spend some time isolated in a single room with enough dry-erase boards, they would be able to iron out enough of the semantic disconnects that their most fundamental, scientific disagreements would be greatly clarified and narrowed. As it stands, there is enough talking past each other that the real issues in fundamentals of both climate science and uncertainty analysis are hopelessly obscured.

    • David Appell says:

      Few other people are paying attention to this paper. Roy and Pat live in their own reality, and Roy is trying to pretend that this denier paper is somehow worth discussing. But the scientific community at large has seen all they need to know, which certainly includes all the rejections from all the quality journals.

      • coturnix says:

        Yes, it is a matter of faith. If you belong to the church of global warming, you are not supposed to discuss or even read what goes against your theological dogmas. And vice versa. This whole issue stopped being about science a long time ago.

      • Appleman:
        The government bureaucrat “scientists” latched on to a formula for CO2 in the 1970s (+1.5 to +4.5 degrees C. per 100% CO2 increase) and have never changed in spite of warming predictions of triple the actual warming from 1979 through 2018.

        Government bureaucrat and university “scientists” live in their own leftist reality — a climate crisis is coming because “we are big shot scientists, and we say so”. They’ve been predicting climate doom since the 1970s and after almost 50 years, only gullible fools, like you, still repeat whatever they say (if it’s bad news), like a trained parrot.

      • m d mill says:

        Appell: Dr. Spencer is not pretending anything.
        Your statement is inane.
        He is analyzing a paper (that no doubt many people have asked him about) on his blog, not letting “quality journals”, or anyone else, do his thinking for him. And coming to a conclusion that you probably agree with.
        To simply state a person is a “denier”, and should not even be considered, without reading and analyzing the argument, is the shallow comment of a propagandist or a fool.
        Many climatologists and modelers HAVE taken the time to analyze this paper and come to conclusions similar to RS’s.
        Your statement is false, inane, and simply unfair, and only illustrates your own “denier” status.

    • Dr Roys Emergency Moderation Team says:

      David, please stop trolling.

  13. Pat Frank says:

    Roy, let me take a different approach to the problem.

    We agree that all your GCMs produce an energy balance at the TOA. All of them accurately simulate the observed air temperature within the calibration bounds. – They are not my GCMs, but OK. -Roy

    Nevertheless, they all make errors in simulating total cloud fraction within the same calibration bounds. That means they all make errors in simulated long wave cloud forcing, within those calibration bounds. -I agree.-Roy

    The simulated tropospheric thermal energy flux is wrong within those calibration bounds. Tropospheric thermal energy flux is the determinant of air temperature. -No, only that COMPONENT of energy flux is wrong. LWCF is only one of MANY energy fluxes which determine temperature. You are starting to sound like you do not understand the many components making up global energy balance. -Roy

    So the simulated calibration air temperature is correct while the simulated calibration tropospheric thermal energy flux is wrong. How is this possible? -Because

    Jeffrey Kiehl told us why in 2007.

    The reason is that the models are all tuned to reproduce air temperature in their calibration bounds. The correctness of the calibration air temperature is an artifact of the tuning.

    A large variety of tuned parameter sets will produce a good conformance with the observed air temperature (Kiehl, 2007). Therefore, model tuning hides the large uncertainty in simulated air temperature.

    The simulated air temperature has a large uncertainty, even though it has a small data-minus-simulation error. That small error is a spurious artifact of the tuning. We remain ignorant about the physical state of the climate. – If it had a large uncertainty, the model output would vary over wide ranges. What you are really saying is that the *individual components* of the global energy budget are not known well enough to determine why the global temperature is what it is. Yet, WE KNOW THE CLIMATE SYSTEM TENDS TO BE STABLE, AND DESPITE VARIOUS MODELS HAVING DIFFERENT ERROR COMPONENTS TO THOSE FLUXES, EACH AND EVERY ONE OF THE MODELS RESPONDS TO INCREASING CO2, CONTRARY TO YOUR MAIN CONCLUSION. -Roy

    Uncertainty is an ignorance-width. The uncertainty in simulated air temperature is there, even though it is hidden, because the models do not reproduce the correct physics of the climate. They do not solve the problem of the climate energy-state.

    Although the TOA energy is balanced, the energy within the climate-state is not partitioned correctly among the internal climate sub-states. Hence the cloud fraction error. -Yes, I agree. -Roy

    Even though the simulated air temperature is in statistical conformance with the observed air temperature, the simulated air temperature tells us nothing about the energy-state of the physically real climate.

    The simulated calibration air temperature is an artifact of the offsetting errors produced by tuning.

    Offsetting errors do not improve the physical description of the climate. Offsetting errors just hide the uncertainty in the model expectation values. -Again, the fact that models with very different combinations of errors for the individual flux components still respond to increasing CO2 refutes you. The model projections depend upon the *temperature dependence* of those errors, not upon the errors per se. This has been known for many years, but I understand why a chemist might not be aware of this -Roy

    With incorrect physics inside the model, there is no way to justify an assumption that the model will project the climate correctly into the future. – I agree. But not for the reason you have given -Roy

    With incorrect physics inside the model, the model injects errors into the simulation with every calculational step. Every single simulation step starts out with an initial values error. – Oh for heavens sake. Yes, everyone knows this. It’s not relevant to what you are trying to demonstrate, Pat. -Roy

    That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection. – And yet the models’ output, as I have shown, do not reveal this “accumulated error” in the TOA radiative fluxes. Don’t you wonder why? – Roy

    However, we do not know the magnitude of the errors, because the prediction is of a future state where no information is available. – -Except the model output of the future state, which shows no evidence of what you are claiming. -Roy

    Hence, we instead calculate an uncertainty from a propagated calibration error statistic. -Which obviously cannot be applied to climate models. And I still don’t know how the LWCF error can GROW over time, which is what your Eq. 6 produces. This is not how climate model errors work! -Roy

    We know the average LWCF calibration error characteristic of CMIP5 models. That calibration error reflects the uncertainty in the simulated tropospheric thermal energy flux — the energy flux that determines air temperature. -You are repeating yourself. -Roy

    It is the energy range within which we do not know the behavior of the clouds. The clouds of the physically real climate may adjust themselves within that energy range, but the models will not be able to reproduce that adjustment.

    That’s because the simulated cloud error of the models is larger than the size of the change in the physically real cloud cover.

    The size of the error means that the small energy flux that CO2 emissions contribute is lost within the thermal flux error of the models. That is, the models cannot resolve so small an effect as the thermal flux produced by CO2 emissions. – And yet all of the models produce warming in response to increasing CO2, proving your conclusion false -Roy

    Propagating that model thermal-flux calibration error statistic through the projection then yields an uncertainty estimate for the projected air temperature. The uncertainty bounds are an estimate of the reliability of the projection; of our statement about the future climate state. – Only in your imagination, Pat, and maybe in systems where your Eq. 6 applies. Model errors do not look like what your Eq. 6 produces, and all models produce CO2-induced warming, contrary to your main conclusion. Yes, the individual components of the modeled radiative flux have errors. But the model output from pre-industrial control runs shows they lead to very small uncertainty. If anything, the models show TOO LITTLE variability in their future states in the absence of CO2 forcing. Then they seem to respond to ONLY CO2 forcing. This is exactly opposite to your claim! -Roy

    And that’s what I’ve done.

    • studentb says:

      “Every single simulation step starts out with an initial values error.
      That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection.”

      I still don’t understand why the uncertainty starts amplifying at the point the projection begins.
      Does it still amplify if the projection involves zero, or near-zero, change to the forcing?

      • Mike Flynn says:

        s,

        Yes. The atmosphere is chaotic.

        From Wikipedia –

        “Edward Norton Lorenz (May 23, 1917 – April 16, 2008) was an American mathematician and meteorologist who established the theoretical basis of weather and climate predictability, as well as the basis for computer-aided atmospheric physics and meteorology.”

        Pseudoscientific GHE true believers conveniently disregard Lorenz. No doubt because –

        “He is best known as the founder of modern chaos theory, a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions.”

        There is no minimum change which may result in completely unpredictable chaotic outcomes. None. Unpredictability can result from the uncertainty of a photon’s position dictated by the uncertainty principle. As Feynman said, “Nature is absurd”. But there you are. That’s how it is, whether we like it or not.

        Cheers.
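        The sensitive dependence Lorenz described is easy to demonstrate numerically. A minimal sketch using the Lorenz-63 equations with a crude Euler integrator (the step size, initial conditions, and perturbation are illustrative, not a weather model):

```python
# Two trajectories of the Lorenz-63 system differing by 1e-8 in one
# coordinate diverge to a macroscopic separation.
def lorenz_step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

traj1 = (1.0, 1.0, 1.0)          # one trajectory
traj2 = (1.0, 1.0, 1.0 + 1e-8)   # a nearly identical one

for _ in range(8000):            # integrate to t = 40
    traj1 = lorenz_step(*traj1)
    traj2 = lorenz_step(*traj2)

sep = sum((p - q) ** 2 for p, q in zip(traj1, traj2)) ** 0.5
print(sep > 1e-3)  # True: the 1e-8 difference has grown by many orders
```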

        • barry says:

          Despite weather being chaotic, we can still predict with reasonable success in the short term (a few days). When it comes to climate, the seasons are strongly predictable, even though the weather on a given day 6 months ahead is impossible to predict. Summer is always going to be warmer than Winter.

          Chaotic systems are bounded. That’s what climate is – the average of semi-random weather elements.

          • Mike Flynn says:

            b,

            A few days is about the limit of naive persistence prediction – 12 year old kid stuff.

            Seasons are assumed, as is the assumption that the sun will rise tomorrow. No need for complex models – just more 12 year old kid stuff.

            From a PLOS paper –

            “Predicting extrema of chaotic systems in high-dimensional phase space remains a challenge.”

            Just asserting that chaotic systems are bounded does not make it fact. Pointing out that temperatures are bounded by definition (absolute zero) is not useful, and irrelevant. You quite obviously know nothing about chaos. Nor do pseudoscientific GHE true believers.

            Climate is the average of weather. Weather is the result of the chaotic behaviour of the atmosphere, the hydrosphere and the lithosphere. If you believe you can usefully predict weather with numerical models – dream on.

            Cheers.

          • barry says:

            “Just asserting that chaotic systems are bounded does not make it fact”

            It is as trivially obvious as the seasons that Earth’s climate system is bounded. Can you name a naturally occurring unbounded chaotic system on Earth?

            Climate models are bound to equilibrate energy in/energy out at the TOA, based on the laws of thermodynamics. Frank’s criticism is purely statistical. He is ignoring the physics built into the models.

            “If you believe you can usefully predict weather with numerical models – dream on.”

            Didn’t I say that already?

            When it comes to climate, the seasons are strongly predictable, even though the weather on a given day 6 months ahead is impossible to predict

            Do you have a criticism about chaos, weather and climate that is specific to the topic?

          • Dr Roys Emergency Moderation Team says:

            barry, please stop trolling.

          • Mike Flynn says:

            b,

            You claim chaotic systems are bounded. What are the bounds of the climate system? See what I mean? You can provide no useful or meaningful answer, can you?

            Climate models are pointless. The Earth has cooled. Blathering about TOA, and mythical energy balances won’t make that inconvenient fact go away.

            Your assumptions about the seasons are about as useful as those from a 12 year old.

            It doesn’t take a genius to say that temperature, wind speed, pressure, cloud cover and so on, have zero as a lower bound. It doesn’t take a genius to realise that knowing what zero means does you no good at all when trying to predict chaos.

            Even if someone stated that surface temperatures vary between +90 C and -90 C, what good does it do in relation to forecasting climate? None at all.

            Carry on sounding sciency. You might impress a pseudoscientific GHE true believer.

            Cheers.

          • barry says:

            “What are the bounds of the climate system?”

            I already answered that – energy in and energy out at the TOA is the major bound. Another bound is the tropopause, where tropospheric convection ceases. The sun provides a bound – it is the source for virtually all the energy in the climate system. The fact that there are seasons is direct evidence that the chaos of weather is bounded by orbital variation. Summer is always – not sometimes – warmer than Winter. Not to mention that the physics also makes complete sense of seasons. These are stable components that ‘frame’ the chaos of weather and turbulence.

            “See what I mean? You can provide no useful or meaningful answer, can you?”

            You state this in the same post, before I even have the opportunity to reply. Don’t be so narcissistic.

          • Dr Roys Emergency Moderation Team says:

            #2

            barry, please stop trolling.

      • I still don’t understand why such an error in a climate model (one that is canceled by other model errors through tuning) is supposed to cause a large uncertainty that grows with time like the range of a 2-dimensional random walk, when Dr. Spencer has demonstrated these model errors to be stable.

        These errors would change only a little with a change in CO2, and the differences between such changes would do the same, causing an error of up to a few degrees C from a large increase of CO2 over 50-100 years. Their uncertainties of up to a few W/m^2 would be constant, or would drift gradually with CO2 change, perhaps with random noise in the few to ballpark ~10 W/m^2 range added to them. They would not be recreated with every model iteration and added to the previous uncertainty (or the sum thereof) in a way that makes the total uncertainty grow like the range of a 2-dimensional random walk.

        • Mike Flynn says:

          Chaos is not a random walk. It is not randomness. It is chaos. Unpredictable, and not amenable to averages, or useful probability distribution.

          Cheers.

      • studentb says:

        Maybe this question/scenario can help clarify matters.

        I start a climate model simulation with 1xCO2 and run it for a thousand years. Time enough for the results to settle down. The global average temperature at the end is T1.

        I repeat the process, but impose 2xCO2. This yields after a thousand years a (presumably warmer) temperature T2.

        Now, is the difference T2 minus T1 of any value ?

        Or, are both values plagued by uncertainties of +-20deg such that the difference is meaningless?
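        One way to make the question concrete: a bias that is shared by both runs cancels in T2 minus T1, though any error that is *not* shared survives. A toy sketch (hypothetical numbers, not a climate model):

```python
# If both runs carry the same constant bias, the bias cancels exactly
# in the difference of the two equilibrium temperatures.
true_t1, true_t2 = 14.0, 17.0   # hypothetical equilibrium temps, deg C
shared_bias = 5.0               # the same tuning bias carried by both runs

t1 = true_t1 + shared_bias      # 1xCO2 run
t2 = true_t2 + shared_bias      # 2xCO2 run

print(t2 - t1)                           # 3.0 -- the shared bias cancels
print((t2 - t1) == (true_t2 - true_t1))  # True -- difference is unaffected
```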

        • Mike Flynn says:

          s,

          A chaotic system doesn’t settle down. That’s what Lorenz rediscovered. If your model settles down, it is worthless. If it doesn’t, it is useless.

          Which do you prefer your money to be wasted on?

          Cheers.

          • David Appell says:

            “Nevertheless, chaos theory does not imply a total lack of order. For example, slightly different conditions early in its history might alter the day a storm system would arrive or the exact path it would take, but the average temperature and precipitation (that is, climate) would still be about the same for that region and that period of time.”

            https://wg1.ipcc.ch/publications/wg1-ar4/faq/wg1_faq-1.2.html

          • Mike Flynn says:

            DA,

            The IPCC has little clue about chaos. Referring to their authority is about as stupid as referring to your own. The reason the IPCC stated –

            “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”, albeit reluctantly, is because it is true.

            Their subsequent waffle about concentrating on probability distributions is just sciencey nonsense.

            As to your graphic, what makes you think it indicates other than chaos? The eyeball mark one, wishful thinking, or a drug induced trance? Don’t ask stupid questions that you think make you look clever.

            Cheers.

          • Entropic man says:

            Mike Flynn

            Research strange attractors.

          • Mike Flynn says:

            Em,

            And why would I do that? If you are silly enough to think you can teach me something about strange attractors, why not just say so?

            What is it that you think you know better than me? Why not give your reasons, so that everyone benefits?

            Why the secrecy? Questions, questions.

            Cheers.

    • gbaikie says:

      So, I think what you are saying {and proving} is why climate models can’t work. Roy agrees the models don’t work.
      Next, is there anything in this that indicates how a model of climate could work?

      Or is it simply that we can’t predict weather 1 month into the future, so predicting climate for decades can’t be done?
      Or, said differently, if you could predict weather for next year, you might have a chance of predicting climate for decades?

      Anyhow, I think someone should model a planet completely covered by ocean.
      That should be simpler than a planet covered by 70% ocean and 30% land.

      I guess if an ocean is too hard, how about a planet covered completely with flat land: what is the average air temperature at the poles and the average air temperature in the tropics?

    • Pat Frank says:

      Answers to you are intercalated after your comments, Roy.

      Roy, let me take a different approach to the problem.

      We agree that all your GCMs produce an energy balance at the TOA. All of them accurately simulate the observed air temperature within the calibration bounds. Roy, “– They are not my GCMs, but OK. -Roy

      I stand corrected. 🙂

      Nevertheless, they all make errors in simulating total cloud fraction within the same calibration bounds. That means they all make errors in simulated long wave cloud forcing, within those calibration bounds. Roy, “-I agree.-Roy” So far, so good. 🙂

      The simulated tropospheric thermal energy flux is wrong within those calibration bounds. Tropospheric thermal energy flux is the determinant of air temperature. Roy”-No, only that COMPONENT of energy flux is wrong. LWCF is only one of MANY energy fluxes which determine temperature. You are starting to sound like you do not understand the many components making up global energy balance-Roy

      It sounds like you’re saying, Roy, that when a component of the tropospheric thermal energy flux is wrong, the tropospheric thermal energy flux is not wrong. You are starting to sound like you don’t know that partly wrong is not right.

      So the simulated calibration air temperature is correct while the simulated calibration tropospheric thermal energy flux is wrong. How is this possible? -Because

      Jeffrey Kiehl told us why in 2007.

      The reason is that the models are all tuned to reproduce air temperature in their calibration bounds. The correctness of the calibration air temperature is an artifact of the tuning.

      A large variety of tuned parameter sets will produce a good conformance with the observed air temperature (Kiehl, 2007). Therefore, model tuning hides the large uncertainty in simulated air temperature.

      The simulated air temperature has a large uncertainty, even though it has a small data-minus-simulation error. That small error is a spurious artifact of the tuning. We remain ignorant about the physical state of the climate. Roy”– If it had a large uncertainty, the model output would vary over wide ranges….

      No, it would not. Models are tuned and constructed to give bounded outputs. Uncertainty does not imply wide ranges of outputs. It implies that the outputs are not reliable.

      Roy, “ What you are really saying is that the *individual components* of the global energy budget are not known well enough to determine why the global temperature is what it is. Yet, WE KNOW THE CLIMATE SYSTEM TENDS TO BE STABLE, AND DESPITE VARIOUS MODELS HAVING DIFFERENT ERROR COMPONENTS TO THOSE FLUXES, EACH AND EVERY ONE OF THE MODELS RESPONDS TO INCREASING CO2, CONTRARY TO YOUR MAIN CONCLUSION. -Roy

      You’ve misstated my main conclusion, Roy. It’s not that models do not respond to increasing CO2. It’s that the model response to increasing CO2 is physically meaningless. The reason it is meaningless is that model resolution is far cruder than the perturbation. You’re claiming to see atoms with a hand-magnifier.

      Uncertainty is an ignorance-width. The uncertainty in simulated air temperature is there, even though it is hidden, because the models do not reproduce the correct physics of the climate. They do not solve the problem of the climate energy-state.

      Although the TOA energy is balanced, the energy within the climate-state is not partitioned correctly among the internal climate sub-states. Hence the cloud fraction error. Roy, “-Yes, I agree. -Roy” Thank-you. 🙂

      Even though the simulated air temperature is in statistical conformance with the observed air temperature, the simulated air temperature tells us nothing about the energy-state of the physically real climate.

      The simulated calibration air temperature is an artifact of the offsetting errors produced by tuning.

      Offsetting errors do not improve the physical description of the climate. Offsetting errors just hide the uncertainty in the model expectation values. Roy”-Again, the fact that models with very different combinations of errors for the individual flux components still respond to increasing CO2 refutes you.

      No, it does not. Models are adjusted to do exactly what you describe. Their common conformance is no indication of their physical verisimilitude.

      Once again, uncertainty is not error. It does not imply output variability. It is a measure of reliability.

      Roy, “The model projections depend upon the *temperature dependence* of those errors, not upon the errors per se. This has been known for many years, but I understand why a chemist might not be aware of this -Roy

      You know the models have offsetting errors, Roy. As a chemist I know that offsetting errors do not improve the physical theory. Offsetting errors that allow reproduction of observables are not telling us anything physically causal about why those observables appear or have the magnitudes they do.

      As a consequence of their reproduction only through offsetting errors, the observables do not indicate any explanatory content in the physical description of their origin.

      Hence the large uncertainty in the results: they are non-explanatory. They do not dispel our ignorance of the physics.

      As a chemist, I know this. I have to deal with that reality all the time in my work. What I see is that climate scientists have no appreciation of this cold-water reality of physical science.

      With incorrect physics inside the model, there is no way to justify an assumption that the model will project the climate correctly into the future. Roy, “– I agree. But not for the reason you have given -Roy

      You agree that incorrect physics disallows correct predictions, which is the reason I give, but then disagree with the reason I give?

      With incorrect physics inside the model, the model injects errors into the simulation with every calculational step. Every single simulation step starts out with an initial values error. Roy, “– Oh for heavens sake. Yes, everyone knows this. It’s not relevant to what you are trying to demonstrate, Pat. -Roy

      On the contrary, Roy. It is completely central to what I demonstrate. The fact you agree amounts to a concession that my analysis is correct. You may not agree to that, but it’s evidently true anyway.

      That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection. Roy, “– And yet the models’ output, as I have shown, do not reveal this “accumulated error” in the TOA radiative fluxes. Don’t you wonder why? – Roy

      I know why, Roy. Uncertainty is not error. This distinction between error and uncertainty is what you consistently fail to grasp. It is fundamental to everything.

      Models can give uniformly similar outputs, the outputs can indicate a consistent model response to CO2, and all of the outputs can have a huge uncertainty. Because the model internal physics is wrong.

      However, we do not know the magnitude of the errors, because the prediction is of a future state where no information is available. Roy, “– -Except the model output of the future state, which shows no evidence of what you are claiming. -Roy

      Because uncertainty is not error, Roy. The uncertainty would exist even if output variability was zero, and all the models gave identical results.

      Hence, we instead calculate an uncertainty from a propagated calibration error statistic. Roy, “-Which obviously cannot be applied to climate models.

      It’s quite obvious that it can. Your rejection stems from a lack of distinction between error and uncertainty.

      Roy, “And I still don’t know how the LWCF error can GROW over time, which is what your Eq. 6 produces.

      LWCF error does not grow over time. The uncertainty grows over time (step). The LWCF error is in every simulation step. The simulation wanders away from the correct phase-space trajectory. Uncertainty in its position must grow with time.

      Roy, “This is not how climate model errors work! -Roy” It is, actually. The problem is the distinction between error and uncertainty. You need to grasp the meaning of that distinction.
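      The arithmetic of that growth is simple to sketch: identical per-step uncertainties combined in quadrature grow as the square root of the number of steps, even though no individual step’s error grows. A minimal illustration (the ±4 W/m^2 per-step value is illustrative only):

```python
# Root-sum-square propagation of a constant per-step uncertainty:
# the combined uncertainty after n steps is u * sqrt(n).
import math

def propagated_uncertainty(u_step, n_steps):
    """Root-sum-square of n identical, independent per-step uncertainties."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

u = 4.0  # illustrative per-step calibration uncertainty, W/m^2
for n in (1, 4, 100):
    print(n, propagated_uncertainty(u, n))  # 4.0, 8.0, 40.0: grows as sqrt(n)
```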

      We know the average LWCF calibration error characteristic of CMIP5 models. That calibration error reflects the uncertainty in the simulated tropospheric thermal energy flux Roy, “— the energy flux that determines air temperature. -You are repeating yourself. -Roy

      Sometimes, that helps.

      It is the energy range within which we do not know the behavior of the clouds. The clouds of the physically real climate may adjust themselves within that energy range, but the models will not be able to reproduce that adjustment.

      That’s because the simulated cloud error of the models is larger than the size of the change in the physically real cloud cover.

      The size of the error means that the small energy flux that CO2 emissions contribute is lost within the thermal flux error of the models. That is, the models cannot resolve so small an effect as the thermal flux produced by CO2 emissions. Roy, “– And yet all of the models produce warming in response to increasing CO2, proving your conclusion false -Roy

      That models consistently respond to CO2 does not mean that their physics is correct.

      Propagating that model thermal-flux calibration error statistic through the projection then yields an uncertainty estimate for the projected air temperature. The uncertainty bounds are an estimate of the reliability of the projection; of our statement about the future climate state. Roy, “ – Only in your imagination, Pat, and maybe in systems where your Eq. 6 applies. Model errors do not look like what your Eq. 6 produces, and all models produce CO2-induced warming, contrary to your main conclusion.

      My main conclusion has nothing to do with models consistently responding to CO2, Roy. The uncertainty bars of eqn. 6 do not indicate physical error. Your evident inability to understand that, or the distinction between error and uncertainty, is what powers your entire (wrong) contrary argument.

      Roy, “Yes, the individual components of the modeled radiative flux have errors. But the model output from pre-industrial control runs shows they lead to very small uncertainty. If anything, the models show TOO LITTLE variability in their future states in the absence of CO2 forcing. Then they seem to respond to ONLY CO2 forcing. This is exactly opposite to your claim! -Roy

      Tuning of models to reproduce observations does not indicate lack of uncertainty. It merely hides the inadequacy of the models.

      I do not claim that models do not respond to CO2 forcing. I claim that their response to CO2 forcing is physically meaningless. Nothing you have written gainsays that. All you have done is show that you do not understand the analysis.

      Uncertainty is not error, Roy. Uncertainty does not affect model output. It does not indicate variability of calculated result. It only indicates whether the result is reliable. Models can all give perfectly inter-consistent results. They can nevertheless be completely unreliable.

      And that’s what I’ve done.

      And so it remains.

      Pat

      • Pat Frank said:
        “The LWCF error is in every simulation step. The simulation wanders away from the correct phase-space trajectory. Uncertainty in its position must grow with time.”

        Except, I see this error as being constant, apart from changing by a few W/m^2 as a result of CO2 change, plus or minus some random noise in the ballpark of a few W/m^2. The LWCF error is not something that both changes with every iteration of the simulation and needs to be added to the previous one, like rolling dice and adding each roll to the summed results of the previous rolls.
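        The two pictures being argued over can be simulated directly: an error that is summed step over step (a random walk) versus one that is merely redrawn each step and never accumulated. A toy Monte Carlo sketch (magnitudes illustrative only):

```python
# Compare the spread of the final error under two error models:
# accumulate=True sums each step's error (random walk, spread ~ sqrt(n));
# accumulate=False redraws the error each step (spread stays ~ sigma).
import random

random.seed(1)

def sd(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def endpoint_spread(n_steps, accumulate, n_trials=2000, sigma=1.0):
    """Spread of the final error after n_steps, summed or merely redrawn."""
    ends = []
    for _ in range(n_trials):
        err = 0.0
        for _ in range(n_steps):
            step = random.gauss(0.0, sigma)
            err = err + step if accumulate else step
        ends.append(err)
    return sd(ends)

walk = endpoint_spread(100, accumulate=True)      # ~10: grows as sqrt(n)
redrawn = endpoint_spread(100, accumulate=False)  # ~1: stays put
print(walk > 3 * redrawn)  # True
```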

  14. m d mill says:

    Dr. Frank’s results vary greatly when the arbitrary time step is changed. This is simply a killer for any hypothesis, and one of several reasons it has not been taken seriously.

    • Pat Frank says:

      No, they do not.

      The simulation LWCF error varies with the length of the time step. When that is taken into account, the uncertainty bounds are the same.

  15. MikeN says:

    David Appell, you didn’t understand R2, that Mann had calculated it, that Wahl and Ammann had reported its value in agreement with McIntyre, so I don’t know why you bother to opine on another statistical matter. Why not just list what others have to say about it, rather than pretending it is your thoughts?

  16. Scott R says:

    ENSO readings over 2.25, by year, since 1850 (HadSST3):

    1877 3.72
    1889 2.58
    1919 3.63
    1940 2.28
    1983 2.88
    1997 2.70
    2015 3.10

    Peaks of global ocean temperature:
    1878 +0.3 (1 warm cycle, data NA prior to 1850)
    1942 +0.3 (2 warm cycles over 35 prior years – 1 AMO period)
    2016 +0.74 (3 warm cycles over 35 years – 1 AMO period)

    My point is that the global oceans experienced 1 extra super El Nino cycle during the last AMO cycle compared to the others.

    • Bindidon says:

      Scott R

      I’m wondering a lot about your numbers.

      Where exactly do they come from? Are they a simple monthly average of HadSST3 grid cells within 5N-5S–170W-120W?

      Calculating ENSO isn’t so simple: such numbers are based on longer means, mostly 6 months, and you need the means’ product with the Darwin-Tahiti pressure deltas.

      *
      Here is, from the MEI data sets (extended from 1871; ‘classic’ from 1950), a descending sort of the months with the highest MEI values:

      1983 3: 3.35
      1983 2: 3.31
      1997 10: 3.21
      1997 11: 3.20
      1997 8: 3.18
      1997 9: 3.17
      1997 7: 3.12
      1983 4: 3.08
      1983 1: 3.08
      1997 12: 3.02
      1998 3: 2.94
      1998 2: 2.82
      1998 1: 2.82
      1998 4: 2.82
      1982 12: 2.78
      1983 5: 2.67
      1997 6: 2.60
      1982 11: 2.59
      2015 9: 2.53
      1878 3: 2.50

      You see that 1877/78 and 2015/16 both are far below 1982/83 and 1997/98 in this top 20.

      1982/83 is the strongest – not only because it has the top peak, but especially because it stays at the top when you compute, for all event peaks, the average of the 12 strongest MEI values around every peak.

      *
      “My point is that the global oceans experienced 1 extra super El Nino cycle during the last AMO cycle compared to the others.”

      And… your conclusion?

      Mine would be that the oceans accumulate more and more heat, and thus throw it more often into the atmosphere. But maybe it’s not worth the pixels needed to write it.

      • Scott R says:

        Bindidon,

        I was using monthly HADSST3 values which is 5N-5S–170W-120W like you said.

        I actually agree with you that both the amplitude and staying power are important. That is how underlying trends are made. That said, max / peak values are useful also… especially when looking to determine inflection points in complex systems with multiple forcers on various timeframes.

        As you know, I have the impression that 2016 is going to be one such inflection point; after all, it is the AMO max + a super ENSO + the end of the 400-year cycle (modern maximum).

        I think it is VERY possible that AMO is a 5th harmonic for the GSM cycle with the down beat low that caused the harmonic getting ready to occur. Based on that, the oceans will cool into 2049, and at a sharper rate than we’ve ever seen. Should be good times.

        • Bindidon says:

          Scott R

          “I think it is VERY possible that AMO is a 5th harmonic for the GSM cycle with the down beat low that caused the harmonic getting ready to occur. ”

          Yeah. Guessing, guessing, guessing…

          Sorry, it gets too boring for me here.
          Good luck!

        • Dr Roys Emergency Moderation Team says:

          Off you go, then.

  17. Scott R says:

    ENSO update:

    3.4 region has just made a fresh 52-week low this morning @ -0.509.

    1+2 has made a fresh low @ -1.446

    This confirms once again the trend is lower as of today. Still heading towards La Nina.

    • barry says:

      I do appreciate anyone who is bold enough to make a prediction, particularly when they go against the experts.

      if 3.4 is at -0.509, and we are heading towards a la Nina, then your prediction very clearly is that this is the start of a la Nina. We will be able to assess your prediction in about 5 months.

      I’ve bookmarked this post.

  18. Pat Frank says:

    For the benefit of all, I’ve put together an extensive post that provides quotes, citations, and URLs for a variety of papers — mostly from engineering journals, but I do encourage everyone to closely examine Vasquez and Whiting — that discuss error analysis, the meaning of uncertainty, uncertainty analysis, and the mathematics of uncertainty propagation.

    These papers utterly support the error analysis in “Propagation of Error and the Reliability of Global Air Temperature Projections.”

    There’s no point in further arguing the case. Either one gets it, or one doesn’t.

    Summarizing: Uncertainty is a measure of ignorance. It is derived from calibration experiments.

    Multiple uncertainties propagate as root sum square. Root-sum-square has positive and negative roots (+/-). Never anything else, unless one wants to consider the uncertainty absolute value.

    Uncertainty is an ignorance width. It is not an energy. It does not affect energy balance. It has no influence on TOA energy or any other magnitude in a simulation, or any part of a simulation, period.

    Uncertainty does not imply that models should vary from run to run, nor does it imply inter-model variation. Nor does it necessitate lack of TOA balance in a climate model.

    For those who are scientists and who insist that uncertainty is an energy and influences model behavior (none of you will be engineers), or that a (+/-)uncertainty is a constant offset, I wish you a lot of good luck because you’ll not get anywhere.

    For the deep-thinking numerical modelers who think rmse = constant offset or is a correlation: you’re wrong.
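    The claim that an rmse is not a constant offset can be illustrated with a toy calculation (illustrative numbers only, not from the paper): two error series can share the same rmse while having entirely different structure, so the statistic by itself fixes neither a sign nor an offset.

```python
import math

# Hypothetical error series: same RMSE, very different structure.
constant_bias = [2.0] * 8          # a fixed +2 offset at every point
alternating = [2.0, -2.0] * 4      # zero-mean errors of the same magnitude

def rmse(errors):
    # Root-mean-square of an error series.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(rmse(constant_bias))         # 2.0
print(rmse(alternating))           # 2.0
print(sum(constant_bias) / 8)      # mean error 2.0 (pure offset)
print(sum(alternating) / 8)        # mean error 0.0 (no offset)
```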

    The literature follows:

    Moffat RJ. Contributions to the Theory of Single-Sample Uncertainty Analysis. Journal of Fluids Engineering. 1982;104(2):250-8.

    Uncertainty Analysis is the prediction of the uncertainty interval which should be associated with an experimental result, based on observations of the scatter in the raw data used in calculating the result.

    Real processes are affected by more variables than the experimenters wish to acknowledge. A general representation is given in equation (1), which shows a result, R, as a function of a long list of real variables. Some of these are under the direct control of the experimenter, some are under indirect control, some are observed but not controlled, and some are not even observed.

    R=R(x_1,x_2,x_3,x_4,x_5,x_6, . . . ,x_N)

    It should be apparent by now that the uncertainty in a measurement has no single value which is appropriate for all uses. The uncertainty in a measured result can take on many different values, depending on what terms are included. Each different value corresponds to a different replication level, and each would be appropriate for describing the uncertainty associated with some particular measurement sequence.

    The Basic Mathematical Forms

    The uncertainty estimates, dx_i or dx_i/x_i in this presentation, are based, not upon the present single-sample data set, but upon a previous series of observations (perhaps as many as 30 independent readings) … In a wide-ranging experiment, these uncertainties must be examined over the whole range, to guard against singular behavior at some points.

    Absolute Uncertainty

    x_i = (x_i)_avg (+/-)dx_i

    Relative Uncertainty

    x_i = (x_i)_avg (+/-)dx_i/x_i

    Uncertainty intervals throughout are calculated as (+/-)sqrt[sum over (error)^2].

    The uncertainty analysis allows the researcher to anticipate the scatter in the experiment, at different replication levels, based on present understanding of the system.

    The calculated value dR_0 represents the minimum uncertainty in R which could be obtained. If the process were entirely steady, the results of repeated trials would lie within (+/-)dR_0 of their mean …”

    Nth Order Uncertainty

    The calculated value of dR_N, the Nth order uncertainty, estimates the scatter in R which could be expected with the apparatus at hand if, for each observation, every instrument were exchanged for another unit of the same type. This estimates the effect upon R of the (unknown) calibration of each instrument, in addition to the first-order component. The Nth order calculations allow studies from one experiment to be compared with those from another ostensibly similar one, or with “true” values.

    Here replace “instrument” with ‘climate model.’ The relevance is immediately obvious. An Nth order GCM calibration experiment averages the expected uncertainty from N models and allows comparison of the results of one model run with another in the sense that the reliability of their predictions can be evaluated against the general dR_N.

    Continuing: “The Nth order uncertainty calculation must be used wherever the absolute accuracy of the experiment is to be discussed. First order will suffice to describe scatter on repeated trials, and will help in developing an experiment, but Nth order must be invoked whenever one experiment is to be compared with another, with computation, analysis, or with the “truth.”

    Nth order uncertainty:

    *Includes instrument calibration uncertainty, as well as unsteadiness and interpolation.
    *Useful for reporting results and assessing the significance of differences between results from different experiments and between computation and experiment.

    The basic combinatorial equation is the Root-Sum-Square:

    dR = sqrt[sum over((dR/dx_i)*dx_i)^2]

    https://doi.org/10.1115/1.3241818
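    Moffat’s root-sum-square rule can be sketched numerically. The function below is a generic illustration (the result function and uncertainty values are hypothetical, not from any of the cited papers), estimating the partials dR/dx_i by central finite differences:

```python
import math

# Sketch of RSS uncertainty propagation for R = R(x_1, ..., x_N):
# dR = sqrt( sum_i ((dR/dx_i) * dx_i)^2 )
def propagate_rss(R, x, dx, h=1e-6):
    terms = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dRdxi = (R(xp) - R(xm)) / (2 * h)   # central-difference dR/dx_i
        terms.append((dRdxi * dx[i]) ** 2)
    return math.sqrt(sum(terms))

# Hypothetical example: R = x1 * x2 with uncertainties dx1 = 0.1, dx2 = 0.2.
# Analytically: sqrt((4*0.1)^2 + (3*0.2)^2) = sqrt(0.52) ≈ 0.721
dR = propagate_rss(lambda x: x[0] * x[1], [3.0, 4.0], [0.1, 0.2])
print(round(dR, 3))  # 0.721
```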

    Moffat RJ. Describing the uncertainties in experimental results. Experimental Thermal and Fluid Science. 1988;1(1):3-17.

    The error in a measurement is usually defined as the difference between its true value and the measured value. … The term “uncertainty” is used to refer to “a possible value that an error may have.” … The term “uncertainty analysis” refers to the process of estimating how great an effect the uncertainties in the individual measurements have on the calculated result.

    THE BASIC MATHEMATICS

    This section introduces the root-sum-square (RSS) combination (my bold), the basic form used for combining uncertainty contributions in both single-sample and multiple-sample analyses. In this section, the term dX_i refers to the uncertainty in X_i in a general and nonspecific way: whatever is being dealt with at the moment (for example, fixed errors, random errors, or uncertainties).

    Describing One Variable

    Consider a variable X_i, which has a known uncertainty dX_i. The form for representing this variable and its uncertainty is

    X=X_i(measured) (+/-)dX_i (20:1)

    This statement should be interpreted to mean the following:
    * The best estimate of X, is X_i (measured)
    * There is an uncertainty in X_i that may be as large as (+/-)dX_i
    * The odds are 20 to 1 against the uncertainty of X_i being larger than (+/-)dX_i.

    The value of dX_i represents 2-sigma for a single-sample analysis, where sigma is the standard deviation of the population of possible measurements from which the single sample X_i was taken.

    The uncertainty (+/-)dX_i Moffat described, exactly represents the (+/-)4W/m^2 LWCF calibration error statistic derived from the combined individual model errors in the test simulations of 27 CMIP5 climate models.

    For multiple-sample experiments, dX_i can have three meanings. It may represent t*S_N/sqrt(N) for random error components, where S_N is the standard deviation of the set of N observations used to calculate the mean value (X_i)_bar and t is the Student’s t-statistic appropriate for the number of samples N and the confidence level desired. It may represent the bias limit for fixed errors (this interpretation implicitly requires that the bias limit be estimated at 20:1 odds). Finally, dX_i may represent U_95, the overall uncertainty in X_i.

    From the “basic mathematics” section above, the overall uncertainty is the root-sum-square: U = sqrt[sum over(dX_i)^2] = (+/-)rmse, the root-sum-square of the errors.

    The result R of the experiment is assumed to be calculated from a set of measurements using a data interpretation program (by hand or by computer) represented by

    R = R(X_1,X_2,X_3,…, X_N)

    The objective is to express the uncertainty in the calculated result at the same odds as were used in estimating the uncertainties in the measurements.

    The effect of the uncertainty in a single measurement on the calculated result, if only that one measurement were in error would be

    dR_x_i = (dR/dX_i)*dX_i

    When several independent variables are used in the function R, the individual terms are combined by a root-sum-square method.

    dR = sqrt[sum over((dR/dX_i)*dX_i)^2]

    This is the basic equation of uncertainty analysis. Each term represents the contribution made by the uncertainty in one variable, dX_i, to the overall uncertainty in the result, dR.

    http://www.sciencedirect.com/science/article/pii/089417778890043X

    Vasquez VR, Whiting WB. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis. 2006;25(6):1669-81.

    [S]ystematic errors are associated with calibration bias in the methods and equipment used to obtain the properties. Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected.

    Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources.

    Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model.”

    A good general definition of systematic uncertainty is the difference between the observed mean and the true value.”

    Also, when dealing with systematic errors we found from experimental evidence that in most of the cases it is not practical to define constant bias backgrounds. As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured.”

    Additionally, random errors can cause other types of bias effects on output variables of computer models. For example, Faber et al. (1995a, 1995b) pointed out that random errors produce skewed distributions of estimated quantities in nonlinear models. Only for linear transformation of the data will the random errors cancel out.”

    Although the mean of the cdf for the random errors is a good estimate for the unknown true value of the output variable from the probabilistic standpoint, this is not the case for the cdf obtained for the systematic effects, where any value on that distribution can be the unknown true. The knowledge of the cdf width in the case of systematic errors becomes very important for decision making (even more so than for the case of random error effects) because of the difficulty in estimating which is the unknown true output value. (emphasis in original)

    It is important to note that when dealing with nonlinear models, equations such as Equation (2) will not estimate appropriately the effect of combined errors because of the nonlinear transformations performed by the model.

    Equation (2) is the standard uncertainty propagation sqrt[sum over((+/-)systematic error statistic)^2].

    In principle, under well-designed experiments, with appropriate measurement techniques, one can expect that the mean reported for a given experimental condition corresponds truly to the physical mean of such condition, but unfortunately this is not the case under the presence of unaccounted systematic errors.

    When several sources of systematic errors are identified, beta is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

    beta ~ sqrt[sum over(theta_S_i)^2], where i defines the sources of bias errors and theta_S is the bias range within the error source i. Similarly, the same approach is used to define a total random error based on individual standard deviation estimates,

    e_k = sqrt[sum over(sigma_R_i)^2]

    A similar approach for including both random and bias errors in one term is presented by Deitrich (1991), with minor variations, from a conceptual standpoint, from the one presented by ANSI/ASME (1998).

    http://dx.doi.org/10.1111/j.1539-6924.2005.00704.x

    Kline SJ. The Purposes of Uncertainty Analysis. Journal of Fluids Engineering. 1985;107(2):153-60.

    The Concept of Uncertainty

    Since no measurement is perfectly accurate, means for describing inaccuracies are needed. It is now generally agreed that the appropriate concept for expressing inaccuracies is an “uncertainty” and that the value should be provided by an “uncertainty analysis.”

    An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.

    The term “calibration experiment” is used in this paper to denote an experiment which: (i) calibrates an instrument or a thermophysical property against established standards; (ii) measures the desired output directly as a measurand so that propagation of uncertainty is unnecessary.

    The information transmitted from calibration experiments into a complete engineering experiment on engineering systems or a record experiment on engineering research needs to be in a form that can be used in appropriate propagation processes (my bold). … Uncertainty analysis is the sine qua non for record experiments and for systematic reduction of errors in experimental work.

    Uncertainty analysis is … an additional powerful cross-check and procedure for ensuring that requisite accuracy is actually obtained with minimum cost and time.

    Propagation of Uncertainties Into Results

    In calibration experiments, one measures the desired result directly. No problem of propagation of uncertainty then arises; we have the desired results in hand once we complete measurements. In nearly all other experiments, it is necessary to compute the uncertainty in the results from the estimates of uncertainty in the measurands. This computation process is called “propagation of uncertainty.”

    Let R be a result computed from n measurands x_1, …, x_n, and let W denote an uncertainty, with the subscript indicating the variable. Then, in dimensional form, we obtain: W_R = sqrt[sum over(error_i)^2].

    https://doi.org/10.1115/1.3242449

    Henrion M, Fischhoff B. Assessing uncertainty in physical constants. American Journal of Physics. 1986;54(9):791-8.

    “Error” is the actual difference between a measurement and the value of the quantity it is intended to measure, and is generally unknown at the time of measurement. “Uncertainty” is a scientist’s assessment of the probably magnitude of that error.

    https://aapt.scitation.org/doi/abs/10.1119/1.14447

    • Pat Frank says:

      This illustration might clarify the meaning of (+/-)4 W/m^2 of uncertainty in annual average LWCF.

      The question to be addressed is what accuracy is necessary in simulated cloud fraction to resolve the annual impact of CO2 forcing?

      We know from Lauer and Hamilton that the average CMIP5 (+/-)12.1% annual cloud fraction (CF) error produces an annual average (+/-)4 W/m^2 error in long wave cloud forcing (LWCF).

      We also know that the annual average increase in CO2 forcing is about 0.035 W/m^2.

      Assuming a linear relationship between cloud fraction error and LWCF error, the (+/-)12.1% CF error is proportionately responsible for (+/-)4 W/m^2 annual average LWCF error.

      Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO2 forcing as, (0.035 W/m^2/(+/-)4 W/m^2)*(+/-)12.1% cloud fraction = 0.11% change in cloud fraction.

      This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to resolve the annual impact of CO2 emissions on the climate.

      That is, the cloud feedback to a 0.035 W/m^2 annual CO2 forcing needs to be known, and able to be simulated, to a resolution of 0.11% in CF in order to know how clouds respond to annual CO2 forcing.

      Alternatively, we know the total tropospheric cloud feedback effect is about -25 W/m^2. This is the cumulative influence of 67% global cloud fraction.

      The annual tropospheric CO2 forcing is, again, about 0.035 W/m^2. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 W/m^2/25 W/m^2)*67% = 0.094%.

      Assuming the linear relations are reasonable, both methods indicate that the model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 W/m^2 of CO2 forcing, is about 0.1% CF.
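      The two linear estimates above can be checked directly (values as given in this comment; the linear scaling is the stated assumption, not a derived result):

```python
# Values as quoted in the comment above.
cf_error = 12.1       # (+/-)% annual cloud-fraction error (CMIP5 average)
lwcf_error = 4.0      # (+/-) W/m^2 annual LWCF calibration error
co2_forcing = 0.035   # W/m^2 annual-average increase in CO2 forcing

# Route 1: scale the CF error by the forcing-to-LWCF-error ratio.
est1 = (co2_forcing / lwcf_error) * cf_error     # ~0.11% CF

# Route 2: scale 67% global CF by forcing relative to the -25 W/m^2
# total tropospheric cloud feedback.
est2 = (co2_forcing / 25.0) * 67.0               # ~0.094% CF

print(round(est1, 3), round(est2, 3))            # 0.106 0.094
```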

      To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.

      This analysis illustrates the meaning of the (+/-)4 W/m^2 LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.

      The CF ignorance is such that tropospheric thermal energy flux is never known to better than (+/-)4 W/m^2. This is true whether forcing from CO2 emissions is present or not.

      GCMs cannot simulate cloud response to 0.1% accuracy. It is not possible to simulate how clouds will respond to CO2 forcing.

      It is therefore not possible to simulate the effect of CO2 emissions, if any, on air temperature.

      As the model steps through the projection, our knowledge of the consequent global CF steadily diminishes because a GCM cannot simulate the global cloud response to CO2 forcing, and thus cloud feedback, at all for any step.

      It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.

      This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge further and further into ignorance.

      On an annual average basis, the uncertainty in CF feedback is (+/-)144 times larger than the perturbation to be resolved.

      The CF response is so poorly known, that even the first simulation step enters terra incognita.

      • Nate says:

        ‘The CF ignorance is such that tropospheric thermal energy flux is never known to better than (+/-)4 W/m^2. This is true whether forcing from CO2 emissions is present or not.’

        As mentioned several times, the 4 W/m^2 is not the NET error on the thermal energy flux, because other fluxes combine to produce a near balance, as the physics requires, and as happens on the real Earth.

        Thus it is not apt to compare the 4 W/m^2 to the 0.05 W/m^2/year from CO2 that accumulates over time and is not balanced by other fluxes.

        • nick says:

          That other fluxes provide a near balance of errors is irrelevant, Nate. The corresponding uncertainties add *in quadrature*, so no cancellation of the uncertainties is possible. The uncertainty cannot be diminished by additional effects, but always grows; it corresponds to a loss in knowledge, which is irreversible. This is really super basic undergraduate stuff, at least for physicists. Why is it so hard to understand?
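          The quadrature rule as stated here can be sketched numerically (the per-step (+/-)4 W/m^2 figure is taken from the discussion above; the step counts are arbitrary):

```python
import math

# A per-step uncertainty u added in quadrature over n steps grows as
# u*sqrt(n): each step contributes u^2 to the sum of squares, so the
# total can only grow, never cancel.
u_step = 4.0  # (+/-) W/m^2, from the discussion above
for n in (1, 10, 100):
    u_total = math.sqrt(n * u_step ** 2)   # = u_step * sqrt(n)
    print(n, round(u_total, 2))
```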

        • Nate says:

          ‘ This is really super basic undergraduate stuff, at least for physicists.’

          Funny because I am a physicist, and calculate error all the time, and it is still not convincing to me.

          It is not that there is error that is being cancelled, it is whether that error contributes significantly to error in global temperature.

          If there is error in the partitioning of thermal flux between modes, that does not obviously have a first-order effect on the total energy flux imbalance, nor on how temperature changes over time.

          See my example here:

          https://www.drroyspencer.com/2019/09/additional-comments-on-the-frank-2019-propagation-of-error-paper/#comment-389711.

          Another example is CERES measurements of TOA fluxes, which have been discussed here several times. It is known that they have systematic error such that the total flux summed over the Earth has an imbalance of a few W/m^2.

          More or less, they simply apply an offset to this to bring it into balance. They are then able to study the changes over time in the outgoing LW and incoming SW radiation. The offset makes little difference to these analyses.

    • Nate says:

      “That is, the cloud feedback to a 0.035 W/m^2 annual CO2 forcing needs to be known, and able to be simulated, to a resolution of 0.11% in CF in order to know how clouds respond to annual CO2 forcing.”

      No. We don’t need to accurately simulate the cloud feedback response to the annual change in CO2 forcing. That is a strawman.

      We need to accurately simulate the cloud feedback response to the accumulated warming due to CO2 forcing over decades, which is several W/m^2.

  19. JCM says:

    It’s astonishing that so few on this board understand the distinctions between accuracy, precision, error, and uncertainty. This is elementary stuff. Relatively consistent (precise) model runs are completely independent of uncertainty. That precision simply reflects the model structure and has no bearing on reliability (accuracy). Dr Frank demonstrates that the GCM ensembles have no more reliability than a linear equation that can be calculated on the back of an envelope, i.e., they are a black box that teaches nearly nothing about the physical processes driving that system. He is able to provide one example of that ignorance by calculating the propagation of uncertainty based on cloud effects. This is evidently huge and represents only one component of the uncertainty. We are basically at square one, where all we really seem to know is that there is some correlation between GHG increases and observed temperatures. The uncertainty calculated by Dr Frank demonstrates that there may well be other factors that contribute significantly more to observed temperatures; we just don’t know. What we do know is that the increasing GHG component is way below the radiative resolution of current physically based climate models.

    • Nate says:

      JCM, ‘This is elementary stuff.”

      I don’t think that is the right takeaway from this discussion. In fact proper error analysis of a complicated system is not elementary. That’s why there are thick books on the subject.

      And it is not uncommon to find papers with errors 😉 in their error analysis. A famous case is the Harvard psychologist who claimed to find evidence of ESP.

      https://www.ejwagenmakers.com/2011/WagenmakersEtAl2011_JPSP.pdf

      If you look at the back and forth between Spencer and Frank, you can see that Spencer (and many others BTW), who has done plenty of error analysis in his career, is arguing that Frank’s approach to error analysis is not applicable to the simulations, for several legitimate reasons that he described.

      For Frank, you, or others to say things like “because you plainly do not grasp the difference between error and uncertainty” is simply a lazy ad hominem in place of real logic or evidence.

      • JCM says:

        The distinctions between accuracy, precision, error, and uncertainty are discussed in my grade 13 intro stats text from 2001, “Elementary Statistics, Canadian Edition (2nd Edition)”. I think Dr Frank’s paper speaks for itself. Thanks for the psych paper, I’ll check it out. Regards.

      • Nate says:

        JCM,

        ” This is evidently huge and represents only one component of the uncertainty.”

        This sentence says it all.

        It is ‘evidently huge’ to Frank, and to you and others with confirmation bias, because you are uncritically accepting its validity.

        But it is not evident or convincing to other experts who read his paper.

        For example: ‘ i.e. they are a black box that teaches nearly nothing about the physical processes driving that system.’

        This is pure nonsense. The models are built on known atmospheric physics equations. The same ones that successfully predict weather out to about a week.

        But because there is not enough computing power to simulate all physical processes down to micro level, some average properties need to be set as ‘parameters’ whose values are improved over time.

        This approach has worked well for weather models which obviously have improved dramatically in the last 50 y.

        • JCM says:

          Uncertainty must include both systematic and random error. Dr Frank has demonstrated, through a test of serial autocorrelation of ensemble-mean residual error, that the structure of the TCF error is in fact systematic. Frank further clarifies that if the model TCF errors were random, then the cloud errors would disappear in multi-year averages. They don’t. He then compares TCF error series among different models and demonstrates that they are correlated, further reinforcing the notion of systematic theory error in TCF. If the energy flux is forced to balance in spite of this error, it means available energy is being incorrectly partitioned among the internal fluxes at every step. How many other internal states are impacted by this, and to what degree? This uncertainty is propagated at every step, and compounds. Since these errors must then be distributed among other fluxes within the model, it suggests very little predictive value for any parameter.

        • Nate says:

          “If the energy flux is forced to balance in spite of this error it means available energy is being incorrectly partitioned among the internal fluxes at every step.”

          Simulations don’t need to capture every detail of a complex system correctly, in order to predict a global average of a variable like temperature. It can get regional things wrong, but still capture the average.

          For example, I can simulate heating a pot of water. At some point there is going to be chaotic turbulence in the water. The local temperature may vary dramatically. I will not be able to simulate that very well; it’s chaotic. Nonetheless, I can simulate the overall rise in average temperature of the water pretty well.

          “How many other internal states are impacted by this, and to what degree? This uncertainty is propagated at every step, and compounds. Considering these errors must then be distributed among other fluxes within the model it suggests very little predictive value for any parameter.”

          In my pot of water I am not capturing these internal states very well. I didn’t capture the turbulent temperature variation at point p accurately.

          It didn’t matter significantly to my predicted rise in average temperature.

          Frank has to show that TCF error, though balanced by other fluxes, still matters for capturing global temperature rise.

  20. donald penman says:

    I would like to comment on this YouTube interpretation of Dr Frank’s method, in that climate models always predict warming. The cumulative errors could be seen as a person taking steps of uncertain length backwards as well as forwards. My view of the reliability of climate models in predicting the future of the Earth’s temperature is very low.
    https://youtu.be/rmTuPumcYkI

  21. Russ says:

    Roy,
    Is it possible that the models all generate higher temperatures BECAUSE higher temperatures cause higher CO2 levels? I.e., rightly correlated but with the wrong cause/effect relationship. Honest question, since I do not know how these so-called models work.
