New Results on Climate Sensitivity: Models vs. Observations

January 27th, 2011 by Roy W. Spencer, Ph.D.

Partly as a result of my recent e-mail debate with Andy Dessler on cloud feedbacks (the variable most likely to determine whether we need to worry about manmade global warming), I have once again returned to an analysis of the climate models and the satellite observations.

I have just analyzed the 20th Century runs from the IPCC’s three most sensitive models (those producing the most global warming) and the three least sensitive models (those producing the least global warming), and compared their behavior to the 10 years of global temperature and radiative budget data Dessler analyzed (as did Spencer & Braswell, 2010).

The following plot shows the most pertinent results. While it requires some explanation, an understanding of it will go a long way toward appreciating not only how climate models and the real world differ, but also what happens when the Earth warms and cools from year to year…say from El Nino or La Nina.

What the plot shows is (on the vertical axis) how much net loss or gain in radiant energy occurs for a given amount of global-average surface warming, at different time lags relative to that temperature peak (on the horizontal axis). You can click on the graph to get a large version.

All observations are shown with black curves; the climate model relationships are shown in either red (3 models that predict the most global warming during the 21st Century), or blue (the 3 models predicting the least warming). Let’s examine what these curves tell us:

1) RADIATIVE ENERGY ACCUMULATES DURING WARMING IN ADVANCE OF THE TEMPERATURE PEAK: In the months preceding a peak in global temperatures (the left half of the graph), both models and observations show the Earth receives more radiant energy than it loses (try not to be confused by the negative sign). This probably results from circulation-induced changes in cloud cover, most likely a decrease in low clouds letting more sunlight in (“SW” means shortwave, i.e. solar)…although an increase in high cloud cover or tropospheric humidity could also be involved, which would reduce the rate of infrared (longwave, or “LW”) energy loss. This portion of the graph supports my (and Lindzen’s) contention that El Nino warming is partly a radiatively-driven phenomenon. [The curves with the much larger excursions are for oceans-only, from instruments on NASA’s Aqua satellite. The larger excursions are likely related to the higher heat capacity of the oceans: it takes more radiative input to cause a given amount of surface warming of the oceans than of the land.]

2) RADIATIVE ENERGY IS LOST DURING COOLING AFTER THE TEMPERATURE PEAK: In the months following a peak in global average temperature, there is a net loss of radiative energy by the Earth. Note that THIS is where there is more divergence between the behavior of the climate models and the observations. While all the climate models showed about the same amount of radiative input per degree of warming, during the cooling period the least sensitive climate models (blue curves) tend to lose more energy than the sensitive models. NOTE that this distinction is NOT apparent at zero time lag, which is the relationship examined by Dessler 2010.

WHAT DOES THE DIVERGENCE BETWEEN THE MODELS DURING THE COOLING PERIOD MEAN?
Why would the climate models that produce less global warming during the 21st Century (blue curves) tend to lose MORE radiant energy for a given amount of surface temperature cooling? The first answer that comes to my mind is that a deeper layer of the ocean is involved during cooling events in these models.

For instance, look at the red curve with the largest dots…the IPCC’s most sensitive model. During cooling, the model gives up much less radiant energy to space than it GAINED during the surface warming phase. The most obvious (though not necessarily correct) explanation is that this model (MIROC-Hires) tends to accumulate energy in the ocean over time, causing a spurious warming of the deep ocean.

These results suggest that much more can be discerned about the forcing and feedback behavior of the climate system when time lags between temperature and radiative changes are taken into account. This is why Spencer & Braswell (2010) examined phase space plots of the data, and why Lindzen is emphasizing time lags in 2 papers he is currently struggling to get through the peer review cycle.
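[For readers who want to experiment with this kind of lag analysis, the sketch below shows one way to compute flux-versus-temperature regression slopes at a range of leads and lags. It is not the code used to produce the figure above; the variable names, the use of monthly anomalies, and the ordinary least-squares fit are assumptions made purely for illustration.]

```python
import numpy as np

def lagged_slopes(temp_anom, flux_anom, max_lag=9):
    """Regression slope of net radiative flux anomalies (W/m^2) on
    temperature anomalies (deg C) at each lead/lag in months.
    Positive lag: flux lags temperature; negative lag: flux leads."""
    slopes = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            t, f = temp_anom[:len(temp_anom) - lag], flux_anom[lag:]
        else:
            t, f = temp_anom[-lag:], flux_anom[:lag]
        slopes[lag] = np.polyfit(t, f, 1)[0]   # W/m^2 per deg C
    return slopes

# Synthetic 10-year example: flux responds to temperature 3 months later.
rng = np.random.default_rng(0)
temp = rng.standard_normal(120) * 0.2                        # deg C
flux = -1.5 * np.concatenate([np.zeros(3), temp[:-3]]) \
       + 0.3 * rng.standard_normal(120)                      # W/m^2
for lag, slope in sorted(lagged_slopes(temp, flux).items()):
    print(f"lag {lag:+3d} months: {slope:+.2f} W/m^2 per deg C")
```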

SO WHICH OF THE CLIMATE MODELS IS MORE LIKELY TO BE CORRECT?

This is a tough one. The above plot seems to suggest that the observations favor a low climate sensitivity…maybe even less than any of the models. But the results are less than compelling.

For instance, at 3 months after the temperature peak, the conclusion seems clear: the satellite data show a climate system less sensitive than even the least sensitive model. But by 9 months after the temperature peak, the satellite observations show the same relationship as one of the most sensitive climate models.

So, I’m sure that you can look at this chart and see all kinds of relationships that support your view of climate change, and that’s fine. But *MY* contention is that we MUST move beyond the simplistic statistics of the past (e.g., regressions only at zero time lag) if we are to get ANY closer to figuring out whether the observed behavior of the real climate system supports either (1) a resilient climate system virtually immune to the activities of humans, or (2) a climate system that is going to punish our use of fossil fuels with a global warming Armageddon.

The IPCC is no nearer to answering that question than they were 20 years ago. Why?


38 Responses to “New Results on Climate Sensitivity: Models vs. Observations”

  1. Ken Lowe-Oil says:

    I’m concerned about my impact on the environment. I heat my home with heating oil but am worried about what this is doing to the environment. I live in a rural area of Lincolnshire, so there’s not much alternative to heating my home with oil except wood and LPG… but I don’t know if those are even more harmful.

    I have just found a heating oil website that offers Group Buying Days. This seems like a great way to help the environment, because you can order together with others, which helps keep tankers off the roads and reduces CO2 emissions.

    I would like to see more information on the internet about the effects of heating oil on the environment. On most climate change sites I go on there are articles on gas and electric heating but little on the effects of heating oil.

    Does anyone have any figures about heating oil and ways to minimize my impact on the environment?

  2. RW says:

    Why?

    Because ultimately they’re more or less just guessing, I think.

  3. Tom says:

    Coal produces nearly twice as much CO2 per BTU of heat as natural gas. Fuel oil splits the difference, falling about halfway between coal and gas.

    Wood actually emits a bit more CO2 per BTU than oil does, but of course it is renewable, assuming you are growing it at the same rate you are burning it.

  4. Hank Zentgraf says:

    Thanks Roy for this new insight. What percentage of climate scientists have the mathematics skill to deal with this issue of climate sensitivity in a chaotic system such as you presented in your book? You have a computational physicist helping you (Braswell). My sense is that many in the field have very narrow mathematics training. As a result they can’t respond with a critique so they remain silent.
    Regards,
    Hank

  5. Joe Born says:

    For those of us who don’t eat and drink this stuff, could you give a little more explanation of what the curves are? I do understand (1) the concept of sensitivity, (2) that the y-axis units represent sensitivity, and (3) that the x-axis units represent lag.

    And I suppose you could have obtained the curves by simply choosing all the (temporally) local temperature-anomaly maxima in your record, taking the ratios of the (appropriately advanced or delayed) flux values to those temperature-anomaly maxima, and averaging the results for each lag. If that’s all you did, then no further explanation is needed. (Or almost none. What’s the dashed black line?)

    If you did something else, such as some kind of a cross-correlation, on the other hand, I don’t know how you went from the correlation values to sensitivity. Could we laymen trouble you for a little more explanation?
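[In the spirit of Joe Born’s question, here is a rough sketch of the compositing procedure he describes: pick local temperature-anomaly maxima, take the flux at each lead/lag relative to each peak divided by the peak anomaly, and average over peaks. This only illustrates his guess; it is not a statement of how the plotted curves were actually computed, and the peak-selection rule is an assumption.]

```python
import numpy as np

def composite_around_peaks(temp_anom, flux_anom, max_lag=9):
    """Average flux/temperature ratios at each lead/lag relative to
    local maxima of the temperature-anomaly series."""
    peaks = [i for i in range(1, len(temp_anom) - 1)
             if temp_anom[i] > temp_anom[i - 1]
             and temp_anom[i] >= temp_anom[i + 1]
             and temp_anom[i] > 0]                 # warm peaks only (assumed)
    composite = {}
    for lag in range(-max_lag, max_lag + 1):
        ratios = [flux_anom[i + lag] / temp_anom[i]
                  for i in peaks if 0 <= i + lag < len(flux_anom)]
        composite[lag] = float(np.mean(ratios)) if ratios else float("nan")
    return composite
```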

  6. Johan says:

    Why? Taking a wild guess here: belief perseverance and confirmation bias.

  7. MAK says:

    Roy, I think you are trying to make this too complicated:

    In the months preceding a peak in global temperatures, the Earth receives more radiant energy than it loses due to the fact that La Nina conditions are going on.

    During La Nina the sea surface temperatures are lower than normal and thus less energy is lost to space. During La Nina there cannot be a “temperature peak”.

    The opposite happens during El Nino, which is the cause of the temperature peak.

    • David44 says:

      MAK,
      I’m not qualified to answer, but I don’t think El Nino/La Nina/ENSO necessarily involves a global net gain or loss. My understanding is that it involves a massive sloshing of water (and thus heat) back and forth across the tropical Pacific Ocean, so that the ocean surface temperature depends very much on where you measure it and at what point in time.

  8. Eli Rabett says:

    oceans accumulate energy too

  9. Christopher Game says:

    Dr Spencer asks the question:
    “The IPCC is no nearer to answering that question than they were 20 years ago. Why?”

    I propose the answer:
    because they persist in thinking in terms of the IPCC “forcings and feedbacks” formalism (e.g. Schlesinger 1985, 1986, 1988, Bony et al. 2006, Roe 2009).

    Why do they persist with it? Because it works a treat for them: people do not criticize it nearly enough; people just accept it, and so the IPCC goes on using it; Dr Spencer is just now beginning to criticize it directly, I think. The IPCC formalism has the spurious terms “amplification” and “positive feedback” which hypnotize people and are a magnificent propaganda trick.

    Dr Spencer writes: “But *MY* contention is that we MUST move beyond the simplistic statistics of the past (e.g., regressions only at zero time lag)”. Quite right.

    The zero time lag is one of the features of “feedbacks” as presented in the IPCC formalism. The formalism uses the term ‘feedback’, but this is only a sort of spinny metaphor, or more pointedly, an abuse of language, because feedback as ordinarily understood is a dynamic concept, while the IPCC “forcings and feedbacks” formalism defines a static model with no time-dependent dynamics.

    Dr Spencer and Dr Braswell use a model with time-dependent dynamics, expressed by an ordinary differential equation, which is not the same thing as the IPCC “forcings and feedbacks” static model. My reading of Dr Spencer and Dr Braswell is that they use a first order ordinary differential equation.

    Even simple harmonic motion needs a second order system, and I don’t see it as too unreasonable to use a second order coupled system for the climate problem. One ordinary differential equation for the ‘forward’ process, coupled with a second for the ‘backward’ process.

    But to do this for empirical observational data, one must engage the help of serious mathematicians who work on identification and estimation of dynamic models. The climate problem is important enough to justify resort to these admittedly rather complicated techniques.

    I think Dr Lindzen made a bad mistake with a recent paper in not using the mathematical expertise that he surely has very good access to if he pleases. I hope this time he is using more customary and transparent and canonical mathematical methods, and explaining his work carefully in ordinary language. The lucid use of ordinary language is one of the requirements for good scientific work. The canonical methods can be explained in ordinary language. The ability to explain in ordinary language is a test of the writer’s real physical understanding, a test of whether he really understands the physics. I am not proposing a mathematical mystery game; I am proposing conventional mathematics as a way of making things clear in ordinary language.

    It is part of the job of a scientist to express his findings and theories in language that other reasonably scientifically literate persons can understand: for the present problem, that means using the canonical formalisms of dynamical systems theory and the usual terminology and methods of that theory. The basic ideas of simple harmonic motion are not so tricky that the reasonably scientifically literate person cannot have them made clear to him by a good scientific writer.
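[A minimal sketch of the sort of coupled, second-order dynamic system Christopher Game describes above: two ordinary differential equations, one for a surface/mixed layer and one for a deeper reservoir, stepped forward in time. The heat capacities, feedback parameter, coupling coefficient and forcing below are illustrative numbers only, not values taken from any published model.]

```python
import numpy as np

# Two-layer energy-balance model (a coupled second-order system):
#   C1 * dT1/dt = F(t) - lam * T1 - k * (T1 - T2)   # surface / mixed layer
#   C2 * dT2/dt =                   k * (T1 - T2)   # deeper reservoir
C1, C2 = 7.0, 100.0   # heat capacities, W yr m^-2 K^-1 (illustrative)
lam = 2.0             # net feedback parameter, W m^-2 K^-1 (illustrative)
k = 0.7               # inter-layer heat exchange, W m^-2 K^-1 (illustrative)

def step(T1, T2, forcing, dt=1.0 / 12.0):
    """Advance both temperatures by one forward-Euler step of dt years."""
    dT1 = (forcing - lam * T1 - k * (T1 - T2)) / C1
    dT2 = k * (T1 - T2) / C2
    return T1 + dt * dT1, T2 + dt * dT2

# Drive with random radiative forcing; record surface temperature and the
# net radiative imbalance, the two quantities compared in the figure above.
rng = np.random.default_rng(1)
T1 = T2 = 0.0
temps, imbalances = [], []
for _ in range(1200):                     # 100 years of monthly steps
    F = rng.standard_normal()             # W m^-2 white-noise forcing
    T1, T2 = step(T1, T2, F)
    temps.append(T1)
    imbalances.append(F - lam * T1)       # net flux into the system
```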

  10. Jacob C says:

    Dr. Spencer, very nice post, as per usual.
    I was just thinking, you know what would be interesting…if you could somehow mimic the experiments done by the researchers for the IPCC on the supposed fingerprints of climate change (those six pictures of model runs for the vertical profiles of the atmosphere), but with this cloud-as-forcing concept. I wonder if it could add another bit of intrigue, since my guess is that it may actually somewhat resemble the actual data (at least more so than the GHG-only picture the alarmists continue to push).
    Just a suggestion. Have a nice one. 🙂

    Cheers,
    Jacob

  11. RW says:

    Christopher Game wrote:

    “I propose the answer:
    because they persist in thinking in terms of the IPCC “forcings and feedbacks” formalism (e.g. Schlesinger 1985, 1986, 1988, Bony et al. 2006, Roe 2009).”

    Agreed. The other thing they do is express sensitivity in units of degrees C per W/m^2 of GHG ‘forcing’, which effectively hides its applicability to solar forcing, which has a gain or ‘amplification’ of only about 1.6 at the surface.

    I’ve still yet to see anyone adequately explain why the system is all of a sudden going to respond to each 1 W/m^2 from additional CO2 more than 5 times as strongly as it does to each 1 W/m^2 from the Sun, or why additional infrared power from CO2 obeys different physics than power from the Sun.

    Furthermore, if the 3.7 W/m^2 of additional infrared from 2xCO2 at the surface is treated the same as each 1 W/m^2 from the Sun, that leaves a deficit of over 10 W/m^2 relative to the 16+ W/m^2 needed at the surface for a 3 C rise in temperature. This raises the question: where is all the energy coming from that is supposed to be causing the warming?
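[For readers trying to follow RW’s numbers, the arithmetic he appears to be doing can be reconstructed with the Stefan–Boltzmann law as below. Whether this is a valid way to reason about climate sensitivity is precisely what is disputed in this thread; the sketch only makes the calculation explicit.]

```python
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4

def emission(T):
    """Blackbody surface emission (W/m^2) at temperature T (K)."""
    return SIGMA * T ** 4

extra_needed = emission(291.0) - emission(288.0)   # ~16.5 W/m^2 for a +3 C surface
solar_gain = 390.0 / 239.0                         # ~1.63, RW's surface 'amplification'
from_2xco2 = 3.7 * solar_gain                      # ~6 W/m^2 if CO2 forcing is treated like solar

print(f"extra surface flux needed for +3 C: {extra_needed:.1f} W/m^2")
print(f"supplied at the solar gain:         {from_2xco2:.1f} W/m^2")
print(f"the 'deficit' RW refers to:         {extra_needed - from_2xco2:.1f} W/m^2")
```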

  12. lgl says:

    Why do people always look at radiation versus temperature, where there is not supposed to be a good correlation because of the heat capacity? Radiation versus dT is much better: there is no time lag and the seasons don’t mess things up. http://virakkraft.com/SW-dSST.png
    (mid 2007 to mid 2010, corr. 0.7)
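[A rough sketch of the comparison lgl is pointing to: correlating a shortwave-flux series with the month-to-month change in sea surface temperature rather than with temperature itself. The alignment and the use of a simple first difference are assumptions; the linked plot may have been prepared differently.]

```python
import numpy as np

def flux_vs_dT_correlation(sw_flux, sst):
    """Pearson correlation between monthly SW flux and the SST change
    over the same month (first difference of the SST series)."""
    dT = np.diff(sst)                                # K per month
    return float(np.corrcoef(sw_flux[:-1], dT)[0, 1])
```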

  13. lgl says:

    RW

    “‘amplification’ of only about 1.6 at the surface.”
    How? I’m getting (output from the surface)/(solar input to the surface) = 500(W/m2)/~200(W/m2) = 2.5
    where 200 is 165 directly + absorbed and reemitted by the atmosphere.

  14. Fred from Canuckistan says:

    There is only one model that really works and it belongs to Mother Nature.

    It is called the Reality Model.

    Buy long underwear and snow shovels. Mother Nature is telling us the current lovely little time gap between Ice Ages is very, very, very long in the tooth and it is time to teach those pesky humans the same lesson as before.

    Ice Ages rule.

    Life is a series of lessons.
    Lessons will be repeated until learned.

  15. The IPCC is no closer because they are clueless.

    They don’t consider external forces such as solar activity, or for that matter, even volcanic activity here on Earth.

    Their ridiculous global man-made warming theory will be going up in smoke.

    Correction, it has already gone up in smoke, since their pathetic models have been forecasting an ever increasing +AO oscillation to evolve as a result of global man-made warming (a joke), while in reality, due to low solar activity and an increase in high-latitude volcanic activity, the atmospheric circulation has been evolving into an ever increasing -AO.

    This was predicted well in advance of what we now have (Joe D’Aleo / Piers Corbyn), the reasons being low solar activity and high-latitude volcanic activity. I reached this conclusion myself two years ago, and now it is coming to fruition.

    As far as clouds go, they will fall into place, because they are to a large degree a consequence of what type of atmospheric circulation we have. The atmospheric circulation is the key, not the clouds themselves.

    NEG. AO OR MERIDIONAL CIRC. EQUALS MORE CLOUDS, MORE COOLING, NOT TO FORGET SNOW COVER INCREASE WITH THIS TYPE OF CIRCULATION.

    -AO EQUALS COOLING FOR N.H. POSITIVE AO EQUALS WARMING FOR N.H.

  16. The discussions on this site are such that one can’t see the forest for the trees.

    They are on such a narrow micro level, not looking at the big picture and taking into account the many, many items that influence the climate.

    Not that this is right or wrong; it is just too narrow in my opinion, because this area is the one and only area that is ever discussed when it comes to the climate. It is as if nothing else matters.

  17. With that said, I will study and try to comprehend what Dr. Spencer has said. Of course I will, because it is a part of the puzzle.

    EVERYTHING HAS TO BE TAKEN INTO ACCOUNT

    I AM GOING TO PRINT A COPY OF THIS RIGHT NOW.

  18. Stephen says:

    Have no fear. James Hansen has said that it is a trivial matter for humans to prevent the next ice age, whenever it should appear. We just need to build a single CFC factory or something.

  19. Stephen says:

    oops; that was intended as a reply to Fred from Canuckistan

    🙂

  20. MAK’s comments are good. This is very complicated, but interesting.

  21. I understand it now. This is good. Dr. Spencer is saying the models show the Earth’s temperature response to be more sensitive to radiation changes than what is actually observed, especially after 3 months.

    This just brings out another flaw of the flawed models the global warmers use. The list goes on.

    I hope Dr. Spencer believes in the other arguments for global cooling, but I realize that he wants to concentrate on this particular area.

  22. Thank you for a very good article Dr. Spencer.

    In this article you ask: “SO WHICH OF THE CLIMATE MODELS IS MORE LIKELY TO BE CORRECT?”
    If you ask me, then my answer must be: “NONE OF THEM” – and why do I say that? It is quite simple: because on January 3rd, 2011, under the heading “Dec. 2010 UAH Global Temperature Update: +0.18 deg. C”, you wrote:
    “WHO WINS THE RACE FOR WARMEST YEAR?” –
    “As far as the race for warmest year goes, 1998 (+0.424 deg. C) barely edged out 2010 (+0.411 deg. C), but the difference (0.01 deg. C) is nowhere near statistically significant. So feel free to use or misuse those statistics to your heart’s content.”

    Now then, the British Met Office has gone one better and admitted officially that 2010 has tied with 1998 as the warmest year since records began.
    Whether they realize it or not, they are now saying what you and other ‘skeptical scientists’ have said for a long time, which is that there has been no ‘Global Warming’ since 1998. Stands to reason, does it not? There must have been 12 years between those two years (1998 – 2010) when no warming occurred.
    OHD

    • Alex says:

      Olav,

      You cannot state that, because 1998 and 2010 are technically tied, “there was no warming”. In 1998 we had a mega-warm event in the Pacific, whereas in 2010 we had a shift from a moderate El Niño to a La Niña that dominated most of the period. Second, if you pick from 1997 to 2010 you would be talking about a very accelerated warming, so connecting the dots from one single year to another has no meaning. You have to take moving averages (5-yr to 10-yr moving averages) to actually arrive at a valid conclusion. Because all but one year in the last decade ranked in the top ten on record, it is clear that warming continues. Or, alternatively, try to connect the dots in a temperature chart choosing the first and last year by chance in the last two decades to see how many “warming”, “cooling” and “steady” tendency lines you would get.
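[A quick illustration of the running-mean smoothing Alex recommends before drawing trend conclusions; the window length and the anomaly values below are arbitrary illustrative numbers, not real data.]

```python
import numpy as np

def running_mean(annual_anomalies, window=5):
    """Centered running mean; returns one value per complete window."""
    kernel = np.ones(window) / window
    return np.convolve(annual_anomalies, kernel, mode="valid")

anoms = [0.21, 0.28, 0.33, 0.29, 0.37, 0.40, 0.46, 0.43, 0.41, 0.49]  # illustrative
print(running_mean(anoms, window=5))
```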

  23. That should also answer the other question: “The IPCC is no nearer to answering that question than they were 20 years ago. Why?”

    OHD

  24. RW says:

    lgl says:

    “‘amplification’ of only about 1.6 at the surface.”
    How? I’m getting (output from the surface)/(solar input to the surface) = 500(W/m2)/~200(W/m2) = 2.5
    where 200 is 165 directly + absorbed and reemitted by the atmosphere.”

    The post albedo power from the Sun is about 239 W/m^2, and the surface power is about 390 W/m^2 (288K). 390/239 = 1.63

  25. lgl says:

    RW

    But there is 100 watts of evaporation and thermals in addition (or 161 SW and 333 LW input, according to Trenberth) so the amplification must at least be 494/239

    • RW says:

      lgl says:

      “But there is 100 watts of evaporation and thermals in addition (or 161 SW and 333 LW input, according to Trenberth) so the amplification must at least be 494/239”

      The evaporation and thermals are moving heat away from the surface. If the power at the surface was 494 W/m^2, the equivalent temperature would be over 32 C!

  26. Alex says:

    Hank,

    Have you ever looked at the CVs of the leading authors of the last and next IPCC reports, especially the ones involved with the physical basis? Most of them have degrees in Atmospheric Science, Meteorology, Oceanography, Physics and Mathematics (as it is common to have people move from one of the two latter disciplines to the first, as in my case). Have you checked the courses in the undergrad and graduate programs from which they got their degrees? I would suggest that before you even speculate on that, you check at least part of the real world first, ok?

  27. Alex says:

    Roy,

    As far as I understood, the graph was obtained by calculating the radiative budget around “local” temperature maxima. How were they determined? After removal of, or including, the annual cycle? Is there a threshold to define those maxima? Is the analysis dominated by interannual variability? And what was the period of the observations? The models apparently performed well before the peak and diverged after it (with one of the most sensitive models actually surpassing the least sensitive models in radiative cooling after a few months), but if the entire 20C3M runs were used, the model data refer to the entire century, with a chance of having more peaks in the model series than in the observations.

    I would be quite curious to see what happens around temperature valleys, i.e., to see what the response of both observations and models is in the opposite case.

    Second question: how about the rest of the models? I became quite curious to see the entire envelope. There was no clearly discernible difference in the behavior of the models during the warming phase, and at least one of the high-sensitivity models produces a time-integrated cooling bigger than two of the least sensitive models.

  28. lgl says:

    RW

    There is 494 W/m2 to the surface, if Kiehl & Trenberth are right, so that’s the number you should use (and not the output like I first did)

    • RW says:

      They are showing latent heat and thermals constantly moving heat away from the surface. Most importantly, there is not 494 W/m^2 at the surface – but only 390 W/m^2.

  29. lgl says:

    RW

    To make sense the amplification must be (total input)/(solar input) and then the numbers are 494/239 (or more like 494/200)

  30. Alex says: January 29, 2011 at 5:11 AM
    “Olav, You cannot state that because 1998 and 2010 are technically tied that “there was no warming”.”

    The statement belongs to the UK Met Office, and all their accompanying data confirm that there has been a “flat period” or no warming since 1998. Unfortunately I cannot reproduce their graph here, which shows the same. Below, if I may, is the whole shooting match, which of course was meant to show something completely different from what it did show.

    “2010 — a near-record year
    20 January 2011 — The Met Office and the University of East Anglia have today released provisional global temperature figures for 2010, which show the year to be the second warmest on record.
    With a mean temperature of 14.50 °C, 2010 becomes the second warmest year on record, after 1998. The record is maintained by the Met Office and the Climatic Research Unit at UEA.
    Earlier this month, in the US, NASA’s Goddard Institute for Space Studies and NOAA’s National Climatic Data Center announced that the past year is either warmest or equal-warmest on their respective records.
    Events in the Pacific Ocean have heavily influenced the global temperature in 2010. The year began in El Niño conditions, which have a warming effect. But the El Niño was replaced by a very strong La Niña – the strongest for more than 30 years – which acts to cool the climate.
    Dr Adam Scaife, head of long range forecasting at the Met Office, said: “The three leading global temperature datasets show that 2010 is clearly warmer than 2009. They also show that 2010 is the warmest or second warmest year on record as suggested in the Met Office’s annual forecast of global temperature issued in December 2009.”
    Speaking about the figures, Professor Phil Jones, Director of Research at the Climatic Research Unit of the University of East Anglia said: “The warmest 10 years in all three datasets are the same and have all occurred since 1998. The last 10 years 2001-2010 were warmer than the previous 10 years (1991-2000) by 0.2 °C.”
    2010 has been a year of headline-making weather. In the summer there were extremes such as the Russian heatwave and the floods in Pakistan and China. At the end of the year many areas across Northern Europe experienced heavy snowfalls and very low temperatures, while eastern Australia saw extensive flooding.
    Professor Julia Slingo investigates the driving forces behind the weather extremes of 2010.
    Locally, the UK recorded its coldest year since 1986 and its coldest December on record. However, very few parts of the world were significantly colder than normal during 2010. The Northern Hemisphere experienced its warmest year with a mean temperature anomaly of 0.69 °C.

    Notes:
    • The 1961-90 global average mean temperature is 14.0 °C.
    • Inter-annual variations of global surface temperature are strongly affected by the warming influences of El Niño and the cooling influences of La Niña in the Pacific Ocean. These are quite small when compared to the total global warming since 1900 of about 0.8 °C but, nevertheless, typically reach about +/- 0.15 °C, and can strongly influence individual years.
    • Temperature anomaly for the Southern Hemisphere is 0.30 °C, the fifth warmest on the HadCRUT record.
    • The table provides the top 10 rankings for all three datasets:
    Rank   HadCRUT3          NOAA NCDC         NASA GISS
           Year   Anomaly*   Year   Anomaly*   Year   Anomaly*
    1      1998   0.52       2010   0.52       2010   0.56
    2      2010   0.50       2005   0.52       2005   0.55
    3      2005   0.47       1998   0.50       2007   0.51
    4      2003   0.46       2003   0.49       2009   0.50
    5      2002   0.46       2002   0.48       2002   0.49
    6      2009   0.44       2006   0.46       1998   0.49
    7      2004   0.43       2009   0.46       2006   0.48
    8      2006   0.43       2007   0.45       2003   0.48
    9      2007   0.40       2004   0.45       2004   0.41
    10     2001   0.40       2001   0.42       2001   0.40
    • Anomaly: °C above long-term average.
    Last Updated: 20 January 2011″

    According to these figures it looks like a pretty flat 12-year period to me, say between 0.16° max. and 0.10° min. between 1998 (strong El Niño) and 2001 (lowest temp). Of course nothing is said about the “Solar Constant” or any other factors. But El Niño is happily included, except in 2010, when La Niña also had to be included.

  31. Mervhob says:

    I agree with Christopher Game: the impression given by the IPCC language is that their mathematical approach is inept, out of date, and rather akin to the approach you might expect from rather poorly educated undergraduates, or from computer modellers, most of whose grasp of the underlying limitations inherent in mathematics is often poor.
    Drs Spencer and Braswell are at least aware of the limitations – like 17th century scientists they question the lack of good supporting experimental evidence and search out the flaws in the model, rather than bowing down to the Moloch of serial computing. I had the misfortune to watch on British TV a truly dreadful piece of propaganda where a prominent Nobel prize biologist was shown by NASA two videos running concurrently – one of an alleged climate model, the other supposedly real-time data. The NASA ‘scientist’ stated, ‘Here is the model, and here is the real-time data – look how closely they correspond.’ Yes, we can all find parts of data sets that, when compared with parts of other data sets, look remarkably similar. The blatant dishonesty was the statement that this was real-time data, which completely fooled the poor biologist! We do not possess computer systems that can process the data from satellites in real time and put it into a whole-planet model in real time. There is not enough computational bandwidth to do that, and the orbital coverage of the satellites does not allow it! I began to wonder if I was watching a sci-fi movie, not serious science.

  32. Dendold says:

    Are you able to please send me a mail. I actually like your design.
