The UAH Global Temperature Dataset at 30 Years: A Look Back at the Early Days

March 30th, 2020 by Roy W. Spencer, Ph.D.

Today (Monday, March 30) is the 30th anniversary of our publication in Science describing the first satellite-based dataset for climate monitoring.

While much has happened in the last 30 years, I thought it might be interesting for people to know what led up to the dataset’s development, and some of the politics and behind-the-scenes happenings in the early days. What follows is in approximate chronological order, and is admittedly from my own perspective. John Christy might have somewhat different recollections of these events.

Some of what follows might surprise you, some of it is humorous, and I also wanted to give credit to some of the other players. Without their help, influence, and foresight, the satellite temperature dataset might never have been developed.

Spencer & Christy Backgrounds

In the late 1980s John Christy and I were contractors at NASA/MSFC in Huntsville, AL, working in the Atmospheric Sciences Division where NASA managers and researchers were trying to expand beyond their original mission, which was weather support for Space Shuttle launches. NASA/MSFC manager Gregory S. Wilson was a central figure in our hiring and encouragement of our work.

I came from the University of Wisconsin-Madison with a Ph.D. in Meteorology, specializing in the energetics of African easterly waves (the precursors of most Atlantic hurricanes). I then did post-doc work there in the satellite remote sensing of precipitation using microwave radiometers. John Christy received his Ph.D. in Atmospheric Science from the University of Illinois, where he did his research on the global surface atmospheric pressure field. John had experience in analyzing global datasets for climate research, and was hired to assist Pete Robertson (NASA) with data analysis. I was hired to develop new microwave satellite remote sensing projects for the Space Shuttle and the Space Station.

James Hansen’s Congressional Testimony, and Our First Data Processing

In 1988, NASA’s James Hansen testified for then-Senator Al Gore, Jr., testimony which, more than any single event, thrust global warming into the collective consciousness of society. We were at a NASA meeting in New Hampshire. As I recall, on the plane ride up UAH’s Dick McNider had just read a draft of a paper by Kevin Trenberth, given to him by John Christy (who had been a Trenberth student), that discussed many issues with the sparse surface temperature data for detecting climate change.

During lunch Dick asked whether, given all the problems with global coverage and station siting in the surface datasets that Trenberth had discussed, there wasn’t satellite data that could be used to investigate Hansen’s global claims. NASA HQ manager James Dodge was there and expressed immediate interest in funding such a research project.

I said, yes, such data existed from the NOAA/NASA/JPL Microwave Sounding Unit (MSU) instruments, but it would be difficult to access approximately 10 years of global data. Note that this was before there was routine internet access to large digital datasets, and ordering data from the government carried a very large price tag. No one purchased many years of global data; it came on 6250 bpi computer tapes, each containing approximately 100 MB of data, and computers then were pretty slow. The data we wanted was from NOAA satellites, and NOAA would reuse these large (10.5 inch) IBM tapes rather than keep the old data tapes around using up storage space.

It turns out that Roy Jenne, who worked on data systems at the NSF’s National Center for Atmospheric Research (NCAR) in Boulder, had years before taken it upon himself to archive the old NOAA satellite data before it was lost altogether. He kept the data on a “mass storage system” (very large and inefficient by today’s standards), and I believe it was through Greg Wilson that John Christy made the connection to gain us access to those data.

We obtained somewhat less than 10 years of data from NCAR, and I decided how to best calibrate it and average it into a more manageable space/time resolution. I had frequent contact with JPL engineers who built the MSU instruments, Fred Soltis in particular, who along with Norman Grody at NOAA provided me with calibration data for the MSU instruments flying on different satellites.

We enlisted John Christy to analyze those data since he brought considerable experience with diagnosing global datasets for climate purposes. One of the first things John discovered was from comparing global averages from different satellites in different orbits: They gave surprisingly similar answers in terms of year-to-year temperature variability. This was quite unexpected and demonstrated that the MSU instruments had high calibration stability, at least over a few years. It also demonstrated that NOAA’s practice of adjusting satellite data with radiosondes (weather balloons) was backwards: the differences others had seen between the two systems were due to poor spatial sampling by the radiosondes, not due to changes in the satellite calibration stability.

In addition to the critical historical data archived by Roy Jenne at NCAR, we would need some of the more recent satellite data that was kept at NOAA. We didn’t have quite ten years of data, and an editor at Science magazine wanted ten full years of data before they would publish our first findings. We were able to order more data from NOAA to get the first 10 years’ worth (1979 through 1988), and Science accepted our paper.

The NASA Press Conference

On March 29, 1990 we held a “media availability” at the communications center at NASA/MSFC. For some reason, NASA would not allow it to be called a full-fledged “press conference”. As I recall, attendance was heavy (by Huntsville standards) and there was no place for me to park but in the grass, for which I was awarded a parking ticket by NASA security. JPL flew a remaining copy of the MSU instrument in as a prop; it had its own seat on a commercial flight from Pasadena.

Jay Leno would later mention our news conference in his monologue, and Joan Lunden covered it on Good Morning America. While we watched Ms. Lunden on a monitor the next morning, a NASA scientist remarked that he was too distracted by her long, slender legs to listen to what she was saying.

Our 1990 Senate Testimony for Gore

After we published our first research results on March 30, 1990, we received an invitation to testify for Al Gore in a Senate committee hearing in October 1990 on the subject of coral bleaching. Phil Jones from the University of East Anglia was also there to testify.

As people filed into the hearing room, I saw a C-SPAN camera being set up, and having noticed that Al Gore seemed to be the only committee member in attendance, I asked the cameraman about the lack of interest from other senators. He said something like, “Oh, Senator Gore likes it this way… he gets all the media attention.”

We still used overhead projectors back then, with viewgraphs, and I thought I’d better check out the equipment. The projector turned out to be seriously out of focus, and the focus adjustment on the arm would not fix it. I remember thinking to myself, “this seems pretty shoddy for Congress”.

Senator Gore launched into some introductory remarks while looking at me as I struggled with the projector. From his comments, he was obviously assuming I was Phil Jones (who was supposed to go first, and who Gore said he had previously visited in England). I thought to myself, this is getting strange. Just in time, I realized the projector arm was bent slightly out of alignment, I bent it back, and took my seat while Phil Jones presented his material.

Our testimony, which was rather uneventful, led to the traditional letter of thanks from Gore for supporting the hearing. In that letter, Gore expressed interest in additional results as they became available.

So, when it came time to get the necessary additional satellite data out of NOAA, I dropped Gore’s name to a manager at NOAA who suddenly became interested in providing everything they had to us at no charge… rather than us having to pay tens of thousands of our research dollars.

Hundreds of Computer Tapes and an Old Honda Civic

It might seem absurd to today’s young scientists, but it was not an easy task to process large amounts of digital data in the late 1980s. I received box after box of 9-track computer tapes in the mail from NOAA. Every few days, I would load them up in my old, high-mileage, barely-running 2-door Honda Civic and cart them over to the computer center at MSFC.

NASA’s Greg Wilson had gotten permission to use the computer facility for the task. At that time, most of the computer power was being taken up by engineers modeling the fuel flows within the Space Shuttle main engines. As I added more data and processed it, I would pass the averages on to John Christy who would then work his analysis magic on them.

I don’t recall how many years we would use this tape-in-the-mail ordering system. Most if not all of those tapes now reside in a Huntsville landfill. After many years of storage and hauling them from one office location to another during our moves, I decided there was no point in keeping them any longer.

A Call from the White House, and the First Hubble Space Telescope Image

Also in 1990, John Sununu, White House Chief of Staff to President George H. W. Bush, had taken notice of our work and invited us to come up to DC for a briefing.

We first had to bring the NASA Administrator V. Adm. Richard Truly up to speed. Truly was quite interested and was trying to make sure he understood my points by repeating them back to me. In my nervousness, I was apparently interrupting him by finishing his sentences, and he finally told me to “shut up”. So, I shut up.

The next stop was the office of the Associate Administrator, Lennard Fisk. While we were briefing Fisk, an assistant came in to show him the first image just collected by the Hubble Space Telescope (HST). This was before anyone realized the HST was mis-assembled and was out of focus. In retrospect, it was quite a fortuitous coincidence that we were there to witness such an event.

As the day progressed, and no call was coming in from the White House, Dr. Fisk seemed increasingly nervous. I was getting the impression he really did not want us to be briefing the White House on something as important as climate change. In those days, before NASA’s James Hansen made it a habit, no scientists were allowed to talk to politicians without heavy grooming by NASA management.

As the years went by, we would learn that the lack of substantial warming in the satellite data was probably hurting NASA’s selling of ‘Mission to Planet Earth’ to Congress. The bigger the perceived problem, the more money a government agency can extract from Congress to research the problem. At one point a NASA HQ manager would end up yelling at us in frustration for hurting the effort.

Late in the afternoon the call finally came in from the White House for us to visit, at which point Dr. Fisk told them, “Sorry, but they just left to return to Huntsville”, as he ushered us out the door. Dr. Wilson swore me to secrecy regarding the matter. (I talked with John Sununu at a Heartland Institute meeting a few years ago but forgot to ask him if he remembered this course of events). This would probably be – to me, at least – the most surreal and memorable day of our 30+ years of experiences related to the satellite temperature dataset.

After 1990

In subsequent years, John Christy would assume the central role in promoting the satellite dataset, including extensive comparisons of our data to radiosonde data, while I spent most of my time on other NASA projects I was involved in. But once a month for the next 30 years, we would process the most recent satellite data with our separate computer codes, passing the results back and forth and posting them for public access.

Only with our most recent Version 6 dataset procedures would those codes be entirely re-written by a single person (Danny Braswell) who had professional software development experience. In 2001, after tiring of being told by NASA management what I could and could not say in congressional testimony, I resigned from NASA and continued my NASA projects as a UAH employee in the Earth System Science Center, of which John Christy serves as director (he is also the Alabama State Climatologist).

At this point, neither John nor I have retirement plans, and we will continue to update the UAH dataset every month for the foreseeable future.


71 Responses to “The UAH Global Temperature Dataset at 30 Years: A Look Back at the Early Days”


  1. Mark D says:

    Thanks for the great work. One thing that really interests me is taking advantage of the coronavirus output decrease to calculate the human impact on climate. Is there some climate variable you would expect to respond to the decrease in industrial and travel pollution? Can you use your data to make some sort of prediction? This seems like a once-in-a-lifetime ‘experiment’.

    • Roy W. Spencer says:

      I suspect that the CO2 decrease will be too small and too temporary to have a measurable effect, although in a climate model you might be able to do some clever experiment that can see a change at the 0.01 deg. C level.

      • Windy says:

        “Clever” as in deceptively meaningless?

      • Entropic man says:

        I very much doubt that the short term reduction in CO2 emissions will have a measurable effect.

        Reduced albedo is likely to cause more short term change, perhaps enough to push 2020 above the 2016 record.

        For those interested I am offering odds of 5/4 for a 2020 record and 2/1 against.

        • tonyM says:

          At those odds you will be buried in the dust as punters scramble to take you on (given how you have stated the odds).

          • studentb says:

            Yes, EM is being too generous.
            The implied probabilities on offer are 4/9 ≈ 44% and 1/3 ≈ 33%.
            Total ≈ 77%, when it should add up to 100% in an unbiased framework.

            To illustrate:
            I can invest $40 with EM for a record to collect $90 (i.e. win $50)
            At the same time I can invest $40 for NOT a record to collect $120 (i.e. win $80).

            i.e. I can bet a total of $80 with EM and be assured of collecting either $90 (win $10) or $120 (win $40) no matter what happens.

          • tonyM says:

            You mean to say you actually offered those odds in practice? You may make a good clerk but certainly a lousy bookmaker. Perhaps your brain has been subjected to too much entropy. I jest but sometimes you do come out with some doozies.

            Nothing is being gamed here by the punters. You haven’t grasped what is involved. Odds are just another way of expressing a probability. StudentB has shown you what the implications are; the Pr(A) + Pr(NOT A) add up to much less than 1.0. Bookmakers try to ensure that they add to greater than 1.0. Your odds allow the punters to make a book like the bookie…a perfect book against you.
            Bet for A : $44.4 @ 5/4 will return to punter $100
            Bet for NOT A : $33.3 @ 2/1 will return punter $100.

            So $77.7 will result in a punter always receiving $100 whatever the outcome (leave aside minuscule dead-heat).

            The odds (probabilities) offered by you allow a punter to make 29% on his turnover. Any bookie would be ecstatic if he could achieve this; you call it gaming the system.
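
            The arithmetic studentb and tonyM describe can be checked mechanically. Here is a minimal sketch in Python (the helper name `implied_prob` is illustrative, not from any betting library): fractional odds of a/b imply a probability of b/(a+b), and when the two implied probabilities sum to less than 1 a punter can stake in proportion to them and lock in a profit either way.

            ```python
            from fractions import Fraction

            def implied_prob(num, den):
                """Fractional odds num/den imply a probability of den/(num + den)."""
                return Fraction(den, num + den)

            # Entropic man's offered odds: 5/4 for a 2020 record, 2/1 against.
            p_record = implied_prob(5, 4)     # 4/9, about 44%
            p_no_record = implied_prob(2, 1)  # 1/3, about 33%
            total = p_record + p_no_record    # 7/9, about 78% (less than 100%)

            # Stake in proportion to the implied probabilities so that either
            # outcome returns the same amount (scaled here to a $100 return).
            payout = 100.0
            stake_record = float(p_record) * payout        # ~$44.44 @ 5/4 -> $100 back
            stake_no_record = float(p_no_record) * payout  # ~$33.33 @ 2/1 -> $100 back
            outlay = stake_record + stake_no_record        # ~$77.78 staked in total

            print(f"total implied probability: {float(total):.1%}")
            print(f"risk-free profit on ${outlay:.2f} staked: ${payout - outlay:.2f} "
                  f"({payout / outlay - 1:.0%} of turnover)")
            ```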

        • Entropic man says:

          Truly I would make a terrible bookie’s clerk.

          I thought of this in terms of probabilities. I hadn’t thought about those gaming the system.

          I offered this bet for fun on a non-climate website. Eight people bet for the record and three against.


          • Entropic man says:

            in my case eight people bet 1 each for a new record and three against.

            I took in 11.

            In the event of a new record I would pay out 8*1.25=12 and lose 1.

            If no record appeared I would have paid 3*2=6 and made a profit of 5. I won’t give up the day job.

          • tonyM says:

            EM:
            I will laugh because it is a happier state to be in, but there is room to cry at what you have just said. You have swallowed more entropy pills rather than thinking through the explanation. Rather than your claimed payout of 12 with a loss of 1 unit and payout of 6 with a gain of 5 units, the real result is as follows:

            Total bets collected = 11.
            If NEW record payout is: 8 x 2.25 = 18 : hence a loss of 7 units

            If NO record payout is: 3 x 3 = 9 : hence a gain of 2 units

            (A punter receives his stake back should he win – I showed you in my calcs earlier how either bet returns 100 units.)
            You were fortunate these punters did not understand the opportunity you offered; being stripped to underpants would not be fun. Agree with you; don’t give up your day job.

            Changing one word in your original proposal would turn you into a bookie, viz:

            ‘For those interested I am offering odds of 5/4 for a 2020 record and 2/1 ON against.’
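
            The same sketch extends to the book Entropic man reports actually making (eight bets of 1 unit each at 5/4, three at 2/1). The key point in tonyM’s correction is that a winning punter collects his stake back on top of the winnings; this toy calculation (illustrative helper name, not a real betting API) reproduces the corrected numbers:

            ```python
            def total_return_per_unit(num, den):
                """What a winning punter collects per unit staked at fractional odds
                num/den: the original stake back, plus num/den in winnings."""
                return 1 + num / den

            takings = 8 + 3  # eleven 1-unit bets collected

            record_payout = 8 * total_return_per_unit(5, 4)     # 8 x 2.25 = 18
            no_record_payout = 3 * total_return_per_unit(2, 1)  # 3 x 3.00 = 9

            print(takings - record_payout)     # -7.0: a 7-unit loss if the record falls
            print(takings - no_record_payout)  #  2.0: a 2-unit gain if it doesn't
            ```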

      • Eben says:

        In a “model” you can achieve any desired effect you choose.

  2. RW says:

    Glad you have no plans to retire. We need you and John to keep everyone honest with these measurements.

  3. Tom Tucker says:

    Great news that you don’t have retirement plans.
    I worry about what will happen to this essential data source when you leave.

  4. m d mill says:

    Thank you for your service!

  5. Tim Rhyne says:

    A sincere thank you to Dr. Christy and yourself. I hope that you still “feel the burn” (and not the Bern) for years to come. BTW, in two days I finish my self-quarantine after picking up my daughter at UAH. She, and her parents, love your school.

  6. John Nicol says:

    I was hoping to see a graph showing atmospheric temperature changes over the now 40 years since 1979! Could you include such a figure in this presentation please? Thank you John N

  7. Aaron S says:

    Happy 30 year celebration. Congratulations on excellent contributions to science. I believe that the satellite data is the gold standard in global temperature estimates because of its coverage, repeatability, and consistency; without the satellite data this entire climate change topic would be extremely one-sided (rather than moderately).

  8. Bruce Kay says:

    Judging by the comments of “the gold standard of temperature estimates”, you might want to add some more detail about error, knowing how serious you are about accuracy in the scientific record.

    • Gordon Robertson says:

      bruce…”judging by the comments of the gold standard of temperature estimates, you might want to add some more detail about error”.

      It has been covered ad nauseam by UAH, and any errors have been insignificant. The major orbital error was claimed by John Christy, circa 2005, to be less than the estimated error margin, and it applied only in the tropics.

      Significant sat errors are fiction created by climate alarmists because they lack any other scientific evidence. If you want to talk about errors, talk about the fudging NOAA has built into the surface record.

      • nurse ratchet says:

        Get back to your room!

      • Bruce Kay says:

        Well, if they were so insignificant, why did Spencer’s errors result in “cooling” as opposed to “warming”? As those erroneous findings were presented as evidence for “skepticism”, it turns out they were themselves deserving of the skeptic’s gaze, which is exactly how the error was discovered – certainly Spencer and Christy weren’t the first to pick it out, as far as the record shows.

        Either way, if inclined to spend time and text describing NASA’s errors or someone’s random speculations about Gore’s motives, you’d think a blog devoted to scientific credibility might not want to forget its own “errors”.

        • Gordon Robertson says:

          bruce…”well, if they were so insignificant, why did Spencer’s errors result in cooling as opposed to warming?”

          According to who? Let me guess, Gavin Schmidt of climate alarm central at NASA GISS and realclimate? Climategate’s Kevin Trenberth of climate alarm south at NCAR? Or maybe an alarmist at NOAA?

          No scientist without an alarmist axe to grind finds anything amiss in the UAH data.

          • Bruce Kay says:

            Since when does it matter who? The facts are as stated and accepted by Spencer himself, and even you wouldn’t deny that the erroneous observations, prior to their identification, were leveraged to full advantage in the political realm as doubt in the science.

            Well, it turns out the doubt was justified. Just levelled at the wrong guys, by those who most certainly have axes to grind. The record is what it is. Spencer and Christy made errors. Their peers noted them. They made the required adjustments, numerous times, always in a warming direction over decades of observation, to the point that they were no longer such outliers relative to the competing observations.

            As you might characterize it if they were other people…..

            they willingly and knowingly “fudged” the numbers, to fit an agenda

        • barry says:

          Some early papers written by Spencer and Christy had cooling trends, such as this one. This was for the period January 1979 to April 1997, and the trend was -0.049 C/decade (+/- 0.05). This paper revised the previous trend upwards by +0.03C.

          In the current data set, UAH6.0, the mean trend is now positive for the same period: 0.08C/decade, a difference of 0.13C per decade, which coincidentally is the current trend for the whole series.

          Prior to 2005, UAH satellite data presented much lower global temp trends than the surface data sets. Corrections Spencer and Christy made from 2005 brought UAH trends closer to the other well-known data sets.
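
          For anyone wanting to reproduce numbers like these, a decadal trend is just an ordinary least-squares slope over a monthly anomaly series, scaled to 120 months. A minimal sketch with synthetic data (the `decadal_trend` helper and the input series are illustrative; the published monthly UAH anomaly files would be the real input):

          ```python
          import numpy as np

          def decadal_trend(anomalies):
              """OLS slope of a monthly anomaly series, in deg C per decade."""
              months = np.arange(len(anomalies))
              slope_per_month = np.polyfit(months, anomalies, 1)[0]
              return slope_per_month * 120.0

          # Synthetic series: Jan 1979 - Apr 1997 is 220 months.
          rng = np.random.default_rng(0)
          t = np.arange(220)
          series = -0.049 / 120.0 * t + rng.normal(0.0, 0.1, t.size)  # slight cooling plus noise
          print(f"fitted trend: {decadal_trend(series):+.3f} C/decade")

          # The difference barry quotes between UAH versions for the same period:
          print(f"0.08 - (-0.049) = {0.08 - (-0.049):+.3f} C/decade")
          ```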

  9. gallopingcamel says:

    Thanks for sharing that. It used to be that scientists were like wild ducks. Sadly, NASA and James Hansen taught them to fly in formation.

  10. Gordon Robertson says:

    Roy…thanks for the history….very interesting.

    And thanks to you and John for persevering in the face of unwarranted and unfair criticism by climate alarmists. I still recall Hillary Clinton, arms folded, glaring at John as he testified at a hearing. Nothing like politicians with an open mind.

    I was somewhat perturbed the other day to see Donald Trump taking advice from Anthony Fauci. Fauci is a fossil who depends on computer models to project 100,000 to 200,000 deaths from covid in the States. He needs to be put out to pasture.

    Fauci reminds me of the climate modelling fossils who have made life uncomfortable for you and John.

    • Entropic man says:

      You don’t need models. Primary school arithmetic is enough.

      Eventually SARSCoV-2 will infect 70% of the US population.

      Of those infected 10% will need hospital treatment and 1% will die.

      With 300 million people in the US that becomes 210 million people infected, 21 million in hospital and 2.1 million dead.

      Anthony Fauci is trying to introduce you to the full horror gradually.

      • gallopingcamel says:

        @Entropic,
        Sadly, if nothing changes you are all too likely to be right. Even if Fauci is right about the peak being reached around Easter there could still be 60 million cases in the USA within a year.

        I won’t speculate about how many deaths there will be given that several promising therapies are likely to be available in weeks rather than months. This is an opportunity for “Science to the Rescue”. If the number of deaths turns out to be less than 200,000 there should be a victory parade to honor the medical professionals who made it possible.

        • Gordon Robertson says:

          Cam…” Even if Fauci is right about the peak being reached around Easter there could still be 60 million cases in the USA within a year…”

          No one knows what the tests are testing for. Some test for antibodies and antibodies can’t be related to a virus. Others test for RNA found in infected people. Trouble is, every human has RNA and it can come from dying cells.

          This is a replay of the projected and debunked HIV/AIDS pandemic that never happened. The same pseudo-science is at work here with Fauci and he was dead wrong with his projections about HIV.

          There’s no denying something is wrong re a contagious infection, but there’s also no way to verify positivity for a virus called covid19 because it has never been properly isolated, purified, and photographed.

          The models being used presume an infection from a virus so there’s no way to determine via projection how serious the outcome may be. Meantime the world’s economy collapses due to sheer speculation.

          It’s not rocket science to follow that protocol, so why was it never done with HIV, SARS, H1N1 and now covid19? The answer seems simple: they can’t find a virus.

          In the early part of the 20th century, a mysterious pellagra outbreak occurred in the southern US States. A government agent on the scene immediately diagnosed the cause as nutrition. It took the establishment close to 30 years, while searching for a viral cause, to finally admit it was a lack of nutrition. The idiots would not concede that till the requisite B-vitamin was discovered, a lack of which causes pellagra.

          I fear we are caught up currently in the same stupid cycle. If you cannot explain an infectious agent, call it a virus. Even if you cannot find the virus using the requisite protocol, invent a method for identifying it indirectly then poison people with antivirals that have nothing to do with a virus.

          I’m willing to bet that if all this nonsense about self-isolation and social distancing was stopped, the infection would clear up on its own. And we might be able to build immunity to it.

          • Entropic man says:

            I think that is called coronavirus denial.

          • Carbon500 says:

            Gordon Robertson: I’m astonished by your comment that ‘antibodies can’t be related to a virus’.
            Where have you got this idea from?
            Here’s a link outlining what antibodies do:
            https://www.labtestsonline.org.au/learning/test-index/antibody-tests

          • Carbon500 says:

            Gordon: you also say ‘Others test for RNA found in infected people. Trouble is, every human has RNA and it can come from dying cells.’
            Viral genomic RNA contains virus-specific regions – i.e. they are not found in human DNA.
            Using a technique called the polymerase chain reaction (PCR), these regions of viral RNA can be amplified and detected. The test cannot distinguish between material from viable or dead cells – as mentioned recently in The Lancet, the inability to differentiate between infective and non-infective (dead or antibody-neutralised) viruses remains a limitation of nucleic acid (RNA) detection.

  11. Bob Tisdale says:

    Thanks, Roy. Great post.

    Stay safe and healthy,
    Bob

  12. MrZ says:

    Thanks for sharing Roy.

    Here is a web application I made that makes your data come alive:
    https://cfys.nu/GTA/

    No manual yet, but start with selecting a dataset and try out the buttons.
    Please note the application requires a modern browser and works best on PCs/laptops. Mobile phones are harder because of their small screens.

    Enjoy!

  13. Tim S says:

    Very interesting history lesson. What a silly concept to measure the whole earth and then use the same calculation method every day (sarcasm). How is anyone going to manipulate the data? On a more serious note, is there a succession plan? Could I suggest you recruit a bright-eyed young scientist who wants to carry on the tradition of scientific integrity?

    • bdgwx says:

      UAH hasn’t used the same method or calculation throughout its history. That’s a good thing though. We want UAH or any institution who maintains climate data to adjust their methodology and data to address bugs, improve algorithms, quality control data, or for any purpose that improves the dataset.

      We also don’t want different institutions using the same methodology either. We want them to use different techniques and subsets of available data. This allows us to cross check their results against all others without needing to worry about hidden systematic mistakes/biases.

      I do realize your post contains sarcasm. I’m just chiming in on some of the talking points.

      • Tim S says:

        Your response missed my point so let me be clear. I should have stated day to day, instead of every day. When “errors” are found, the correction is applied broadly, not cherry picked to favor selected data the way a surface data set can be. Adjusting for retired satellites and new satellites does introduce the possibility of a trend change, but the month to month data remains comparatively useful over a very long period of years, and if the transition is long enough, those errors can be minimized. I do not think anyone can make the case that any other published temperature trend is more accurate or precise, or has a higher level of integrity for at least attempting to be objective.

    • Bruce Kay says:

      Well if you really like irony, how about the fact that due to errors discovered in their observations, Spencer and Christy had to go back and “manipulate” their data exactly how everybody else does?

      Funny how you never take them to task for it, but I guess that’s the irony.

      • gallopingcamel says:

        One of the important points of this post is that balloon data was used to “correct” the satellite data until it was realized that the cart was before the horse. Now satellite data is used to calibrate the balloon data.

        As more precise methods of measurement become available, standards are changed accordingly. Take a look at the standards used for electrical measurement:

        In 1908 the Weston Cell (1.018638 V) was used as we could measure voltage more precisely than current or charge at that time.

        Techniques for measuring current improved leading to the Ampere becoming the “International Standard”.

        Last year the Ampere was replaced by the charge of an electron (1.602176634 × 10⁻¹⁹ coulombs), which can now be measured to better than 1 part per billion.

        • bdgwx says:

          That’s the first I’m hearing of this. What balloon datasets are calibrated using satellite datasets?

          • gallopingcamel says:

            Dr. Roy’s post includes this:

            “It also demonstrated that NOAA’s practice of adjusting satellite data with radiosondes (weather balloons) was backwards: the differences others had seen between the two systems were due to poor spatial sampling by the radiosondes, not due to changes in the satellite calibration stability.”

            This brought to mind the logic of using the most precise measurement technique to standardize the others.

            Right now there are only seven internationally standardized units, and the most precise of these is “Time”, which can now be defined with a precision of better than one part in 1,000,000,000,000,000,000.

          • bdgwx says:

            I remember reading that. But I never got the interpretation that balloon datasets were calibrated using satellite datasets.

          • Stephen Paul Anderson says:

            It seems pretty clear.

      • Gordon Robertson says:

        bruce…”Well if you really like irony, how about the fact that due to errors discovered in their observations, Spencer and Christy had to go back and manipulate their data exactly how everybody else does?”

        Sheer nonsense. Where do you get your info, from the realclimate school of climate alarm?

        • E. Swanson says:

          Gordo wrote:

          Sheer nonsense. Where do you get your info, from the realclimate school of climate alarm?

          I know you won’t do it, but why not read the literature? Start with:

          Spencer, R. W. and J. R. Christy, 1992a: Precision and radiosonde validation of satellite gridpoint temperature anomalies, Part I: MSU Channel 2, J. Climate 5, 847-857.

          Then read about their revised analysis:

          Spencer, R. W. and J. R. Christy, 1992b: Precision and radiosonde validation of satellite gridpoint temperature anomalies, Part II: A Tropospheric retrieval and trends during 1979-90, J. Climate 5, 858-866.

      • Tim S says:

        As I responded above, adjustments are applied broadly, and from previous posts over the years it appears the cumulative effect of any errors is relatively small. The basic science of their method produces a highly accurate and precise product.

        • Bruce Kay says:

          You may be happy to call them “adjustments” here but if so, you better make sure all the other adjustments that occur as a matter of routine throughout climate science receive the same treatment.

          That’s why I’m riding Spencer’s behind on his accounting of history. If you want your historical record to be credible, don’t cherry-pick yourself into a position of holy veneration while disparaging others for exactly the same “transgressions”. We know they are not transgressions – they are routine and valid adjustments.

          Roy could at least thank his peers in the footnote! Wouldn’t it have been unfortunate if he and Christy carried on without their criticism?

          • Stephen Paul Anderson says:

            Geese that fly in formation have learned to adjust to the others in the flock who adjust to the head goose.

        • bdgwx says:

          I’m not sure the cumulative effect of errors is small. UAH is an outlier to just about every other dataset including satellite, surface, reanalysis, balloon, etc. That doesn’t mean UAH is necessarily flawed, but it does mean it is justified to place extra scrutiny upon UAH. After all, outliers are usually outliers for a reason. We need to understand that reason before we proclaim that UAH had a monopoly on correctness whereas everybody else floundered. I bet many in the industry would love to see UAH release their source code so that results can be cross-checked and replicated, as many other datasets have done. Don’t get me wrong. I don’t begrudge UAH for holding on to their intellectual property. But you also can’t blame us for being skeptical.

          • Tim S says:

            Have you considered it is an outlier to other, flawed methods because it is more accurate? I seem to recall that RSS agrees more with UAH than it does with surface data sets. For a long time the folks at RSS just threw up their hands and published error probabilities rather than do the hard science that has been accomplished with UAH to make it accurate and precise. Now that RSS is publishing a more precise product, it is still possible that they show more warming because their method is not as good. You want source code? I recall that the whole thing has been published and is available for peer review. You could probably research that if you really are interested. As in my previous response, I do not think anyone can make the case that any other published temperature trend is more accurate or precise, or has a higher level of integrity for at least attempting to be objective.

          • bdgwx says:

            It’s not just RSS that has a higher warming trend. It’s nearly every dataset in existence. In fact, I’m not aware of any dataset that can corroborate UAH’s low outlier warming trend estimate.

            Unfortunately the UAH source code is not publicly available. That’s okay. Not all datasets provide their source code and other materials required for replication. But some do. Namely, NASA, NOAA, Berkeley Earth, and others. As yet no one has identified any substantial flaw in these datasets.

          • Tim S says:

            All of the other datasets I have seen, including the much-touted NASA product, do not accurately depict the ENSO effect the way satellite data does, and that is because they are smoothed, averaged, and massaged so aggressively. Only the satellite data shows the monthly and yearly variation with any kind of precision. For that reason, I say it is also more accurate.


  14. Rhee says:

    Dr Roy, the description of your Honda Civic reminds this old IT guy of a joke my CS professor told in data structures class regarding data throughput on the nascent internet: Never underestimate the bandwidth of a station wagon full of 9-track tapes.

  15. kramer says:

    I enjoyed reading about this part of your life, Dr. Spencer, thanks.

  16. RoHa says:

    “In 1988, NASA’s James Hansen testified for then-Senator Al Gore, Jr., testimony which more than any single event thrust global warming into the collective consciousness of society. ”

    Bollocks. It was Margaret Thatcher pushing the idea at the UN that really made the idea take off. No-one in the real world paid any attention to Gore until then.

    • Gordon Robertson says:

      RoHa…” It was Margaret Thatcher pushing the idea at the UN that really made the idea take off”.

      That’s correct, she was having trouble with the UK coal mining unions and wanted to discredit emissions from coal. She had a degree in chemistry, and an advisor suggested she go to the UN and baffle them about CO2 emissions from coal.

      John Houghton, the first co-chair of the IPCC’s scientific working group, was one of her proteges. Thatcher was pushing global warming issues years before Gore or Hansen.

      • Ian brown says:

        Thatcher was a believer in nuclear energy, and could not understand how a country which built the world’s first working nuclear power station no longer had the capability to carry on. Her vision was a UK powered by the nuclear industry; climate change was a pawn which she exploited. She might have got her way, but fate dealt a blow: not only the Commons committee report on the cost, but the double whammy of the Falklands War and the miners’ strike. The rest is history.

  17. barry says:

    Happy anniversary, Roy. To John Christy also. And thanks for the tour through history.

  18. Galaxie500 says:

    Potholer54 has posted a good YouTube video, mainly regarding Trump’s/the USA’s poor response to the outbreak. Constant denials from Trump that there was a problem.

  19. Galaxie500 says:

    Oops Sorry wrong thread

  20. Stephen Paul Anderson says:

    Wonderful legacy, Dr. Spencer and Dr. Christy, congratulations!

  21. Adelaida says:

    Congratulations Dr. Spencer and Dr. Christy !!!
    It has been a joy to read your story in the midst of this coronavirus concern! From Spain, thank you very much for sharing it !!!

  22. Tim S says:

    I posted this basic comment as a reply above, but I would like to expand on it. Like most people with some amount of knowledge of calibration, I do understand the difference between precision and accuracy. I would make the case that a higher level of precision suggests a greater degree of accuracy over time. None of the published surface measurement datasets display the level of accuracy of satellite data. The raw data from an individual weather station may have good precision, but the global product loses all of that precision. For example, the ENSO effect seems to be completely lost in surface datasets. Only the satellite data shows the monthly and yearly variation with any kind of precision. For that reason, I say it is also more accurate.

    • barry says:

      “I would make the case that a higher level of precision suggests a greater degree of accuracy over time.”

      It would be interesting to see you try. A group of measurements tightly clustered far away from the true value is precise, but not accurate. A series of groups of measurements tightly clustered far away from the true value is more accurate how?

      “None of the published surface measurement datasets display the level of accuracy of satellite data.”

      It would be interesting to see you demonstrate that, as the satellites do not measure temperature, but the radiance of O2 in a column of atmosphere >10km high, taking 3 days to stripe the entire field of observation, from which temperature must be derived. This is for one full global measurement. For a time series of this data, there are different instruments over time, and different satellites with different orbits and orbital decay over time that have to be intercalibrated. The shift from one device to another screws up the precision over time, which, according to your first quote, would interfere with the accuracy.

      “For example, the ENSO effect seems to be completely lost in surface datasets.”

      Bollocks. The 1998 and 2016 El Niños are clearly captured in all the data sets, as are many other ENSO events, including La Niñas (see the 2007/2008 La Niña, for example). The only difference is that the temperature amplitude of the events is higher in the satellite data sets, possibly reflecting the fact that they measure 10km of the lower troposphere, rather than the surface.

      Here are 3 surface and 2 satellite data sets. The interannual variation is highly correlated. ENSO events are certainly not “completely lost” in the surface data sets.

      https://tinyurl.com/to4vcqw
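
      barry’s precision-versus-accuracy distinction is easy to illustrate numerically. A toy sketch (the numbers are entirely synthetic, not from any real instrument): one simulated instrument is tightly clustered but biased, the other widely scattered but centred on the true value.

      ```python
      import numpy as np

      rng = np.random.default_rng(42)
      true_value = 15.0  # the "true" temperature, deg C

      # Precise but inaccurate: tightly clustered, offset by a +0.5 C bias.
      biased = rng.normal(true_value + 0.5, 0.02, 1000)
      # Imprecise but accurate: widely scattered, centred on the true value.
      noisy = rng.normal(true_value, 0.5, 1000)

      for name, x in (("precise-but-biased", biased), ("noisy-but-unbiased", noisy)):
          print(f"{name}: mean error {x.mean() - true_value:+.3f} C, "
                f"spread (1 sigma) {x.std():.3f} C")
      ```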

  23. Bri says:

    The Civic was nowhere near as good as the 1980 Toyota Tercel with bad rings and a smoke trail 1/2 mile long. LOL

  24. Ahsani says:

    Very interesting and insightful review, thank you.
