Roy Spencer on Climate Sensitivity - Again

Posted on 1 July 2011 by Chris Colose

Whenever some of the more credible "skeptics" come out and talk about climate change, they usually focus on a subject known as climate sensitivity.  This includes Dr. Roy Spencer, who has frequently argued that the Earth's climate sensitivity is low enough that future global warming will be very mild (for example, in his 'silver bullet' and his 'blunders', as Dr. Barry Bickmore described them).  He has recently done so again in a post here (which Jeff Id then discussed here).  Naturally, the "skeptics" have uncritically embraced this as a victory, an overturning of the IPCC, a demonstration that global warming is a false alarm, and all that.  The notion that a blog post can so easily overturn decades of scientific research is, at best, amusing.

Of course, claims of a relatively high or low climate sensitivity need to be tested and evaluated for how robust they are in the context of Earth’s geologic record and in models of varying complexity.  My hope is to briefly lay out a context for the body of evidence behind climate sensitivity and then to briefly talk about Roy Spencer’s “model” (which is really not a model at all).  Here, I will review a bit about climate sensitivity and then talk about Spencer's post toward the end.

What is Climate Sensitivity?

Climate sensitivity is a measure of how much the Earth warms (or cools) in response to some external perturbation.  The perturbation takes the form of a radiative forcing, which modifies either the solar or infrared part of the Earth's energy budget. The magnitude of the forced response is critical for understanding the extent to which the Earth's climate has the capacity to change.

As an example, if we turn up the sun, then the Earth should warm by a certain amount in response to that increase in energy input.  Greenhouse gases also warm the planet, but do so by reducing how efficiently the planet loses heat.  Of course, we are interested in determining just how much the planetary temperature changes.  A higher climate sensitivity implies that, for a given forcing, the temperature will change more than in a less sensitive system.
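To make that arithmetic concrete, here is a back-of-envelope sketch in Python. The logarithmic forcing fit F = 5.35 ln(C/C0) is a standard simplification from the literature, and the 3 °C-per-doubling sensitivity is an assumed illustrative value, not a result:

```python
import math

def co2_forcing(c, c0=280.0):
    """Approximate radiative forcing (W/m^2) from a CO2 change,
    using the common simplified fit F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c / c0)

def equilibrium_warming(forcing, sensitivity_per_doubling=3.0):
    """Scale a forcing by an assumed equilibrium sensitivity
    (here 3 C per doubling, i.e. per ~3.7 W/m^2 of forcing)."""
    f_2x = co2_forcing(560.0)  # forcing from doubled CO2, ~3.7 W/m^2
    return sensitivity_per_doubling * forcing / f_2x

# Doubling CO2 gives the assumed 3 C by construction;
# a smaller CO2 increase gives proportionally less equilibrium warming.
print(equilibrium_warming(co2_forcing(560.0)))  # 3.0
print(equilibrium_warming(co2_forcing(390.0)))
```

The point is only that, once a forcing is specified, the sensitivity is the single number that converts it into an eventual temperature change.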

Determining the Climate Sensitivity

Determining climate sensitivity is not trivial.  First, we must define what sensitivity we are referring to. 

The most traditional estimate of climate sensitivity is the equilibrium response, which ensures that the oceans have had time to equilibrate with the new concentration of carbon dioxide (or solar input) and with various "fast feedbacks."  These include the increase in atmospheric water vapor at higher temperatures, changes in the vertical temperature structure of the atmosphere (which determines the outgoing energy flow), and various cloud or ice changes that modify the greenhouse effect or planetary reflectivity.

There is also the transient climate response, which characterizes the initial stages of climate change, when the deep ocean is still far out of equilibrium with the warming surface waters.  This is especially important for understanding climate change over the coming decades.  On the opposite end of the spectrum, scientists have also considered even longer equilibrium timescales that include a number of feedback processes operating over thousands of years.
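The distinction between the transient and equilibrium responses can be illustrated with a toy two-box energy balance model, a mixed layer coupled to a deep ocean. All parameter values below are illustrative assumptions, not tuned to any observations:

```python
def two_box_response(forcing=3.7, lam=1.2, gamma=0.7,
                     c_mixed=7.0, c_deep=100.0,
                     years=1000, dt=0.1):
    """Toy two-box energy balance model (illustrative parameters only).
    forcing in W/m^2; lam (feedback) and gamma (mixed-layer/deep-ocean
    exchange) in W/m^2/K; heat capacities in W*yr/m^2/K."""
    t_mixed = t_deep = 0.0
    history = []
    for _ in range(int(years / dt)):
        exchange = gamma * (t_mixed - t_deep)
        t_mixed += dt * (forcing - lam * t_mixed - exchange) / c_mixed
        t_deep += dt * exchange / c_deep
        history.append(t_mixed)
    return history

temps = two_box_response()
equilibrium = 3.7 / 1.2              # ~3.1 K once the deep ocean catches up
transient_70yr = temps[int(70 / 0.1)]
# After 70 years the surface has realized only part of the eventual warming,
# because the deep ocean is still drawing heat out of the mixed layer.
print(transient_70yr, equilibrium)
```

The sluggish deep-ocean box is what separates the transient response from the equilibrium response: the same forcing, the same feedbacks, but decades to centuries of lag.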

Unfortunately, there is no perfect method of determining climate sensitivity, and some methods only give information about the response over a certain timescale.  For instance, relatively fast volcanic eruptions do not really give good information about sensitivity on equilibrium timescales.  Researchers have looked at the observational record (including the 20th century warming, the seasonal cycle, volcanic eruptions, the response to the solar cycle, changes in the Earth’s radiation budget over time, etc) to derive information about climate sensitivity.  Another way to tackle the problem is to force a climate model with a doubling of CO2, and then run it to equilibrium.  Finally, Earth's past climate record provides a diverse array of climate scenarios.  These include glacial-interglacial cycles, deep-time greenhouse climate transitions, and some exotic events such as the Paleocene-Eocene Thermal Maximum (PETM) scattered across Earth’s past.

Dr. Richard Alley discusses a number of independent techniques that yield useful information about paleoclimate variables, including climate sensitivity, in his AGU talk here (highly recommended; it's one of the best videos you'll see in this field).  Climate sensitivity has been estimated for many different events and time frames in Earth's climate history, and through a variety of very clever methods.

A vast number of studies have focused on the glacial-interglacial cycles as a constraint on sensitivity (such as many of Jim Hansen's papers, the most recent of which we discussed here), but constraints have also been derived from Earth's more distant past.  Dana Royer has several publications on this (e.g., Park and Royer, 2011), including an interesting one entitled "Climate sensitivity constrained by CO2 concentrations over the past 420 million years," in which sensitivity is constrained by the CO2 concentration in the atmosphere itself.  The idea here is that rock weathering processes regulate CO2 on geologic timescales as a function of temperature and precipitation (see the SkS post on that process here).  An insensitive climate system would make it difficult for weathering to regulate CO2; in contrast, a system that was too sensitive would regulate CO2 much more easily than is observed in the geologic record, making large CO2 fluctuations impossible.  The authors find a best-fit sensitivity of ~2.8 °C per doubling of CO2, and that a sensitivity of "...at least 1.5 °C has been a robust feature of the Earth's climate system over the past 420 Myr..." (Figure 1).  The authors cannot exclude a higher sensitivity, however.

Figure 1: Comparison of CO2 calculated by GEOCARBSULF for varying ΔT(2x) to an independent CO2 record from proxies. For the GEOCARBSULF calculations (red, blue and green lines), standard parameters from GEOCARB and GEOCARBSULF were used, except for an activation energy for Ca and Mg silicate weathering of 42 kJ mol⁻¹. The proxy record (dashed white line) was compiled from 47 published studies using five independent methods (n = 490 data points). All curves are displayed in 10 Myr time-steps. The proxy error envelope (black) represents ±1 s.d. of each time-step. The GEOCARBSULF error envelope (yellow) is based on a combined sensitivity analysis (10% and 90% percentile levels) of four factors used in the model.

There are many other similar studies. Pagani et al. (2010) compared temperature and CO2 changes between the present day and the Pliocene (around four million years ago), just before the onset of major Northern Hemisphere ice sheets.  Using Bayesian statistics, Annan and Hargreaves (2006) constrain sensitivity based on data from the 20th century, responses to volcanoes, and the Last Glacial Maximum.  There are other estimates based on Maunder Minimum forcing.

A review by Knutti and Hegerl (2008) examines a vast number of these papers and concludes:

"studies that use information in a relatively complete manner generally find a most likely value between 2 and 3.5 °C [for a doubling of CO2] and that there is no credible line of evidence that yields very high or very low climate sensitivity as a best estimate."

Their conclusions are summarized in Figure 2, which also shows the limitations of various methods; for example, the inability of volcanic eruptions to constrain the high end of climate sensitivity. In general, the high end of the sensitivity range is more difficult to chop off with the available evidence, and there are also theoretical reasons why a more sensitive climate system corresponds to more uncertainty, as Gerard Roe has argued frequently.


Figure 2: Distributions and ranges for climate sensitivity from different lines of evidence. The circle indicates the most likely value. The thin coloured bars indicate very likely value (more than 90% probability). The thicker coloured bars indicate likely values (more than 66% probability). Dashed lines indicate no robust constraint on an upper bound. The IPCC likely range (2 to 4.5°C) and most likely value (3°C) are indicated by the vertical grey bar and black line, respectively.

Of course, the various techniques are subject to inherent model and/or data limitations.  Using modern observations to constrain sensitivity is difficult because we do not know the forcing very well (even though the net forcing is positive, aerosols introduce a lot of uncertainty into the actual number), the climate is not in equilibrium, there are data uncertainties, and there is a lot of noise from natural variability over short timescales.  Similarly, models may be useful but imperfect, so it is critical to understand their strengths and weaknesses.  Convincing estimates of sensitivity need to be robust to different lines of evidence, and also to the different choices that researchers make when constructing a climate model or interpreting a certain record.

One of the criticisms of Lindzen and Choi's 2009 feedback paper was that the authors compared a number of intervals of warming/cooling over ~14 years with the radiation flux at their endpoints, yet the conclusions were highly sensitive to the choice of endpoints, with results changing if the endpoints were moved by even a month.  Results this sensitive to arbitrary choices are not robust, and even if mathematically correct, will not convince many researchers (or reasonable policy makers) that several decades of converging lines of evidence got it all wrong.
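The fragility of endpoint-based estimates is easy to demonstrate on synthetic data. This is an illustrative sketch with made-up numbers, not a reproduction of Lindzen and Choi's analysis:

```python
import random

random.seed(0)  # illustrative synthetic data, not real observations

# Fourteen years of monthly "anomalies": a small trend buried in noise.
months = 14 * 12
series = [0.001 * m + random.gauss(0.0, 0.15) for m in range(months)]

def endpoint_trend(data, start, end):
    """A 'trend' judged only from two endpoint values, in units per decade."""
    return (data[end] - data[start]) / (end - start) * 120.0

# Shift both endpoints inward by one month at a time: the answer moves
# noticeably even though the underlying series is unchanged.
estimates = [endpoint_trend(series, s, months - 1 - s) for s in range(4)]
print(estimates)
```

Because each endpoint carries its own noise, a two-point estimate inherits that noise in full, which is exactly why a fit over all the data (or over many choices of interval) is required before a result can be called robust.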

Roy Spencer on "More Evidence that Global Warming is a False Alarm: A Model Simulation of the last 40 Years of Deep Ocean Warming"

So what about Spencer's "model," in which he uses a simple ocean-diffusion spreadsheet setup to create a profile of recent warming with depth in the ocean?  It turns out that there is no physics in it other than the assumption that the heat transport perturbation between layers depends linearly on the temperature difference.  His "model" is just an ad hoc fit to an ocean temperature profile from a particular atmosphere-ocean general circulation model (AOGCM), and yields no explanatory or predictive power.

Spencer models his ocean as a pure diffusion process.  This is insufficient: you also need to include an upwelling term (Hoffert et al., 1980), which in the global ocean amounts to about 4 meters per year, in order to compensate for the density-driven sinking of water in the Atlantic (Wigley, 2005; Simple climate models. Proc. International Seminar on Nuclear War and Planetary Emergencies).  In a somewhat more complex model, you can separate the land from the ocean in each hemisphere, to account for the higher climate sensitivity over land and for the fact that radiative forcing may differ between hemispheres (for example, sulfate aerosols are more concentrated in the north), as well as the interaction between these reservoirs.  Such models are still simple, with no 3-D ocean dynamics, no hydrologic cycle, and so on.  Simple and early steps toward a systematic approach to this problem (as in Hoffert et al., 1980) were taken decades ago.
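For the curious, a minimal upwelling-diffusion column of the kind Hoffert et al. (1980) pioneered can be sketched in a few lines. All parameter values here are illustrative only, and the sketch shows why the upwelling term matters:

```python
def ocean_profile(kappa=2000.0, w=4.0, depth=4000.0, nz=80,
                  years=50, surface_warming=0.5):
    """Explicit finite-difference solution of the 1-D upwelling-diffusion
    equation dT/dt = kappa * d2T/dz2 + w * dT/dz (z positive downward;
    kappa in m^2/yr, w in m/yr; all values illustrative only).
    The surface is held at a fixed warming; the bottom stays cold."""
    dz = depth / nz
    dt = 0.4 * dz * dz / (2.0 * kappa)  # inside the explicit stability limit
    temp = [0.0] * (nz + 1)
    temp[0] = surface_warming
    for _ in range(int(years / dt)):
        new = temp[:]
        for i in range(1, nz):
            diffusion = kappa * (temp[i + 1] - 2.0 * temp[i] + temp[i - 1]) / dz ** 2
            upwelling = w * (temp[i + 1] - temp[i]) / dz  # upwind: cold water rises
            new[i] = temp[i] + dt * (diffusion + upwelling)
        temp = new
        temp[0] = surface_warming
        temp[nz] = 0.0
    return temp

with_upwelling = ocean_profile(w=4.0)
pure_diffusion = ocean_profile(w=0.0)
# Upwelling opposes the downward diffusion of heat, so at depth (~500 m here)
# the warming is smaller than in the pure-diffusion case.
print(with_upwelling[10], pure_diffusion[10])
```

Even this crude column shows that the choice between pure diffusion and diffusion-plus-upwelling changes the simulated penetration of heat into the ocean, which is precisely the quantity Spencer's fit is supposed to diagnose.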

Spencer also "starts" his model in 1955, which is problematic because the climate was already out of equilibrium at that time, carrying inertia from pre-1955 forcing.  He does not consider alternative explanations, such as the possibility of increased (negative) aerosol forcing, and only considers possibilities that reflect his preconceived notions.

One model of low complexity (albeit much more complex than Spencer's) is the MAGICC model (Tom Wigley, personal correspondence), which can be downloaded on any Windows platform and used to play a similar game to the one Spencer is playing here.  It produces results for ocean heat content or vertical temperature profiles (although the latter are not displayed as an output); it can easily be tuned to match any AOGCM with a small number of parameters, and it fits the observed data well with a sensitivity of ~3°C per doubling of CO2.  There are other Windows-based, user-friendly GCMs that can be downloaded.  One that offers a more diverse interface than MAGICC is EdGCM (developed by Mark Chandler), in which forcing inputs can be specified and a standard laptop can run a ~100-year simulation in 24 hours or so.  EdGCM is known to produce sensitivity on the high end of the IPCC Fourth Assessment Report (AR4) range.

More modern and sophisticated treatments of climate feedbacks and sensitivity (e.g., Soden and Held, 2006; Bony et al., 2006) provide much more complexity and realism, and agree with paleoclimate records in indicating that Spencer's estimates are too low.  There is a certain irony in those who think Spencer's model explains climate sensitivity and heat-storage processes while readily dismissing far more sophisticated models as unreasonable lines of evidence for anything.

It is also worth noting that observations of ocean heat content change bear only on the transient climate response; they have no direct bearing on equilibrium climate sensitivity.  Spencer agrees with this in one of his comments, but dismissed its relevance by saying "the climate is never in equilibrium," yet in his article he directly compares his result with "the range of warming the IPCC claims is most likely (2.5 to 4 deg. C)," which is an equilibrium response.

If Spencer creates and publishes a model that has a very large negative feedback and can explain observed variations, then we have a starting point to debate which model (his, or everyone else’s) is better.  But for now, comments by Roy Spencer such as

These folks will go through all kinds of contortions to preserve their belief in high climate sensitivity, because without that, there is no global warming problem

are extremely ironic.  Comments such as those by Jeff Id that “Roy’s post is definitely some of the strongest evidence I’ve read that the heat from climate feedback to CO2 just isn’t there” are only an indication that he hasn’t really read any other evidence.

There is still much to be said about the "missing heat" in the ocean.  A couple of recent papers (Purkey and Johnson, 2010; Trenberth and Fasullo, 2011; Palmer et al., 2011, GRL, in press), for example, highlight the significance of the deep ocean.  These show that there is an energy imbalance at the top of the atmosphere and that energy is being buried in the ocean at various depths, including decades where heat is buried well below 700 meters, and that it may be necessary to integrate to below 4000 m.  Katsman and van Oldenborgh (2011, GRL, in press) use another model to show periods of stasis during which radiation is sent back to space.  It is also unsurprising to find decadal variability in sea surface temperatures.

There is clearly more science to be done here, but Spencer's declarative conclusions and conspiracy theories really have no place in advancing that science (nor, for that matter, do those of Roger Pielke Sr., who thinks everything stopped in the early 2000s).  Nor do absurd attacks on the IPCC's graphs (which were really based on graphs from other papers).  In fact, the AR4 actually discusses how models may tend to overestimate how rapidly heat has penetrated below the ocean's mixed layer (see here and the study by Forest et al., 2006), which is Spencer's explanation for his results.  Spencer is thus not adding anything new here, and he further criticizes the graphs for not labeling the "zero point," and suggests we should not include the uncertainty of natural variation when plotting model output.  This is not convincing.

This must be why he knows the climate sensitivity to be 1.3°C per doubling with no uncertainty. 

Acknowledgments: I thank Kevin Trenberth and Tom Wigley for useful discussions on this subject.


Comments


Comments 51 to 64 out of 64:

  1. Update to my 39 and 40:

    Going exhaustively through the readme file for the UAH TLT dataset, I found listed the following significant adjustments:

    Version D:
    4 Feb 2000: +0.013 C/decade
    6 Oct 2000: +0.002 C/decade
    2 Nov 2001: +0.002 C/decade
    8 Apr 2002: +0.012 C/decade
    Version 5.0:
    7 Mar 2003: +0.02 C/decade
    5 Feb 2004: +0.002 C/decade
    20 Aug 2004: +0.02 C/decade
    7 Aug 2005: +0.035 C/decade
    5 Dec 2006: +0.01 C/decade

    Together with prior adjustments, these amount to a cumulative adjustment of 0.146 C/decade. However, I note that the switch between version D and version 5.0 is quoted as generating an adjustment of +0.008 C/decade in the table above and in the peer-reviewed literature, whereas it shows a cumulative adjustment of +0.029 C/decade in the readme file. Any such discrepancy must be resolved in favour of the peer-reviewed literature, and therefore I propose to treat the readme file adjustments as ad hoc, and likely to be superseded without notice. Therefore I will ignore them.

    That being the case, the cumulative adjustments to the UAH TLT record add a trend of 0.069 degrees C per decade to that which would be shown by the original method. That represents 49.3% of the current 0.14 C/decade trend.

    To place that into context, I calculated how much the 97/98 El Nino added to the trend by the simple expedient of calculating the base trend, and then recalculating the trend with all values during the El Nino period adjusted to parity with those of the immediately adjacent months (0.02 C). The adjusted trend was 0.008 C/decade less than with the true data, showing that the 97/98 El Nino added 0.008 C/decade to the overall trend. That represents just 5.7% of the total trend, and just 11.6% of the effect of the various adjustments on the trend.
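For readers who want to check the arithmetic, the figures quoted above can be reproduced directly:

```python
# Adjustments listed in the UAH readme excerpt above, in C/decade.
version_d = [0.013, 0.002, 0.002, 0.012]
version_5 = [0.02, 0.002, 0.02, 0.035, 0.01]

listed_total = sum(version_d) + sum(version_5)
print(round(listed_total, 3))  # 0.116 (prior adjustments bring the cumulative figure to 0.146)

# Share of the current 0.14 C/decade trend contributed by the adjustments,
# after deferring to the peer-reviewed figures as described above:
reconciled_total = 0.069
print(round(100 * reconciled_total / 0.14, 1))  # 49.3 (percent)
```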

    Returning to Christy, we recall that he said:

    "The major result of this diagram is simply how the trend of the data, which started in 1979, changed as time progressed (with minor satellite adjustments included.) The largest effect one sees here is due to the spike in warming from the super El Nino of 1998 that tilted the trend to be much more positive after that date."


    Describing adjustments that account for nearly 50% of the entire trend as "minor satellite adjustments" is misleading at the minimum. This is especially so as the single largest contribution to the trend (comparing adjustments and individual years) was the 0.035 adjustment found by Mears and Wentz, which represents 25% of the final trend.

    The largest single change in trend from year to year is the change of 0.1 C/decade between 1998 and 2000 (publication dates). In that change, the components were:

    Orbital Decay Adjustment +0.1 C/decade
    Hot Target Adjustment -0.07 C/decade
    97/98 El Nino +0.07 C/decade

    It is true that the El Nino represents 70% of the net change in trend. However it is also true that the Orbital Decay Adjustment amounts to 100% of the net change in the trend. Picking the El Nino as being the most important part of that change is dubious at best. Saying that the largest effect is the spike due to the El Nino without mentioning that 30% of that spike in trends was due to the net adjustment is also misleading (at best).

    Finally, it is sometimes incorrectly stated by "warmists" that Spencer and Christy never find adjustments themselves, or that all the adjustments have been in one direction. In the chart @39, the two adjustments in red were found by other teams, whereas all other adjustments were found by Spencer and Christy. Further, there are clearly both positive and negative adjustments. Therefore both of these "warmist" beliefs are myths and should not be repeated.

    However, it is possible to note that the cumulative adjustments found by Spencer and Christy sum to a -0.066 C/decade trend. In contrast, those found by other teams sum to +0.135 C/decade. Interestingly, that means that left to their own devices, Spencer and Christy would still be reporting a trend of only about 0.005 C/decade. Given that their product has been an outlier sitting well below other temperature products from the beginning, the trend of their adjustments (excluding those found by other teams) is troubling to say the least. This is not a record you can look at and say with confidence that the scientists involved have not let biases guide their work.

    Characteristically, Spencer hints that they have another negative adjustment in the pipeline already.
  2. DB wrote: "I demur. Stating something that is "Completely false" is misleading."

    Well, fine... if you're going to get all 'technical' about it. :]

    Thanks for the detailed analysis Tom. On your point about the existence of 'cooling' adjustments... I think it is still valid to say that errors in Spencer & Christy's work have consistently been biased towards cooling. Indeed, some of their cooling adjustments are likely examples of this problem, and even if there are correct cooling adjustments they are clearly very minor compared to the cooling errors.
  3. CBDunkerson @52, I suspect that you are mostly right, but I cannot know that without somebody sufficiently expert getting their code and going through it with a fine-tooth comb. However, I am sure the adjustments themselves are legitimate. The problem is that the method of correcting for a known warming influence (i.e., of making a cooling adjustment) is not always implicit in the data. Different teams might use different methods without it being possible to demonstrate which method is better from the satellite data alone. So while the reasons for the adjustments are probably legitimate, the methods of adjustment may well have seen a consistent bias towards those that show lower trends.
  4. Tom, thanks for the analysis. Didn't UAH come up with the +0.1°C/decade orbital decay correction though? I thought I remembered Spencer taking credit for that, or maybe it was the diurnal drift (I know RSS identified at least one of those two).
  5. Dana @54, Christy, Spencer and Braswell (2000) attribute the discovery of the orbital decay adjustment they make to Wentz and Schabel (Wentz, F. J., and M. Schabel, 1998: Effects of orbital decay on satellite-derived lower-tropospheric temperature trends. Nature, 394, 661–664.) The diurnal correction in 2005 is attributed to Carl Mears and Frank Wentz of RSS in Christy's readme file.

    Sorry, 3:22 am here so this is definitely my last post of the night.
  6. CBDunkerson @44,

    You noted about Eric's comments here that,
    "you seem very concerned that the Washington Post article and/or graph could be misinterpreted to mean something other than intended but still true... but not particularly put out that what Spencer and Christy are saying is blatantly false."

    That is exactly the impression I have. We can debate the semantics of how the graph might have been better, but the message of that graphic is very (inconveniently) clear: the UAH data were biased on the low side, and when Spencer and Christy eventually did start implementing the corrections, some from the RSS team and some of their own, in the majority of cases the corrections increased the temperature estimates.

    There are a number of problems here:
    1) Spencer and Christy, to this day, remain way too confident in the veracity of their product and repeatedly overstate the robustness and accuracy of the satellite inferred temperatures, while greatly exaggerating uncertainties in the surface temperature record.
    2) When told back in 1997 by Hurrell and Trenberth that their product likely had a significant cool bias, they dismissed it and made excuses (more on that later). Yet to this day they claim that they are interested in producing a robust product.
    3) They are using their data to play politics and mislead politicians, the public and policy makers.
    4) Even now Spencer and Christy are bending over backwards and cherry-picking to lower the warming trend in their own data. For example, Christy cherry-picking 1998 is beyond belief. That issue has been dealt with so many times I have lost count (most recently here), but that does not stop Spencer and Christy peddling this nonsense in late 2011.
    5) Spencer and Christy still have not released their code used to calculate the temperatures from the satellite data. That did not stop Christy from testifying before congress that their code was freely available.

    There are probably more disturbing issues with this saga, so feel free to add them. I'm writing something up on how badly Spencer and Christy have behaved on this file and will post it soon.

    I'm amazed that Spencer and Christy have not been investigated by UAH for scientific misconduct. Their repeated misrepresentations, cherry-picking, distortions, exaggerations, and politicization of science are reprehensible and the very antithesis of good science. Unbelievably, Spencer has the gall to accuse other scientists studying attribution of "pseudo-scientific fraud".
  7. Tom @51,

    Many thanks. This exercise reflects very poorly on Spencer and Christy.
  8. Right you are, Tom. 0.135 out of 0.14°C/decade due to corrections by other groups. Yikes.
  9. re Tom @55. Yes that's correct. The major scientific publications in which errors in the MSU analyses were highlighted are probably these:

    [1] B.L. Gary and S. J. Keihm (1991) Microwave Sounding Units and Global Warming Science 251, 316 (1991)

    [2] J. W. Hurrell & K. E. Trenberth (1997) Spurious trends in satellite MSU temperatures from merging different satellite records. Nature 386, 164–167.

    [3] F. J. Wentz and M. Schabel (1998) Effects of orbital decay on satellite-derived lower-tropospheric temperature trends. Nature 394, 661-664

    [4] Q. Fu et al. (2004) Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends Nature 429, 55-58.

    [5] C. A. Mears and F. J. Wentz (2005) The Effect of Diurnal Correction on Satellite-Derived Lower Tropospheric Temperature, Science 309, 1548-1551.

    Wentz's response [see Science 310, 972-3 (2005)] to Spencer/Christy's comment on the last paper on the list above is about as close as one gets, in the rather rarefied language of scientific publications, to an insinuation of incompetence:

    "Once we realized that the diurnal correction being used by Christy and Spencer for the lower troposphere had the opposite sign from their correction for the middle troposphere, we knew that something was amiss. Clearly, the lower troposphere does not warm at night and cool in the middle of the day. We question why Christy and Spencer adopted an obviously wrong diurnal correction in the first place. They first implemented it in 1998 in response to Wentz and Schabel (1), which found a previous error in their methodology: neglecting the effects of orbit decay."
  10. Tom, all I can say is "Wow." That is a pretty darn amazing record. It seems like everyone else is doing Spencer and Christy's work. How have these guys escaped being pilloried for this record?

    Given the amount of frothing over the CRU emails, imagine the hullabaloo that would be created by an equal but opposite set of corrections in one of the main temperature records that have supported increasing temperatures. It gives you a sense of how unbalanced the debate is, IMO.
  11. Chris @59,

    Thanks for the citations. Here is another important one, by Prabhakara and Iacovazzi (1999); their abstract is worth a read.

    At the end of the day Spencer and Christy were wrong and Wentz and Schabel (1998), Hurrell and Trenberth (1997, 1998) and Prabhakara et al. (1998) were correct.

    But when first notified of the errors in their data by Hurrell and Trenberth in 1997, Spencer and Christy were quick to dismiss them and did not take the critique at all seriously, this from March 1997:

    "There isn't a problem with the measurements that we can find," Spencer explained. "In fact, balloon measurements of the temperature in the same regions of the atmosphere we measure from space are in excellent agreement with the satellite results."

    And in February of 1997 they said this:

    "Spencer and co-author Dr. William Braswell of Nichols Research Corporation have great confidence in the quality of their satellite data. "We've concluded there isn't a problem with the measurements," Spencer explained. "In fact, balloon measurements of the temperature in the same regions of the atmosphere we measure from space are in excellent agreement with the satellite results."
    "Instead, we believe the problem resides in the computer models and in our past assumptions that the atmosphere is so well behaved. "


    Note how quick they are to blame the models. More noteworthy, though, is their reliance on the balloon data, which is intriguing and convenient, because even back then it was well established in the literature that there were also serious issues/biases with the balloon data (see Luers (1997), Parker and Cox (1995), and Gaffen (1994), et cetera). A summary paper by Randel and Wu (2005) can be found here.

    So when Christy claims that "When problems with various instruments or processes are discovered, we characterize, fix and publish the information," that is not entirely true: it is not what the literature and history show, and it does not credit or acknowledge the errors pointed out to them by other researchers.

    Additionally, when Christy claims that "Indeed, there have been a number of corrections that adjusted for spurious warming, leading to a reduction in the warming trend," that is not entirely true either, as shown by Tom's research above.

    In the same blog post Christy says, "The notion in the blog post that surface temperature datasets are somehow robust and pristine is remarkable."

    Interestingly back in March 1997 Christy said:
    "Over Northern Hemisphere land areas, where the best surface thermometer data exist, the satellites and thermometers agree almost perfectly", said Dr. Christy of UAH."

    So in March 1997 he agreed that there was good agreement between the satellite and surface (land) thermometer data. Ironically, it is now, in 2011, that the evidence that the surface temperature record is robust is strongest, yet Spencer and Christy are still choosing to question that record and cast doubt on it. I strongly suspect that in their heart of hearts they know that the surface record is robust, but prefer to be merchants of doubt.

    Someone should write a book on this sad saga, maybe titled "Satellite temperature illusion".
  12. A simple estimate of the corrections is to use current UAH data with up-to-date corrections and compare the trend for an early part of the data to the trend calculated in an old paper with uncorrected or less corrected data. The corrected trend for Jan1979 to Apr2002 is 0.26C or 0.11C per decade, see http://woodfortrees.org/plot/uah/from:1979.0/to:2002.33/plot/uah/from:1979.0/to:2002.33/trend The corresponding trend from the paper in Tom's post 51 above is 0.06C per decade. So about one half of the corrected trend for that period is from corrections and the other half is from warming over that interval.

    Looking back a little farther, there is 0.23 trend from 1979 through 1998 or 0.115 per decade in current corrected data. The corresponding paper http://journals.ametsoc.org/doi/pdf/10.1175/1520-0426(2000)017%3C1153%3AMTTDCA%3E2.0.CO%3B2 indicates a 0.03 per decade trend corrected to a 0.06 per decade trend (+/- 0.06).

    Although the "peak" in their underestimate of TLT trend may have occurred earlier than 1998, the correction made at that point seems to be the most significant in magnitude (comparing the error in the trend to the trend itself). Also the comparison above does not mean that the current corrections are complete.
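The per-decade conversion used in the comparison above is simple arithmetic; as a quick sketch:

```python
def per_decade(total_change, start_year, start_month, end_year, end_month):
    """Convert a total temperature change over an interval into a
    rate in C per decade."""
    years = (end_year - start_year) + (end_month - start_month) / 12.0
    return total_change / years * 10.0

# The 0.26 C change quoted above, from Jan 1979 to Apr 2002:
print(round(per_decade(0.26, 1979, 1, 2002, 4), 2))  # 0.11
```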
  13. Eric (skeptic) @62, I considered using that method, but to do so correctly you must ensure the trends are taken over the same period, to the month. As Spencer and Christy do not always state the final month in the trends in various publications, that is not always convenient. It is not clear to me that you have done that, particularly with the 1998 date (which was published in 1998 but may have included no data later than 1997).
  14. Tom, I used table 2 from my link in #62, which says "Jan 1979–Dec 1998", so I went through Dec for the trend from woodfortrees. There are other problems with this method, such as the basic difficulty of estimating what the effect of the old errors would be on new trends. Would their estimate in 2011 be only 1/4 of the actual trend today if they had not made the 1998 and subsequent corrections? Can't say for sure.




