





Roy Spencer on Climate Sensitivity - Again

Posted on 1 July 2011 by Chris Colose

Whenever some of the more credible "skeptics" come out and talk about climate change, they usually focus on a subject known as climate sensitivity.  This includes Dr. Roy Spencer, who has frequently argued that the Earth's climate sensitivity is low enough that future global warming will be very mild (for example, in his 'silver bullet' and 'blunders', as Dr. Barry Bickmore described them).  He has recently done so again in a post here (which Jeff Id then discussed here).  Naturally, the "skeptics" have uncritically embraced this as a victory, an overturning of the IPCC, a demonstration that global warming is a false alarm, and all that.  The notion that a blog post can so easily overturn decades of scientific research is, at best, amusing.

Of course, claims of a relatively high or low climate sensitivity need to be tested and evaluated for how robust they are in the context of Earth’s geologic record and in models of varying complexity.  My hope is to briefly lay out a context for the body of evidence behind climate sensitivity and then to briefly talk about Roy Spencer’s “model” (which is really not a model at all).  Here, I will review a bit about climate sensitivity and then talk about Spencer's post toward the end.

What is Climate Sensitivity?

Climate sensitivity is a measure of how much the Earth warms (or cools) in response to some external perturbation.  The perturbation takes the form of a radiative forcing, which modifies either the solar or infrared part of the Earth's energy budget. The magnitude of the forced response is critical for understanding the extent to which the Earth's climate has the capacity to change.

As an example, if we turn up the sun, the Earth should warm by a certain amount in response to that increase in energy input.  Greenhouse gases also warm the planet, but do so by reducing how efficiently the planet loses heat.  Of course, we are interested in determining just how much the planetary temperature changes.  A higher climate sensitivity implies that, for a given forcing, the temperature will change more than it would in a less sensitive system.
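The forcing-response idea can be put in back-of-envelope form: equilibrium warming is the sensitivity parameter times the forcing.  The sensitivity values below are purely illustrative, chosen to span roughly the range discussed later in this post; the 3.7 W/m² forcing for doubled CO2 is the standard textbook value.

```python
# Back-of-envelope illustration of forcing x sensitivity (not any
# particular published model): equilibrium warming = lambda * forcing.
def equilibrium_warming(forcing_wm2, lam_k_per_wm2):
    """Return the equilibrium temperature change (K) for a given forcing."""
    return lam_k_per_wm2 * forcing_wm2

F_2XCO2 = 3.7  # W/m^2, canonical forcing for a doubling of CO2

for lam in (0.5, 0.8, 1.2):  # K per (W/m^2): insensitive -> sensitive
    dT = equilibrium_warming(F_2XCO2, lam)
    print(f"lambda = {lam:.1f} K/(W/m^2) -> {dT:.1f} K per CO2 doubling")
```

A sensitivity parameter of ~0.8 K/(W/m²) corresponds to the oft-quoted ~3 °C per CO2 doubling.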

Determining the Climate Sensitivity

Determining climate sensitivity is not trivial.  First, we must define what sensitivity we are referring to. 

The most traditional estimate of climate sensitivity is the equilibrium response, which ensures that the oceans have had time to equilibrate with the new value of carbon dioxide (or solar input) and with various "fast feedbacks."  These include the increase in water vapor in our atmosphere with higher temperatures, changes in the vertical temperature structure of the atmosphere (which determines the outgoing energy flow), and various cloud or ice changes that modify the greenhouse effect or planetary reflectivity.

There is also the transient climate response, which characterizes the initial stages of climate change, when the deep ocean is still far out of equilibrium with the warming surface waters.  This is especially important for understanding climate change over the coming decades.  On the opposite end of the spectrum, scientists have also considered even longer equilibrium timescales that include feedback processes operating over thousands of years.
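The distinction between the transient and equilibrium responses can be sketched with a minimal two-box (mixed-layer plus deep-ocean) energy balance model.  All parameter values here are illustrative assumptions, not a calibrated model: the point is only that while the deep ocean is still taking up heat, the surface sits well below its eventual equilibrium warming.

```python
# Minimal two-box energy-balance sketch (illustrative parameters only).
def two_box(F=3.7, lam=1.25, C_mix=8.0, C_deep=100.0, gamma=0.7,
            years=70, dt=0.1):
    """Integrate a mixed-layer/deep-ocean energy balance model.

    F       step forcing applied at t=0, W/m^2
    lam     feedback parameter, W/m^2/K
    C_mix   mixed-layer heat capacity, W yr m^-2 K^-1 (assumed)
    C_deep  deep-ocean heat capacity (assumed)
    gamma   mixed-layer/deep-ocean exchange coefficient, W/m^2/K
    Returns (surface warming after 'years', equilibrium warming F/lam).
    """
    T1 = T2 = 0.0
    for _ in range(int(years / dt)):
        dT1 = (F - lam * T1 - gamma * (T1 - T2)) / C_mix  # surface box
        dT2 = gamma * (T1 - T2) / C_deep                  # deep ocean
        T1 += dT1 * dt
        T2 += dT2 * dt
    return T1, F / lam

transient, equilibrium = two_box()
print(f"after 70 yr: {transient:.2f} K; equilibrium: {equilibrium:.2f} K")
```

With these numbers the surface is still roughly a quarter short of equilibrium after 70 years, which is why the transient and equilibrium sensitivities must not be conflated.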

Unfortunately, there is no perfect method of determining climate sensitivity, and some methods only give information about the response over a certain timescale.  For instance, relatively fast volcanic eruptions do not really give good information about sensitivity on equilibrium timescales.  Researchers have looked at the observational record (including the 20th century warming, the seasonal cycle, volcanic eruptions, the response to the solar cycle, changes in the Earth’s radiation budget over time, etc) to derive information about climate sensitivity.  Another way to tackle the problem is to force a climate model with a doubling of CO2, and then run it to equilibrium.  Finally, Earth's past climate record provides a diverse array of climate scenarios.  These include glacial-interglacial cycles, deep-time greenhouse climate transitions, and some exotic events such as the Paleocene-Eocene Thermal Maximum (PETM) scattered across Earth’s past.

Dr. Richard Alley discusses a number of independent techniques that yield useful information about paleoclimate variables, including climate sensitivity, in his AGU talk here (highly recommended; it's one of the best videos you'll see in this field).  Climate sensitivity has been estimated for many different events and time frames in Earth's climate history, and through a variety of very clever methods.

A vast number of studies have used the glacial-interglacial cycles as a constraint on sensitivity (such as many of Jim Hansen's papers, the most recent of which we discussed here), but Earth's more distant past has been used as well.  Dana Royer has several publications on this (e.g., Park and Royer, 2011), including an interesting one entitled "Climate sensitivity constrained by CO2 concentrations over the past 420 million years", in which sensitivity is constrained by the CO2 concentration in the atmosphere itself.  The idea here is that rock weathering processes regulate CO2 on geologic timescales as a function of temperature and precipitation (see the SkS post on that process here): an insensitive climate system would make it difficult for weathering to regulate CO2, while a system that is too sensitive would regulate CO2 much more easily than is observed in the geologic record, making large CO2 fluctuations impossible.  The authors find a best-fit sensitivity of ~2.8 °C per doubling of CO2, and that a sensitivity of "...at least 1.5 °C has been a robust feature of the Earth's climate system over the past 420 Myr..." (Figure 1).  The authors cannot exclude a higher sensitivity, however.
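The weathering-thermostat logic can be made concrete with a toy calculation.  The weathering gain and the factor-of-two change in outgassing below are illustrative assumptions, not values from Park and Royer: if the weathering sink strengthens with warming, balancing a doubled volcanic CO2 source requires a far larger CO2 rise in an insensitive climate than in a sensitive one.

```python
# Toy weathering thermostat (illustrative numbers, not Park & Royer's).
def steady_co2_ratio(sensitivity_c, weathering_gain=0.1):
    """Factor by which CO2 must rise to balance doubled outgassing.

    Assumes (illustratively) that the weathering sink scales as
    (1 + weathering_gain * warming), with warming of sensitivity_c
    degrees C per CO2 doubling.  A doubled source is balanced when
    weathering_gain * sensitivity_c * log2(ratio) = 1.
    """
    return 2.0 ** (1.0 / (weathering_gain * sensitivity_c))

for s in (1.5, 3.0, 4.5):
    print(f"sensitivity {s} C/doubling -> CO2 must rise ~{steady_co2_ratio(s):.0f}x")
```

Under these toy numbers, a 1.5 °C sensitivity would need CO2 swings of roughly a hundredfold to balance a modest change in outgassing, far larger than the geologic record shows, while a 4.5 °C sensitivity needs only about a fivefold change; this is the sense in which observed CO2 variability argues against a very insensitive climate.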

 

Figure 1: Comparison of CO2 calculated by GEOCARBSULF for varying ΔT(2x) to an independent CO2 record from proxies. For the GEOCARBSULF calculations (red, blue and green lines), standard parameters from GEOCARB and GEOCARBSULF were used, except for an activation energy for Ca and Mg silicate weathering of 42 kJ mol⁻¹. The proxy record (dashed white line) was compiled from 47 published studies using five independent methods (n = 490 data points). All curves are displayed in 10 Myr time-steps. The proxy error envelope (black) represents ±1 s.d. of each time-step. The GEOCARBSULF error envelope (yellow) is based on a combined sensitivity analysis (10% and 90% percentile levels) of four factors used in the model.

There are many other similar studies. Pagani et al. (2010) compared temperature and CO2 changes between the present day and the Pliocene (around four million years ago), just before the onset of major Northern Hemisphere ice sheets.  Using Bayesian statistical methods, Annan and Hargreaves (2006) constrain sensitivity with data from the 20th century, responses to volcanoes, and the Last Glacial Maximum.  There are other estimates based on Maunder Minimum forcing.

A review by Knutti and Hegerl (2008) examines a vast number of these papers and concludes:

"studies that use information in a relatively complete manner generally find a most likely value between 2 and 3.5 °C [for a doubling of CO2] and that there is no credible line of evidence that yields very high or very low climate sensitivity as a best estimate."

Their conclusions are summarized in Figure 2, which also shows the limitations of various methods: for example, the inability of volcanic eruptions to constrain the high end of climate sensitivity. In general, the high end of the sensitivity range is more difficult to rule out from the available evidence, and there are also theoretical reasons why a more sensitive climate system carries more uncertainty, as Gerard Roe has argued frequently.

 


Figure 2: Distributions and ranges for climate sensitivity from different lines of evidence. The circle indicates the most likely value. The thin coloured bars indicate very likely value (more than 90% probability). The thicker coloured bars indicate likely values (more than 66% probability). Dashed lines indicate no robust constraint on an upper bound. The IPCC likely range (2 to 4.5°C) and most likely value (3°C) are indicated by the vertical grey bar and black line, respectively.

Of course, the various techniques are subject to inherent model and/or data limitations.  Using modern observations to constrain sensitivity is difficult because we do not know the forcing very well (aerosols introduce a lot of uncertainty into the actual number, even though the total is known to be positive), the climate is not in equilibrium, there are data uncertainties, and there is a lot of noise from natural variability over short timescales.  Similarly, models may be useful but imperfect, so it is critical to understand their strengths and weaknesses.  Convincing estimates of sensitivity need to be robust to different lines of evidence, and also to the different choices researchers make when constructing a climate model or interpreting a certain record.

One of the criticisms of Lindzen and Choi's 2009 feedback paper was that the authors compared a number of intervals of warming/cooling over ~14 years with the radiation flux at their endpoints, but the conclusions were highly sensitive to the choice of endpoints, with results changing if you move an endpoint by even a month.  Results this sensitive to analytical choices are not robust, and even if mathematically correct, will not convince many researchers (or reasonable policy makers) that several decades of converging lines of evidence got it all wrong.
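The endpoint problem is easy to see with synthetic data.  This toy example (not Lindzen and Choi's actual series or method) fits a trend to a short noisy record, then shifts the analysis window a month at a time and watches the answer move:

```python
import random

random.seed(0)

# Synthetic monthly "temperature" series: small trend + noise, standing
# in for the kind of short record used in endpoint-style analyses.
months = 14 * 12
series = [0.001 * t + random.gauss(0.0, 0.15) for t in range(months)]

def slope(y):
    """Ordinary least-squares slope of y against its index."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Shift a 10-year window one month at a time and watch the fitted
# trend jump around -- the hallmark of a non-robust result.
window = 10 * 12
slopes = [slope(series[s:s + window]) for s in range(months - window)]
print(f"trend range over shifted windows: {min(slopes):.5f} to {max(slopes):.5f} per month")
```

With noise of realistic size, the fitted trend can swing by an amount comparable to the trend itself purely from the choice of endpoints, which is exactly why endpoint-sensitive conclusions carry so little weight.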

Roy Spencer on "More Evidence that Global Warming is a False Alarm: A Model Simulation of the last 40 Years of Deep Ocean Warming"

So what about Spencer’s “model”, where he uses a simple ocean diffusion spreadsheet setup and creates a profile of recent warming with depth in the ocean?  It turns out in this case that there is no physics other than assuming the heat transport perturbation between layers depends linearly on the temperature difference.  His “model” is just an ad hoc fitting of an ocean temperature profile from a particular atmosphere-ocean general circulation model (AOGCM), and yields no explanatory or predictive power. 

Spencer models his ocean as a pure diffusion process.  This is insufficient.  You also need an upwelling term (Hoffert et al., 1980), which in the global ocean amounts to about 4 meters per year, to compensate for the density-driven sinking of water in the Atlantic (Wigley, 2005; Simple climate models. Proc. International Seminar on Nuclear War and Planetary Emergencies).  In a somewhat more complex model, you can separate the land from the ocean in each hemisphere, and include the interaction between these reservoirs, to account for the higher climate sensitivity over land and for radiative forcings that differ between hemispheres (for example, sulfate aerosols are more concentrated in the North).  These models are still simple, with no 3-D ocean dynamics, no hydrologic cycle, and so on; such systematic first steps toward the problem (as in Hoffert et al., 1980) were taken decades ago.
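An upwelling-diffusion column of the kind Hoffert et al. (1980) described can be sketched in a few lines.  The parameter values below are rough illustrative choices, and the boundary conditions are simplified: the surface is simply held at a fixed 1 °C anomaly rather than coupled to an energy balance, and the bottom is pinned cold to stand in for polar sinking.

```python
# Sketch of an upwelling-diffusion ocean column (illustrative values,
# in the spirit of Hoffert et al., 1980 -- not Spencer's spreadsheet).
def upwelling_diffusion(K=2000.0, w=4.0, depth=4000.0, dz=50.0,
                        years=40.0, dt=0.01, T_surface=1.0):
    """Return the temperature anomaly profile (top to bottom, deg C).

    K  vertical diffusivity, m^2/yr (~0.6 cm^2/s, an assumed value)
    w  upwelling velocity, m/yr (Wigley's ~4 m/yr)
    """
    n = int(depth / dz)
    T = [0.0] * n                                   # interior starts cold
    for _ in range(int(years / dt)):
        Tn = T[:]
        for i in range(n):
            up = T[i - 1] if i > 0 else T_surface   # warm surface boundary
            dn = T[i + 1] if i < n - 1 else 0.0     # cold bottom boundary
            diff = K * (up - 2.0 * T[i] + dn) / dz ** 2
            adv = w * (dn - T[i]) / dz              # upwelling of cold water
            Tn[i] = T[i] + dt * (diff + adv)
        T = Tn
    return T

profile = upwelling_diffusion()
print(f"anomaly near 100 m: {profile[1]:.2f} C; near 1000 m: {profile[19]:.3f} C")
```

Even in this crude sketch, the upwelling term opposes the downward diffusion of heat and confines most of the warming to the upper few hundred meters; a pure-diffusion fit with tuned parameters tells you nothing about which balance of processes actually operates.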

Spencer also "starts" his model in 1955, which is problematic because the climate was already out of equilibrium at that time, carrying inertia from pre-1955 forcing.  He does not consider alternative explanations, such as the possibility of a larger (negative) aerosol forcing, and entertains only possibilities that fit his preconceived notions.

One model of low complexity (albeit much more complex than Spencer's) is the MAGICC model (Tom Wigley, personal correspondence), which can be downloaded on any Windows platform and used to play the same game Spencer is playing here.  It produces results for ocean heat content and vertical temperature profiles (although the latter are not displayed as an output); it can easily be tuned to match any AOGCM with a small number of parameters, and it fits the observed data well with a sensitivity of ~3°C per doubling of CO2.  There are other user-friendly, Windows-based GCMs available for download.  One that offers a more diverse interface than MAGICC is EdGCM (developed by Mark Chandler), in which forcing inputs can be specified and a standard laptop can run a ~100-year simulation in 24 hours or so.  EdGCM is known to produce sensitivity at the high end of the IPCC Fourth Assessment Report (AR4) range.

More modern and sophisticated models describing climate feedbacks and sensitivity (e.g., Soden and Held, 2006; Bony et al., 2006) provide much more complexity and realism, and agree with paleoclimate records that Spencer's estimates are too low.  There is a certain irony in those who think Spencer's model explains climate sensitivity and heat storage processes while readily dismissing more sophisticated models as unreasonable lines of evidence for anything.

It is also worth noting that observations of ocean heat content change bear only on the transient climate response; they have no direct bearing on equilibrium climate sensitivity.  Spencer agrees with this in one of his comments, but dismisses its relevance by saying "the climate is never in equilibrium"; yet in his article he directly compares his result with "the range of warming the IPCC claims is most likely (2.5 to 4 deg. C)", which is an equilibrium response.

If Spencer creates and publishes a model that has a very large negative feedback and can explain observed variations, then we have a starting point to debate which model (his, or everyone else’s) is better.  But for now, comments by Roy Spencer such as

These folks will go through all kinds of contortions to preserve their belief in high climate sensitivity, because without that, there is no global warming problem

are extremely ironic.  Comments such as those by Jeff Id that “Roy’s post is definitely some of the strongest evidence I’ve read that the heat from climate feedback to CO2 just isn’t there” are only an indication that he hasn’t really read any other evidence.

There is still much to be said about the "missing heat" in the ocean.  Several recent papers (Purkey and Johnson, 2010; Trenberth and Fasullo, 2011; Palmer et al. 2011, GRL, in press), for example, highlight the significance of the deep ocean.  These show that there is an energy imbalance at the top of the atmosphere and that energy is being buried in the ocean at various depths, including decades where heat is buried well below 700 meters, so that it may be necessary to integrate down to below 4000 m.  Katsman and van Oldenborgh (2011, GRL, in press) use another model to show periods of stasis during which radiation is sent back to space. Decadal variability in sea surface temperatures is also unsurprising.

There is clearly more science to be done here, but Spencer's declarative conclusions and conspiracy theories really have no place in advancing that science (nor, for that matter, do those of Roger Pielke Sr., who thinks everything stopped in the early 2000s).  Nor do absurd attacks on the IPCC's graphs (which were really based on graphs from other papers). In fact, the AR4 actually discusses how models may tend to overestimate how rapidly heat has penetrated below the ocean's mixed layer (see here and the study by Forest et al., 2006), which is Spencer's explanation for his results.  Spencer is thus not adding anything new here; he further criticizes the graphs for not labeling the "zero point" and suggests we should not include the uncertainty of natural variation when plotting model output. This is not convincing.

This must be why he knows the climate sensitivity to be 1.3°C per doubling with no uncertainty. 

Acknowledgments: I thank Kevin Trenberth and Tom Wigley for useful discussions on this subject.



Comments


Comments 1 to 50 out of 64:

  1. I apologize for my ignorance, but when you say Spencer is modelling purely on ocean diffusion, do you mean he assumes heat evenly diffuses throughout the seas without considering upwelling, convection or conduction?
  2. There's no ocean physics. All he's doing is tuning several parameters in his "model" to match observations or a particular AOGCM.

    There are models that don't have 3-D ocean physics which have credibility (of course, depending on what you're trying to do with it), but Spencer's work is well below this level and is simply not credible. Some of the models I listed can be played with by anyone on a PC and have more realism, but even something like EdGCM is well below what is in today's models.

    Spencer has a long history of making sweeping statements about "big issues" (like climate sensitivity) completely independent of whether he has the data to justify those statements, but he hasn't set up any paradigm shift in the community. I don't suspect he will either.
  3. What really irritates me, which Chris touched upon in the article, is that after Spencer comes up with these 'silver bullets' based on oversimplified models and over-tuning of parameters until they're no longer physically realistic, he then claims that he's the only one who "gets it" and other climate scientists are either stupid or ignorant or hiding something. He doesn't consider the possibility (reality) that he hasn't made a valid physical argument. Hence the aggravating quote:
    "These folks will go through all kinds of contortions to preserve their belief in high climate sensitivity"
    Yes, all kinds of contortions like doing real physics with reasonably realistic models. Such contortions!
  4. A few years ago, Spencer forwarded the idea that the increased CO2 was released from the ocean as a natural response to heating. Now he's arguing for low sensitivity because of a missing heat buildup in the upper layers. His supporting 'model' is an icon to his scientific abilities.
  5. If Spencer's model has some validity, then he will publish it in a reputable journal, if not, it will remain as a blog post and eventually be forgotten by all but the die-hard deniers. Much like his post on the rise in CO2 being due to ocean temperatures (the mathematical error in which was very obvious). In a way it is good that Roy is willing to share his ideas openly; in a less contentious area of science it would be a very good thing. Sadly in climate science it is likely that there are those who will uncritically cling onto the bad ideas as well as the good.
  6. Roy Spencer's article on whether warming causes carbon dioxide increases clearly demonstrates that he is unaware of the limitations of regression analyses. In that case we know 100% of the long-term rise in CO2 is anthropogenic, as natural emissions are always exceeded by natural uptake, yet Spencer's simple model says that the rise is 20% anthropogenic and 80% natural. It is sad that Spencer seems not to have learned from his previous errors; fitting a model to data does not mean the model is correct.
  7. Now that is a very interesting critique, thanks Chris!

    Let me make a fool of myself, having produced my own very simple model to estimate climate sensitivity, described in this post.

    In my defence:
    - Unlike Spencer, I make no claim to authority in this field.
    - I think I do a better job of assessing the limitations of my own work than he does.
    - I am also, I hope, willing to learn from others' critiques. Test me on this!

    Rationale:
    I like to test things for myself. I had already reproduced the ITR from the GHCN data, and the next project seemed to be to try and make my own estimate of climate sensitivity. I realised that I didn't have time to learn enough about the system to use a physics based approach, and so was limited to an empirical approach.

    Tamino's 'two box model' (also implemented by Lucia and Arthur Smith) has a bare minimum of physics - two heat reservoirs driven by the external forcing. However they had to pick values for the heat capacities. Refining the time constants of the exponentials is tricky and likely to be unstable. (Indeed when I tried it, it was).

    Arthur Smith also tried simply fitting an empirical response function composed of Gaussian basis functions in this post, which was my primary inspiration. I didn't really like the Gaussians though, because the long tails are not constrained by the short temperature record and yet contribute significantly to the sensitivity estimate. Additionally, Arthur's model is probably overparameterised. So I pared the model down to 5 quadratic B-spline basis functions on a log_{2.5} time axis, plus a constant temperature offset (total 6 params), and the constraint that the B-spline coefficients must be positive.

    The result is a climate sensitivity over ~60 years of just over 0.6C/(W/m2), or 2.1C/x2CO2. That's a lower bound because of the short period, assuming there isn't some mysterious negative feedback on a century timescale, and so fits in well with the IPCC estimates. That doesn't mean it's right. (To do: A transient sensitivity calculation.)

    However, we know from Roy's previous exploits that 6 parameters is enough not only to fit the elephant but to make it wiggle its trunk. I tried to address this with the cross-validation experiments I showed (fit the model on 62 years, predict 62). I've more recently recognised that I'm also subject to the equilibration problem, so those results are probably invalid (unless the model is in approximate equilibrium by chance at 1880 and 1942).

    Stability is another interesting question, which can also give an indication of overfitting. Is the model stable? Sort of. If I remember correctly it gives fairly stable results for the response function for a start date of 1880 and end dates between 1940 and 1980, but with a lower sensitivity of 0.44 (1.6C/2x) over ~60 years. With a later end date this increases towards the value I quoted earlier. Of course the recent data contains the strongest signal due to the big increase in both CO2 and temperature, and some nice volcanoes. But that doesn't explain the stability of the shorter run values. That's interesting and needs further investigation.

    But the elephant in the room is the forcings. If the forcings are bigger, the sensitivity to give the same temperature change is smaller, and vice-versa. So I am completely dependent on accepting Hansen's forcing data. If the aerosol forcing were much smaller (less negative), and thus the total forcing more positive, the resulting sensitivity would be lower. Hansen actually suggests that the aerosol forcings should be greater (more negative) than they are, however.

    So the outstanding problems I can see at the moment with my DIY approach are:
    1. I can't prove at the moment that the model is not over-fit.
    2. The stable lower sensitivity of shorter runs, with a change in sensitivity when including the recent data, needs explaining.
    3. I should probably try varying the aerosol effects and see what happens.

    So, if nothing else I've confirmed that it is hard. I may have produced a credible sensitivity estimate, but I can't prove it to my own satisfaction. I'll carry on playing with it. But maybe you can spot some more fundamental problems?
  8. If anyone can answer my question in post #1, could you also direct me to a comprehensive definition of "transient thermal response"? I see multiple uses of the term in scientific literature, but I haven't been able to puzzle out the full context.
  9. So when will we see Spencer submit this paper to Nature, I wonder? He should be challenged to do so. If he doesn't, then I don't know why I should bother bothering about anything he writes at this point. He's entered the realm of von Daniken, IMO -- only he seems to have a cynical political motivation.
  10. Trueofvoice,

    Understanding 'transient thermal response', in this context, is probably easiest when considering the simple model of putting a tea kettle on a stove at low heat (insufficient to reach boiling point). The transient thermal response is the differential change in temperature at any particular point in time (∆T/∆t), between putting the kettle on the stove and the final temperature when reaching an equilibrium state.

    As for your first question, I suspect Spencer is only considering conduction (linear diffusion between layers) in his model, i.e., treating the oceans as if they were as simple a system as the tea kettle.
  11. How about adding a tab to the bottom of this article that lists and links to other SkS articles directly related to this topic?
  12. How well do the current array of climate models address the distribution of heat within the oceans?
  13. If the present rather stable trend of ca. 0.15 °C warming per decade continues for a few more decades, we will have a simple refutation of the lowest sensitivity estimates. And, apparently, we are far from radiation balance now.

    One major problem in handling sensitivity is that it behaves like a "true" random variable, seemingly with expectation and variance connected, as with the Poisson distribution. That means, among other things, that a higher "true" (expectation-value) sensitivity could very possibly be associated with _more_ low-value samples than a lower sensitivity with smaller variance. And if you find out how to look for such samples, you are set up with a whole denialist cottage industry in sensitivity "estimation" - of which we may have seen the first examples. It may be viewed as a kind of science-based cherry-picking. On the other hand, we may also get way too high estimates for the same reasons - which I think would be even worse.

    I suspect that the ocean effects are very difficult to model adequately, and with no check of them, we can't really tell how representative our sensitivity samples are. But it is always very helpful when different independent lines of evidence seem to give compatible results, as we see in this case. But it's very important not to underestimate the uncertainty involved.
  14. NOAA now uses the phrase, "The combined global land and ocean average surface temperature..." in its monthly and annual reports on the status of the climate.

    Am I correct in assuming that NOAA means the temperature of the lower troposphere over the oceans when it says, "ocean average surface temperature"?
  15. Thanks, luminous.
  16. Reading Trenberth and Fasulo (2011) I find myself playing catch up. My first question is whether photosynthesis is accounted for in the global energy balance. Trenberth's graph doesn't break this out, and maybe it's buried in the earlier literature. But I wouldn't on reading the chart and the accompanying text assume that the latent heat stored by photosynthesis is included. Of course, the amount could be trivial compared to the overall budget. In doing some research I've seen that claimed.

    According to Wikipedia, cyanobacteria in the ocean account for 20-30% of the photosynthetic energy at 450 TW. Using the conservative 30% (to minimize total photosynthetic energy) and 5.1E14 square meters for the earth's surface area I get nearly 3 W/m2. That seems a reasonably large chunk given a defect error of 0.9 W/m2, and a surface absorption of 116 W/m2 according to Trenberth.

    My second concern is that I think the defect of 0.9 W/m2 has reasonably large error bars (1 sd = 0.5 W/m2) compared with the total budget. A 10% variation in the photosynthesis budget is a good fraction of 1 sd on the energy defect. Do cyanobacteria photosynthesize more in warmer oceans? Is there a CO2-driven increase that needs to be factored in (probably not, but worth checking)?

    Comments?
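[Editor's note: the back-of-envelope arithmetic in the comment above can be checked directly; the 450 TW figure and the 30% share are the commenter's cited assumptions, not independently verified here.]

```python
# Reproducing the comment's arithmetic (inputs are the commenter's
# assumed figures, not verified values).
cyano_power_w = 450e12   # W, ocean cyanobacteria photosynthesis (assumed)
cyano_fraction = 0.30    # assumed share of all photosynthesis
earth_area_m2 = 5.1e14   # m^2, Earth's surface area

total_flux = cyano_power_w / cyano_fraction / earth_area_m2
print(f"implied global photosynthetic energy flux ~ {total_flux:.2f} W/m^2")
```

The arithmetic does come out to just under 3 W/m²; Tom Curtis's reply below explains why this flux nonetheless nets to roughly zero in the energy budget.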
  17. It seems Dr. Spencer has found a hobby in making utterly terrible "models" of the climate system lately. And the funny part is that these blunders are exactly consistent with his past criticism of climate modelling:
    The modelers will claim that their models can explain the major changes in global average temperatures over the 20th Century. While there is some truth to that, it is (1) not likely that theirs is a unique explanation, and (2) this is not an actual prediction since the answer (the actual temperature measurements) were known beforehand.

    If instead the modelers were NOT allowed to see the temperature changes over the 20th Century, and then were asked to produce a ‘hindcast’ of global temperatures, then this would have been a valid prediction. But instead, years of considerable trial-and-error work has gone into getting the climate models to reproduce the 20th Century temperature history, which was already known to the modelers. Some of us would call this just as much an exercise in statistical ‘curve-fitting’ as it is ‘climate model improvement’.

    So really he's only making models exactly the way he thinks they're made, by curve-fitting instead of physics, years after criticizing everyone else for allegedly doing it that way. He's both:
    - Consistent, because he's following the imaginary modelling procedure he outlined a couple of years ago on his blog.
    - Inconsistent, for whining about how wrong the models are and then turning to these same methods to seemingly prove his side of the climate manufactroversy.
    Tamino calls curve-fitting without physics "mathturbation." It's strange how I rarely see the "skeptics" criticizing this kind of shallow effort. Where's the Climate Auditor?
  18. Dave123, energy stored by photosynthesis is stored as chemical energy, then released as low grade heat into the environment when that stored energy is used as food. The storage is only for a short time (<1 year) on average. Because the amount released is approximately equal to the amount stored, it makes no difference to the overall budget.

    A very small amount of the stored energy is not released, because it gets incorporated into sediments in low-oxygen environments. The lack of oxygen prevents decay, and hence the release of the energy. Over time, and given the right conditions, that energy eventually gets turned into fossil fuels. However, given that humans are using fossil fuels at very much above the replacement rate, it follows that energy released from ancient photosynthetic storage is currently much greater than energy lost through fossilization of current photosynthetic storage. As the energy released by burning fossil fuels is inconsequential in terms of the total global energy budget, the much smaller amount lost through fossilization is certainly also inconsequential.
  19. Thanks Tom
  20. One fact which continues to fascinate me is that geomorphologists and palaeoclimatologists, among others, have long known that even in the Holocene, i.e. the last 10,000 years, global sea levels have been up to ~1.5 m above the present level for periods of up to thousands of years, and moreover have gone up and down again not once but at least three times!

This is over a period stretching from about 7500 years ago up to about 3000 years ago. Yet we can point to neither orbital/precessional/obliquity effects (aka Milankovitch effects) nor to episodic elevated CO2 or methane levels as likely drivers of those significant sea level changes. Note that the rates of sea level shift involved have typically been about 1–2 mm/year over centennial to millennial timescales, i.e. similar to what we see at present.

I am happy to be corrected on this, but I also cannot recall any published evidence for major oscillations in the global polar/glacier ice inventory suggesting that episodic Holocene meltwater volumes (i.e. in the last 10,000 years) were ever significant enough to cause three separate episodes of high sea level stands of the order of 1.5 m above present.

If I am correct, this can surely only mean that the heat content of the oceans themselves has varied considerably in the recent past, on centennial to millennial timescales, thus producing sea level change of the order of up to 1.5 m due to expansion and contraction effects alone.

Further, this suggests to me that perhaps both Roy Spencer's simple model and the various AOGCMs may all be invalid (in respect of global oceanic heat contents) because they either cannot be, or have not been, run out to sufficiently long timescales to allow validation against simply the known sea level record of the relatively recent past?
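As a rough order-of-magnitude check of the thermal-expansion idea above: assuming round numbers (a mean seawater thermal expansion coefficient of ~2×10⁻⁴ per K and a mean ocean depth of ~3700 m, both assumptions for illustration, not values from the comment), thermosteric expansion alone would need roughly 2 K of whole-ocean warming to produce a 1.5 m sea level change:

```python
# Back-of-envelope thermosteric estimate (assumed round numbers):
# sea level change ~ alpha * delta_T * mean_depth
alpha = 2.0e-4        # /K, assumed mean thermal expansion coefficient of seawater
mean_depth = 3700.0   # m, assumed mean ocean depth
rise = 1.5            # m, the Holocene highstand discussed above

delta_T = rise / (alpha * mean_depth)   # required mean ocean warming
print(f"~{delta_T:.1f} K of whole-ocean warming")  # ~2 K
```

That is a very large amount of heat, which is why the ice-redistribution explanations in the replies below are worth weighing against pure expansion.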
Ecoeng, could you point to the studies that show global sea levels to have been up to 1.5 m higher than the present level, and to have fluctuated so much, in the period you indicate?
Not a problem. I was expecting such queries. These findings have become ubiquitous in recent years as the precision of U/Th and 14C dating measurements on un-recrystallized fossil corals still in growth position has improved (as a result of the development of accelerator mass spectrometric and laser-based plasma mass spectrometric techniques). Here is just one fairly recent example:

    http://eprints.jcu.edu.au/1855/
  23. Ecoeng - "If I am correct, this can surely only mean that the heat content of the oceans themselves has varied considerably in the recent past, on centennial to millenial timescales, thus producing sea level change of the order of up to 1.5 m due to expansion and contraction effects alone"

    Well either that or most of the sea level rise came from ice loss in the northern hemisphere during the period of increased solar heating there (the Holocene Climatic Optimum). The reduced gravitational attraction adjacent to the ice sheets in the northern hemisphere would have lowered sea level there, but caused greater sea level rise in the southern hemisphere, so it was not a globally uniform phenomenon.

    Ocean siphoning then would have lowered sea levels again, as too would the regrowth of northern hemisphere ice, as the solar heating in the northern hemisphere cooled. See the work of Jerry Mitrovica on this topic. It's hard going though - perhaps an easier explanation is here: Why sea level is not level

    I briefly touched on the mid-Holocene sea level highstand in the Pacific in this post: Coral atolls and rising sea levels: That sinking feeling
  24. Ecoeng, thanks. I do note it is a regional reconstruction. Also note that it does discuss possible reasons for the observations, including several that have little to do with expansion/contraction.
  25. >>"ubiquitous"

    Should be worthy of further citations.

    TY arch
Not sure how many people know that the industry-sponsored deniers were at it again in Washington D.C. (June 30 to July 1)

    climateconference.heartland.org

The usual suspects include Roy Spencer, Harrison Schmitt, Willie Soon, Anthony Watts, and S. Fred Singer, to name only a few
Spencer and Christy are at it again, using the 33rd anniversary of the satellite record to repeat their claims that their satellite records 'disprove' climate model projections and that the observed warming is therefore mostly 'natural variability'.

A Washington Post article on these claims included a great rebuttal in image form:

[Image: graph of the UAH-reported warming trend at each published revision]
    This was apparently created by John Abraham and sent to them by Andrew Dessler. In any case, I hadn't seen it before and it really brings home just how many serious problems there have been with Spencer & Christy's work and how they have consistently been biased in one direction.

    The temperature trend Spencer & Christy show now is more than 0.2 C per decade higher than their original claims. If we apply that as a 'demonstrated uncertainty range' around their current claim then the possible spread on their current value includes warming much greater than any of the mainstream projections.
  28. CBDunkerson @27,

    Agreed. Spencer and Christy have way too much confidence in their product. Other satellite products such as STAR show much greater rates of warming in the mid-troposphere than do UAH and RSS. So to claim that the satellite data are the gold standard or the benchmark for global temperatures is just plain wrong.

    This was a pretty blatant PR exercise and misinformation campaign by Christy and Spencer-- and I suspect that we will keep hearing them repeating the same debunked myths each year. They seem to be under the impression that if they keep repeating falsehoods they will become true.

That was a pretty good article by Freeman; good for him for calling Spencer and Christy on their misinformation.

    But no worries, SkS is on the case and will have something posted soon :)
Yes, look for our response to the UAH 33rd anniversary release, probably on Monday.
  30. Dana, I hope you fit a line to their predictions ;)
  31. CBD: excellent graph. It should be added to every post about Spencer and Christy. The entire Post article was a good read.
  32. Tristan - that's an interesting idea, though it seems a little mean :-) I wouldn't want to imply that because the UAH trend estimates have been trending upward at 'x' °C per decade, that trend will necessarily continue. The trend might be worth a brief mention though, with that caveat.
CBDunkerson, you should read Spencer and Christy's explanation of what that graphic is: http://www.drroyspencer.com/2011/12/addressing-criticisms-of-the-uah-temperature-dataset-at-13-century/. In a nutshell, it shows the increase of the temperature trend over time (i.e. the world is warming). It is not a graphic showing biases in their estimates (although there were some).
  34. In the comments section at the Washington Post article (link above), John Abraham defends the graphic shown in #27 thusly: "The intent of this graph is to show that there have been continuous errors in the UAH results and as time has progressed, fixing those errors has resulted in UAH data being brought into closer and closer agreement with the rest of the scientific community."

    But the graphic shows neither the errors in UAH results nor a comparison with the rest of the scientific community. The points in the graph (and specifically their upward slope) are the result of two things: the temperature trend increasing over time (especially considering that UAH starts in 1979) and errors in the UAH results that were fixed. The label under the graphic in the article is still misleading, unfortunately.
  35. Eric @33,

    That is not my understanding or how I interpret the graphic. My understanding is that those corrections are applied to the entire record, retroactively. Those corrections have resulted in a net increase in the rate of warming in the UAH data. Either that or the rate of warming is accelerating quite rapidly over the satellite era. Or it is both, which is what I think is the most plausible explanation. But Spencer and Christy do not agree that the rate of warming is increasing. In fact, Christy is claiming in the press release that there has been little or no warming since 1998. Now where have we heard that fallacy before?

    The fact remains that many of the errors in the satellite data led to a low bias, and going by other groups like STAR and UW, the UAH data (and to a lesser extent the RSS data) are still biased low. And let us not forget that Spencer and Christy were reluctant to accept that their product had a cool bias when it was pointed out to them by Trenberth and Hurrell back in 1997.

The history and observations hardly support Spencer and Christy's claim that their product is the gold standard.

    This is all about spin and PR for Christy and Spencer, sadly not about science. Science for them is just a means to an end.
Albatross, I agree when you say "Or it is both, which is what I think is the most plausible explanation", and I agree that Christy's comment about 1998 is wrong (as are most "skeptic" comments that mention that year). The current blog post of S&C does jibe with your "But Spencer and Christy do not agree that the rate of warming is increasing". By saying that the rate is decreasing, S&C are admitting that some of the trend increase (particularly recent) in the graph was due to correction of errors in the UAH data.

So the corrected error in some cases may be more than what is shown in the graph, and in other cases is obviously a lot less, as they explain (e.g. the jump in trend due to the El Nino in 1998). But the graph does not separate out which is which. The label in the article, "Corrections made to the University of Alabama-Huntsville temperature record over time.", is misleading.
Eric (skeptic) @36, using current UAH data, the trend from 1979 to 1997 is positive, whereas in 1998 UAH showed it as negative. The difference is such that around half the difference between C&S 1998 and CS&B 2000 is a result of corrections, and so it is not true that, as Christy would have it:

    "The major result of this diagram is simply how the trend of the data, which started in 1979, changed as time progressed (with minor satellite adjustments included.) The largest effect one sees here is due to the spike in warming from the super El Nino of 1998 that tilted the trend to be much more positive after that date."


Further, on their current product, the trend from 1979 to 2002 is less than the trend between 1979 and 1999. Therefore that 0.03 C/decade increase in the trend is entirely due to adjustments. In all, based on a visual inspection of this graph, around three quarters of the total increase in trend over time is due to adjustments. Attributing the "major result" of the diagram to being "...simply how the trend of the data ... changed as time progressed" is a substantial misstatement.

Further, if there had been "... little or no additional warming" since 1997, as Spencer and Christy maintained in their press release, then the trend would have declined gradually after 1997 as the record lengthened but the temperature did not increase. In other words, if their statement in the press release had been true, the trend for 2011 would have been less than that reported in 1998, with any difference being the result of adjustments. Being generous, the decline in trend should follow the result reported in 2000, meaning that if their press statement were correct, and their criticism of Abraham's graph were also true, the current UAH trend should be 0.05 C/decade or less.

This constitutes a major contradiction between their two statements, and is not reconcilable as an honest mistake. If they thought a week ago that the data indicated no increase in the trend after 1997, then one week's data cannot have justified a switch to the position that most of the increase in trend since then has been the result of warming. If, on the other hand, the claim that the increase in trend since 1997 reflects the data rather than adjustments is their honest opinion, they cannot also have believed just one week ago that there has been "... little or no additional warming since then".
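The point about a declining trend can be illustrated numerically. This is a minimal sketch with synthetic data (not the actual UAH record), assuming temperatures rise steadily to 1997 and stay flat thereafter; the running OLS trend computed from 1979 then falls, rather than rises, as flat years are appended:

```python
import numpy as np

# Synthetic illustration (NOT the actual UAH record): temperatures rise
# 0.01 deg C/year until 1997 and are flat afterwards.
years = np.arange(1979, 2012)
temps = np.where(years <= 1997, 0.01 * (years - 1979), 0.01 * (1997 - 1979))

def trend_through(end_year):
    """OLS trend in deg C per decade over 1979..end_year."""
    mask = years <= end_year
    return np.polyfit(years[mask], temps[mask], 1)[0] * 10.0

print(trend_through(1997))  # ~0.10 deg C/decade while warming continues
print(trend_through(2011))  # smaller: the flat tail drags the trend down
```

So if "little or no additional warming" after 1997 were true, each successive year of data would have pulled the reported trend downward, not upward.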
Tom: you are mostly correct, although the jump in trend after '98 "tilted the trend to be much more positive after that date," as Christy claims. But you are right that the trend after that should have decreased if "... little or no additional warming" since 1997 is a correct claim. Looking at the trends, it is clear that the 1997 claim is not correct. Assuming that Christy is being misleading in that latter claim, it still does not justify the label on the graph in #27 which I quoted in #36. It is not possible to say that the label is correct given the jump in 1998, the start of trending in 1979, and the relative flatness in the '80s.

    What would make the label correct is saying it applies post 1998 or something along those lines.
Eric (skeptic) @38, I believe you are over-interpreting the graph's title. The graph does indeed show corrections to the "University of Alabama-Huntsville temperature record over time". Every data point except the most recent is a journal article reporting a correction. The articles are presented so as to show the reported trend for each journal article. It is true that, as presented in the blog, the graph invites your misunderstanding, which is a problem. However, it is certainly not clear that the problem lies in the graph, which as originally produced and presented by Abraham may well have explained the issue we are discussing.

Taking a different approach to that issue, I have just come across this figure detailing corrections to the UAH record:

[Image: figure listing published corrections to the UAH record]

(h/t to Eli)

Together with the correction of 5th Dec 2006 (listed here), they sum to a total correction of +0.079 degrees C per decade, ignoring some minor corrections whose effects are not given. That means, by Spencer and Christy's own account, corrections represent 39% of the total change in reported trend from 1994 to 2011. Adjustments post 97/98 El Nino amount to 0.049 degrees C per decade, meaning the El Nino itself contributed 0.052 degrees C per decade of the trend in 2000. That is, the 97/98 El Nino, which Spencer and Christy call the largest effect, is approximately equal in effect to adjustments since 1998, and less in effect than total adjustments, which they consider not worthy of mention. I need only add that the significance of the 97/98 El Nino declined with time given the warm years that followed.
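As a back-of-envelope check of the percentages above: the 0.079 and 0.049 figures are the ones quoted in the comment, while the 1994 and 2011 reported trends below are illustrative assumptions chosen only to make the "39%" arithmetic concrete (the actual reported endpoint values are not reproduced here):

```python
# Back-of-envelope check of the figures quoted above.
total_corrections = 0.079          # deg C/decade, sum of listed corrections
trend_1994 = -0.06                 # assumed reported trend in 1994 (illustrative)
trend_2011 = 0.14                  # assumed reported trend in 2011 (illustrative)

total_change = trend_2011 - trend_1994          # ~0.20 deg C/decade
correction_share = total_corrections / total_change
print(f"corrections explain ~{correction_share:.0%} of the trend change")

post_el_nino_adjustments = 0.049   # deg C/decade, adjustments after the 97/98 El Nino
el_nino_effect_2000 = 0.052        # deg C/decade, El Nino contribution per the comment
# Nearly equal: hard to call the El Nino "the largest effect".
```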
  40. Addendum, looking through the UAH readme file, I have found more positive corrections not reported in the above chart and not included in the calculations I performed. I may produce a more accurate record later if I have time and interest.
  41. Tom @40,

    I know time is an issue, but I encourage you to please do that.
Tom, do you think that in a nontechnical press article people are going to realize that "corrections" means "journal article", or are they going to read that "[Santer] and other researchers contacted for this column noted that there have been several instances when Christy and Spencer have had to correct their datasets for factors such as changes in satellite orbits over time, and with each correction the data has come into better alignment with surface warming and model projections." (right below the figure noting the "corrections")

I realize that the WashPost article is ambiguous, since it also says "Santer's recent study found that the warming seen in the UAH dataset is unlikely to be the result of natural variability alone." Thanks for quantifying the correction versus the trend. That's something that should have been done all along.
Eric, it is entirely accurate to call the chart a list of corrections because the papers cited were corrections. Yes, the warming trend would also change over time, but as has been demonstrated above, that is a relatively minor contributor... Christy's claim to the contrary is:

    A: Demonstrably false based on the numbers.
    B: Directly contradictory to the original claims in their press release that their results were biased high (Ha!) due to anomalously cool years in the early part of the record and thus showed that the warming trend was declining over time.

    I suppose a chart could have been constructed to calculate the trend only over the period covered by Spencer & Christy's original report (i.e. up to 1995) based on each stage of the corrections, but obviously that would have required access to all the past values rather than just citing the trends that Spencer & Christy themselves reported at various points. It would also then be 16 years out of date. A 'perfect' comparison would require using the base satellite data and their different methodologies / calculations (which I don't believe they release) from each stage of revision to compute a trend up to present. If anything, given that they had several issues which introduced progressively larger cooling biases over time, that would likely show a much larger divergence.

    Finally, in your most recent post you ask how people would interpret "corrections"... but I don't see why. Either of the possible interpretations you cite would be entirely accurate.
  44. To put it another way, you seem very concerned that the Washington Post article and/or graph could be misinterpreted to mean something other than intended but still true... but not particularly put out that what Spencer and Christy are saying is blatantly false.
For those readers who are not familiar with Spencer and Christy, it should be pointed out that Spencer and Christy did not find the errors in their temperature measurements. The errors were pointed out by other scientists, and only then did Spencer and Christy correct their mistakes. Usually scientists find their own errors. Since there were so many errors to be corrected, and S&C consistently erred in the negative direction, one wonders why their measurements are usually on the low side.
CBDunkerson, 61% is not minor, and the Washington Post caption is misleading. As for S&C, their blogging style is poor point-scoring, and Spencer in particular is always changing the topic. Their heading for the graph is "The major result of this diagram is simply how the trend of the data, which started in 1979, changed as time progressed (with minor satellite adjustments included.) The largest effect one sees here is due to the spike in warming from the super El Nino of 1998 that tilted the trend to be much more positive after that date." That is not particularly clear, but not misleading.
Just looked at Christy, Spencer & Braswell 2000, which acknowledges that they were not the first to find the errors. They also point out that their trend estimate of 0.06 C/decade was plus or minus 0.06, and that error range includes neither the shortness of the time series nor future unknown errors. At the very least, the graphic in post 27 and the Washington Post should have shown those error bars.
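For readers wondering where such an error bar comes from, here is a minimal sketch (synthetic data, naive ordinary-least-squares standard error; a real analysis would also have to account for autocorrelation, which widens the range further):

```python
import numpy as np

# Synthetic illustration of a trend estimate with a 2-sigma error bar.
rng = np.random.default_rng(0)
years = np.arange(1979, 2000)                 # ~21 years, as in the CSB 2000 era
temps = 0.01 * (years - 1979) + rng.normal(0.0, 0.05, years.size)

x = years - years.mean()                      # centered predictor
A = np.vstack([x, np.ones_like(x)]).T
coef, residuals, *_ = np.linalg.lstsq(A, temps, rcond=None)
slope = coef[0]                               # deg C per year
dof = years.size - 2
sigma2 = residuals[0] / dof                   # residual variance
se_slope = np.sqrt(sigma2 / np.sum(x ** 2))   # naive (white-noise) standard error

print(f"trend = {slope * 10:.3f} +/- {2 * se_slope * 10:.3f} deg C/decade (2 sigma)")
```

With a short, noisy series, the error bar can easily be as large as the trend itself, which is exactly the ±0.06 situation CSB 2000 reported.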
  48. Eric wrote: "That is not particularly clear but not misleading."

    Misleading, no. Completely false, yes.

    The El Nino was NOT the largest effect. See Tom's analysis above and his subsequent note about additional adjustments which weren't included in the chart.
  49. I demur. Stating something that is "Completely false" is misleading.
  50. Tom @40 - I'd be interested in seeing a more accurate record. It would be useful for the response post.
