Lessons from Past Climate Predictions: IPCC AR4 (update)

Posted on 23 September 2011 by dana1981

Note: this is an update on the previous version of this post.  Thanks to readers Lucia and Zeke for providing links to the IPCC AR4 model projection data in the comments, and Charlie A for raising the concern about the quality of the original graph digitization.

In 2007, the IPCC published its Fourth Assessment Report (AR4). In the Working Group I (The Physical Science Basis) report, Chapter 8 was devoted to climate models and their evaluation. Section 8.2 discusses the advances in modeling between the Third Assessment Report (TAR) and AR4.

"Model improvements can...be grouped into three categories. First, the dynamical cores (advection, etc.) have been improved, and the horizontal and vertical resolutions of many models have been increased. Second, more processes have been incorporated into the models, in particular in the modelling of aerosols, and of land surface and sea ice processes. Third, the parametrizations of physical processes have been improved. For example, as discussed further in Section 8.2.7, most of the models no longer use flux adjustments (Manabe and Stouffer, 1988; Sausen et al., 1988) to reduce climate drift."

In the Frequently Asked Questions (FAQ 8.1), the AR4 discusses the reliability of models in projecting future climate changes. Among the reasons it cites for confidence in model projections is their ability to reproduce past climate changes, a process known as "hindcasting".

"Models have been used to simulate ancient climates, such as the warm mid-Holocene of 6,000 years ago or the last glacial maximum of 21,000 years ago (see Chapter 6). They can reproduce many features (allowing for uncertainties in reconstructing past climates) such as the magnitude and broad-scale pattern of oceanic cooling during the last ice age. Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) can be modelled with high skill when both human and natural factors that influence climate are included. Models also reproduce other observed changes, such as the faster increase in nighttime than in daytime temperatures, the larger degree of warming in the Arctic and the small, short-term global cooling (and subsequent recovery) which has followed major volcanic eruptions, such as that of Mt. Pinatubo in 1991 (see FAQ 8.1, Figure 1). Model global temperature projections made over the last two decades have also been in overall agreement with subsequent observations over that period (Chapter 1)."


Figure 1: Global mean near-surface temperatures over the 20th century from observations (black) and as obtained from 58 simulations produced by 14 different climate models driven by both natural and human-caused factors that influence climate (yellow). The mean of all these runs is also shown (thick red line). Temperature anomalies are shown relative to the 1901 to 1950 mean. Vertical grey lines indicate the timing of major volcanic eruptions.

Projections and their Accuracy

The IPCC AR4 used the IPCC Special Report on Emissions Scenarios (SRES), which we examined in our previous discussion of the TAR. As we noted in that post, thus far we are on track with the SRES A2 emissions path. Chapter 10.3 of the AR4 discusses future model-projected climate changes, as does a portion of the Summary for Policymakers. Figure 2 shows the projected change in global average surface temperature for the various SRES scenarios.


Figure 2: Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B, and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios.

Figure 3 compares the multi-model average for Scenario A2 (the red line in Figure 2) to the observed average global surface temperature from NASA GISS. In the previous version of this post, we digitized Figure 2 in order to create the model projection in Figure 3; however, given the small scale of Figure 2, this was not a very accurate approach. Thanks again to Zeke and Lucia for pointing us to the model mean data file.


Figure 3: IPCC AR4 Scenario A2 model projections (blue) vs. GISTEMP (red) since 2000

The linear global warming trend since 2000 is 0.18°C per decade for the IPCC model mean, vs. 0.15°C per decade according to GISTEMP (through mid-2011). The observations fall well within the model uncertainty range (shown in Figure 2, but not in Figure 3), but the observed trend over the past decade is a bit lower than projected. This is likely due mainly to the increase in human aerosol emissions, which was not anticipated in the IPCC SRES, as well as other short-term cooling effects over the past decade (see our relevant discussion of Kaufmann 2011 in Why Wasn't The Hottest Decade Hotter?).
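For readers who want to reproduce this kind of comparison, here is a minimal sketch of the trend calculation: an ordinary least-squares fit to annual anomalies, with the slope scaled to °C per decade. The anomaly values below are placeholders, not the actual GISTEMP or model-mean numbers; substitute the real series from the data files linked above.

```python
import numpy as np

# Placeholder annual anomalies for 2000-2010 (NOT the real GISTEMP values);
# substitute the series downloaded from NASA GISS or the AR4 data file.
years = np.arange(2000, 2011)
anoms = np.array([0.40, 0.52, 0.61, 0.60, 0.52, 0.65,
                  0.59, 0.62, 0.49, 0.60, 0.66])

slope_per_year = np.polyfit(years, anoms, 1)[0]  # OLS slope, deg C per year
print(f"Linear trend: {10 * slope_per_year:.2f} deg C/decade")
```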

What Does This Tell Us?

The IPCC AR4 was only published a few years ago, and thus it's difficult to evaluate the accuracy of its projections at this point. We will have to wait another decade or so to determine whether the models in the AR4 projected the ensuing global warming as accurately as those in the FAR, SAR, and TAR.

Section 10.5.2 of the report discusses the sensitivity of climate models to increasing atmospheric CO2.

"Fitting normal distributions to the results, the 5 to 95% uncertainty range for equilibrium climate sensitivity from the AOGCMs is approximately 2.1°C to 4.4°C and that for TCR [transient climate response] is 1.2°C to 2.4°C (using the method of Räisänen, 2005b). The mean for climate sensitivity is 3.26°C and that for TCR is 1.76°C."

The reasonable accuracy of the IPCC AR4 projections thus far suggests that they will add another piece to the long list of evidence that equilibrium climate sensitivity (including only fast feedbacks) is approximately 3°C for doubled CO2. However, it will take at least another decade of data to accurately determine what these model projections tell us about real-world climate sensitivity.


Comments


Comments 1 to 50 out of 104:

  1. The caption in Figure 3 reads since 1990, but the data shows from 2000.
  2. Whoops, thanks Alex; that caption was a holdover from the previous version.
  3. I'd like everyone (especially critics and 'skeptics') to note four things here: 1) the error has been acknowledged; 2) the error was corrected with lightning speed; 3) the people who identified the error were thanked and their names highlighted at the top of the post; 4) the correction did not change the primary point or conclusion of the post, as far as I can tell.
  4. Sadly I see that this incident is already being distorted and spun in certain quarters-- and that after Dana graciously thanked them for their feedback. Must be desperate times and/or a slow week for "lukewarmers" and 'skeptics' if this is something that gets them so excited.
  5. To be fair, Lucia wrote a blog post on the subject before I updated the post, and maybe even before she commented here. Then again, she could have just commented here noting the problem without writing a blog post, in which case, as we found out, the problem would have quickly been corrected. To each his (or her) own. I hope the matter is now considered resolved.
  6. Fair point, Dana. The text in my post @4 above should probably read: "Sadly I see that this incident is already being distorted and spun in certain quarters-- and that after Dana graciously thanked them for their feedback." Regardless, it certainly seems that some people are trying to use the situation to their advantage.
  7. Since critics will be linking to the original post, it would be worth adding a note there which points out that this revised version has been posted.
  8. Albatross-- My post was probably written and published before Dana posted any thanks. Don't let the time stamps fool you - it's still Sept 22 in my time zone. I'm glad to see Dana acknowledged the corrections -- including Charlie A. But it happens that I don't think the digitization error is the only flaw here. I happen to think Dana should have: a) Included other observational data sets like HadCrut and NOAA. (If s/he thinks they are inferior to GISTemp, s/he should say why he thinks so.) b) Discussed whether the comparison in his figure ends with an El Nino year vs. a La Nina year. (BTW: If the observations ends with 2010, it ends with an El Nino). The models average over numerous cases so it's not important for the models but it matters for the observations. c) Mention the exact year when the AR4 was published rather than merely saying it was published "recently". After mentioning the year, he should explain why his comparison starts in 2000. (Whatever the reason for his choice, he should give it.) Of course Dana is not required to do all these things, particularly since Skeptical Science is a blog like any other. But it's hardly fair to complain that people at other blogs are expressing their opinion on these shortcomings merely because Dana did acknowledge that some people found obvious errors in his first post and brought them to his attention.
  9. To be fair, Lucia wrote a blog post on the subject before I updated the post, and maybe even before she commented here. Then again, she could have just commented here noting the problem without writing a blog post, in which case, as we found out, the problem would have quickly been corrected.
    I commented here then immediately wrote the blog post. The issue of Skeptical Science revising posts, and the blog owner seeming to forget this has happened and then inserting inline replies, happens to be a 'live' topic at blogs. Under the circumstances, I judged posting immediately to be the most appropriate thing to do, and I still judge it so. As it stands, people can witness first-hand how you responded. Had I done otherwise, they could not witness this. This was part of my motive. The other motive is to host discussion of features of your blog post -- and as you can see from my previous post, I have opinions on issues beyond the mistake in the computation of the trend.
  10. Lucia... Please note that Dana is a man. No need for further "s/he" references.
  11. Lucia: The general reason for consistently using GISTEMP is that it is a truly global dataset, whereas HadCRUT, for instance, is not (it excludes the poles). I am not personally sure about NOAA. As to your other two points, you may want to recheck the first line of the article, and also where Dana stated that the trend calculated from GISTEMP was through mid-2011, which so far has been La Nina.
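To make the coverage point concrete, here is an illustrative sketch (with an invented zonal anomaly profile, not real data) of how excluding the Arctic pulls down an area-weighted global mean when warming is amplified at high northern latitudes:

```python
import numpy as np

# Idealized zonal-mean anomaly profile (invented numbers, illustration only):
# modest warming everywhere, amplified toward the Arctic.
lats = np.arange(-87.5, 90, 5.0)                      # band centres, degrees
anom = 0.5 + 0.8 * np.clip((lats - 60) / 30, 0, 1)    # extra warming north of 60N

weights = np.cos(np.deg2rad(lats))                    # area weight per band

full = np.average(anom, weights=weights)
mask = lats <= 60                                     # crude "poles excluded" coverage
partial = np.average(anom[mask], weights=weights[mask])

print(f"full coverage mean: {full:.3f} C")
print(f"excluding Arctic:   {partial:.3f} C")         # lower, as expected
```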
  12. Lucia, The post is about the IPCC AR4 projections (as described in chapter 10.3 of the AR4 report) and their accuracy. Figure 2 is the second figure in that report. It presents GISS data up to the year 2000 and model projections from there until 2100. Had the IPCC plot used one of the other instrumental records or a different cut-off between the observational and modelled data, then it would make sense for Dana to have used those. But that isn't the case. Seems pretty obvious to me. The IPCC AR4 report was published in 2007 (it states it on the report!). Is that relevant?
  13. About -> Team might be useful for Lucia going forward. I'm glad that she is finally beginning to familiarize herself with Skeptical Science.
  14. lucia - you are of course entitled to your own opinion, but your opinion seems to consist of a whole bunch of nitpicks [you should have explained 'x' and 'y']. There are a million things I 'could have' explained, and what I 'should have' explained [especially your examples] is extremely subjective. You also might want to read the second word in the post, and also the title of Figure 3. I'm relieved to see that I'm not the only one who makes errors.
  15. Hello Lucia, Welcome to Skeptical Science. I'm very sorry that you have some misgivings about this site, and even some suspicions it seems, but please do not be so quick to judge. We appreciate that you assisted earlier in identifying flaws; it is important to get things right, and Dana corrected the error with lightning speed. While we welcome your opinions, I'm afraid that your minor points have their own issues:
    "a) Included other observational data sets like HadCrut and NOAA. (If s/he thinks they are inferior to GISTemp, s/he should say why he thinks so.)"
    What Craig said @12. I think that you know as well as we do that each of the datasets has its limitations. What do you think is the best GAT analysis and why? Dana was simply being true to the original graphic that was shown.
    "b) Discussed whether the comparison in his figure ends with an El Nino year vs. a La Nina year. (BTW: If the observations ends with 2010, it ends with an El Nino). The models average over numerous cases so it's not important for the models but it matters for the observations."
    As it happens, that is not quite as straightforward as you presume, because 2010 was a transition year. The CPC ONI data show that El Nino conditions persisted until April, with a short window of neutral conditions, followed by the formation of a La Nina in JJA. So while the impacts on global SATs would have potentially lingered for five months or so after the El Nino, it is not strictly correct to say that "If the observations ends with 2010, it ends with an El Nino" as you did; 2010 actually ended in a strong La Nina. Besides, as noted by Alex @11, the trend was calculated through mid-2011.
    "c) Mention the exact year when the AR4 was published rather than merely saying it was published "recently". After mentioning the year, he should explain why his comparison starts in 2000. (Whatever the reason for his choice, he should give it.)"
    Please read the first sentence of the post. It starts "In 2007, the IPCC...". Also, the year appears in a light blue banner in some of the pages linked to; in the case of the very first hyperlink, to "Chapter 8", the following is also printed at the top of the page: "Climate Change 2007: Working Group I: The Physical Science Basis". I'll let Dana explain why he starts in 2000; it probably has something to do with looking at decadal trends.
    I'm afraid nit-picking is not constructive, especially when the nits have no basis or are in error. I hope to see you acknowledge your own mistakes (we all make them, no shame in that) and correct your blog post too (if needed) so as to keep your readers informed. Best, Albatross.
  16. Albatross: "I'm afraid nit-picking is not constructive, especially when the nits have no basis or are in error. I hope to see you acknowledge your own mistakes (we all make them, no shame in that) and correct your blog post too (if needed) so as to keep your readers informed. Best, Albatross." Just because you assert this doesn't make it so. There is nothing "nit" in the points that Lucia has made.
    1) Dana has chosen the highest of the three surface temperature records to compare against. This is a fact, not a nitpick.
    2) While there is an argument for using GISTEMP, it's not a very good one. GISTEMP relies on an untested method for extrapolating temperatures into the high Arctic, one that is likely wrong.
    3) Further, there is an inconsistency in methodology here: Dana uses the average of the models, which is not actually a defensible thing to do, but selects out a single temperature series (the one with the most rapid growth in temperature, [inflammatory deleted]) to compare against as the "exemplar" that other temperature series must live up to.
    4) By innocently selecting 2000 as his starting year, he's front-loaded his data with a low-valued extremum, which has a substantive effect on the temperature series. 2000 was in the throes of a La Nina cooling event; he either needs to correct for this (see how on Lucia's or Tamino's blog), or he needs to shift his starting point to 2001. Doing the latter changes the slope from 0.15°C/decade to 0.074°C/decade.
    5) He needs to use consistent baselines in making the comparisons.
    A factor of two isn't a "nit". Picking a starting year that contains an outlier isn't a nit. Using inconsistent baselines is not a nit. And by the way, 2001 is the start of the decade, not 2000, so the "we picked 2000 to represent the start of the decade" argument doesn't fly either. If you want my recommendation [inflammatory deleted] I would use the mean of the temperature series if you're going to compare to the mean of models. It's at least a consistent treatment. Here is my own version of Dana's graph. It clearly labels the "verification period" (where data were available to compare the models against, which ended in 2004 for AR4) versus the "validation period" (where data were not yet available, but are now). I don't personally make a huge deal of the disagreement between model and data over that period, other than to state it exists. [inflammatory deleted]
    Moderator Response: [Dikran Marsupial] While constructive criticism is very welcome here, posturing and inflammatory statements are unhelpful and should be avoided, by both sides of any disagreement.
  17. I'll write two responses on this. In this first one I'm going to disagree with Lucia and Carrick on the problems of polar temperatures in the ITRs. This is speculative and of general relevance rather than specific to this post, but I think there is a good basis for arguing that the GISTEMP approach is less biased than HadCRUT or NOAA. This is a good topic for further study. In the second I'll suggest why I think the article above still needs a rewrite. So, polar temperatures. Everyone is familiar with the concept of polar amplification? There is a figure from U Columbia illustrating it, although we should really recalculate it without NASA's interpolation. Clearly the Arctic, at least, is behaving rather differently from the rest of the planet. Again, as a further check we would want to do some sort of test of sensitivity to choice of stations, cross-validation, bootstrap, etc. If we use the HadCRUT/NOAA method of simply omitting the Arctic, then that is the same as saying that the Arctic is behaving as the average of the rest of the planet. That doesn't look like a good starting hypothesis to me. Doesn't it seem more likely that the N pole is behaving more like the nearest Arctic stations, rather than postulating another inflection with latitude? Another interesting experiment would be to calculate a global temperature from Nick Stokes' spherical harmonics in the latest version of TempLS. My guess is that these will show a higher trend than GISTEMP, because I suspect the pole will show as a peak. It might be possible to test whether it is physically meaningful by cross-validation.
  18. The more serious problem with this article in my view is the quoting of trends without error estimates, or indeed at all. If an undergraduate quotes a trend in a lab report without a standard deviation, uncertainty interval or P-value, we have to mark it down, and rightly so. And if we calculate uncertainties on any of the ITRs over the last 11 years, what do we find? IIRC we can't distinguish between them, or between them and the 4AR estimate, or between them and the no-warming case. (Sorry, I should check that, but I don't have my code to hand. Bear in mind that to get the right answer you have to factor in the autocorrelation - both Lucia and Tamino have written on this at length, but as a crude estimate multiply the OLS estimate of the standard deviation of the gradient on the monthly data by 2.5. As a sanity check, calculate 12-month and 22-month non-overlapping averages for the 11 years and calculate a gradient on the averages - you should get roughly the same uncertainties in each case.) That's ignoring the uncertainty in the 4AR projections, which is substantial. Visually, you can draw a whole range of gradients, including negative ones, within the uncertainty bounds. Factor that in and the whole exercise is completely meaningless. What is the correct way to present a discussion of the 4AR predictions, given that there isn't enough data to draw any real conclusions? Maybe we can have a constructive discussion of that? What I would do is plot the AR4 data with the uncertainty intervals from 1990-2020, and plot the ITR moving average on top of it. For reasons stated above I still prefer GISS, but you could plot all three. Ideally of course this would be updated annually - something like the figure in the Monckton article "IPCC overestimate temperature rise". The other thing that is missing, which formed an important part of some of the other articles, is a comparison of the emission predictions - both GHGs and aerosols - and also a discussion of which scenarios are most realistic and why.
    Moderator Response: [grypo] Real Climate also used a similar method.
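For anyone wanting to try the crude recipe described in the comment above, here is a minimal sketch: an OLS trend on monthly anomalies whose standard error is inflated by the suggested factor of about 2.5 to allow for autocorrelation. The series below is synthetic AR(1) noise plus a trend, purely for illustration; a proper treatment would model the autocorrelation explicitly.

```python
import numpy as np

def trend_with_crude_ci(monthly_anoms):
    """OLS trend on monthly anomalies, with the slope's standard error
    inflated by ~2.5x as a crude allowance for autocorrelation, per the
    comment above. A rough device, not a proper ARMA treatment."""
    n = len(monthly_anoms)
    t = np.arange(n) / 12.0                       # time in years
    X = np.column_stack([t, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, monthly_anoms, rcond=None)
    resid = monthly_anoms - X @ coef
    s2 = resid @ resid / (n - 2)                  # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return coef[0], 2.5 * se                      # deg C/yr and inflated SE

# Synthetic example: 0.15 C/decade trend plus AR(1) noise (placeholder data)
rng = np.random.default_rng(0)
n = 132                                           # 11 years of months
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
series = 0.015 * np.arange(n) / 12.0 + noise

slope, se = trend_with_crude_ci(series)
print(f"trend = {10*slope:.2f} +/- {10*1.96*se:.2f} C/decade (approx 95%)")
```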
  19. Informative post, thanks. I tried to reproduce Figure 3, but maybe I'm using a wrong offset in constructing the graph. Using the GISS data and the IPCC data as referenced here, I get an average for the IPCC data (20C3M values) of -0.309 °C for the period 1951-1980. This period is the baseline of the GISS data. Correcting the IPCC data with this offset, I get a graph where the A2 scenario is higher than the one shown in Figure 3. For instance, the value for the A2 scenario for 2000 is 0.256; using my offset I get 0.565. Eyeballing the value in Figure 3 for A2 in 2000, it is about 0.45. Could somebody please explain my error and what the offset should be?
  20. With regard to which observational data set should be used, that would depend on which set was used to determine the projections. Therefore, whichever temperatures were used in Figure 2 should be used in Figure 3. As mentioned by Dana, the temperature observations fall within the uncertainty range of the projections, although the uncertainty range is not clearly stated. It appears that the quoted observed trend of 0.15C/decade is from 1/1/2000 through the end of 2010; GISS data to the present yield a trend of 0.12C/decade. It appears that this is still within the uncertainty range, but on the low side, which Dana clearly states. Dana does fine stating, "The IPCC AR4 was only published a few years ago, and thus it's difficult to evaluate the accuracy of its projections at this point." However, claiming that this adds another piece to the puzzle of climate sensitivity is a bit premature. No conclusion can be drawn yet as to the accuracy of the projection.
  21. Jonathon @20, Dana states that the AR4 projections "will add another piece" (my emphasis) to the puzzle of climate sensitivity, and goes on to indicate that it will take another 10 years of data to do so.
  22. Yes Tom, In another 10 years or so, if the projection matches observations, then it will add another piece.
  23. Carrick @16:
    1) Given three data sets, a, b, and c, which have not been falsified as indices of Global Mean Surface Temperature, congruence of the multi-model mean with any of the three means the multi-model mean has not been falsified as a projection of the future evolution of GMST. As the lack of falsification is all that Dana claimed, no issue arises from his choice of just one index, Gistemp, to compare the multi-model mean with.
    2) In fact there is good reason to believe that two of the indices understate trends in GMST, in that they do not include data from a region known to be warming faster than the mean for the rest of the globe. In contrast, while there are unresolved issues relating to Gistemp, it is not clear that those issues have resulted in any significant inaccuracy. Indeed, comparison of Gistemp north of 80 degrees North with the DMI reanalysis over the same region shows Gistemp is more likely to have understated than overstated the trend in that region. Your claim that Gistemp's method of extrapolation is untested is simply false. What is more, I would suggest this is not a case of your having knowledge from the peer-reviewed literature of a lack of testing, but rather of your having simply made the claim up because you did not know the contrary and the claim was convenient.
    3) Dana uses the average of the model runs because weather is chaotic, so that no individual run can be considered a prediction. The model mean, however, can be considered a projection of the "typical" weather state, something which can be projected. In contrast, the mean of the temperature indices does not represent a "typical" weather state. Rather, it represents one possibly accurate measure of one particular evolution of the weather over time. There is no reason to believe that mean is a more accurate measure than Gistemp, and indeed, given the considerations under point 2 above, it is likely to be less accurate. However, because the model mean is a statistical prediction, what it predicts is not the global temperature at any given time. Rather, it predicts that the global temperature will lie within its error bars 95% of the time, and that the long-term trend will closely match the long-term trend of the multi-model mean. To that extent, while your criticism is entirely invalid, that of Kevin C @18 is not. A correct presentation would be similar to that by Real Climate (moderator inline comment @18). That Dana chose a less technically correct comparison for purposes of simplicity is unexceptional, however, in that it leads to no misunderstanding.
    4) Dana did not choose 2000 as the start year. The IPCC chose 2000 as the start year of model projections, both in its graphic (see Figure 2 in the main post) and in the data. Given that Dana was commenting on the IPCC's projections, he was constrained to use the same start date as the IPCC. You are in fact criticizing Dana here for not misrepresenting the IPCC projections. There could not be a clearer example of somebody determined to criticize without regard for the merits of the case than you have provided by this example. However, your suggestion of using Tamino's ENSO-corrected temperature index has merit. The trend of the temperature indices once corrected for ENSO, VEI, and solar variation? 0.17 degrees C per decade, against which the 0.18 degrees C per decade AR4 projection is not bad at all. (Note, Tamino only reports the trend over the entire 1975-2010 interval. By eye, the 2000-2010 trend does not differ significantly, but you are welcome to work it out more accurately.)
    5) Dana did use consistent baselines. This can be seen by comparing his Figure 3 with Zeke's version of Dana's original figure (or indeed the RC graph inline @18). As Zeke was one of the discoverers of Dana's initial error, it cannot be supposed he is trying to distort the data.
    Moderator Response: fixed comment per user request
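As an illustration of the statistical point in 3) above, the sketch below builds a toy multi-run ensemble, takes its mean, and checks how often a single synthetic "realization" stays inside the ensemble's ±2σ envelope. All numbers are invented for illustration; they are not AR4 model output.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2021)

# Toy ensemble: 58 runs sharing a 0.18 C/decade forced trend plus
# independent "weather" noise (invented numbers, illustration only).
runs = 0.018 * (years - 2000) + rng.normal(0, 0.1, size=(58, len(years)))

mean = runs.mean(axis=0)       # multi-model mean: estimate of the forced signal
sigma = runs.std(axis=0)       # ensemble spread: the "weather" component

obs = 0.018 * (years - 2000) + rng.normal(0, 0.1, len(years))  # one realization
inside = np.abs(obs - mean) < 2 * sigma
print(f"{inside.mean():.0%} of years fall inside the 2-sigma envelope")
```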
  24. I would like to make a suggestion to the operators/moderators about the format of SkS. I read some of the posts on the blogs lambasting SkS over the updated-article issue, and I can't help but agree with some of the points they raise. I think it might be in SkS's best interest to keep a "wiki-like" history link at the top of the articles so that people can go back and see previous versions, with a short explanation of why they were updated. I think it's perfectly reasonable to update articles as new evidence emerges - actually a good thing - but I also understand that people who are not inclined to be favorable to anyone who agrees with the mainstream consensus will use the updated-article talking point to try to discredit the site. A history section for each of the "permanent" pages would mute complaints of sneaky updating, and allow you to keep old comments around for future reference: simply move the old version of the article, with comments intact, into the history section and allow the new updated article to start over with a blank comment section. I'm under no illusions that this will be easy or fast to implement, but it's a suggestion on how you might improve the site to make it more resilient to criticisms that you are "covering up" errors. I've been reading this site for a while now, think it's a very useful reference, and hope my suggestion can help make it even better.
    Moderator Response: Thank you. This is a good idea.
  25. Isn't there also a Japanese global temperature data set? If so, it may be an interesting topic for another post.
  26. Excellent explanations by Tom Curtis in #23. I don't have much to add, except on the point of choosing 2000 as the starting point. The explanation is quite straightforward - that's the year the IPCC projections begin, as you can see for yourself by examining the data file linked in the post and here. People need to bear in mind that SkS is not RealClimate or the Blackboard or any other site. Lucia, for example, has a very specific target audience, which is different from the SkS target audience, which is as broad as we can make it. Yes, I could have done a more detailed analysis including error bars and such, but this post is intended for a general audience, which might be turned off by too much statistical detail. On SkS we have a wide variety of posts and technical levels, in keeping with our basic, intermediate, and advanced myth rebuttal levels. This post (in fact most of the 'Lessons' series) is intended for the basic and intermediate level audience, who may be interested to know approximately how well models project temperature changes. Pete - yes, there's also the JMA data set. I don't know the details about it though.
  27. I tried posting this at Lucia's but it didn't work, so since it is related I will bring it up here. Re: Lucia #81953: "While saying it, Dana could still admit that that particular choice for observational data set gives the largest observed trend and happens to support the narrative s/he is conveying to his/her reading audience." "Dana might want to include the observational data set that happens to be the IPCC's choice in their figure. I'm pretty sure that's not GIStemp. It's HadCrut." They may have chosen to use HadCrut, but will they in 10 years? My guess is no. It has been demonstrated by others (including JeffID) that Hadley's method will ALWAYS underestimate the trend. We all know that, so why is there even a debate on this subject? ECMWF has also concluded that Hadley has undersampled the warmth, particularly in the Arctic. This is also something we know. I'm getting tired of people saying "the IPCC did it so we can too" despite the huge differences in knowledge we now have on these issues. We can't just plug our ears and pretend that Hadley isn't undersampling the warmth, though maybe some people would like to. I can tell you straight up that for a region I've submitted a paper on, Hadley's method wouldn't even work going back past 1950, while a least-squares method allowed the reconstruction back into the 1880s. This is called an evolution of science: we now know that if you want to use less of the available station data, use Hadley. Real "skeptics" want to evaluate as much data as possible. And for those people who comment on GISS polar interpolations: I hope you have the same reservations about UAH, which does a similar process? Where are the arms up in the air over that? Can you pick which one is usually undersampling Arctic warmth? It is a no-brainer. Have a look at the residuals versus all methods... oh wait, it appears Hadley underestimates the warmth more and more through time, and the difference is statistically significant, i.e. it is undersampling the trend.
  28. Man, I wonder if those more belligerent commenters have the same standards of criticism when they read something at WUWT.
  29. @Noesis #24 Would you be interested in joining the SkS author team to work on this?
  30. Robert #27 - well said. That's exactly why I didn't use HadCRUT - we know it's biased low, so why use it? Of course we know certain parties want to use it precisely because we know it's biased low.
  31. I'm still figuring out the baseline used in Figure 3 of this post; see my reaction #19. I can create the last graph in reaction #23, where the baseline is 1990-2010. I can also create the RealClimate graph, where the GISS data are adjusted downward with the GISS-T average of 1980-1990 being 0.244. It seems to me the graph in Figure 3 is constructed using some difference in the averages of the IPCC and GISS data around 2000, with that difference used to adjust the IPCC data. Is that correct? When I adjust the IPCC data to the GISS baseline of 1951-1980, using an offset of -0.309, the model data are clearly above the GISS-T. Is there something with the model data which explains these differences? It's probably me, but I still don't get it.
  32. Hi Albatross-- I've commented here before. I just don't visit often and comment here less.
    Dana was simply being true to the original graphic that was shown.
    It seems to me that being true to the original graphic would require Dana to use HadCrut. The original graphic containing data in this post appears to be the one Dana calls "Figure 1", which corresponds to figure 9.4 in the AR4 WG1. The caption for figure 9.4 in the AR4 reads:
    "as observed (black, Hadley Centre/Climatic Research Unit gridded surface temperature data set (HadCRUT3); Brohan et al., 2006)"
    Dana's choice of GISTemp represents a switch to a different data set.
    I hope to see you acknowledge your own mistakes (we all make them, no shame in that) and correct your blog post too (if needed) so as to keep you readers informed.
    Assuming you meant to suggest that the observations in Dana's Figure 1 are from GISTemp, I hope to see you acknowledge your mistake. :) I concede that 2010 in its entirety was not El Nino. Few years are all one thing or another. But the data in the figure are annual averages, and the global surface temperature in 2010 was dominated by El Nino. This is particularly so because the temperatures lag the MEI. You may think it's a mistake to call it an El Nino year, but I consider 2010 an El Nino year, and I consider choices of what to include in figures meaningful. I also consider the surface temperature for 2011 dominated by La Nina, and I will continue to think so even if El Nino were to unexpectedly turn up at this point, or even if it had turned up in August. On your response to (c): none of those represents a reason Dana can't write "was published in 2007" or "was published four years ago" rather than "was published recently". You think it's enough to have readers scan back? I don't. You think it's a nit-pick? I think Dana is using tendentious language and construction. So, there you go. Kevin C:
    I'll write two responses on this. In this first one I'm going to disagree with Lucia and Carrick on the problems of polar temperatures in the ITRs.
    I haven't suggested GISTemp is biased. I said that Dana chose the data set with the highest trend; this choice happens to better support his claim than choosing other data sets would. That's all I've claimed. In my original comment, I didn't point out that his choice of GISTemp results in an inconsistency, in that his Figure 1 and Figure 3 use different observational data sets, but I have pointed that out at my blog. But since Albatross seems to be under the impression the choice of GIStemp results in consistency, I provide the caption for the IPCC figure showing they used HadCrut, not GISTemp. By the way, your pink figure with the observations is similar to the ones I plot when the final NOAA, HadCrut, and GISTemp values post. I don't have any objection to someone showing all three observational data sets, or arguing that one is better than the others. But I do consider omitting a set cherry-picking. For some reason I don't recall, I did 25-month smoothing a while back. (I think I've varied it, but right now, for a quick look, the script has 25 months coded in.) The choice of 25 months means the peak lags the El Nino/La Nina cycle, so the averages were recently coming down from local maxima, since monthly temperatures were lower than they were 25 months ago.
    As Zeke was one of the discoverers of Dana's initial error, it cannot be supposed he is trying to distort the data.
    Zeke co-blogs at my blog; I agree he is not trying to distort the data. I'll have to ask Zeke if he discovered this before or after I published my blog post. :) I usually avoid speaking for Zeke, as he is perfectly capable of speaking for himself. But I would like to point out that Zeke's figure doesn't appear to be either an endorsement or a criticism of Dana's graph. When Zeke posted the graphic at my blog, Bill noticed that Dana appears to have shifted the baseline so that it doesn't match the one chosen by the IPCC. The IPCC projections are all "relative to a baseline of 1980-1999", but Dana appears to have picked a different one. Zeke then wrote:
    Bill Illis, I was trying to recreate his graph. In this case both have a 1990-2010 baseline period. It appears that Dana did graph the correct data, but incorrectly calculated the slope of the last decade. That said, comparing trends is much more useful, as it's rather hard to eyeball correlation in noisy data.
    There is further discussion of this. This is Zeke again:
    Carrick, I think you have the legend backwards in your graph. This discussion does raise the interesting question of how to objectively determine the best baseline for use in comparison. Obviously for visual comparison purposes you would want to use a pre-validation period baseline, but I’m not sure how to choose between one (say 1950-1980) or another (1970-2000). For trends of course, it doesn’t matter.
    Note that Dana does not use a pre-validation baseline. There are three things that need to be considered about Dana's decision to pick a different baseline than that specified by the IPCC: 1) It's a choice Dana, not the IPCC, made. 2) Dana did not disclose his rebaselining of the model and observations. 3) Using a baseline of 1990-2010 forces the means for the model and observations to match during that period. Models would have to be very, very bad for the differences to be apparent with this choice. Given this, it's not too surprising that the models seem to agree with the data. Agreeing on average has been enforced by subtracting the difference in the means. Dana's choice not to use a "pre-validation baseline" tends to force better agreement between projections and observations; this is why it should not be used.
    Moderator Response: [Dikran Marsupial] Fixed a couple of blockquote tags, I hope this is what was wanted!
  33. Perhaps Dana used the same baseline adjustment technique he used for the FAR model comparison: "All I did was offset the SAR projection in 1990 to match the GISTEMP 5-year running average value in 1990 (approximately 0.25°C)." Lessons from Past Climate Predictions: IPCC SAR In other words, he matched the start of projections with the 5 year running average in the start year. In that comparison, he also adjusted the slope or scale factor of the projections. Not surprisingly, with after-the-fact adjustments of both slope and offset, the projections were a good match for observations.
  34. JosHag @31 - the point of this exercise was to compare the model projections to the data. The model projections began in 2000, so I adjusted the baselines accordingly: I looked at the 5-year running average for the model projections and GISTEMP, and offset the models such that they matched in 2000. I'm interested in trends, in which case, as Zeke noted, baselines mean diddly squat. Charlie A - of course the after-the-fact slope adjustments made the models match the data. That was the entire point of the adjustments - to see what slope (sensitivity) would make the models match the data. I haven't done that here because, as I noted in the post, there's insufficient data for a meaningful comparison. However, there has obviously been more time elapsed since the SAR, so there it was a useful exercise.
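For anyone trying to reproduce Figure 3 (as JosHag is), here is a minimal sketch of the offset procedure Dana describes: shift the model series so that its 5-year running mean matches that of the observations in the chosen start year. The arrays are placeholders; the real series come from the GISS and AR4 data files linked in the post.

```python
import numpy as np

def running_mean(x, window=5):
    """Centred running mean; edge values are left as NaN."""
    out = np.full(len(x), np.nan)
    half = window // 2
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

def offset_to_match(model, obs, years, match_year=2000, window=5):
    """Shift `model` so its running mean equals that of `obs` at match_year."""
    i = int(np.where(years == match_year)[0][0])
    return model + (running_mean(obs, window)[i] - running_mean(model, window)[i])

# Placeholder series (not the real GISTEMP / AR4 values)
years = np.arange(1996, 2012)
obs   = 0.35 + 0.015 * (years - 1996)         # toy observations
model = 0.018 * (years - 1996)                # toy model, different baseline

aligned = offset_to_match(model, obs, years)
print(f"constant offset applied: {aligned[0] - model[0]:.3f} C")
```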
  35. @dana1981: At this juncture in the comment thread, do you have any reason to revise the conclusions stated in the final paragraph of your article?
  36. @All commentors: At this juncture in the comment thread, do you have any reason to believe that the conclusions stated in the final paragraph of Dana's article are incorrect? If so, why?
  37. Lucia, You have made a very long post. What is your point? You seem to be raising questions about Dana's motivation for the way he graphed the data. When I look at the various graphs presented, it is clear to me that not enough time has passed to make any kind of conclusion. Dana pointed this out in the post. You are splitting hairs about a graph of data that is not definitive in any case. When the other data sets were added, the conclusion was the same, so why do you continue to complain? Minor baseline shifts do not matter to the conclusion. Dana claims the data are consistent with the IPCC conclusions, but a little low; the changes you recommend would lead to the same conclusion. Do you have a graph that shows that conclusion is not true? If you cannot challenge the conclusion, why are you going on and on? Dana did not even point out that solar irradiance is at its lowest point in 50 years, which affects the data significantly. I have seen you make similar posts on other sites. Please stop whinging and add to the conversation.
  38. Per Dana: "Evaluating all the data is by definition not cherrypicking." Agreed, so use multiple data sets and up-to-date data.
    Decadal trends, 2000-current: GISS +.121, HadCrut +.0024, UAH +.143, RSS +.004.
    Most recent 10 years (to 2011.58): GISS +.0022, HadCrut -.0075, UAH +.0036, RSS -.006.
    (Sorry, I don't have NCDC on my computer…)
    Models highlighted in AR4 are not performing well. The data set is too short to say 'the models are falsified', but to write a post saying they are accurate is gilding the lily in the extreme.
  39. Dana accurately says "The IPCC AR4 was only published a few years ago, and thus it's difficult to evaluate the accuracy of its projections at this point." That didn't stop Lucia. In 2008, she stated: "The current status of the falsification of the IPCC AR4 projection of 2 C/century is: Falsified." Now I don't know if she's changed her argument since then, but it's this sort of silly stuff that has reduced my time spent reading her blog. Related to this, the model mean doesn't really tell us much at the decadal level, as trends in individual model runs ("real-world" scenarios) vary wildly at that level. RC had a post on this topic some time back, showing the distribution of 8-year and 20-year trends across model runs. On a different note, I don't quite understand why trends in temperature data products like HadCrut are compared to modeled trends of the global average as if they were a 100% apples-to-apples comparison, expected to match very closely over the long run. As has been mentioned here, HadCrut neglects the Arctic. So if projections were accurate and precise, wouldn't HadCrut be expected to diverge on the cool side, with the divergence becoming larger over time?
  40. radar, "What does this tell us? ... it's difficult to evaluate the accuracy of its projections at this point. We will have to wait another decade or so to determine whether the models in the AR4 projected the ensuing global warming as accurately as those in the FAR, SAR, and TAR" is the conclusion of the post. That's a very long way from "saying they are accurate". It does say that the IPCC has a good track record from earlier projections. But it does not claim that the latest projections are accurate. Yet. A decade of further observations is required for that (or its opposite).
  41. Radar, There are reasons to believe that both HadCRUT and UAH have issues, giving enough reason for exclusion. Regarding HadCRUT, I have discussed the reasons above. Regarding UAH, the issues are discussed over at Tamino's; it isn't even really measuring the same thing the models would be predicting, and the TLT datasets have a strong dependency on El Nino and volcanism. Also, you shouldn't compute trends on monthly data (see RomanM's blog for this); it results in a stepwise trend change from month to month (maybe stepwise isn't the right word). Finally, Dana didn't cherry-pick: based upon advice given on the options and on work he's seen, he selected the dataset he felt was the most accurate at this time. Perhaps he should have been explicit about his reasoning, but ultimately I could not call this a cherry-pick.
  42. @dana1981 #34 I completely agree with your conclusions and this comparison, which I liked very much. It's just that I wanted to reproduce your Figure 3, and it took me a couple of hours to figure out the numbers. Reading your answer, my guess about your baseline was approximately correct. The baselines are not important, that's true of course, but when I visit a denier site I regularly encounter graphs where they have moved baselines just to trick people. An example is Bob Tisdale with his ocean heat content regression line, as described in: http://tamino.wordpress.com/2011/05/09/favorite-denier-tricks-or-how-to-hide-the-incline/ I construct and reconstruct graphs just to learn about the relations that exist in the data; besides that, a strong visual image, like an explanatory graph, is a very powerful tool. Thanks for your time and regards, Jos. PS, I had to look up the meaning of "diddly squat"; in Dutch this means "helemaal niks".
  43. John H @35 - no, I believe the conclusions remain valid and don't require any change. I think michael sweet @37 and adelady @40 did a good job illustrating why. NewYorkJ @39 also provides a very revealing quote from lucia relevant to the discussion here. Jos - it's true, baselines can be manipulated for dishonest purposes, but that's not something we would permit on this site, of course. Thanks for the Dutch translation :-)
  44. Hello Lucia, Re my comment that "Dana was simply being true to the original graphic that was shown": you are correct. Thanks for pointing that out. The caption does indeed state that they were using HadCRUT3. But that was my mistake, not Dana's. You see, it is quite easy to admit error :) As for your lengthy defence (and obfuscation) of your other demonstrably wrong assertions, it is very unfortunate that you are not willing to concede that you erred. A double standard is evident on your part when it comes to admitting error. You demand it of others, and even go so far as to insinuate intent to mislead, but when it comes to you admitting error the hand-waving starts. So be it then. Fortunately, reasonable and sensible people will see right through that. Have a lovely weekend.
  45. Hi again Lucia, One more thing. I noticed that you neglected to answer my question: "What do you think is the best GAT analysis and why?" Just to be clear in case it was not already clear from the context, I was specifically referring to the surface temperature record. Thoughts? Thanks.
  46. Dana1981 at #43 says "it's true, baselines can be manipulated for dishonest purposes, but that's not something we would permit on this site, of course." As shown in the caption to your Figure 2, the baseline for the AR4 projections is 1980-1999. You chose to compare the projections to the GISS global temperature time series. The proper thing to do is to use 1980-1999 as the baseline for both GISS and the AR4 projections. It is trivial to adjust GISS to that baseline. This is the plot of the annual data, properly baselined. When using the proper apples-to-apples comparison (and using the GISS temp series preferred by Dana), the only years where the observation exceeds the projection are 2002, 2003, and 2006. Note the difference between this and Figure 3 of this post.
  47. Charlie A#46: "the proper apple-to-apple comparison" Nice job. It looks like the projections are less than 0.05 deg from the actual. Given the short time period represented, that's hardly a significant difference.
  48. @Charlie A #46 I completely agree with Dana that for the trend conclusion the value of the baseline is "diddly squat". It seems you ran into the same problems as I did trying to reconstruct the Figure 3 graph. When everything is baselined to 1980-1999 you get your graph; using some average around 2000 you get the graph from this post. When I use all the model data you can download from the IPCC site, I get the same graphs. I tried the opposite of what you did and baselined the IPCC average data, or an average of all model data, to 1951-1980; this results in a graph where the IPCC A2 model data are a bit lower than in your graph, e.g. the 2005 GISS value will be just a bit lower than the A2 model value. I am still figuring out why. Of course, muoncounter is right and the whole discussion is about some small, insignificant value, but it will only take a little time before I encounter an image on a Dutch denier site using a certain baseline with real T-data, where they try to convince every Dutchman that the IPCC models are completely wrong and therefore all CO2-related theories can be added to the household garbage. I want to have my answer prepared when that happens. An image like a good explanatory graph is hard to set aside. For example, the famous hockey stick graph immediately tells you what is happening, even if you didn't finish high school. In my opinion, that's why there is so much resistance from the deniers regarding the hockey stick graph.
  49. Charlie, thanks for your (totally subjective) opinion on what's "proper". It looks strikingly similar to Figure 3.
  50. Craig Allen at 15:12 PM on 23 September, 2011 Craig claims that the reason for not doing a sensitivity test is that the IPCC used GISS data, viz:
    Lucia, The post is about the IPCC AR4 projections (as described in chapter 10.3 or the AR4 report) and their accuracy. Figure 2 is the second figure in that report. It presents GISS data up to the year 2000 and model projections from there until 2100. Had the IPCC plot used one of the other instrumental records or a different cut-off between the observational and modelled data then it would make sense for Dana to have used those. But that isn’t the case.
    Albatross agrees that the IPCC used GISS data, viz: Albatross at 15:50 PM on 23 September, 2011
    Hello Lucia,
    ..."a) Included other observational data sets like HadCrut and NOAA. (If s/he thinks they are inferior GISTemp, s/he should say why he thinks so.)"
    What Craig said @12. I think that you know as well as we do that each of the datasets has its limitations. What do you think is the best GAT analysis and why? Dana was simply being true to the original graphic that was shown.
    I find this line of argument curious. Sensitivity analyses don't depend on what the original graph does; they are an attempt to find out whether the original graph is correct. In any case, I just followed the link to Chapter 10.3 of the AR4 report. I don't find Figure 2 there, as claimed above. It is located in the SPM ... but there, it says nothing about using GISS data. Nor is this helped by Figure 1, which says nothing about the data source either, only that it is based on Stott 2006b. But Stott 2006b doesn't use GISS either; it uses HadCRUT2 data ... so I fear I can't find a scrap of backup for the claim that the IPCC used GISS for either Figure 1 or Figure 2. Cite? I'm not saying they didn't use GISS, I just can't find any evidence they did use GISS ... and generally, they use HadCRUT. w.




