Lessons from Past Climate Predictions: IPCC AR4 (update)

Posted on 23 September 2011 by dana1981

Note: this is an update on the previous version of this post.  Thanks to readers Lucia and Zeke for providing links to the IPCC AR4 model projection data in the comments, and Charlie A for raising the concern about the quality of the original graph digitization.

In 2007, the IPCC published its Fourth Assessment Report (AR4).  In the Working Group I (The Physical Science Basis) report, Chapter 8 was devoted to climate models and their evaluation.  Section 8.2 discusses the advances in modeling between the Third Assessment Report (TAR) and AR4.

"Model improvements can...be grouped into three categories. First, the dynamical cores (advection, etc.) have been improved, and the horizontal and vertical resolutions of many models have been increased. Second, more processes have been incorporated into the models, in particular in the modelling of aerosols, and of land surface and sea ice processes. Third, the parametrizations of physical processes have been improved. For example, as discussed further in Section 8.2.7, most of the models no longer use flux adjustments (Manabe and Stouffer, 1988; Sausen et al., 1988) to reduce climate drift."

In the Frequently Asked Questions (FAQ 8.1), the AR4 discusses the reliability of models in projecting future climate changes.  Among the reasons it cites for confidence in model projections is their ability to reproduce past climate changes, a process known as "hindcasting".

"Models have been used to simulate ancient climates, such as the warm mid-Holocene of 6,000 years ago or the last glacial maximum of 21,000 years ago (see Chapter 6). They can reproduce many features (allowing for uncertainties in reconstructing past climates) such as the magnitude and broad-scale pattern of oceanic cooling during the last ice age. Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) can be modelled with high skill when both human and natural factors that influence climate are included. Models also reproduce other observed changes, such as the faster increase in nighttime than in daytime temperatures, the larger degree of warming in the Arctic and the small, short-term global cooling (and subsequent recovery) which has followed major volcanic eruptions, such as that of Mt. Pinatubo in 1991 (see FAQ 8.1, Figure 1). Model global temperature projections made over the last two decades have also been in overall agreement with subsequent observations over that period (Chapter 1)."

AR4 hindcast

Figure 1: Global mean near-surface temperatures over the 20th century from observations (black) and as obtained from 58 simulations produced by 14 different climate models driven by both natural and human-caused factors that influence climate (yellow). The mean of all these runs is also shown (thick red line). Temperature anomalies are shown relative to the 1901 to 1950 mean. Vertical grey lines indicate the timing of major volcanic eruptions.

Projections and their Accuracy

The IPCC AR4 used the scenarios from the IPCC Special Report on Emissions Scenarios (SRES), which we examined in our previous discussion of the TAR.  As we noted in that post, thus far we are on track with the SRES A2 emissions path.  Chapter 10.3 of the AR4 discusses future model-projected climate changes, as does a portion of the Summary for Policymakers.  Figure 2 shows the projected change in global average surface temperature for the various SRES scenarios.

AR4 projections

Figure 2: Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B, and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios.

Figure 3 compares the multi-model average for Scenario A2 (the red line in Figure 2) to the observed average global surface temperature from NASA GISS. In the previous version of this post, we digitized Figure 2 in order to create the model projection in Figure 3.  However, given the small scale of Figure 2, this was not a very accurate approach.  Thanks again to Zeke and lucia for pointing us to the model mean data file.

AR4 projections

Figure 3: IPCC AR4 Scenario A2 model projections (blue) vs. GISTEMP (red) since 2000

The linear global warming trend since 2000 is 0.18°C per decade for the IPCC model mean, vs. 0.15°C per decade according to GISTEMP (through mid-2011).  The observed trend falls well within the model uncertainty range (shown in Figure 2, but not in Figure 3), but it is a bit lower than projected over the past decade.  This is likely mainly due to the increase in human aerosol emissions, which was not anticipated in the IPCC SRES, as well as other short-term cooling effects over the past decade (see our relevant discussion of Kaufmann 2011 in Why Wasn't The Hottest Decade Hotter?).
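For readers who want to reproduce this kind of comparison, here is a minimal sketch of how such decadal trends are typically computed.  The anomaly values below are placeholders, not the actual AR4 A2 model-mean or GISTEMP data, which would need to be obtained separately:

```python
import numpy as np

# Placeholder annual temperature anomalies for 2000-2010 (illustrative only;
# substitute the AR4 A2 multi-model mean and GISTEMP annual means here).
years = np.arange(2000, 2011)
model_mean = np.array([0.40, 0.42, 0.44, 0.46, 0.47, 0.49, 0.51, 0.53, 0.55, 0.56, 0.58])
gistemp = np.array([0.42, 0.54, 0.63, 0.62, 0.54, 0.68, 0.64, 0.66, 0.54, 0.64, 0.72])

def decadal_trend(years, anomalies):
    """Ordinary least-squares slope, converted from degrees C per year to per decade."""
    return 10.0 * np.polyfit(years, anomalies, 1)[0]

print(f"Model mean: {decadal_trend(years, model_mean):.2f} C/decade")
print(f"GISTEMP:    {decadal_trend(years, gistemp):.2f} C/decade")
```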

What Does This Tell Us?

The IPCC AR4 was only published a few years ago, and thus it's difficult to evaluate the accuracy of its projections at this point.  We will have to wait another decade or so to determine whether the models in the AR4 projected the ensuing global warming as accurately as those in the FAR, SAR, and TAR.

Section 10.5.2 of the report discusses the sensitivity of climate models to increasing atmospheric CO2.

"Fitting normal distributions to the results, the 5 to 95% uncertainty range for equilibrium climate sensitivity from the AOGCMs is approximately 2.1°C to 4.4°C and that for TCR [transient climate response] is 1.2°C to 2.4°C (using the method of Räisänen, 2005b). The mean for climate sensitivity is 3.26°C and that for TCR is 1.76°C."

The reasonable accuracy of the IPCC AR4 projections thus far suggests that they will add another piece to the long list of evidence that equilibrium climate sensitivity (including only fast feedbacks) is approximately 3°C for doubled CO2.  However, it will take at least another decade of data to determine accurately what these model projections tell us about real-world climate sensitivity.



Comments


Comments 51 to 54 out of 54:

  1. Willis - the IPCC does use HadCRUT data [Albatross acknowledged the error in comment #44]. I don't think it really matters what observational data set the IPCC chose. We're not examining their observational data, we're examining their model projections.
  2. #49 Dana says "Charlie, thanks for your (totally subjective) opinion on what's "proper". It looks strikingly similar to Figure 3. " Ummm. I don't know why you call it "totally subjective". The model means are referenced to a 1980-1999 baseline (see the caption you posted in Figure 2, which I assume is a true copy of the IPCC figure). Doesn't it make sense to use the same baseline for both the observations and the model projection? Do you consider using the same baseline to be "totally subjective"?
  3. I appreciate the feedback. What is the point of this post, if not to 'bust the myth' that the models are forecasting more temperature rise than is being observed and to state that they are 'reasonably accurate'? I don't disagree that 10 years does not a model falsify, but it's not too short to raise an eyebrow. UAH and RSS may have problems, but we are looking at trends. How many 10-year periods of dead flat temperatures have the models shown that did not immediately follow a volcanic eruption? One might think that this would pique the curiosity of a website named "Skepticalscience". NYJ: "The IPCC AR4 was only published a few years ago, and thus it's difficult to evaluate the accuracy of its projections at this point." As Dana said, the models supposedly started in 2000 regardless of the publishing date of AR4, so it is not only a "few years": 10 in this post, 11.58 in reality. I reiterate, the point of the post is to say that the models are 'reasonably accurate'. Look at Lucia's graphs above and realize that plenty of folks understandably disagree.
  4. radar "... the point of the post is to say that the models are 'reasonably accurate'." The 'models' - plural - are demonstrably 'reasonably accurate'. The only IPCC model on which we need to withhold judgment for the time being is the most recent one. Judging on past performance, i.e. demonstrated accord with reality, most people are willing to say we'll wait for the decade or so needed before calling yay or nay on this one. (Perhaps if the previous ones had been wildly off the mark, a lot of people might say, "Uh oh. Looks like we're doing it again." But that didn't happen so there's no need to venture down a similar path.)
  5. radar#53: Do you accept charlie's graph here? If so, how do you not agree that model is 'reasonably accurate'? I don't know what field you are in, but it doesn't get much better a fit over a short period than that.
  6. I don't disagree that 10 years does not a model falsify, but it's not too short to raise an eyebrow. [snip] I reiterate, the point of the post is to say that the models are 'reasonably accurate'. Look at Lucia's graphs above and realize that plenty of folks understandably disagree.
    If you want to raise an eyebrow and impute falsification, one first needs to demonstrate that certain assumed model inputs have actually behaved as was assumed at the time of modelling. As others have noted, solar output has been much lower than was expected at the time of modelling for AR4. Such a result does not falsify the models. Unless, of course, one revisits the models and repeats them with the inclusion of the real-world parameters obtained subsequent to the original runnings, and gets a result that then invalidates the predictions. If there is work that shows this, I'd be most interested to see it.
  7. radar @53 - the UAH trend isn't zero over the past decade. Moreover, it's odd that you're willing to acknowledge the short-term cooling effects of volcanic aerosols, but don't seem to recognize the cooling effects of anthropogenic aerosols over the past decade.
  8. Model validation requires the demonstration that for a set of input vectors varying within credible uncertainty levels, the set of key output vectors vary within acceptable tolerance levels. Matching to a single dataset, such as GMST is a necessary but not a sufficient condition for model validity, whereas demonstrating that there is no match to a key data time series can be a sufficient condition to demonstrate the lack of validity of a model. The comparison of GMST here is disingenuous, to say the least, but suppose that Dana1981 could indeed credibly show a reasonable match of GMST between models and observations, should that convince us of the validity of the model(s)? AR4 WG1 reveals a very large shortwave heating deficit in all of the CMIP GCMs, which is compensated for by LW impedance. Until this is resolved, a comparison of GMST is like showing one face of a Rubik’s cube and claiming that the problem is solved. Model developers of CCSM have already reported on their latest model update, with focus on trying to solve the SW mismatch. The result is interesting:
    “The CCSM4 sea ice component uses much more realistic albedos than CCSM3, and for several reasons the Arctic sea ice concentration is improved in CCSM4. An ensemble of 20th century simulations produces a pretty good match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble mean increase in globally-averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4◦C.”
    Journal of Climate 2011 ; e-View doi: 10.1175/2011JCLI4083.1 http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4083.1 What price the current comparisons of GMST?
    Moderator Response: [grypo] Thank you for that link. Fixed it too.
  9. PaulK You mention "acceptable tolerance levels"; please specify what acceptable tolerance levels would be in this particular case. Note that even if the models were perfect, the observations can only be reasonably expected to lie within the spread of the model runs, nothing more than that.
  10. #58 PaulK -- a minor quibble with your comment, and a further explanation of why the baseline adjustment method chosen by Dana1981 is incorrect... You use the abbreviation GMST. I assume this is Global Mean Surface TEMPERATURE. The datasets are not temperature, they are anomalies, or changes in temperature. We can monitor CHANGES in global average surface (actually 2 meter air) temperature much more accurately than we can monitor the actual absolute temperature. 0 degrees Celsius (or Kelvin) on an AR4 model projection does not mean the temperature at which water freezes. It means "the same temperature as the average temperature from 1980-1999". Similarly, the GISS dataset is NOT temperature. It is the change, in degrees Kelvin, from the average temperature from 1951-1980. Fortunately, the GISS dataset can be converted to the same "degrees Kelvin change from the 1980-1999 average" that is used for the AR4 model anomalies. Contrary to what Dana1981 says, the choice of baseline is not arbitrary. On both this thread and the earlier thread of the same title, I have had a very limited, specific goal: the desire to understand and replicate what Figure 3 purports to show. In the first thread, I was confused by Dana's claims that the 2000-2010 model mean trend was 0.12C/decade, followed by a model mean trend of 0.28C/decade. Those numbers have been corrected to the 0.18C/decade and 0.19C/decade numbers. What has not yet been resolved, at least in the main article, is the proper adjustment of two sets of temperature anomaly data series so that they have the same reference zero point.
    Moderator Response: [Dikran Marsupial] Trends are independent of the baseline, f(x) = 2x + 4 has exactly the same slope/gradient/derivative/trend (whatever you want to call it) as g(x) = 2x + 2. If the interest is in trends the baseline is irrelevant.
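A quick illustration of the moderator's point, using arbitrary numbers: shifting an anomaly series by a constant changes its baseline but not its least-squares slope.

```python
import numpy as np

years = np.arange(2000, 2011)
series = 0.02 * (years - 2000) + 0.30    # arbitrary anomalies on one baseline
shifted = series - 0.25                  # the same series on a different baseline

print(np.polyfit(years, series, 1)[0])   # slope: 0.02 per year
print(np.polyfit(years, shifted, 1)[0])  # identical slope: 0.02 per year
```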
  11. There are several posters on this thread who are making nit-picks about how data has been presented. The lead post clearly states that the data set is too short for strong conclusions. These posters are making arcane arguments that there is some problem with the graphs in the post. Can they please post a proper graph that shows the conclusions are not correct? So far the "corrected" graphs agree with the lead post's conclusions. If the changes do not affect the conclusions, why bother? If you think that you are proving doubt by questioning the baseline of a short-term graph you are wrong (0.05C!! who cares). Read a book about analyzing data with a lot of noise. There is always more than one way of properly presenting data of this type. If the conclusion is not affected by the change, it is not important. Please post data that shows the conclusion is not correct.
  12. Paulk The next line of the abstract for the JoC paper is important:
    This is consistent with the fact that CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of short-wave and long-wave cloud forcings.
    The models' inability to model the cooling effects will mean the models will overestimate warming. If they didn't, it would be more worrisome for the modellers.
  13. Michael, I agree that the data set is too short for strong conclusions. It appears that there are two beefs with Dana's conclusion that the AR4 models are reasonably accurate. First, that the time frame is too short to draw an accurate conclusion. Second, that the trend for the observational temperatures does not reflect the most recent data. Either way, the statement that the AR4 projections are reasonably accurate is not supported (nor falsified) by the recent temperatures.
  14. John Hartz (at 06:42 AM on 24 September, 2011) asked:
    "@All commentors At this juncture in the comment thread, do you have any reason to believe that the conclusions stated in the final paragraph of Dana's article are incorrect? If so, why?"
    The last section is very problematic to me, and seems to contain logical incompatibilities. In the space of a few sentences it states (concerning the IPCC AR4 projections) that "it's difficult to evaluate the accuracy of its projections" and that the projections are "reasonably accurate thus far"... and that it will take another decade before we know what the projections will say about climate sensitivity, but that their "reasonable accuracy thus far" indicates that this will provide evidence that climate sensitivity is near 3 oC. This seems a semantic and logical mess. I don't see how one can say at the same time that it's difficult to evaluate the accuracy of projections, and that they're reasonably accurate! Nor can we say at the same time that we consider that the accuracy of the projections so far (which is supposed to be difficult to evaluate) will indicate that climate sensitivity is around 3 oC, and that we won't know what the projections will say about climate sensitivity for another decade. I don't think I'm being overly obtuse (if I am I'm sure you'll say so!). I would have thought that the justifiable conclusions are that the AR4 simulations are not inconsistent with the surface temperature progression since 2000, and that they are not inconsistent with the broad consensus of evidence that the climate sensitivity is near 3 oC (plus/minus a bit). However, so far comparison of the AR4 models with the surface temperature progression of the last decade doesn't really tell us very much about climate sensitivity at all. On the other hand, if one were to consider the comparison of climate model projections since the late 1980s, for example, with the subsequent temperature progression, then those models and the empirical data would give us confidence that our understanding of atmospheric and ocean physics and climate sensitivity is quite well supported. Overall I have a few concerns about the perception and use of model data in an out-of-science context. In my experience models are very useful for systematising large amounts of independent physical understanding into a useable predictor (through parameterization), for testing the reliability of our parameterizations through comparison of model output with empirical data, for suggesting possible interpretations and experiments (not quite so useful in the case of climate models), and for providing a focus for independent study (for example, by identifying arenas where models and empirical observations are seemingly incompatible; good examples are model success with respect to MSU tropospheric temperatures, and tropospheric water vapour uncertainty). There's no question that climate models are important for projecting multi-decadal consequences of specific emission scenarios. However, we should be very careful in considering the value of decadal model-empirical comparisons, when it's very well understood that we don't have any expectation that models will necessarily do a good job of this (at least with respect to tropospheric or surface temperature progression).
  15. Jonathon "reasonably accurate" is not a "strong conclusion" and hence is supportable by the evidence. To my eyes "reasonably accurate" means "as accurate as there is reason to suggest it should be". As I have said before, even if the model is perfect, there is no reason to expect anything more than for the observations to lie within the spread of the model runs. If the observations (considering their uncertainty) lie within the bulk of the model runs, rather than right out in the tails, then "reasonably accurate" would be an excellent summary. In order to work out whether the models are accurate, you first need an estimate of the variance that can be caused by internal climate variability. We can't get a good estimate of that as we have only one realisation of the observed climate. So the best estimate we have of climate variability is given by the spread of the model runs. If you have a better estimate, lets hear it, the models mean cannot be expected to be any closer to the observations that that.
  16. Charlie A at 12:29 PM on 24 September, 2011 What's the difference between the trends in your graph, and the trend in Dana's graph?
  17. Has Lucia left the building?
  18. Maybe I'm being obtuse, but I don't see the contradiction in the conclusion. So far, the models and data are in reasonable agreement [falling within the spread of model runs, and not too far off from the average]. At the same time, it's not enough data to say anything meaningful.
  19. @66 Alexandre --- my comments in both this article and the previous one have been with the sole focus of trying to understand what is being presented in Figure 3. I have not made any statements about model validation or trends, other than questioning Dana1981's assertion that the AR4 A2 model mean trend from 2000-2010 is 0.12C/decade. I did not even comment upon his related assertion that the A2 model mean from 2010 to 2020 is 0.28C/decade. Obviously offsets do not affect trends. The IPCC could have forecast a global surface temperature anomaly of 55C going to 55.2C by 2010 and the trend would not be inconsistent with observations. While a projection of 55C might seem ridiculous, it is an anomaly, not a temperature, and the trend, as others have pointed out, would be correct. The AR4 projections, however, contain more information than just trends. The projections are anomalies, with a specific baseline of 1980-1999 (see the caption of Figure 2 in the article). This allows us to determine not only the error in the projected trend, but also to estimate the rms error of the projection. The RMS difference between GISS and AR4 A2 projection anomalies, using a common baseline of 1980-1999, is 0.09C -- or about 1/2 of the expected difference over a decade. The rms difference between the GISS data as plotted and the AR4 A2 annual series using the link in the article is 0.21C. I cannot directly calculate the rms difference between the two time series as plotted in Dana's Figure 3, because I still cannot figure out how that figure was generated. It appears that the GISS data, after being passed through some sort of spline filter, is plotted using the standard 1951-1980 baseline, but it is unclear how the AR4 A2 projections have been modified. They appear to have been moved upward about 0.2C. This is what would happen if, after the fact, the difference between the projections and the observed data were used to adjust the projections to minimize the rms difference.
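A minimal sketch of the rebaselining-and-RMS comparison described in the comment above, with invented placeholder series (the real inputs would be GISS annual anomalies on their native 1951-1980 baseline and the AR4 A2 model-mean anomalies on the 1980-1999 baseline):

```python
import numpy as np

def rebaseline(years, anomalies, start, end):
    """Shift an anomaly series so its mean over the period [start, end] is zero."""
    mask = (years >= start) & (years <= end)
    return anomalies - anomalies[mask].mean()

# Placeholder series, illustrative only.
giss_years = np.arange(1980, 2011)
giss = 0.017 * (giss_years - 1980) + 0.25    # GISS-style anomalies, 1951-1980 baseline

proj_years = np.arange(2000, 2011)
proj = 0.018 * (proj_years - 2000) + 0.35    # AR4 A2-style anomalies, 1980-1999 baseline

# Put GISS on the 1980-1999 baseline, then compare over the projection years.
giss_rebased = rebaseline(giss_years, giss, 1980, 1999)
overlap = giss_rebased[np.isin(giss_years, proj_years)]

rms = np.sqrt(np.mean((overlap - proj) ** 2))
print(f"RMS difference, 2000-2010: {rms:.2f} C")
```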
  20. #55 muoncounter "Do you accept charlie's graph here?" Note that the realclimate graph at the end of comment #18 shows the GISS rebaselined to 1980-1981. The model data in that graph is for the A1B scenario, but the GISS data does match up with my figure from comment #46. I see all sorts of justifications and excuses for showing bogus data. Showing the correct data does not affect the rather vague conclusions of this article, so I don't understand why there is so much reluctance to correct the figure, again.
    Response:

    [DB] "I see all sorts of justifications and excuses for showing bogus data."

    So if someone has an alternative approach to something than one you favor then the other approach is "bogus"?

    One thinks that the skeptical thing to do would be to first understand the other approach (which you say you do) and either agree that there is no meaningful difference in results (which you do) or show why the other approach is invalid (which you don't).

    Suggestion: since you agree that it doesn't matter, perhaps it would be best to drop this line of discussion as your persistence in this reflects poorly on you.

  21. #62 Grypo, "The models' inability to model the cooling effects will mean the models will overestimate warming. If they didn't, it would be more worrisome for the modellers." Well, I agree with your statement, but it introduces a bizarre argument in a conversation about validation. What weight should we attribute to a model that is missing a substantial known forcing, and which gives rise to a warming bias of untested magnitude? If the model is only partly formed, then the obvious solution is to drop the model from the CMIP suite until it is ready to be validated.
  22. Dana, Without sufficient data, any model, hypothesis, or prediction can be "reasonably accurate." In such a situation, the statement is essentially meaningless. Based on the AR4 trend of 0.2C/decade, the model would predict warming from 1/1/2000 to date of ~0.25C. The GISS observations yield a 0.14C increase; HadCRU yields -0.04C. The simple conclusion is that not one of these trends is significantly different from the others. Since 2000, the monthly temperature range is 0.7C. To claim, as Dikran suggests, that this model is "as accurate as there is reason to suggest it should be," is more of a statement about the uncertainty of the model rather than its accuracy. The model no more validates a climate sensitivity of 3 than it validates a sensitivity of 5 or 0. This whole exercise appears to be a vain effort to prove (falsify) something that is unprovable to date. At some point in the future, we will be able to ascertain the validity of this model.
  23. Charlie A - there is no "bogus data" in the graph. You disagree on the choice of baseline - that's fine, you're entitled to your opinion (and that's all it is, your opinion), but don't start claiming that the data is bogus. In another comment, lucia posted a graph with a very different baseline which frankly I think is rather deceptive. But I didn't comment on it, because baselines don't really matter in this case. She's entitled to her presentation, I'm entitled to mine, and you're entitled to yours. You really need to accept that and move on. Or feel free to continue arguing about it with Zeke and lucia on their site, since they seem to disagree with you as well.
  24. Charlie A at 01:06 AM on 25 September, 2011 I don't see the importance of each decade's trend as you seem to see. As noted by others, in such short periods there's a lot of noise, and starting and ending points make a big difference in your results. Are you saying that this decade's trend is less steep than projected, so warming scenarios are exaggerated?
  25. Jonathon wrote: "To claim, as Dikran suggests, that this model is "as accurate as there is reason to suggest it should be," is more of a statement about the uncertainty of the model rather than its accuracy." No, you are completely missing the point. The ensemble mean is an estimate of only the forced component of climate change. The observed trend is a combination of the forced change and the unforced change (i.e. a realisation of the effects of internal climate variability). Thus directly comparing the ensemble mean with the observations is comparing apples with pears. To know whether the ensemble mean is "reasonably accurate" you need an estimate of the plausible effects of climate variability on the observations. Currently the spread of the models is the best estimate of this available. Thus it is not a statement about the uncertainty of the models, but of the uncertainty in estimating the forced component of the trend in the observations, which is what you need in order to be comparing like-with-like when comparing the observations with the model ensemble. As I said, "reasonably accurate" is a good summary, if you understand that the ensemble mean is only an estimate of the forced component of the trend. Saying that the models appear "reasonably accurate" is in no way "validation" of the models, and I don't think anyone is claiming that it is.
  26. Jonathon#72: "Without sufficient data, any model, hypothesis, or prediction can be "reasonably accurate."" Really? It only takes one data point to falsify some hypotheses. You seem to be objecting here to the ambiguity in the phrase "reasonably accurate." Yet you now introduce the equally ambiguous qualifier 'sufficient'. So we now have a statement virtually devoid of practical meaning, as we will only have the necessary data to certify a prediction as 'accurate' after the fact. That is the perfect excuse to do nothing -- or as we used to say in the oil business, 'I never lost any money on a well I didn't drill.' Has this level of scrutiny been equally applied to predictions made on both sides of the climate change argument? Or do we have a data point for a hypothesis about the motivations for these objections?
  27. Jonathon Perhaps an example from a more basic domain will help. Say we roll a six-sided unbiased die, and we get a value of five. This is our observation. To make our model ensemble, we take 100 six-sided dice, roll them once each, and get a mean value of 3.5 with a standard deviation of 1.7. So is our ensemble mean of 3.5 a "reasonably accurate" estimate of the observed value of 5? I'd say yes, because the observation is a random variable that is only predictable within the limits of its internal variability. In this case, we can accurately estimate this variability by the variability in the ensemble runs (because in this case our models are exactly correct). Model uncertainty is another matter. Say we didn't know what kind of dice we had (cf. uncertainty regarding climate physics). In this case, we might make an ensemble of D6s, D4s, D8s and D20s, etc. (ask your local D&D player). In this case we will have an even larger standard deviation, because of the model uncertainty in addition to the inherent uncertainty of the problem. In climate modelling, that is why we have multi-model ensembles.
    Moderator Response: [DB] Not to mention the D10s, D12s and D32s some of us used. :)
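The dice analogy above is easy to simulate; a rough sketch using numpy's random number generator:

```python
import numpy as np

rng = np.random.default_rng(0)

observation = rng.integers(1, 7)          # one roll of one die: the single Earth we observe
ensemble = rng.integers(1, 7, size=100)   # 100 rolls: the "model ensemble"

print("Observation:", observation)
print(f"Ensemble mean: {ensemble.mean():.2f}, spread (1 sigma): {ensemble.std(ddof=1):.2f}")
# The ensemble mean is a "reasonably accurate" prediction if the observation falls
# within the ensemble spread, even though it will rarely equal the mean exactly.
```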
  28. Nitpicking a bit: Dana: "This data falls well within the model uncertainty range (shown in Figure 2, but not Figure 3), but the observed trend over the past decade is a bit lower than projected. This is likely mainly due to the increase in human aerosol emissions, which was not expected in the IPCC SRES" Couldn't the extended solar minimum, leveling off of methane concentration, and/or potentially a negative trend in ENSO (not sure if this applies with the starting date) have had some effect? "A bit lower than projected" and the text that follows implies that there must be a likely explanation for it identified. In fact, the trend is extraordinarily close to the mean model projection, perhaps within measurement margin of error. Starting later reduces the trend quite a bit, but from the RC post we can see that there are many individual model runs over 8-year periods, and in a small percentage of cases, 20-year periods, that run flat or negative. So we're back to short time periods don't tell us much. There's also an impression perpetuated among denial realms that observations are expected to match closely with the mean model projection over a 10-year period or less, which is bogus.
  29. NYJ - yeah, there have been other cooling effects this decade too. I'll update the post to clarify that later.
  30. Muon, If it only takes one data point, then that qualifies as sufficient. Yes, the statement is completely devoid of meaning. Based on the observations, reasonably inaccurate would also qualify. However, that is not an excuse to do nothing, as you claim. Dikran, if you continue to roll the dice enough, the uncertainty will decrease until you achieve a mean of 3.5 with an uncertainty such that 5 will fall outside your error. With enough data, we can get a temperature trend that will determine whether the model falls within or outside of the error bars.
  31. Moderator DB, inline comment #70 "One thinks that the skeptical thing to do would be to first understand the other approach (which you say you do) and either agree that there is no meaningful difference in results (which you do) or show why the other approach is invalid (which you don't)." I agree there is no difference in trends. I did not say, and I do not agree, that there is no meaningful difference in results. If I wished to compare the mean projected temperature for 2000-2010 with observations, my graph and calculations would give a proper comparison. The Figure 3 graph by Dana1981 would give an erroneous result. Do you (and Dana) understand that statement? Do you agree or disagree? If I were to rebaseline the 2000-2010 model mean time series to match the GISS 2000-2010 mean, then the projected temperature for 2000-2010 would perfectly match the observed data, no matter what the projection was originally. Dana uses only a portion of the observed data (up through 2005, if I understand correctly) to adjust the mean of the projection, but philosophically it has the same problem as matching over the entire period for which we are comparing projections vs. observations. If DB and Dana1981 don't see any problems using the hindsight of observations performed after the start date of the projections to make a post hoc adjustment of the projections, then there is nothing further I can say.
  32. Jonathon Sadly we can only roll the die once, as we only have one planet Earth. It seems you have not grasped the analogy. The single roll of the die represents the value of the trend we observe on Earth; note I haven't specified the period over which this trend is calculated, because it is irrelevant. If you compute the trend over a longer period then the uncertainty of the observed trend will decrease, but the spread of the distribution of modelled trends will decrease along with it. Why is it that whenever I offer an analogy, the person I am trying to explain something to always proceeds to over-extend the analogy in a way that doesn't relate to reality?
  33. Jonathan Just to verify something. Are you trying to say that the observations should lie within some number of standard errors of the mean (SEM) from the ensemble mean, and that as the size of the ensemble grows the SEM will decrease?
  34. Jonathon#80: "the statement is completely devoid of meaning." To be clear, we are discussing the statement you made: "Without sufficient data, any model, hypothesis, or prediction can be "reasonably accurate"." Side question: If a statement has no meaning, why make it? In this case, the only reason that there is 'insufficient data' is because of an artificial restriction to a short time period. A more important question to ask might be: Is there any reason in data obtained between the 3rd and 4th assessment reports to invalidate the model that increasing GHGs are influencing climate change? How would you answer that question? Here's one possible answer: None of the criticisms leveled at Dana's graphs (nor charlie's, for that matter) suggest that there is any such reason. All else is nit-picking -- and while I understand there is a place for that activity, it does not alter the basic conclusion. "that is not an excuse to do nothing, as you claim." On the contrary, there are many who make exactly that claim under the guise of 'the science is not settled.' Guvna Perry is just one high profile example. Note to DB: a colleague of mine has a 100-sided die; makes grading very easy.
    Moderator Response:

    [Dikran Marsupial] Well I use a Mersenne twister for that, it fills in the marksheet as well! ;oP

    [DB] I found the D100 liked to roll off the table, thus destroying the class curve and my "critical hit" chances at the same time...

  35. DM#82: Don't feel bad; an interesting list of arguments from over-extending an analogy appears here. I like this one: "The solar system reminds me of an atom, with planets orbiting the sun like electrons orbiting the nucleus. We know that electrons can jump from orbit to orbit; so we must look to ancient records for sightings of planets jumping from orbit to orbit also." Alert Doug Cotton!
  36. John Hartz at 00:29 AM on 25 September, 2011 Has Lucia left the building?
    I don't know how our time stamps line up, but there has been plenty of discussion at my blog. I tend to comment lightly on Friday nights, most of Saturday and Sunday. Moreover, I've tried to foster the habit of not responding to unimportant things if I miss the page turn on blog comments. I do, however, see something on this page worth commenting on. I disagree with this:
    To know whether the ensemble mean is "reasonably accurate" you need an estimate of the plausible effects of climate variability on the observations. Currently the spread of the models is the best estimate of this available.
    I agree that you need an estimate of the plausible effects of climate variability on the observations. However, I disagree with the notion that the spread of all runs in all models in an ensemble is the best estimate of climate variability. I don't even think it's the best model-based estimate of the contribution of climate variability to the spread in trends. Or, maybe it's better to say that based on my guess of what DM probably means by "the spread of models", I think the information from the spread in model runs is often used in a way that can tend to over-estimate the contribution of natural variability on trends.

    First, to engage this, I admit I'm guessing what DM means by "the spread of models". I suspect he means that to estimate climate variability we just find all the trends in a multi-model ensemble, create a histogram, and that spread tells us the contribution of climate variability to the spread of the trends. (If this is not what he means, it may turn out we agree on how to get a model-based estimate.) I don't consider this sort of histogram of all runs in all models to produce the best estimate of the spread in variability due to actual, honest-to-goodness climate variability. The reason is that in addition to the spread due to natural variability in each model, this distribution includes the spread due to the mean response of each model. That is: if each model predicts a different trend on average, this broadens the spread of run results beyond what one expects from "weather". So, for example, in the graph below, the color of the trace is specific to a particular model. You can see that run trends tend to cluster around the mean trend for individual models: (Note, a long time period was selected to illustrate the point I am attempting to make; this choice of times does result in particularly strong clustering of trends about the mean for individual models.)

    In my opinion, if you wanted a model-based estimate of the contribution of climate variability to spread in trends on Earth for any time period, it would be unwise to simply take all those trends, make a histogram, and suggest that the full spread was due to something like 'weather' or "variability in initial conditions" or, possibly, "climate variability". At least part of the spread in trends in the graph above is due to the difference in mean trends in different models. That's why we can see clustering of similarly colored lines for individual runs from the matched models.

    If someone wanted to do a model-based test of the likely spread, I would suggest that examining the variance of runs in each model gives an estimate of the variance due to 'natural variability' based on that model. (So, in the 'blue' model above, you could get an estimate of natural variability by taking the spread over the trends corresponding to the individual 'blue' traces.) We have multiple models (say N=22). If we have some confidence in each model, then the average of the variance over the N models gives an ensemble estimate of the variability in trends based on the ensemble. Using a distribution with a standard deviation equal to the square root of this average variance is likely a better estimate of the spread of natural variability. (Note, by the way, that you want to do this by variances for a number of reasons. But lest someone suspect I'm advocating averaging the variances instead of standard deviations because it results in a smaller estimate of variability, that is not so. If modelA gets a variance of 0 and modelB gets a variance of 4, averaging variances results in an average of (0+4)/2 = 2; the standard deviation is 1.4. In contrast, if we average the s.d. we get (0+2)/2 = 1. There are other reasons to average variances instead of s.d., though.) The method I describe for getting a model-based estimate of the spread results in a somewhat smaller spread than one based on the spread of runs taken from an ensemble of models whose means differ. Needless to say, since it gives tighter uncertainty intervals on trends, we will detect inaccurate models more often than using the larger spread. But I think this method is more justifiable because the difference in the mean trends is not a measure of natural variability.

    Having said that: I think checking whether the trend falls inside the spread of runs tells us one thing: does the trend fall inside the spread of runs in the ensemble? That's worth knowing, but I don't happen to think the spread of runs in the ensemble is the best model-based estimate of the spread of trends that we expect based on natural variability. I also don't think the full spread of all runs, including contributions from differences in the means, should be represented as an estimate of uncertainty in natural variability around the climate trend. Of course, I'm not sure that by "the spread of the models" DM means the spread of all runs in all models. It may turn out he means exactly what I meant, in which case we agree.
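As I read it, the pooled-variance estimate described in the comment above amounts to something like the sketch below: group the run trends by model, average the within-model variances, and compare with the naive spread of all runs pooled together (the numbers are invented purely for illustration):

```python
import numpy as np

# Invented per-model run trends (deg C/decade), purely for illustration.
runs_by_model = {
    "modelA": [0.15, 0.18, 0.21, 0.17],
    "modelB": [0.25, 0.28, 0.24],
    "modelC": [0.10, 0.13, 0.12, 0.11, 0.14],
}

# Naive estimate: pool every run from every model and take the spread.
all_runs = np.concatenate([np.asarray(v, dtype=float) for v in runs_by_model.values()])
naive_sd = all_runs.std(ddof=1)

# Pooled-variance estimate: average the within-model variances, then take the square root.
within_var = [np.asarray(v, dtype=float).var(ddof=1) for v in runs_by_model.values()]
pooled_sd = np.sqrt(np.mean(within_var))

print(f"Naive spread of all runs:   {naive_sd:.3f} C/decade")
print(f"Pooled within-model spread: {pooled_sd:.3f} C/decade")
# The naive spread is inflated by the differences between the model means, which is
# the point being made: that part of the spread is not internal variability.
```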
  37. lucia I was talking of an ensemble of perfect models, in which case the spread of (an infinite number of) model runs is exactly a characterisation of the plausible variation due to internal climate variability. Whenever discussing tests or model-data comparison it is always a useful boundary case to consider what you can expect from a perfect model. Of course, if you have imperfect models (as all models are in practice) then the spread of the ensemble will also include a component reflecting the uncertainty in the model itself. However, the overall spread of the model runs in a multi-model ensemble is still a characterisation of how close we should expect the observations to lie to the multi-model mean, given all known uncertainties. Thus if the observations lie within the spread of the models, then the ensemble is "reasonably accurate", as it would be unreasonable to expect any more than that. Having a heterogeneous ensemble does make things a bit more awkward; I think I am broadly in agreement with Lucia about estimating the effect of climate variability by averaging the variances from the runs for each model type. I am also in agreement about what the observations lying within the spread means: it is essentially a test of the consistency of the models. No big compliment if they are consistent, quite a severe criticism if they are not. Having said which, as GEP Box said, "all models are wrong, but some are useful". It is not unreasonable for the model to fail tests of consistency with respect to one metric, but still be a useful predictor of another. I should add that there may be subtleties in pooling the variances due to the fact that we are talking about time series, which is more Tamino's field than mine (I'm also a Bayesian and so I don't really agree with hypothesis testing or confidence intervals anyway ;o)
  38. I should just add, that even with Lucia's variance pooling method (which I agree with in principle even if there may be difficulties in the details), that estimate of internal climate variability is only valid if you accept that GCMs are basically a valid way to model the climate (as the estimate is based on the behaviour of the models rather than the actual climate, so if you don't think the models are representative of the climate then the variability of the model runs can't logically be representative of the variability of the climate either). So those skeptics that don't accept models as being valid need some other way to estimate internal climate variability in order to determine what would be a "reasonably accurate" projection. Best of luck with that!
  39. There has been some discussion of an apparent contradiction in Dana's summary. I say "apparent" because there is no actual contradiction in Dana's conclusion. Accuracy is not bivalent like truth. Something is either true, or it is not - but things can be more or less accurate. Indeed, Dana clearly states that the AR4 results meet one (vague) standard of accuracy (they are "reasonab[ly] accura[te]"), but it is impossible to tell as yet whether they meet another, more stringent standard of accuracy. Because different levels of accuracy are being considered, there is no contradiction. To illustrate the point, we can compare the AR4 projections to predictions analyzed earlier in this series, in this case the one by Don Easterbrook in 2008: The image was formed by overlaying Zeke's version of Dana's Figure 3 graph above with Figure 3 from Dana's discussion of Easterbrook's prediction (link above). The heavy red line is a running mean of GISTEMP, and the heavy blue and green lines are two of Easterbrook's three projections (the third declines even faster). Even the best of Easterbrook's projections (heavy green line) performs poorly. From the start it rapidly declines away from the observed temperature series. Briefly in 2008 (the year of the prediction) it is closer to the observations than is the A2 multi-model mean, but then falls away further as temperatures rise, so that in the end it is further away from the observations than the A2 projections ever are. Given that 2008 was a strong La Nina year and in the middle of a very deep solar minimum, we would expect it to be below, not above, the projected trend. But regardless of that subtlety, Easterbrook's projection performs far worse than the AR4 A2 projection. It is not reasonably accurate, although it may not yet be falsified. However, despite the fact that the conclusion contains no contradiction, I would suggest rewording it, or appending a clarifying update to the bottom of the post. As it stands it is an invitation to misunderstanding by those (apparently including Lucia) who think "accuracy" is an all-or-nothing property. It is also an invitation to the creative misunderstanding some deniers attempt to foster.
  40. Tom, I find it interesting that Jonathan and I seem to be the only two people that consider that the last section of the top article is problematic. Incidentally, I think all the other stuff flying around about this analysis is unproductive nit-picking. I agree that the term "accuracy" can have gradations of meaning. The problem with the use of "accuracy" in the last section of the top article is that its use in describing the AR4 projections changes throughout the section. So first the accuracy of the projections is "difficult to evaluate", but then we find it's "reasonably accurate", and then it's sufficiently accurate that one can have some confidence that it will eventually add to the evidence that climate sensitivity is around 3 oC. Aren't those interpretations somewhat incompatible? On their own the AR4 projections have little to say about climate sensitivity and I don't think we can presuppose that they will eventually support a climate sensitivity near 3 oC. On the other hand comparison of empirical surface temperature progression with simulations from the late 1980's does support our confidence arising from a wealth of other data, that the likely value of the climate sensitivity is near 3 oC. I don't think this is "creative misunderstanding". It's simply how my mind interprets the text! I have quite a lot of confidence that the best value for the climate sensitivity is around 3 oC, but I don't think the AR4 projections (and their comparisons with surface temperature progression), gives us much insight into climate sensitivity, nor that we can presuppose that they will in the future support a value near 3 oC.
  41. NewYorkJ Yes, taking a look under the bonnet of the ensemble cast makes it clear that attempting to determine climate sensitivity (essentially what everyone is talking about) from such a short period won't work. Here's a rundown of the models, their equilibrium and transient sensitivities*, and their 2000-2010 trends:

    Model            Equilibrium (°C)   Transient (°C)   2000-2010 trend (K/decade)
    BCCR-BCM2.0      n.a.               n.a.             -0.03
    CGCM3.1(T63)     3.4                n.a.              0.29
    CNRM-CM3         n.a.               1.6               0.09
    CSIRO-MK3.0      3.1                1.4               0.42
    GFDL-CM2.0       2.9                1.6               0.08
    GFDL-CM2.1       3.4                1.5               0.09
    GISS-ER          2.7                1.5               0.16
    INM-CM3.0        2.1                1.6               0.34
    IPSL-CM4         4.4                2.1               0.28
    MIROC3.2(med)    4.0                2.1               0.13
    ECHO-G           3.2                1.7               0.15
    ECHAM5/MPI-OM    3.4                2.2               0.28
    MRI-CGCM2.3.2    3.2                2.2               0.08
    CCSM3            2.7                1.5               0.29
    PCM              2.1                1.3               0.13
    UKMO-HadCM3      3.3                2.0               0.13
    UKMO-HadGEM1     4.4                1.9               0.11

    There really isn't a discernible pattern at this stage linking sensitivity to temperature trend in the model outputs. I actually think the ensemble mean probably is too "warm", in the short term anyway, mainly because a few of the model cast do not include indirect aerosol effects. If you've ever seen a chart showing recent radiative forcing factors, such as the one in the AR4 SPM, you'll know why this is a major omission. I've so far identified two models - INM-CM3.0 and CCSM3 - which don't include indirect aerosol effects, and it's no surprise that they both exhibit two of the largest trends over 2000-2010. If just those two are removed from the cast, the ensemble mean drops from 0.18 K/decade to 0.16 K/decade. Over the longer term it becomes less important: there is no difference in the 2000-2099 trend between the original 17-member cast and my cut-down 15-member version. Another interesting finding along this train of thought is that the median is only 0.13 K/decade and 11 of the 17 members feature a trend lower than the ensemble mean.

    * http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2-3.html
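For what it's worth, the ensemble statistics quoted in the comment above can be checked directly from the trend column of the table:

```python
import numpy as np

# 2000-2010 trends (K/decade) from the table above.
trends = {
    "BCCR-BCM2.0": -0.03, "CGCM3.1(T63)": 0.29, "CNRM-CM3": 0.09,
    "CSIRO-MK3.0": 0.42, "GFDL-CM2.0": 0.08, "GFDL-CM2.1": 0.09,
    "GISS-ER": 0.16, "INM-CM3.0": 0.34, "IPSL-CM4": 0.28,
    "MIROC3.2(med)": 0.13, "ECHO-G": 0.15, "ECHAM5/MPI-OM": 0.28,
    "MRI-CGCM2.3.2": 0.08, "CCSM3": 0.29, "PCM": 0.13,
    "UKMO-HadCM3": 0.13, "UKMO-HadGEM1": 0.11,
}

values = np.array(list(trends.values()))
print(f"17-member mean:   {values.mean():.2f} K/decade")      # ~0.18
print(f"17-member median: {np.median(values):.2f} K/decade")  # ~0.13

# Drop the two models identified above as lacking indirect aerosol effects.
reduced = np.array([v for k, v in trends.items() if k not in ("INM-CM3.0", "CCSM3")])
print(f"15-member mean:   {reduced.mean():.2f} K/decade")     # ~0.16
```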
  42. Chris @90:
    "So first the accuracy of the projections is "difficult to evaluate", but then we find it's "reasonably accurate", and then it's sufficiently accurate that one can have some confidence that it will eventually add to the evidence that climate sensitivity is around 3 oC. Aren't those interpretations somewhat incompatible?"
    No! Think of yourself as trying to predict the match between the AR4 A2 20-year trend in 2020 and the trend of observed GMST between 2000 and 2020. Well, based on eleven years of data, that 'accuracy' is difficult to evaluate, i.e., to predict. It is difficult to evaluate because an 11-year trend is a very short trend in climate terms, with a high noise-to-signal ratio. Changing the start or end point of the trend by just one year can make a significant difference to the trend. Hence, "it's difficult to evaluate the accuracy of its projections at this point". However, though difficult to evaluate, we are not completely without information regarding the accuracy of the projection. The information we do have suggests that it is more probable than not that the GMST trend will lie within error of the AR4 A2 multi-model projection. In this it contrasts with a host of other possible projections, including some actually made by AGW 'skeptics'. For those other projections, and based on evidence to date, it is more likely than not that the GMST trend will not lie within error of those projections (where "within error" is based on model mean variance). So, unlike those other possible projections, the AR4 A2 projections are "reasonably accurate" and give some (but not a great deal of) confidence that the actual climate sensitivity is close to that of the models. We could be precise about this, but doing so would defeat the purpose of keeping the article accessible to the general reader. What I have been convinced of by the traction the argument of inconsistency has received is not that it is correct (for it is not), but that as stated the conclusion can foster confusion.
  43. As Tom identifies @92, there's always a challenge in keeping wording of a post both correct and accessible to the general public. For example, Phil Jones' BBC interview. What he said was scientifically and statistically accurate, and what most of the public got from it was "no warming since 1995" - not at all what he actually said. This post is intended for a broad audience. The language could have been more precise from a statistical standpoint, but the more technical you get, the more of the general audience you lose. My suggestion for those who are nitpicking the language is to go do the analysis for yourself on your own blog. Then you can use whatever language you think is appropriate.
  44. I wanted to post a clarification to some comments made by Tom Curtis, who makes the following claim:
    2) In fact there is good reason to believe that two of the indices understate trends in GMST, in that they do not include data from a region known to be warming faster than the mean for the rest of the globe. In contrast, while there are unresolved issues relating to Gistemp, it is not clear that those issues have resulted in any significant inaccuracy. Indeed, comparison of Gistemp north of 80 degrees North with the DMI reanalysis over the same region shows Gistemp is more likely to have understated than overstated the trend in that region:
    I apologize for missing it earlier, and for any response that may have set him right, but I am busy getting ready for a trip and only had a chance to skim the comments. This comment is in error. Actually, of the three regularly updated series, NCDC probably has the best approach among them, and were I to pick one to put my money on, it would be this one. Contrary to Tom's claims, NCDC does in fact interpolate: it uses something called "optimal interpolation" combined with empirical orthogonal functions to reconstruct the global mean temperature. Here's a standard reference on it (to quote from my comment on Lucia's blog): NCDC is the only method that incorporates fluid mechanics into their reconstruction. That is, they use pressure, temperature, and wind speed to reconstruct the various fields in a self-consistent fashion, and they use an empirical orthogonal function approach to do so. (The person who is the closest to this, as I understand it, is Nick's code. Mosh can update me on that. Nick Stokes' Moyhu index uses an OLS-based approach that also (in some sense) produces an "optimal interpolation".) I also like NCDC's approach because they can estimate the uncertainties in the measurement. Those of us in physics would crucify any of you guys trying to stand up and present graphs that don't have uncertainty bounds on them (unless the uncertainty bounds are comparable to or smaller than the thickness of the plot line). Secondly, simply making a correction, without being able to set bounds on it, doesn't guarantee that the "change for change's sake" is an improvement over the simpler technique. With GISTEMP, the evidence is that it runs "hot" over the last decade compared to HadCRUT, NCDC and ECMWF. ECMWF is a meteorology-based tool. An ASCII version isn't currently available (to my knowledge); I'm trying to see if I can get that changed. In the meantime, here's a rasterized version in case anybody wants to play with it. I also think a lot has been made about GISTEMP being better. This smells purely of confirmation bias. I think people are picking GISTEMP because it runs hot (they like the answer), not because they understand in any profound way how it works. Truthfully, if you compare the series, you really have to squint to tell them apart. My personal predilection is to use all of the data series together; it's more consistent, if your goal is to compare against a mean of models, than cherry-picking one data series, especially when your defense of it is utterly flawed and only demonstrates the depths of your own ignorance on this topic: (-Snip-). (What is magical about smoothing over a 1200-km radius? Why do you need two radiuses to compare? Do you realize how ad hoc using two different radiuses basically drawn from a hat and just visually comparing the products truly is???) (-Snip-). There would have been no serious criticism of using GISTEMP if Dana hadn't mistakenly started with a year that was an outlier in estimating his trend. (This is another thing you get crucified for in physics presentations.) The better thing to do is shift the period to 2001, moving away from the outlier when comparing trends. The best is to regress your data with MEI, as Tamino does, and remove this natural fluctuation that has not been captured by the GCM models before attempting a linear regression. Then you are free to pick any start and end point you please. But otherwise, issues with OLS sensitivity to outliers near the boundaries should trump other considerations.
    Science starts by considering the merits of a technique, and not its outcome, in determining which technique to use. ECMWF and NCDC are heads above the other two methods. (-Snip-) (-Snip-). I also find it interesting that he thinks mistakes that would get him reamed at a science conference (and I'm being *really nice* compared to how some of us behave there) constitute nitpicking. If you're going to blog in this area, you need to start with the assumptions that 1) you're not smarter than everybody else, 2) other people have valid contributions and viewpoints to make that should influence your future blog postings, and 3) if you are going to disregard criticism of your post, why bother posting? It just ends up discrediting you and the viewpoint you are advocating in favor of.
    Response:

    [DB] Multiple inflammatory statements snipped.  Please rein in the inflammatory tone and rhetoric; confining your postings to the science with less inflammatory commentary is highly recommended.

    "you need to start with the assumptions 1) you're not smarter than everybody else"

    You would do well to remember this yourself.
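
For readers who want to reproduce this kind of trend comparison with the uncertainty bounds the commenter calls for, a minimal sketch in Python with NumPy follows. It uses synthetic monthly anomalies rather than any of the real series; the numbers and variable names are illustrative only, and this is not the method used by NCDC, GISS or any other group discussed above.

    import numpy as np

    # Synthetic monthly anomalies, 1990-2010 (illustrative only; substitute a
    # real series such as GISTEMP, HadCRUT, NCDC or ECMWF before drawing
    # any conclusions).
    rng = np.random.default_rng(0)
    years = 1990 + np.arange(12 * 21) / 12.0          # decimal years
    anoms = 0.016 * (years - 1990) + rng.normal(0, 0.1, years.size)

    # Ordinary least squares fit: anomaly = a + b * year
    X = np.column_stack([np.ones_like(years), years])
    coef, *_ = np.linalg.lstsq(X, anoms, rcond=None)
    b = coef[1]

    # Standard error of the slope (this ignores autocorrelation, which widens
    # the real uncertainty for monthly temperature data).
    resid = anoms - X @ coef
    s2 = resid @ resid / (years.size - 2)
    se_b = np.sqrt(s2 / np.sum((years - years.mean()) ** 2))

    print(f"trend: {10 * b:.3f} +/- {10 * 2 * se_b:.3f} deg C per decade (2-sigma)")

Quoting the 2-sigma range alongside each series' trend makes it easier to judge whether the differences between the indices are statistically meaningful over such short periods.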

  45. Just one follow up: here are the trends 1990-now (°C/decade): ECMWF 0.151, GISS 0.182, HadCRUT 0.149, NCDC 0.153. GISTEMP runs a bit high; the other three appear to be in complete agreement. There are legitimate issues with how GISTEMP computes the upper Arctic, and this seems to call that method further into question. For an independent test of GISTEMP, I need mean temperatures averaged over 80°-90°N to do a direct comparison with DMI (which is based on instrument measurements), but my suspicion is you'll find it runs hot compared to more physics-based models (both ECMWF and NCDC are heavily physics based; HadCRUT and GISTEMP are basically Tonka toys in comparison, IMO). ClearClimateCode provides the gridded data I would need to make a direct comparison with DMI, but they are in a binary format I haven't gotten around to decoding. If anybody here is a data sleuth, here is a link to a rasterized version of DMI, updated daily. If anybody here has a clue on how to decode the CCC grid files, I'd appreciate a bone thrown my way on that.
    Response:

    [DB] Please note WRT DMI:

    "DMI averages the data based on a regular 0.5 degree grid. That means it weights the region from 88.5N to 90N the same as the region from 80N to 80.5N despite the fact that there's a 40-fold difference in area.

    Ergo, the DMI value is very strongly weighted to the area immediately around the Pole and neglects the warming areas around the periphery."

    Essentially, the DMI "runs cold".

    H/T to Peter Ellis at Neven's.
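
To make the weighting point concrete, here is a minimal Python/NumPy sketch with synthetic anomalies (illustrative only, not DMI's actual processing). It compares a straight unweighted average over a regular 0.5° grid north of 80°N with a cosine-of-latitude area-weighted average. When the periphery near 80°N is warming faster than the area around the Pole, the unweighted average comes out cooler, which is the sense in which a DMI-style average "runs cold".

    import numpy as np

    lats = np.arange(80.25, 90.0, 0.5)        # 0.5-degree band centres, 80N-90N
    lons = np.arange(0.25, 360.0, 0.5)        # 0.5-degree cells around each band
    lat2d = np.repeat(lats[:, None], lons.size, axis=1)

    # Synthetic anomalies: zero at the Pole, rising to 2.0 deg C at 80N, so the
    # periphery of the cap is warming faster than the area around the Pole.
    anom = 2.0 * (90.0 - lat2d) / 10.0

    # Unweighted mean: every 0.5-degree cell counts equally, so the tiny cells
    # near the Pole get the same weight as the much larger cells near 80N.
    unweighted = anom.mean()

    # Area-weighted mean: weight each cell by cos(latitude), proportional to
    # its true area on the sphere.
    w = np.cos(np.deg2rad(lat2d))
    weighted = np.sum(w * anom) / np.sum(w)

    print(f"unweighted mean   : {unweighted:.3f} deg C")
    print(f"area-weighted mean: {weighted:.3f} deg C")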

  46. Carrick, as DB suggests, please tone it down. I don't see why we can't have a civil discussion without becoming so abrasive. As we already discussed on Lucia's blog, choosing 2000 as the start date for the analysis was not "a mistake", nor was it a cherrypick. That's the year in which the IPCC AR4 model projections begin. To exclude the first year of the model run, as you suggest, would be a cherrypick. And to exclude any available data when we're only looking at 11 years' worth or so would be unwise. Removing the effects of ENSO is another possible approach, but an incomplete one. What about solar effects? And AMO? And volcanoes? And anthropogenic aerosols? If you're going to start filtering out short-term effects, you need to filter them all out, and that's a major undertaking. The point of this post, as in most of the 'Lessons' series, is merely to get an idea of how well model projections and data have matched up to this point. As I noted in the post, there's really not enough data since the AR4 to draw any concrete conclusions. If you disagree with my approach, you're free to do the analysis however you'd like on your own blog. But if you're going to keep posting here, please take the time to read our comments policy. This isn't Lucia's or Watts' or Bishop Hill's blog. Accusations of deception and inflammatory rhetoric are not allowed here. Please keep it clean.
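
For readers curious what "filtering them all out" would involve, the Tamino-style approach mentioned earlier in the thread regresses the temperature series against ENSO, solar and volcanic indices at the same time as the trend, and then reads off the trend coefficient. Below is a minimal Python/NumPy sketch with synthetic stand-ins for the MEI and a solar index; it is illustrative only and is not the analysis used in the post.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 12 * 11                               # monthly data, roughly 2000-2010
    t = np.arange(n) / 12.0                   # years since the start

    # Synthetic exogenous factors (stand-ins for the MEI and a solar index; a
    # volcanic aerosol optical depth series would be one more column handled
    # the same way).
    mei = rng.normal(0.0, 1.0, n)
    tsi = 0.5 * np.sin(2 * np.pi * t / 11.0)  # crude solar-cycle proxy

    # Synthetic temperature: underlying trend plus ENSO and solar influence
    # plus noise (illustrative only).
    temp = 0.017 * t + 0.08 * mei + 0.05 * tsi + rng.normal(0.0, 0.08, n)

    # Regress temperature on time and the exogenous factors simultaneously;
    # the coefficient on time is the trend with the short-term factors removed.
    X = np.column_stack([np.ones(n), t, mei, tsi])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

    print(f"adjusted trend: {10 * coef[1]:.3f} deg C per decade")
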
  47. Carrick @90, contrary to your claim, the NCDC does not include data north of approximately 80 degrees North, or south of 60 degrees South (except for a few isolated stations). This can be seen clearly on this plot of the temperature anomaly for March 2011 (chosen because it has a minimal number of zero-degree anomalies that could confuse the issue of coverage). (Clicking on the image links to the NCDC so you can compare multiple graphs.) I have also downloaded the ncdc_blended_merg53v3b.dat file and confirmed directly that the relevant cells are in fact missing. So my original claim that both HadCRUT and NCDC do not show polar regions stands. NCDC is preferable to HadCRUT in that it at least shows remote locations in South America, Africa, Australia, Siberia and the Canadian Arctic, unlike HadCRUT. Nevertheless, Gistemp remains superior in coverage to both its rivals. In your follow-up you suggest comparing DMI and Gistemp. I have already shown one such comparison in my comment 23. Based on that comparison, Gistemp is running cool in the Arctic.
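
As a rough illustration of the kind of check described above (the actual format of ncdc_blended_merg53v3b.dat is not reproduced here), the sketch below assumes the gridded anomalies have already been read into a NumPy array on a 5° x 5° latitude-longitude grid, with NaN marking empty cells, and simply reports how much of the region poleward of 80°N contains data. The array here is synthetic, with the polar cells deliberately left empty.

    import numpy as np

    # Hypothetical gridded anomaly field: 36 latitude bands by 72 longitude
    # cells (5 x 5 degrees), NaN where a cell has no data. In practice this
    # array would be read from the downloaded gridded file.
    rng = np.random.default_rng(2)
    lats = np.arange(-87.5, 90.0, 5.0)
    field = rng.normal(0.0, 1.0, (lats.size, 72))
    field[lats > 80.0, :] = np.nan            # leave the polar cap empty

    # Fraction of cells poleward of 80N that actually contain data.
    polar = field[lats > 80.0, :]
    print(f"fraction of cells north of 80N with data: {np.isfinite(polar).mean():.2f}")
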
  48. Carrick @94 again suggests that Dana has been guilty of cherry picking. It is a rather bizarre accusation, given that Dana started his comparison with the first year of the projection. Of course, as those who make it know very well, in short trends changing the start or finish date by a year can make a large difference. They know, in fact, that shifting the start date back a year will make a difference to the trend. That makes the accusation worth looking at closely.

    Carrick claims that Dana's start year is an outlier, and it is indeed cooler than any other year in the 2000-2011 period. There is a reason for that: it was a moderate La Nina year. Of course, 2011 was also an outlier in that respect. Indeed, the first months of 2011 (and hence Dana's end point) saw the strongest La Nina in over thirty years. I'm pretty sure Carrick knows that too, but you don't see him insisting that Dana finish his trend analysis in December 2010 because 2011 is an outlier. It is only cool periods early in the trend that Carrick believes should be expunged from the analysis.

    You can see what is going on in this detail of the Multivariate ENSO Index: delaying the start point of the analysis until one year after the start of the projection shifts us from a moderate La Nina to neutral (but slightly cool) conditions. It would make a very large difference in a plot of the MEI trend, both by shortening the interval and by raising the start point. It would also make a difference to the temperature trend. But excluding 2000 as a start date because doing so will give a flatter trend is cherry picking.

    In fact, a 2001 start date is not the only alternative that has been suggested by Lucia, Carrick and cohorts. Suggestions have even been made that 2004 (a moderate El Nino year) should have been chosen as the start date. Even more bizarrely, Lucia has suggested that running the trend to the most recent date available is also grounds for an accusation of cherry picking. Apparently, in order to be absolutely free of any taint of cherry picking according to Lucia's rabble, Dana needed to start the trend between 2001 and 2004, and finish it in a strong La Nina year (2011 if you must, but 2008 by preference). In simple terms, in order to avoid accusations of cherry picking by Lucia's rabble, Dana needed to deliberately cherry pick for a low trend.
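
Because the disagreement above turns on how much the choice of start year moves an 11-year trend, the quickest way to see the sensitivity is simply to recompute the trend for each candidate start year against the same end year. The sketch below does this in Python/NumPy with made-up annual anomalies (cool, La Nina-like years at both ends); the numbers are illustrative, not any real series.

    import numpy as np

    # Made-up annual anomalies for 2000-2011, with cool years at both ends
    # (illustrative only; substitute real annual means to redo the test).
    years = np.arange(2000, 2012)
    anoms = np.array([0.33, 0.48, 0.51, 0.50, 0.49, 0.54, 0.51, 0.52,
                      0.44, 0.51, 0.56, 0.42])

    # Recompute the least-squares trend for each candidate start year, keeping
    # the same end year, to see how sensitive the short trend is.
    for start in range(2000, 2005):
        m = years >= start
        slope = np.polyfit(years[m], anoms[m], 1)[0]
        print(f"start {start}: trend = {10 * slope:.3f} deg C per decade")
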
  49. So I figured out a smoothed 5-year average on a 1980-1999 baseline for the GISS data to remove the ENSO. Here is what I got:

    1987-1991 averages to 0.02 °C; 2006-2010 averages to 0.316 °C.

    The 20-year equation for 1990-2010 is f(x) = 0.0148x + 0.02:
    f(20) = 0.0148(20) + 0.02 = 0.316 °C (check!)
    f(30) = 0.0148(30) + 0.02 = 0.464 °C for 2020
    f(40) = 0.0148(40) + 0.02 = 0.612 °C for 2030

    The ten-year trend for 2000-2010 runs through (0, 0.164) and (10, 0.316), so the equation is f(x) = 0.0152x + 0.164:
    f(10) = 0.0152(10) + 0.164 = 0.316 °C (checks!) for 2010
    f(20) = 0.0152(20) + 0.164 = 0.468 °C for 2020
    f(30) = 0.0152(30) + 0.164 = 0.62 °C by 2030

    So yes, it is quite a bit off the models. Here they are (year: A1B, A2):
    2010: 0.408, 0.423
    2020: 0.684, 0.615
    2030: 0.944, 0.809

    I did a smoothing of 5 years with the new maps to get the means: 1987-1991 (0.02 °C), 1997-2001 (0.164), 2006-2010 (0.316); each one of these includes Ninos and Ninas. I had to, as there is no map of the GISS data on a 1980-1999 baseline. I put together a 20-year equation from 1990-2010 and worked it out to 2020 and 2030.
    Response:

    [DB] Your link is broken to your graphic.
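
For readers trying to follow the arithmetic in the comment above, this is essentially the calculation being described: take 5-year means of the annual series, fit a straight line through two of them, and extrapolate it forward. A minimal Python sketch using the two 5-year means quoted in the comment (everything else is illustrative) is below.

    # The two 5-year means quoted in the comment (deg C anomaly, 1980-1999 baseline).
    mean_1990 = 0.02      # average of 1987-1991
    mean_2010 = 0.316     # average of 2006-2010

    # Straight line through the two means: slope in deg C per year.
    slope = (mean_2010 - mean_1990) / 20.0    # = 0.0148 deg C/yr
    print(f"slope: {slope:.4f} deg C per year")

    # Linear extrapolation of that 20-year line to later decades.
    for year in (2020, 2030):
        estimate = mean_1990 + slope * (year - 1990)
        print(f"{year}: {estimate:.3f} deg C")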

  50. It looks like Carrick has joined Lucia and the others on this thread who cannot criticize the conclusion of the post, so they make up unsupportable accusations of cherry picking to distract readers. The conclusion of the post is: the IPCC projections match the last 5 years of temperature reasonably well but are a little low (within the error bars). If you contest the conclusion, please present a correct graph that shows the conclusion is in error (make sure that you have not cherry picked the start time). {snip} Carrick: please use at least 30 years of data to compare temperature trend records (your data only covers 20 years) or people will think you are cherry picking. You will find your differences mostly go away. Why did you make such a transparent argument here? {snip}
    Moderator Response: [grypo] Please, take it easy on the inflammatory language and insinuations. Thanks!




© Copyright 2024 John Cook