





Recent Comments


Comments 64451 to 64500:

  1. Chip Knappenberger at 04:34 AM on 7 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    grypo,

    Pat's Daily Caller article has always been a trimmed down version of his Current Wisdom piece for Cato.

    Here is the link to the Current Wisdom piece, which gives more detail about our latest paper:

    http://www.cato.org/pub_display.php?pub_id=13010

    I don't think anything has been changed in either article.

    -Chip
  2. Jesús Rosino at 04:25 AM on 7 May 2011
    Why 450 ppm is not a safe target
    Sphaerica, I think there's some misunderstanding here. I just said that RC didn't say anything about Hansen & Sato's 21st-century multi-meter [if you prefer that to 5 m] SLR suggestion. Of course I cannot link to silence; it just means they've ignored it. The citations you provide are, of course, prior to Hansen & Sato 2011, and I think they just support my point: that Hansen & Sato's suggestion of 5 m SLR is not supported by any other scientist.

    Caution when assuming more than 1 meter SLR is my personal opinion, which I've tried to back up without mentioning RC at all.

    I cannot imagine where you get the idea that I've said that there's any "comment at RC suggesting that a 5m rise by 2100 is at all likely" nor what makes you think I should provide such a link.

    On the other hand, if you think Hansen made no such projection of 5m SLR, then you should take issue with this blog post's summary, not with my comments. In any case, my comments apply equally to "I find it almost inconceivable that BAU climate change would not yield a sea level change of the order of meters on the century timescale". So now you seem more interested in discussing the semantics of "projection". Sorry, I'm not. Regardless of the label of your choice for Hansen & Sato's multi-meter fantasy, it's way off the peer-reviewed literature numbers and they provide weak evidence (I'd be more open to the idea if they meant something like 300 years rather than one century). As I said before, if they can get that through peer review, I will take it more seriously.

    Regarding scientists' opinions on Hansen 2007, you can see William Connolley here or here, or James Annan here.

    Alexandre, yes, I refer to the same IPCC projection. See your link where they say: "They include a contribution from increased Greenland and Antarctic ice flow at the rates observed for 1993-2003, but this could increase or decrease in the future".

    I don't think we know what the cause of the IPCC underestimate is (I may be wrong, it's just my impression from what I've read in blog posts), but I guess it can only be ice sheets or thermal expansion. Given that thermal expansion seems easier to calculate and there's a lot of uncertainty about ice sheet dynamics, the latter turns up as a more likely culprit. However, I think this is rather speculative, especially considering this is a short-term comparison (we have just a couple of decades of data to compare with projections). See, for example,
    Deep ocean warming solves the sea level puzzle, about Song & Colberg 2011.

    Don't trust me on the consensus matter, I'm not an expert, but that's my impression. I think the +1m is rather based on empirical data, which is quite compelling, but, lacking a physical understanding of the underlying causes, I think it's difficult to call it a consensus yet. Anyway, I may be too influenced by this discussion I had with Zorita and its later blog post.

    Cheers.
  3. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Well it appears the Beach House article on Cato and the Daily Caller has changed to exclude any specifics to this research. I thought I'd gone nuts, but the original was reposted elsewhere, so I was able to recheck my sanity.
  4. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    angusmac, you're misrepresenting the facts. Maybe Hansen described Scenario A as the result if we continued with "business as usual", but the fact is that the radiative forcing has been nowhere near that in Scenario A. Hansen is not in the business of predicting how GHG emissions will change; he's in the business of projecting, for a given GHG change, how much temperatures will change. That's what the adjusted Scenario B represents.

    Your claimed "dramatic drop in temperature projections" is purely imagined. That's why it's not mentioned.
  5. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Well, Eli Rabett has something of potential interest to this issue. But I'm sure the "skeptics" will again bend over backwards to try to defend this too.

    "Now look at the legend, notice that the red line is a ten year trailing average. Now, some, not Eli to be sure, might conjecture, and this is only a conjecture, that while using a trailing average is a wonderfully fine thing if Some Bunny, not Chip Knappenberger to be sure, is looking at smoothing the data in the interior of the record, but, of course, Chip understands that if you are comparing the end of the data record to the middle, this, well, underweighs the end. The rising incline of the trailing average at the end is depressed with respect to the data. Some Bunny, of course, could change to a five year moving average, which would make the last point not the average of the melt between 1999 and 2009 but the average between 2003 and 2009, a significantly larger average melt. Of course, this effect would be even clearer if someone, not Eli to be sure, knew that the melt in 2010 was a record."
  6. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Tom Curtis at 10:47 AM on 6 May, 2011

    In response to:

    1) Ice melt extent in the 2000's would be statistically indistinguishable from that for the highest period in the reconstruction. (This is true as it stands, but apparently was not worthy of comment in the paper.)


    Huh? I thought what everyone is upset about is that the paper basically says the melt during the 2000s is statistically indistinguishable from that in the prior reconstruction. But if you agree that the ice melt extent in the 2000s is statistically indistinguishable from that of the highest period in the reconstruction, I'm sure Michaels will be happy to report that Tom Curtis decrees this is so.

    2) The two years with the greatest ice melt extent would have occurred in the last ten years, and five of the most extensive melts would have occurred in the last ten years. In no other ten year period would more than two of the ten most extensive melts have occurred.

    3) The year with the highest melt extent would be the most recent year, with just eleven reconstructed values having 95% confidence intervals encompassing that value.

    So what if the record happens to fall in the most recent year? This summer will probably have a lower melt than 2010. I'm unaware of any scientific rule that says papers discussing reconstructions can't be published if they happen to end with a melt index that is not a record. Moreover, if 2010 is a record it will still be a record until it is broken. The fact that it might not have been broken again in 2011 won't prevent anyone from pointing out that a record was broken in 2010.(And of course, if 2010 is not a record, it's not a record. )


    On the claim that two years with the greatest ice melt would have occurred in the past ten years: How are you concluding this with any certainty?

    It's true that the satellite measurements would indicate that the melts for 2007 and 2009 are greater than all reconstructed melt indices. But the reconstructed melt indices have uncertainties associated with them. Based on the observation that the 2007 melt index falls outside the ±95% uncertainty intervals for the reconstruction, there is at most a 60% probability that the 2007 melt is greater than all melts during the previous period. (The probability is actually lower, because my quick calculation assumed the 2007 melt index exactly equaled the upper 95% confidence value in all 20 cases, giving 0.975^20 ≈ 0.60 as the probability that all previous 20 are lower. This is an upper bound, because in each individual case the probability that a particular value from the previous period is lower is less than 0.975.)

    So with respect to 2007 -- you can't know for sure its melt exceeded those during the 30s. That certainly makes your claim that two years with the greatest melt occurred in the past 10 years tenuous. (I suspect if we got the more detailed data from Chip and did the full analysis, we'd find the probability your claim is true is less than 1/2.)

    But even your claim that one year with the greatest melt occurred in the past 10 years is tenuous. Assuming your estimate that 2010 falls within the uncertainty intervals for 11 years, there is at least a 24% probability that the 2010 value is not a record. Very few people would consider a 24% chance of error small enough to say with confidence that even 2010 was a record. So you can't even say 2010 must be a record. (Though of course it might be. If we got the data from Chip, we could do a better calculation, and the probability that it's a record may be even lower.)
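The back-of-envelope probabilities above can be reproduced in a few lines, under the same simplifying assumptions stated in the comment (independence, and the observed index sitting exactly at the upper 95% confidence bound of each reconstructed value):

```python
# Sketch of the upper-bound record-probability argument above.
# Assumptions (as in the comment): the reconstructed years are treated
# as independent, and the observed melt index sits exactly at the upper
# 95% confidence bound (97.5th percentile) of each of n reconstructed
# values, so each year is lower with probability 0.975.

def p_tops_all(n, p_single=0.975):
    """Upper bound on the probability the observed value exceeds all n."""
    return p_single ** n

# 2007 vs 20 reconstructed years: at most ~60% chance it exceeds them all.
p_2007 = p_tops_all(20)

# 2010 falling inside 11 intervals: at least ~24% chance it is not a record.
p_2010_not_record = 1 - p_tops_all(11)

print(round(p_2007, 2), round(p_2010_not_record, 2))  # 0.6 0.24
```

These are bounds, not best estimates; tighter numbers would need the actual per-year probabilities from the reconstruction.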

    If a reviewer had been thoughtful, they might have asked FKM to do this calculation to firm up the numbers -- but given the traditions for making statistical calls, no one looking at the probabilities would decree that a record can be called with only 11, even making the simplifying assumption I made above.

    But the reviewers didn't do that. The consequence is that the text in FKM doesn't discuss this at all, and text that might have needed more extensive modification -- showing that the probability of a record is "x%" -- isn't included in the paper.

    As the abstract stands in the published paper, nothing beyond changing "11" to "20" and "2007" to "2010" would need to be modified. (So, yeah, assuming your '11' is correct, I missed one edit.) As a practical matter, the abstract only needs a "tweak", and you would have been no happier with it.

    Note: When the observed index falls inside fewer than 5 of the ±95% uncertainty intervals, a more refined calculation will be needed to figure out if we can 'call' a record. At some point -- I'm SWAGing it's when the observed melt index falls inside fewer than 2 of the ±95% uncertainty intervals -- it will be impossible to say that there is any realistic probability that the melt falls within the range experienced during the 20s-40s.

    I suspect this will happen during the next El Niño. Since FKM's reconstruction is now published, you'll be able to do this using the FKM reconstruction, and they will need to admit it. (I don't think this notion has sunk in.)

    4) The relatively low ice melt extents in the early satellite period are due in large part to major tropical volcanic eruptions, eruptions which were absent in the 1930s. In the absence of these eruptions, the longest period of extensive melting would be that straddling the end of the 20th century, not that in the middle. Clearly natural forcings have favored extensive ice melt in the mid 20th century, while acting against it towards the end. (True also in 2009, and surely worth a mention in the paper.)

    First: If this is intended to engage or rebut anything I wrote, it's a bit misplaced. I wrote about changes to the existing manuscript that would be required if 2010 was incorporated.

    Second: I don't disagree with your explanation of why the data looks as they do. Given the nature of this paper, I even think the paper would be stronger with this sort of discussion inserted.

    However, the reviewer (Box) who made a rather vague suggestion to this effect, while simultaneously requesting inclusion of data that was not available (and is still unavailable more than 8 months later), bowed out because that not-yet-available data were not incorporated. Evidently, whatever happened, neither the editors, the other reviewers, nor the authors thought to incorporate this sort of thing.

    It's worth noting that not every paper showing a time series or reconstruction discusses why the time series looks the way it does. For example, "Surface mass-balance changes of the Greenland ice sheet since 1866" (L.M. Wake, P. Huybrechts, J.E. Box, E. Hanna, I. Janssens and G.A. Milne) doesn't discuss volcanism when reporting:
    "Higher surface runoff rates similar to those of the last decade were also present in an earlier warm period in the 1920s and 1930s and apparently did not lead to a strong feedback cycle through surface lowering and increased ice discharge. Judging by the volume loss in these periods, we can interpret that the current climate of Greenland is not causing any exceptional changes in the ice sheet."


    So, while I agree that both the Wake paper and the FKM paper -- both decreeing that this century appears more or less similar to the previous melt period -- might have benefited from the inclusion of a few sentences mentioning causal factors for the previous high and low melt periods, neither did. It seems the editors' and reviewers' standards are consistent in this regard.

    A paper drawing these conclusions, IMO, would be substantially different from the paper actually produced. Certainly it would have been very hard for Michaels to place the spin on it that he has been doing.

    As I understand it, his "spin" amounts to your conclusion (1) above which is "Ice melt extent in the 2000's would be statistically indistinguishable from that for the highest period in the reconstruction. (This is true as it stands, but apparently was not worthy of comment in the paper.)"

    Since other conclusions you make are unsupportable based on the data, your suggestion that including them would prevent him from "spinning" as he is seems a bit odd. It would be rather silly to suggest that FKM are required to include incorrect or tenuous conclusions to avoid whatever you, Tom, happen to consider "spin".

    Other issues that puzzle me in your comment:
    Tedesco shows mass loss, while FK&M show melt extent.

    The graph I inserted is figure 1c from Tedesco, which shows "standardized melting index anomaly". The caption reads "(c) standardized melting index (the number of melting days times area subject to melting) for 2010 from passive microwave data over the whole ice sheet and for different elevation bands." Tedesco also shows a graph of SMB (surface mass balance), labeled figure 3a. Since FKM use melt extent, incorporating data for 2010 would involve 2010 melt extent data, not SMB data.


    Second, this analysis has been done graphically, and has all the consequent uncertainties (ie, the numbers might be out by one or two in either direction).

    Of course it's done graphically and your numbers might be out by one or two... I believe I said I was discussing an estimate, and I assume you are too. We could add: done in blog comments, not even at the top of a blog post where many people could see it, using Tedesco data as a proxy for the data that would really be used by FKM, and so on.

    I've suggested before that this will be worth doing when the melt data used by FKM do become available.

    I see in your later comment you speculate that if only FKM had used a different proxy to reconstruct, they would get different answers, and you speculate as to what those results would be based on eyeballing graphs. OK... but if their choice of proxy was tenuous, or the reviewers had wanted to see sensitivity to the choice of proxy, then that was the reviewers' call. They didn't make that call.

    Also: the fact that choice of proxy makes a difference widens the true uncertainty intervals on the reconstruction relative to those shown in FKM. So it would take an even longer time to decree that we feel confident the current melt index is a record. When the melt index data are available, would you recommend doing the analysis to determine how much to widen the uncertainty intervals on the reconstruction? It seems to me that may well be justified.
  7. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Dikran
    steven mosher@152 If you want papers published with turn key code and data, then I would agree that would be great, but who is going to provide the (quite considerable) additional funding to support that? Are you going to fund that just for climatology, or for all science? Sadly it is just unrealistic.

    I've often told Steve the same thing.

    I do however agree with the principle that authors should grant access to key data on request. So, for example, if someone asked Chip for the data points underlying FKM figure 2 or asked Tedesco for the data points underlying figures 1(c) in his paper, I think both authors should grant that sort of request and fairly promptly. Ideally there would be some sort of formal archive for this sort of thing possibly funded by NSF/DOE office of science or something. People whose projects were funded would be required to either deposit it there or say where the data are deposited.

    But turnkey code? With all data in a nice neat little folder? I think Mosher is insisting on something that goes beyond what is practical.
  8. Ken Lambert at 23:56 PM on 6 May 2011
    Trenberth can't account for the lack of warming
    "So Ken, do I understand by "missing heat", that you mean that you will accept AGW if better measurements can close the energy budget, but in the meantime you will choose to believe that "missing heat" means that energy imbalance isnt real and we are not warming?"

    The energy imbalance is as real as the reality of our measurement.

    I have never argued that we have not had warming (0.75 degC surface since AD1750). The energy absorbed to produce that temperature increase is in the past.

    If surface temperature rise is flattening and heat increase in the oceans is also flattening with better measurement then a reasonable conclusion is that heat imbalance is reducing. The missing heat might stay missing because it was never there.
  9. Ken Lambert at 23:44 PM on 6 May 2011
    A Flanner in the Works for Snow and Ice
    muoncounter

    I assume that your attempt to inject a little humour means that you can't really disagree with my parting comment at #149 MC? Tom seems to have vacated the field for other threads.
  10. Ken Lambert at 23:37 PM on 6 May 2011
    Brisbane book launch of 'Climate Change Denial'
    Why did you not invite me, John? A good chance for us to catch up and see if I could get an autographed copy.
    Response: Sorry Ken, there were a few Brisbanites I forgot to invite (I've gotten some cross emails).
  11. Ken Lambert at 23:33 PM on 6 May 2011
    Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Marcus #84, #85

    "It took fossil fuels around 60 years, & massive State support, to reach the relatively cheap prices they are today"

    That type of comment betrays a lack of understanding of the history of electricity generation in Australia and most of the first world.

    What technologies were available 100 years ago for large-scale central generation? Answer - fossil fuel and hydro. In the absence of hydro resources and the availability of relatively cheap coal, the choice was one of necessity - coal.

    Such large investments by State Utilities (ie owned by the taxpayer) with 25-40 year lives were bound to remain a mainstay of our generation until nuclear arrived after WW2.

    You all know what has happened to nuclear. Oil gas coal hydro and nuclear are there because of economics. If there were better cheaper technologies, they would take over.

    When you talk of PV Solar being economical in boutique applications now - that is not new. PV Solar has been the best choice for powering remote-area small-scale applications for many years.

    I was playing around with Solar brine ponds in 1984 - the technology looked simple and effective but did not fly for cost reasons, chiefly maintenance in a very corrosive environment.
  12. Why 450 ppm is not a safe target
    Jesús Rosino at 17:15 PM on 6 May, 2011

    When I said IPCC projections exclude "glacier and ice sheet mechanics" I was referring to the IPCC caveat "Model-based range excluding future rapid dynamical changes in ice flow".

    As I understand it (feel free to correct me), this is the main cause of the IPCC underestimate.

    I agree that, at least at this point, the 5m SLR projection for 2100 is a big outlier. OTOH I don't know how far a consensus goes here, but I thought the 1+ m was already considered to be quite plausible under the BAU scenario.
  13. Bob Lacatena at 21:54 PM on 6 May 2011
    Why 450 ppm is not a safe target
    33, Jesús,

    I was expecting a link to a RealClimate blog post. You did not provide a link, and a Google search shows no hits anywhere for the text that you have posted.

    A search for "sea level" rise at realclimate.org, however, provides any number of reasoned, scientific posts at RC explaining upper and lower boundaries for sea level rise, and the reasons behind each. In particular, reasonable estimates recognized by RC fall between 0.5 and 2.0 meters.

    From RC 9/4/2008:
    We stress that no-one (and we mean no-one) has published an informed estimate of more than 2 meters of sea level rise by 2100.

    From RC 11/15/2010:
    ...and Gillis shows that most of the experts now assume a considerably higher rise until 2100 than IPCC: about one meter, potentially even more.


    I see nothing remotely close to a comment at RC suggesting that a 5m rise by 2100 is at all likely, and must insist that if you cannot produce a link to such a statement, you must openly and loudly withdraw the statement that "RealClimate silence about this ... is somewhat telling".

    From the Hansen and Sato 2011 paper you posted:
    Alley (2010) reviewed projections of sea level rise by 2100, showing several clustered around 1 m and one outlier at 5 m, all of which he approximated as linear. The 5 m estimate is what Hansen (2007) suggested was possible, given the assumption of a typical IPCC's BAU climate forcing scenario.
    Also:
    However, the fundamental issue is linearity versus non-linearity. Hansen (2005, 2007) argues that amplifying feedbacks make ice sheet disintegration necessarily highly non-linear. In a non-linear problem, the most relevant number for projecting sea level rise is the doubling time for the rate of mass loss. Hansen (2007) suggested that a 10-year doubling time was plausible, pointing out that such a doubling time from a base of 1 mm per year ice sheet contribution to sea level in the decade 2005-2015 would lead to a cumulative 5 m sea level rise by 2095.

    Non-linear ice sheet disintegration can be slowed by negative feedbacks. Pfeffer et al. (2008) argue that kinematic constraints make sea level rise of more than 2 m this century physically untenable, and they contend that such a magnitude could occur only if all variables quickly accelerate to extremely high limits. They conclude that more plausible but still accelerated conditions could lead to sea level rise of 80 cm by 2100.
    You will note that Hansen is not, with this, projecting a 5m rise by 2095. He's doing math. He's saying that if ice sheet disintegration is non-linear, and if the doubling time for the rate of mass loss is the best predictive factor, and if that doubling time is 10 years (which is plausible, but in no way predicted), then the math says that a doubling of 1 mm/year from 2005-2015 would arrive at a cumulative 5 m sea level rise by 2095.

    He never says this is going to happen. He never suggests that it will. The paper in many places is actually a rather dispassionate discussion of all of the various estimates of others (as well as Hansen 2007) on sea level rise.

    From Hansen's 2007 paper, where the proposal was presented (given that Hansen and Sato 2011 is merely summarizing the current state of the literature):
    Of course I cannot prove that my choice of a ten-year doubling time for nonlinear response is accurate, but I am confident that it provides a far better estimate than a linear response for the ice sheet component of sea level rise under BAU forcing.
    His point is clearly that linear estimates are overly simplistic, and that a non-linear mechanism could produce a dangerously higher value. The point here is not the value. It's not a prediction. It's a demonstration of the importance of not taking a purely linear approach to a non-linear problem, much in the way that exponential growth is taught to school children by pointing out that if they start with a penny, and double it every day, by the end of a month they have over 10 million dollars.
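Both doubling arguments are easy to check. A minimal sketch, assuming (as Hansen does purely for illustration) a 1 mm/yr ice sheet contribution over 2005-2015 whose rate doubles each decade, alongside the penny analogy:

```python
# Hansen's arithmetic: 1 mm/yr in 2005-2015, with the rate doubling
# every decade, summed over the nine decades to 2095.
rate_mm_per_yr = 1.0
total_mm = 0.0
for _ in range(9):          # 2005-2015 through 2085-2095
    total_mm += rate_mm_per_yr * 10
    rate_mm_per_yr *= 2
print(total_mm / 1000.0)    # 5.11 metres -- roughly the 5 m figure

# The penny analogy: a penny doubled every day for 30 days.
cents = 1
for _ in range(30):
    cents *= 2
print(cents / 100.0)        # 10737418.24 dollars
```

The point of both calculations is the same: with a fixed doubling time, almost all of the total arrives in the last couple of doublings, which is why a linear extrapolation of early rates is so misleading.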

    From the same paper, a few sentences later:
    The nonlinearity of the ice sheet problem makes it impossible to accurately predict the sea level change on a specific date. However, as a physicist, I find it almost inconceivable that BAU climate change would not yield a sea level change of the order of meters on the century timescale.

    Suggestion: If something bothers you, take the time to actually read the source material, in the proper context and perspective, and pay attention to the words, not your emotional reaction to the words.

    And never, ever base anything on a "blog post."
    Moderator Response: [Dikran Marsupial] s/2010/2100/
  14. CBDunkerson at 21:40 PM on 6 May 2011
    10 Indicators of a Human Fingerprint on Climate Change
    neil, night time temperatures increasing faster than day time temperatures is indicative of enhanced greenhouse warming because of the way that greenhouse warming operates. That is, increasing the concentration of greenhouse gases in the atmosphere (which is being caused by human industry) decreases the rate at which the planet cools.

    Consider a 100 degree day in Miami vs a 100 degree day in the Arizona desert. Once the Sun goes down the temperature starts dropping... but Miami is very humid (lots of atmospheric water vapor, a greenhouse gas) and can actually stay warm all night. The desert on the other hand gets very cold very fast because it has almost no water vapor and thus the daytime heat escapes quickly.

    Thus, if we increase the level of atmospheric carbon dioxide and other greenhouse gases which have much lower geographic variance than water vapor we are decreasing the rate of night-time cooling for the entire planet... nights stay warmer longer and thus the average night time temperature increases faster than the day time temperature.

    As to ocean heat content... a strong indication of warming, but not of anthropogenic causes. Any warming forcing would result in most of the energy going into the oceans.

    For instance, if the observed global warming were being caused by increased solar radiation we would expect to see days warming faster than nights (because there is no sunlight at night) and ocean heat content rising as most of the solar forcing went into heating the upper ocean. Ergo, we'd have the ocean warming either way, but the day vs night warming speed would be different.
    Response:

    [DB] Additionally, what was predicted by models and then subsequently confirmed later by observational studies is summarized here.  For example:

    Nights warm more than days: predicted by Arrhenius 1896; confirmed by Dai et al. 1999 and Sherwood et al. 2005.
  15. alan_marshall at 19:24 PM on 6 May 2011
    Brisbane book launch of 'Climate Change Denial'
    I would like to see copies of your book on the shelves of the Parliament Shop in Parliament House, Canberra. Maybe you will find an MP to suggest it. They already stock "Requiem for a Species" by Clive Hamilton. Located in the main Foyer, the Shop is open 7 days a week
    9:30 am - 5:00 pm (on sitting days extended to 5:45 pm).
    Ph: (02) 6277 5050 Fax: (02) 6277 5068
    Response: Haydn and I will actually be heading down to Canberra on May 16 and will be delivering a copy of the book to every federal MP in Australia (more news on this shortly). I'll check whether NewSouth Books are distributing to Parliament shop.
  16. CO2 is plant food? If only it were so simple
    It seems the evidence indicates that climate change is already negatively impacting global cereal crop yields over the period from 1980 to the present.

    The paper published in Science yesterday is behind a paywall, but Scientific American has a summary here.
  17. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Angusmac, I have no problem acknowledging that Hansen's 1988 predictions were out because the primitive model had a sensitivity that was too high. However, given the uncertainties in determining climate sensitivity, I think it was a pretty good effort for its time. Both history and better models show a convergence on a sensitivity of 3°C.
  18. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Scaddenp @68, you misrepresent me by stating that, "You [I] want real airplanes to be designed from a high school flight model, because it's simple?"

    Real engineers and scientists use simple models every day to check the validity of their more complex models. They usually use simple physics to check if small changes in their assumptions lead to large changes in outcomes.

    Dana in this blog used a simple spreadsheet to adjust Hansen's Scenario B without the need to re-run the whole computer model.
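As a hypothetical illustration only (this is not Dana's actual spreadsheet, and the numbers are made up, not values from Hansen 1988), such an adjustment can be as simple as scaling the scenario anomaly by the ratio of realized to assumed radiative forcing, assuming a roughly linear transient response:

```python
# Hypothetical sketch: rescale a scenario's projected anomaly by the
# ratio of observed to scenario forcing. Function name and all numbers
# are illustrative assumptions, not from any published analysis.
def adjust_anomaly(scenario_anomaly_c, scenario_forcing_wm2, observed_forcing_wm2):
    """Rescale a projected anomaly for the forcing actually realized."""
    return scenario_anomaly_c * (observed_forcing_wm2 / scenario_forcing_wm2)

# e.g. a 1.1 degC projection made under an assumed 1.5 W/m^2 of forcing,
# when only 1.0 W/m^2 materialized:
print(round(adjust_anomaly(1.1, 1.5, 1.0), 2))  # 0.73
```

This is exactly the kind of simple-physics sanity check the comment describes: no re-run of the full model, just a first-order correction for the changed input.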

  19. Jesús Rosino at 17:15 PM on 6 May 2011
    Why 450 ppm is not a safe target
    Alexandre, Rahmstorf & Vermeer base their projections on the empirical correlation between 20th Century SLR and temperature. However, 20th Century SLR doesn't have a big contribution from ice sheets (thermal expansion plays a big role), so I don't think it can reflect future contribution from ice sheets. I think that a warmer deep ocean would be a more likely explanation of the IPCC underestimation (and this would hopefully help to close the global heat budget).

    On the other hand, the IPCC projections do include a contribution from increased ice flow from Greenland and Antarctica at the rates observed for 1993-2003. The point by Rahmstorf and Vermeer being that it may accelerate.

    My comment was motivated by Martin #13:
    "[...] Until now, I was under the impression that predictions for a sea level rise by the end of this century were between 0.5 and 2m. That Hansen should predict 5m and so soon (by 2095!! not 2195 or 3095) surprises me.

    Do you know what the reaction to Hansen is among climate experts (the kind that publish peer reviewed papers on climate change)? Is this a consensus view?"

    I'm trying to answer that by saying that this projection by Hansen & Sato cannot be considered anything close to a scientific consensus (besides, it's not peer reviewed and it's quite different from what peer-reviewed papers say), and to highlight that even the latest figures we have recently become used to (+1 meter) must be taken with some caution, as we always look for physical explanations rather than empirical correlations. Yes, I do know that current data show that IPCC projections are underestimating SLR, but between 19-59 cm (IPCC) and more than 1 meter there's a big gap. SLR is, of course, a serious threat with any of those numbers.


    Sphaerica, regarding the citation, I'm trusting this blog post:
    "Hansen and Sato (2011) using paleoclimate data rather than models of recent and expected climate change warn that “goals of limiting human made warming to 2°C and CO2 to 450 ppm are prescriptions for disaster”.

    They predict that pursuit of those goals will result in an average global temperature exceeding that of the Eemian, producing a decadal doubling of the rate of polar ice loss and resulting in sea level rise of up to 5m by the end of this century."
  20. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Dana@67, I am not arguing for higher sensitivity.

    I just find it ironic that SkS can praise the accuracy of Hansen's 1988 projections without mentioning the dramatic drop in temperature projections. Here is a timeline derived from SkS blogs for Hansen's 2019 temperature anomaly:

    • Hansen (1988): Scenario A is "business as usual." Anomaly = 1.57°C.
    • Hansen (2006): Scenario B is "most plausible." Anomaly = 1.10°C.
    • Dana (2011) in this blog: Adjusted Scenario B "matches up very well with the observed temperature increase." Dana is silent on the anomaly but the blog's chart indicates that Anomaly = 0.69°C.

    In summary, SkS have presented many blogs praising the accuracy of Hansen (1988) but have neglected to mention that Hansen's original projection for the 2019 temperature anomaly has plummeted from 1.57°C in 1988 to 0.69°C in Dana (2011). Why is this not mentioned?

    Dana, I find it difficult to believe that these plummeting projections have escaped your attention. Do you have a problem with declaring that your adjusted Scenario B is only slightly above Hansen's zero-emissions Scenario C?

  21. Dikran Marsupial at 16:58 PM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    steven mosher@152 If you want papers published with turnkey code and data, then I would agree that would be great, but who is going to provide the (quite considerable) additional funding to support it? Are you going to fund that just for climatology, or for all science? Sadly, it is just unrealistic.

    Also, the idea that this would allow the reviewer to see if additional data would change the result would only apply to a limited range of studies. I am working on a project at the moment that has several processor-centuries of computation behind it!
  22. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Moderator@66, your Eyecrometer sounds like a very useful device. Can I buy one on Amazon?

    You already have the post-2000 figures. Scenarios B and C both show a warming trend of 0.26 °C/dec for the 1958-2000 period; however, they diverge post-2000. I show the trends for 2000-2011 here, and I summarise them below:

    • Scenario B = 0.38 °C/decade
    • Scenario C = 0.12 °C/decade
    • Real-world (GISS LOTI) = 0.15 °C/decade

    There is a definite dogleg in real world temperatures post-2000.

    I request that you publish the figures for the adjusted Scenario B so that I can incorporate them in my models.
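    For readers wanting to reproduce trend figures like those quoted above, here is a minimal sketch of an ordinary-least-squares trend converted to °C/decade. The input series below is synthetic, not GISS LOTI data.

```python
# Minimal sketch: ordinary least-squares slope of annual anomalies,
# converted from degrees C per year to degrees C per decade.
# The synthetic series below is for illustration only, not GISS LOTI.

def trend_per_decade(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    var = sum((x - mean_x) ** 2 for x in years)
    return 10.0 * cov / var  # slope in degrees C per decade

years = list(range(2000, 2012))
anoms = [0.4 + 0.015 * (y - 2000) for y in years]  # synthetic 0.15 C/decade series
print(round(trend_per_decade(years, anoms), 3))
```

    Note that on a window this short the uncertainty on the slope is large, which is part of the dispute in this thread.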

    Response:

    [DB] "There is a definite dogleg in real world temperatures post-2000."

    The warmest decade in the instrumental record is a "dogleg"?  Where's the dogleg:

    [Figure: temperature records showing the warmest decade (Source)]

    Well, it's not evident in any of the temperature records.  How about since 1975 (removing the cyclical noise & filtering out volcanic effects):

    [Figure: warming since 1975 with cyclical noise and volcanic effects removed (Source)]

    No dogleg there.  How about the warming rate estimated from each starting year (2 sigma error):

    [Figure: warming rate estimated from each starting year, CRU data, 2-sigma errors (Source)]

    The warming since 2000 is significant.  Still no dogleg.

    Your focus on statistically insignificant timescales is misplaced.  While a world-class time-series analyst (like Tamino) can sometimes narrow a window of significance down to decadal scales, climate science typically uses 30 years or more of data.

  23. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "You mean they used to *burn* this stuff. Burn? Seriously? Talk about primitive!"

    Yep, they'll say that the way people today say "they used to think the Earth was *flat*? Seriously?"
  24. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Tom " ...actually I think it will be a very easy sell to our grandchildren simply because they will be able to see the consequences of our inaction."

    I think it will be more than that. Our grandchildren, and their descendants, will have quite a lot of "What the ..!" responses.

    In a society which values carbon fibre products as a necessity (along with other products we've not yet developed), there'll be a lot of head scratching along the lines of "You mean they used to *burn* this stuff. Burn? Seriously? Talk about primitive!" (Or brainless, or silly, or stupid, or choose your own word.)
  25. steven mosher at 13:18 PM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Tom,

    I have no issue with your characterization. My point is rather simple. As you probably know, I've advocated for open science from day 1. I think papers should be published with turnkey code and data. If that were the case, any reviewer or subsequent reader could see for themselves whether newer data changed the answer. And that would be that, whatever the answer is. Instead we have a situation where, on all sides, people make inferences about the motives of authors, reviewers, and editors.

    I'm suggesting that one way past some of this is to push for reproducible results.

    I don't have any issue calling out Michaels. When he and Santer testified together, I thought that Santer cleaned his clock. And I said so.
  26. 10 Indicators of a Human Fingerprint on Climate Change
    Also, a suggestion:

    I would suggest adding the increase in Ocean Heat Content Anomaly (OHCA). We expect about 90% of the anthropogenic forcing to go into heating the upper ocean, and this is indeed what is observed; this close link between the TOA radiative imbalance and the OHCA increase is, I believe, some of the strongest evidence of anthropogenic warming. There are good recent updates on ocean heat content:

    Lyman et al. (2010)

    And there are other papers connecting the TOA radiation with the oceanic warming: you have some Trenberth ones, but Levitus principally talks about this:

    Levitus et al. (2005)

    Levitus et al. (2009)

    Apologies if this was already obvious and listed elsewhere.
  27. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "[T]heir democratically elected governments back to 1900 who started the electricity generation industries with (except for hydro), the only feasible, abundant and cheap fuel available - fossil fuel - chiefly coal."

    What's your point here, Ken? I never argued that fossil fuels should *never* have received government support in their infancy. I was merely pointing out how fossil fuels *continue* to enjoy State subsidies, in spite of their apparent maturity. Yet mention *any* kind of government funding for renewable energy - especially the kinds which cut into the profits of the big energy suppliers, like rooftop solar - and the usual suspects cry foul. I call that *hypocrisy*!
  28. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "Marcus, I will be the first to buy your PV Solar panel when it can re-produce itself without the help of relatively cheap fossil fuels."

    What a completely bogus argument. It took fossil fuels around 60 years, and massive State support, to reach the relatively cheap prices they have today - and even then only with ongoing, often well hidden, government "subsidies". Most renewable energy technologies are already approaching parity with fossil fuels, in about half the time and with far less State support than fossil fuels received - and continue to receive.
    Also, as scaddenp rightly points out, future PVs probably *will* increasingly be made without the help of fossil fuels, as relatively cheap *alternative* energy sources - like bio-diesel, solar-thermal, geothermal, bio-gas, tidal and wind power - become the mainstay of the manufacturing industry (as is already the case in California, for instance). So again, Ken, your argument really is quite weak, and getting weaker with each new posting.
  29. 10 Indicators of a Human Fingerprint on Climate Change
    Hi John,

    Thanks for your useful site.

    About #7: The decreasing diurnal temperature range (DTR) is listed as a fingerprint of anthropogenic warming, with references. However, you do not say what the basis is for this claim: why would the DTR be expected to decrease under greenhouse warming rather than solar warming or some other forcing? Initially it seems intuitive, because obviously the sun shines in the day and not at night, so increasing night-time temperatures "should" be due to something else. However, upon more thought I started doubting that this is the case.

    Indeed, going back over the references you gave, plus further back into, for example, the Stone and Weaver (2002, 2003) papers, and even the Easterling papers, I don't see anywhere a definitive statement that a reduced DTR is indeed a fingerprint of additional greenhouse forcing. There is some mention of this in the introductions but never anything further.

    It seems to come down to a matter of clouds and soil moisture in the Stone and Weaver papers, and in Braganza (2004). All of them say it is a good index of recent climate change, while I don't see any of them actually saying it provides evidence of anthropogenic warming.
  30. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @81, actually I think it will be a very easy sell to our grandchildren, simply because they will be able to see the consequences of our inaction. There will undoubtedly be climate change deniers in fifty years' time, but I do not expect them to be more numerous, nor more influential, than modern day geocentrists. The problem is, if we don't act effectively now, the cost to our grandchildren of acting then will be much steeper. They will also have lost much that is irreplaceable, including major ecosystems such as the Amazon and the Great Barrier Reef.
  31. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    KL - "I will be the first to buy your PV Solar panel when it can re-produce itself without the help of relatively cheap fossil fuels."

    Yeah! Good to know we have a customer for a project under consideration...
  32. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    I can't find anything more recent published on commitments that really adds to Hare and Meinshausen, though results from the AR5 full carbon cycle will be interesting. However, I think zero emissions is interesting only from a theoretical point of view. The best we can hope for is to see what commitment we have with a cap, though progress towards that isn't really happening either.

    If getting reductions is a tough policy sell now, imagine how difficult it will be for our grandchildren, selling really tough emission controls while seeing NO results (in terms of slower climate change) from those efforts for decades. I don't think humans work well on those time scales.
  33. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @79, thanks for the link. From it I find the following graph by Hare and Meinshausen, whose zero-emissions trajectory is much more like what I would expect. Most of the initial rise in temperatures is, of course, due to the loss of aerosols, but some rise would still be expected without that.



    However, given the example of the Eemian, it is plausible that both of these constant-forcing scenarios are wildly optimistic, and that constant forcing would result in a two-plus degrees C increase.
  34. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    FK&M 2011 base their reconstruction on a multiple regression of the Greenland summer temperatures taken from four stations and the Winter North Atlantic Oscillation. Based on this, they show melt extents in the 1930s to be significantly more extensive than those in the 1990s. However, as shown below, Greenland summer temperatures (red line, first graph) in the 1990s were equivalent to those in the 1930s, while the winter NAO index (second graph) was only a little stronger (more positive) in the 1930s. Given that, we would expect only a slightly stronger Ice Melt Extent Index for the 1930s than for the 1990s. This is particularly the case given that the Summer Temperature has 2.6 times the effect of the Winter NAO on Ice Melt Extent (FK&M 2011, 4th page of 7).





    This suggests, at a minimum, that the FK&M reconstruction is sensitive to the choice of surface stations used in constructing their index. Given that, there is a significant probability that choosing a larger number of surface stations would show a significantly stronger end-of-20th-century melt relative to the 1930s.
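    To make the weighting argument concrete, here is an illustrative sketch of the kind of two-predictor index discussed above. The coefficients are hypothetical, chosen only to respect the 2.6:1 temperature-to-NAO weighting mentioned; they are not the fitted values from FK&M 2011, and the inputs are invented.

```python
# Illustrative two-predictor melt-extent index: a weighted sum of summer
# temperature anomaly and winter NAO index. Coefficients are hypothetical,
# chosen only so temperature carries 2.6 times the weight of the NAO term.

def melt_index(summer_temp_anom, winter_nao, b_temp=2.6, b_nao=1.0):
    return b_temp * summer_temp_anom + b_nao * winter_nao

# Equal summer temperatures, with a slightly more positive NAO in one case:
print(round(melt_index(1.0, 0.5), 2))  # "1930s-like" inputs
print(round(melt_index(1.0, 0.3), 2))  # "1990s-like" inputs
```

    With equal temperature inputs, the reconstructed indices differ only through the smaller NAO term, which is the point being made: similar temperatures plus a slightly stronger NAO should give only a slightly stronger index.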
  35. muoncounter at 11:12 AM on 6 May 2011
    Hurricanes aren't linked to global warming
    Interesting new blog post by Michael Tobis:

    Is there a trend in Atlantic hurricanes?

    Some people are still stuck in hypothesis testing mode, which I think is starting to get a little bit crazy. We can only test hypotheses that way. We cannot use that approach to establish that changing the radiative properties of the atmosphere is safe.

    My claim is that we need to get our thinking out of nudge-world. This is not a nudge. ... I don't understand why people don't anticipate some ringing in a system that gets kicked this hard.
  36. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Lucia @140 and Mosher @147 suggest that using 2010 data would simply require substituting 2010 for 2007 in the sentence,

    "We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s. The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961."


    Close scrutiny of the graph of the Tedesco figures (Figure 2 above) shows that the difference between the 2007 and 2010 mass losses is only 1/6th of the difference between 2005 and 2007. Assuming a difference of similar magnitude in the Ice Melt Extent Index, one difference the inclusion of 2010 would make is that the number of years prior to satellite observations in which 2010 lies within the 95% confidence intervals reduces to approximately eleven. The number of reconstructed temperatures lying within one RMSEv of the 2010 value would probably also fall from two to either one or zero. The trailing 10-year moving average for 2010 would also rise by about 0.3 points, to almost match that in the 1930s.

    Given these changes:

    1) Ice melt extent in the 2000's would be statistically indistinguishable from that for the highest period in the reconstruction. (This is true as it stands, but apparently was not worthy of comment in the paper.)

    2) The two years with the greatest ice melt extent would have occurred in the last ten years, and five of the ten most extensive melts would have occurred in the last ten years. In no other ten-year period would more than two of the ten most extensive melts have occurred.

    3) The year with the highest melt extent would be the most recent year, with just eleven reconstructed values having 95% confidence intervals encompassing that value.

    4) The relatively low ice melt extents in the early satellite period are due in large part to major tropical volcanic eruptions, eruptions which were absent in the 1930s. In the absence of these eruptions, the longest period of extensive melting would be that straddling the end of the 20th century, not that in the middle. Clearly natural forcings favoured extensive ice melt in the mid 20th century, while acting against it towards the end. (True also in 2009, and surely worth a mention in the paper.)

    A paper drawing these conclusions, IMO, would be substantially different from the paper actually produced. Certainly it would have been very hard for Michaels to place the spin on it that he has been doing.

    Of course, there are certain caveats to this analysis. Firstly (and most importantly), Tedesco shows mass loss, while FK&M show melt extent. The two are not equivalent, and it is possible to have (as in 2005) very extensive melt areas with restricted mass loss due to greater precipitation. Given comments by Box, it is quite possible that that has indeed happened in 2010. If that were the case, it would require an even more extensive revision of FK&M 2011 to bring it up to 2010, data-wise. Second, this analysis has been done graphically, and has all the consequent uncertainties (i.e., the numbers might be out by one or two in either direction). This is particularly the case in that FKM's graph as reproduced by Lucia has an uneven scale.
  37. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Its a letter to editor, but discussed (along with Hare and Meinshausen) at RC. Go here. Includes pointer to discussion of Matthews and Weaver.
  38. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Chris, (the "dense" remark was not about you, it was everyone's ability to ignore the CATO article elephant in the room)

    "However an incisive review, particularly in late Dec/Jan, could easily have addressed the several problematic sentences in the paper, and ensured that the data was properly interpreted in relation to independent data."

    Well, he certainly could have, but Box's initial review tried to make the paper better by suggesting that FKM consider the warming and cooling causal factors that Box himself used to accurately predict the 09-10 melt. But as a shortcut, he suggested they include the 2010 data. Now we are all worked up because the data wasn't available, or the timing wasn't right, even though, somehow, every other expert knew the implications. But whatever, who cares? Box's point is that publishing "as is" was insignificant and not up to JGR's usual standard. So his suggestion was ignored. This paper was too important to wait, apparently. Not my call, or Box's. This, coupled with the way the paper has been used post-publication, is the real story here. Unless these are addressed, getting on to Box's handling of the review process seems like a misdirected priority, although it is certainly something to discuss in improving the review process for climate papers.
  39. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @78, I would be very dubious of Matthews and Weaver's result. It is widely believed, and with good reason, that thermal inertia alone would result in an additional 0.5 degrees C of warming if CO2 levels were held constant. M&W show only half of that, and show no additional warming for zero emissions. Even allowing that CO2 levels would decline with zero emissions, they would do so on a decadal time scale, which would ensure significant early warming with zero emissions.

    M&W is behind a paywall, so I cannot discuss their methods, and they may be correct. However, their paper does require close scrutiny before its results are accepted.
  40. The Climate Show Episode 12: twisters, ozone and Google in the sun
    Hey, it's 15 minutes shorter than last episode! :-)

    I like to watch the videos myself - the figures, charts, and images that are interspersed add a lot to the story being told.

    And it's easy enough to have it open in a browser window, while doing something else on the PC (as much as my wife might be amused by me watching a vidcast and playing a game or working on a presentation for work at the same time...)

    Either way, I guess I know what I'm doing with my Friday night, now (yes, I am that boring! ;-)
  41. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    BP,

    Seriously, please do email and tell Lindzen and Christy and Spencer to stick to the science, and to stop talking through their hats about economics, politics and socioeconomics--then there will be no need for others to make the observation I did above. Until then, it is not political to call them on their politicization of science.....
  42. Arctic Ice March 2011
    Oops! I see I said it would take the water longer to freeze in the autumn! That is a relic of a revision! Sorry!
  43. Arctic Ice March 2011
    Interesting comments! My understanding is rather incomplete, but since the Arctic basin and surrounding seas are adding more meltwater to the surface waters each melt season, I suppose this means that the intensity (is that the correct word?) of the halocline during the autumn and winter is increased. In other words, the increased summer melting and reduction in the overall ice volume must be making the surface waters of the Arctic fresher which may well cause the surface waters to take just a bit longer to freeze in the autumn.

    It seems to me that this is a variable that is potentially in flux for a number of different reasons, and this must have consequences if it is so. For example, I understand that the fresher water being added to the Arctic waters during the melt season results in more surface ice forming earlier in the fall/winter for the simple reason that it freezes at a higher temperature than saltier water. And I gather that the Arctic still has a well-established halocline despite the significant and continuing reduction in overall ice volume, but I also suppose that eventually the halocline will begin to break down, if the trend continues.
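    The freezing-point effect this relies on can be sketched with the standard linear approximation that seawater's freezing point falls by roughly 0.054 °C per unit of salinity (g/kg). This is a rough approximation, and the salinity values below are purely illustrative.

```python
# Rough sketch of freezing-point depression in seawater, using a common
# linear approximation: the freezing point falls by about 0.054 degrees C
# per g/kg of salinity. Salinity values below are illustrative only.

def freezing_point_c(salinity_g_per_kg):
    return -0.054 * salinity_g_per_kg

print(round(freezing_point_c(35), 2))  # typical open-ocean water
print(round(freezing_point_c(30), 2))  # fresher Arctic surface water
```

    So a few g/kg of freshening raises the freezing point by a few tenths of a degree, which is why changes in the surface halocline matter for the timing of freeze-up.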

    Right now, my best guess is that the Arctic still has plenty of relatively fresh water at the surface. Also, given that the sea ice that forms is fresher if it forms at higher temperatures because it is then able to exclude more salt, I suspect that the ice that exists in the Arctic is fresher than it used to be. Has anyone studied this?

    If the ice is fresher than it was a few decades ago, it makes sense that it melts more rapidly once the temperatures warm in the spring. This is part of the reason I made my comment about the potential for a record melt, since not only are the air temperatures higher than they were a few decades ago, but the ice that is present may have a higher melting point.

    That said, it seems to me that eventually, as the Arctic is open to wave and wind action for longer periods of time, it will lose a significant portion of its fresher surface waters, to the Atlantic in particular. Is this happening? Or is it predicted to happen? If it does, then given the rising temperature trend, I would think that future sea ice formation in the Arctic will be delayed and reduced, since the surface water will be saltier and the air temperature will be warmer.
  44. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Well done Daniel Bailey, you have drawn the "lukewarmers" (though I think that excludes poster Chris) out of the woodwork to defend their champion Michaels.

    Yes, please KFM release the code and perhaps CA will do a thorough audit of KFM too, and then perhaps, just perhaps, pigs might fly.
  45. steven mosher at 08:37 AM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Oddly Dan thinks:
    "No, the data would not have been in the proper format the authors were accustomed to dealing with. But that is merely a technical limitation and could have been dealt with. After all, the Muir Russell Commission was able to replicate the entire "hockey stick" from original data in a mere two days (something the auditors still have not yet completed themselves), pronouncing it something easily accomplished (cf page 48 of the report) by a competent researcher."

    Muir Russell did no such thing. They constructed their own 'replication' of the land temperature series (chapter 6) and not the hockey stick. Strange the simple mistakes people make when grasping at straws.

    On the issue of failing to use "up to date" data, I'm fairly confident that no one here will throw out all papers that are published with truncated data series. If that were the case, some of Box's own work would qualify as bad science. And of course we have the famous example of Santer's paper.

    I would hope that KFM would publish their code so that an update could be made. That seems a simple matter, and I suppose someone can go hound them for the code.

    The issue seems to be this:

    "Had the authors considered all available data, their conclusion that ‘Greenland climate has not changed significantly’ would have been simply insupportable."

    That is a TESTABLE claim, but no one has taken the time to test it. It seems rather brazen to claim that their conclusions would have been unsupportable. Well, test the claim of this post: if they had considered all the data, would their conclusions be unsupportable? Don't wave your skeptical arms. Ask them for their code and do the work.

    And what specifically do you think was wrong? You wrote:

    "They write:

    “We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s. The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961.”"


    If you add 2010 data, which sentence is wrong?

    A "We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s."

    Adding 2010 data won't change a word of that claim.

    B "The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961."

    Adding 2010 data will change one word here. So...

    The greatest melt extent over the last 2 1/4 centuries occurred in 2010; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961.


    Better?
  46. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    A worldwide zero emissions isn't required. The Matthews and Weaver 2010 diagram was this.

    Holding emissions at the current level doesn't look too dangerous from this diagram. Of course, this doesn't address the equity issue, which would propose that the West reduce emissions heavily so developing countries can increase theirs, but that is a different issue.
    Moderator Response: [muoncounter] Please restrict image width to 500.
  47. David Horton at 07:43 AM on 6 May 2011
    Lindzen Illusion #4: Climate Sensitivity
    So, just pure Milankovitch with no CO2 contribution?
  48. Chip Knappenberger at 07:25 AM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    And for a comparison to the Wake et al. conclusions provided by chris@144...

    Here is the first paragraph of conclusions from our paper:

    We have created a record of total annual ice melt extent across Greenland extending back approximately 226 years by combining satellite‐derived observations with melt extent values reconstructed with historical observations of summer temperatures and winter circulation patterns. This record of ice melt indicates that the melt extent observed since the late 1990s is among the highest likely to have occurred since the late 18th century, although recent values are not statistically different from those common during the period 1923–1961, a period when summer temperatures along the southern coast of Greenland were similarly high as those experienced in recent years. Our reconstruction indicates that if the current trend toward increasing melt extent continues, total melt across the Greenland ice sheet will exceed historic values of the past two and a quarter centuries.


    Not really very different from Wake et al.

    This conclusion follows directly from our work and will be little impacted by what occurred in 2010.

    In the last two paragraphs of our Conclusions, we go beyond our direct results, and speculate on sea level rise. In writing a paper, it is not particularly unusual to try to put the results in the bigger picture.

    -Chip
  49. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Albatross, I'm not being naive (or I should say I don't consider that I am). In this particular instance I consider many of the individuals complaining about this mess, to be naive:

    There are two elements that ensure decent quality papers. The first is the inherent scientific integrity in the vast majority of publishing scientists (the most important part of peer-review; i.e. "self-peer-review"). The second is the peer-review process itself (reviewers giving robust and unambiguous guidance to editors). It's very rare indeed that each of these fail at the same time.

    Of course KFM are "playing games". We all know that. So in this particular case, we lose a main element of peer review, and are left with the other to hold the fort. Unfortunately the reviewer most qualified to do this seems to have deserted his post. He wrote a dismal review (have you read it?), classified the paper as "good", and declined to re-review the paper at a time when he could have made some very strong recommendations indeed. Why complain now?

    My comments about Wake et al don't constitute a strawman, and I recommend that people try to look at this from an objective viewpoint. Wake et al demonstrate that, so long as one is clear about the period of analysis, it's acceptable to publish a paper in 2009 that only includes data through 2005. It doesn't matter whether or not a reviewer suggested they should have included more up-to-date data. The editor can overrule that if he considers the suggestion unreasonable.

    The obvious point that several people seem preternaturally unable to absorb is that one cannot include data in a paper that doesn't yet exist. The editor seems to have cottoned on to that reality. Of course that doesn't absolve KFM from interpreting their data properly in the light of the 2010 data as it stood especially in December 2010. Unfortunately the expert reviewer declined to take part in the process that would have ensured they did.

    (not sure what the second reviewer was doing - I wonder whether he might be hiding under his bed until this blows over, and hoping that no one decides to identify him/her!).

    Sadly, we can't conjure up good outcomes by a combination of indignation and anger. It really helps if the systems of peer review work well enough at source.
  50. Bob Lacatena at 06:48 AM on 6 May 2011
    Why 450 ppm is not a safe target
    30, Jesús,
    ...their suggestion that 450 ppm will result in 5 meters sea level rise at the end of this century.
    Citation (direct link), please.

© Copyright 2016 John Cook