
Participate in a survey measuring consensus in climate research

Posted on 1 May 2013 by John Cook

The Skeptical Science team has a paper coming out within a few weeks in the high-impact journal Environmental Research Letters (ERL) (many thanks to all who donated money to help make the paper freely available to the public). In our paper, Quantifying the consensus on anthropogenic global warming in the scientific literature, we analysed over 12,000 papers listed in the 'Web of Science' between 1991 and 2011 matching the topic 'global warming' or 'global climate change'.

Reading so many papers was an eye-opening experience as it hit home just how diverse and rich the research into climate change is. So before the paper comes out, we're inviting readers to repeat, in a small way, the experience we went through. Not just Skeptical Science readers - I'm emailing an invitation to 58 of the most highly trafficked climate blogs (half of them skeptic), asking them to post a link to the survey. In this way we hope to obtain ratings from a diverse range of participants.

You're invited to rate the abstracts of the climate papers with the purpose of estimating the level of consensus regarding the proposition that humans are causing global warming. The survey involves reading and rating 10 random abstracts and is expected to take around 15 minutes. You have the option of signing up to receive the final results of the survey and be notified when our ERL paper on consensus is published.

No other personal information is required (and email is optional). You can elect to discontinue the survey at any point, and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results, so no individual ratings will be published.

The analysis is being conducted by the University of Queensland in collaboration with Skeptical Science. I'm heading the research project as the research fellow in climate communication for the Global Change Institute.

Click here to participate in the survey.



Comments

Comments 1 to 37:

  1. Participation is cool(TM) ;-)

    Thanks for the chance.

  2. Two questions I had about how to answer:

    1) If the abstract talks only about model predictions without stating how confident we can be about these modelled results should this be counted as explicit endorsement (etc.)?

    2) If the abstract is about only one, probably relatively small, anthropogenic contribution without any reference to the reality or magnitude of other anthropogenic factors, should this be taken as endorsement?

    I am aware that my judgements on this may be part of what you are trying to measure, but I would be interested to know how you and other readers made judgments in cases like these.

  3. My sample had 6 endorsements, 4 neutrals, and unsurprisingly 0 rejections.

    My guess is that those participating from "sceptic" websites will be casting doubt on just how randomly selected these papers really are. Another conspiracy for Prof Lewandowsky perhaps?

  4. Done. My selection got one explicit reject (no quantification). Looked like a contrarian try. One neutral, rest more or less endorsing (A)GW.

    I wonder whether the magic number, 97(%), will be established again.

  5. OK, I get 8 neutrals and 2 endorsements (one cites and thus implicitly endorses the 2001 IPCC report, one says that global warming is among the dominant impacts of waste water treatment). While ten abstracts don't tell you that much about the consensus yet, the low endorsement percentage took me by surprise.

    The reason for the low endorsement percentage seems to be that most of my papers answered a narrow question and thus substantiated only one step in the AGW hypothesis without supporting the others. This --to my mind-- does not constitute an endorsement.

    Step 1 and 2. Humans have raised the atmospheric concentration of greenhouse gases and it will stay at an elevated level for a long time (the survey implies we accept this --which I think I do, but only a posteriori, having looked at numerous scientific sources)

    Step 3. Following lab experiments, we expected this to lead to a radiation imbalance, which we observe via satellites and on the Earth's surface.

    Step 4. This leads to warmer average temps.

    Step 5. Feedbacks are expected to amplify this.

    (Step 6. Warming has overwhelming negative impacts)

    I also got a borderline case: "Agaves can benefit from the increases in temperature and atmospheric CO2 levels accompanying global climate change". I call this neutral, because correlation isn't causality --but only just.

     

  6. Not just Skeptical Science readers - I'm emailing an invitation to 58 of the most highly trafficked climate blogs (half of them skeptic), asking them to post a link to the survey.

    Based on the brouhaha over Lewandowsky's paper, I wonder how many of the self-styled skeptic bloggers will eventually claim they received no such email? Not to say they're being dishonest: I get dozens, sometimes over 100 emails a day every weekday - it's ridiculously easy to overlook (and then forget about) one or two, especially if you don't do something about it straight away. It wouldn't surprise me if that (a) is what happened with the solicitation to climate "skeptics" to participate in the Lewandowsky et al (2012) survey and (b) could very well happen with this invitation.

  7. In my case, a lot of the neutrals were paleoclimate papers, which I wouldn't expect to say anything at all about what is happening now.

  8. I got 6 neutrals, 3 explicitly endorse without quantifying, 1 implicitly endorse.  From one of the explicitly endorsing w/o quantifying abstracts, it seemed highly likely that the article itself would have explicitly stated that human emissions account for more than half of the warming, since I believe the abstract stated that the article was going to quantify the projected impacts of a mitigation scheme that involved reduction of GHG emissions.  But rules are rules, and I could only assess the abstract based on its text.  

    In light of which, it might be interesting to add a category of "explicitly endorses and suggests mitigation."  I suppose one objection to that approach could be that there would be no counterpart category for the "rejects" side, which could lead to a topheavy categorization scheme with more "accepts" than "rejects" categories.  To the extent that suggesting mitigation has a denier counterpart in the extreme view that we need to emit *more* GHG's, people who hold that view most likely explicitly accept (or at least do not reject) AGW, but assert that GW is a good thing - so that would also be categorized as an "accepts" position.  

  9. OPatrick @2:  Disclaimer - I am a patent attorney, not a climate scientist, but since you are asking for other readers' opinions I am chiming in.  

    For situation 1), did the abstract actually state that the model prediction accounts for the anthropogenic greenhouse warming trend, or are you just assuming that because practically all models do?

    So the abstract mentions a prediction of that model without defining a confidence interval within which the prediction falls?  I'm curious, do you remember why the model prediction was mentioned?  If the abstract indicated that the model prediction is used in the article as a basis for recommending a particular action (mitigation or adaptation) or drawing some further conclusion, I would say it's at least an implicit endorsement.  But I doubt you would be wavering if it were that straightforward.  

    If on the other hand the article simply summarizes what the model predicts, and/or the assumptions on which the model is based, without adopting those assumptions, then probably I would have to say it is neutral.  

    For situation 2), it sounds like what you are describing literally fits the "endorsement without quantification" category, and that's how I think I would have classified it.  

  10. I meant to say, "If on the other hand [the abstract indicates that] the article simply summarizes ~," since obviously the survey didn't involve reading the whole article.  

  11. Hmm, this could be of sociological interest as well. The results should reveal at least as much about the perception of consensus as about the existence of consensus.

  12. My issue about this survey is similar to what Oriolus Traillii@5 has said.

    It is that most papers focus on a very narrow aspect of GW, and often the short abstracts don't say anything that endorses AGW, so they must be classified as "Neutral". I think lots of those "Neutral" papers would be reclassified as endorsements if the first few paragraphs of the body were revealed (where the authors usually describe the context).

    Because this survey allows participants to read the abstracts only (which provide too little information about the authors' stance), it is, as Matt Fitzpatrick said, "about the perception of consensus" rather than about the consensus itself. Most papers will be classified as "Neutral", while most of the rest will be the participants' guesses and opinions.

    In my case, it was 5 Neutrals, 4 implicit endorsements (my best guess) and only 1 explicit endorsement stated in the abstract. Interestingly, my sample included exactly the same "Agaves" ("Neutral") paper that Oriolus Traillii mentioned @5, how probable is it? John, please make sure that the survey makes a truly random selection for all participants (i.e. check your random generator).

  13. Oriolus Traillii @5

    The reason for the low endorsement percentage seems to be that most of my papers answered a narrow question and thus substantiated only one step in the AGW hypothesis without supporting the others. This --to my mind-- does not constitute an endorsement.


    If you were asked whether a tradesman had contributed to the building of an office block would you require that he had worked on the foundations, the scaffolding, the plumbing, the electrical work, the interior fitout etc?

    Obviously only review papers are going to cover all the steps of proving dangerous anthropogenic warming - any original research is only going to deal with one aspect.


    To put it another way, would you see a paper coming up with a large negative estimate of the cloud feedback as contradicting Global Warming if it did not at the same time deal with all the other feedbacks and show that the total was not significant?


     

  14. I just did the survey, rated them 8 neutral, 2 implicitly endorse.


    BUT


    The response I got at the end was:


     


    Survey Results Received


    Your survey results have been successfully saved. Thank you for your participation. If you have indicated interest in receiving the results of this survey, you will be emailed the results as soon as they are available.


    Of the 10 papers that you rated, your average rating was 3.8 (1 being endorsement of AGW, 7 being rejection of AGW and 4 being no position). The average rating of the 10 papers by the authors of the papers was 2.7.



    WTF ???

    Moderator Response:

    chemware, see Kevin C's comment below. The 1/4/7 numbers refer to how the different ratings are given a numeric score, not what the actual ratings were that you assigned to your papers. The wording of the response was certainly unclear and John Cook has reworded the response to make the meaning clearer.

    Thanks for taking part in the study and letting us know about this possible source of confusion. [GT]

  15. Chemware,

    "Implicitly endorses" gives a score of three, so your total score for 10 papers was 8 × 4 (no position) + 2 × 3 (implicitly endorse) = 38, and the average is 3.8.

    The authors' total of 27 suggests they rated most of the papers as implicitly endorsing and a few as explicitly endorsing.

    For my own survey I gave 3.3 and the authors' average was 2.8. (One explicit endorsement with quantification — that was easy! Plus four implicit endorsements and five neutral.)
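    The scoring arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the survey; the category names are paraphrases of the survey's seven-point scale, with 1 the strongest endorsement and 7 the strongest rejection.

```python
# The survey's 1-7 rating scale (category names paraphrased).
SCALE = {
    "explicit endorsement with quantification": 1,
    "explicit endorsement without quantification": 2,
    "implicit endorsement": 3,
    "no position": 4,
    "implicit rejection": 5,
    "explicit rejection without quantification": 6,
    "explicit rejection with quantification": 7,
}

def average_rating(counts):
    """Mean scale value over a batch of rated abstracts."""
    total = sum(SCALE[category] * n for category, n in counts.items())
    return total / sum(counts.values())

# chemware's batch: (8 * 4 + 2 * 3) / 10 = 3.8
print(average_rating({"no position": 8, "implicit endorsement": 2}))  # 3.8

# The 3.3 batch above: (1 * 1 + 4 * 3 + 5 * 4) / 10 = 3.3
print(average_rating({"explicit endorsement with quantification": 1,
                      "implicit endorsement": 4,
                      "no position": 5}))  # 3.3
```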

    It's an interesting exercise, not least because it shows just how hard it is to categorise papers by looking at the abstracts alone.

    I couldn't resist searching for every one of my papers to see whether the abstract was a good predictor or not. (I didn't allow that to affect my score, of course, because that would defeat the purpose and invalidate my results, and some I was unable to check due to paywalls.)

    As has been suggested above, often the Introduction is enough to remove all doubt. I had a paper I rated as 4 (neutral) based on the abstract whose very first sentence explicitly stated how much temperatures were due to rise globally over the coming century based on greenhouse gas emissions and cited the IPCC as the reference!

    It's to be expected that most research papers are going to end up being categorised as Neutral because in their abstracts they're unlikely to say words to the effect that at least half of currently-observed warming is due to humans. Why would they? A lot of them are looking at the implications of global warming on some particular aspect of interest to them, but because they don't mention the human influence on the warming they end up being neutral on this scale.

  16. Of the 10 papers that you rated, your average rating was 3.8 (1 being endorsement of AGW, 7 being rejection of AGW and 4 being no position). The average rating of the 10 papers by the authors of the papers was 2.7.

    Oh, I can see a possible misreading of this text. The parenthesis "(1 being endorsement of AGW, 7 being rejection of AGW and 4 being no position)" is not reporting back to you the ratings that you gave. It is defining the rating scale from 1 to 7.

  17. Sorry, my last post was cut short. Sorry, this one is longer.

    @Sceptical wombat:

    You wrote:

    If you were asked whether a tradesman had contributed to the building of an office block would you require that he had worked on the foundations, the scaffolding, the plumbing, the electrical work, the interior fitout etc?

    No. But if you asked me whether he had built the whole thing himself, I would. That's how I understand the survey.

    I agree with you that scientific opinion on the complete hypothesis only appears when you collect papers for each step in a review. Still, the complete hypothesis is what we want. Isn't a chain only as strong as its weakest link? To put it differently, if you were hanging off a cliff by a rope, would you be relaxed about several yards of it suddenly turning into thin air?

    Is the AGW hypothesis linear, supporting its conclusion like a rope? Or is the conclusion supported --at every point-- by multiple different lines (or ropes) of reasoning? I'll take a look at The Big Picture on this site:

    http://www.skepticalscience.com/big-picture.html

    After reading this, my metaphor of choice for the AGW hypothesis is a rope of varying thickness, in some places consisting of multiple more or less strong fibers of evidence, sometimes of only a few. But formulated comprehensively, the bits "Humans are Increasing Atmospheric Greenhouse Gases" (very thick rope), and "Human Greenhouse Gases are Causing Global Warming" + "Warming will continue" (thick rope) and "The Net Result will be Bad" (thick rope) are vital. Take any of them away and the whole argument for reducing CO2 emissions collapses (unless you look at relatively unrelated(?) matters like ocean acidification).

    You also wrote:

    Would you see a paper coming up with a large negative estimate of the cloud feedback as contradicting Global Warming if it did not at the same time deal with all the other feedbacks and show that the total was not significant?

    A single paper wouldn't put me in much doubt, but if there was a consensus for negative cloud feedbacks, then the bit of rope named "greenhouse gases are causing global warming of a given magnitude" would get considerably thinner. It wouldn't disappear, because I would agree with you that the picture isn't complete, until all the feedbacks are taken into account.

    So I think one should consider doing another study.

    1. It should formulate the AGW hypothesis as a logical argument with premises and conclusions.

    2. It should split the argument into essential steps that are small enough to be addressed in at least a few dozen average scientific papers.

    3. It should seek to establish which steps of the AGW hypothesis have which level of consensus.

    The results might look something like this:

    "X endorsements to Y neutrals and Z rejections for step 1 (2, 3, 4 ...)"

    Scepticalscience is in an ideal position to do this, because it has already split the argument into steps and collected papers.

    @chriskoz:

    Interestingly, my sample included exactly the same "Agaves" ("Neutral") paper that Oriolus Traillii mentioned @5, how probable is it?

    It depends on the number of papers that were pre-selected, doesn't it? (John, I'd be grateful to know how that was done, by the way.) If there were 1000 papers and you chose 10, the likelihood that I get one of them too, if I get 10 random picks, would be just under 1/10, I think? Of course, we don't know how many papers there were or how many other people did and didn't get one of our papers, so I'm not sure there's any reason to worry.
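    The likelihood in question can be checked directly under the simplest assumption: two participants each receive 10 abstracts drawn uniformly at random, without replacement, from the same pool. (The pool size of 1,000 is the hypothetical discussed here, not the survey's actual number.)

```python
from math import comb

def p_shared(pool, k=10):
    """Probability that two independent random draws of k abstracts
    from the same pool of `pool` papers share at least one abstract."""
    return 1 - comb(pool - k, k) / comb(pool, k)

# For a hypothetical pool of 1,000 papers the overlap probability is
# about 0.096, i.e. just under 1 in 10.
print(round(p_shared(1000), 3))
```

    With the full set of 12,000 papers the same calculation gives well under 1%, so a shared paper between two participants would be considerably more surprising.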

    Cheers and thanks for the Survey. I've learned a lot!

  18. "The average rating of the 10 papers by the authors of the papers was 2.7."

    Could you reword this sentence in your automatic form? I think it should read, "The average rating of the 10 papers by the authors of "Quantifying the consensus..." (ERL, 2013) was 2.7." (i.e., you did not get the authors of the 10 papers to provide ratings of their own papers)

    FYI: I think I rated my batch at a 2.8, compared to an "author" rating of 2.5 or so. I think I was much more generous about my "implicitly accepting" rating than several of the commenters here...

  19. I got an average of 2.9 with the papers rated at 2.7.  I guess my reading comprehension is pretty good.

    BTW, Willard Tony is not happy with this survey.  Instead of posting the link to the survey you provided, he posted a link to this article.  If you were hoping to look at differences in perception between septics and sane people, this will likely screw up your results.

  20. One abstract didn't say methane was a GHG, so my average evaluation was lower than the authors'. There may be some people who'll judge all the neutral papers to be against AGW, just because many of them are not the climate papers you claim them to be. Are you measuring the polarization of thoughts of evaluators vs. authors, or what? The results might be interesting anyway.
  21. Anyway, a nice way to educate people about the research being done (climate and other), assuming the participants actually read the abstracts, so thumbs up.
  22. I do hope that the same chicanery that happened with the last attempt at profiling doesn't happen again. Shub Niggurath has done some analysis on that paper and it doesn't look good...

     

    http://wattsupwiththat.com/2013/05/05/lewandowsky-et-al-2013-surveying-peter-to-report-on-paul/#comment-1297865

  23. keitho @22, it's odd that you should refer to Shub Niggurath's piece as proof of chicanery in LOG13 (the moon landing paper). What he demonstrates after detailed (and likely biased) analysis is that slightly less than 80% of commenters on the pro-science sites that posted the LOG13 survey support the consensus on climate change; ie, approximately the same percentage as in the survey itself. Beyond that, the analysis seems entirely based on misinterpreting "broad readership" as meaning "readership equally divided between those opposing and those accepting the consensus".

    Curiously, despite his detailed analysis, he fails to mention that a link to the survey was posted on at least one "skeptical" blog, ie, Junk Science. Admittedly Junk Science was not one of the blogs contacted by Lewandowsky, but that is hardly relevant. The fact that the link was posted on a "skeptical blog" undermines the entire basis of Shub's analysis.

  24. I should add, Shub also demonstrates the much larger readership at the "skeptical" blogs than at the pro-science blogs. It follows that had the owners of those "skeptical" blogs posted links to the survey, the proportion of "skeptics" to pro-consensus respondents to the survey would have been reversed. The lack of "skeptical" respondents to the survey was not by design of Lewandowsky and his co-authors. Rather, it is the direct consequence of "skeptics" not wanting the views of their supporters to be known. Probably with good reason.

  25. I may not be a complete dunce.



    Of the 10 papers that you rated, your average rating was 3.5 (to put that number into context, 1 represents endorsement of AGW, 7 represents rejection of AGW and 4 represents no position). The average rating of the 10 papers by the authors of the papers was 3.3.




    Everyone posting here so far has rated their 10 papers less favourably towards endorsing AGW than did the authors, indicating a distinct lack of bias for this blog's readership. It may be that the abstracts are more neutral than the full results of the paper. Nevertheless, this small sample suggests an honest, objective approach from SkS readers, not to mention the comments above. Here be true skeptics.



    If it's not giving away too much before submission/publication, how did the Cook et al team's results match up with the paper authors' ratings?

    Response:

    [JC] Nah, that's giving too much away, Barry. You're just going to have to wait :-)

  26. barry,

    It's not clear whether "the paper authors' ratings" refers to Cook et al or the authors of the original papers themselves. See MMM @ 18 above. The way it's worded is certainly ambiguous and assuming MMM is correct I agree that it could do with rewording. If the ratings really are from the authors of the original papers then the fact that an unbiased reading of their abstracts in isolation by third parties consistently seems to give a lower level of endorsement of the consensus than the authors themselves just goes to show how deep the level of acceptance of the consensus is.

    I've long been impressed by the lack of bias amongst SkS' readership, and a willingness to disagree with each other even if it gives "ammunition" to "the other side".

    As for the difference in scores, the biggest problem is likely the implicit endorsement question, which is highly subjective: "abstract implies humans are causing global warming. E.g., research assumes greenhouse gases cause warming without explicitly stating humans are the cause." The papers I classified as "neutral" rather than "implicit endorsement" all took global warming for granted, but none of them mentioned the cause of that global warming — rather, they were talking about the effect of that global warming on the particular subject of the paper. As I mentioned above, reading just the first line of the introduction to one of those papers made it very clear that the authors of the paper themselves fully supported human causation, but because they didn't say that in the abstract I had to classify that as neutral under my interpretation of the rules. It is possible that others, looking at the same abstract, might say that the paper implicitly endorsed human causation of global warming because, for example, it didn't try to suggest any other cause or cast doubt on the prevailing theory. Simply changing my four "neutral"s to "implicit endorsement"s on that basis would reduce the difference to 2.9 (me) vs 2.8 (the authors).

  27. On a side note, I had a look at WUWT's posting on this mentioned above and I had to laugh — the conspiracist ideation runs deep. :-) Their mantra appears to be "I may be paranoid, but am I paranoid enough?" :-) I love the fact that they're unwilling to have a go because of what it may reveal about them.

    The concerns centre around two aspects: the first is the survey code embedded in the URL, which is presumably intended to track which site the reviewer followed the link from. To avoid triggering a Pavlovian response next time (assuming conspiracy theories weren't the intended purpose of the survey code :) I would suggest using the HTTP referer field instead. It isn't perfect (copying-and-pasting the link instead of clicking on it will defeat it) but it might filter out some of the noise.

    The second is the randomness of the papers. I did a quick calculation and the chance of two people seeing the same paper out of a random selection of 10 out of 12,000 is pretty remote, so the fact that I've seen the same paper twice in four sessions suggests that the actual number of papers being used is far fewer than 12,000. This makes sense if the authors of the papers were actually asked to rate their own papers, and it wouldn't take many papers to turn up useful results.

    The funny thing is that when I noticed the above it made me laugh and very keen to see what the outcome will be; when the WUWT crowd noticed it they got too scared to take the survey. It's like primitive tribes-people being afraid of their soul being captured by someone taking a photograph. Brilliant!

  28. I saw this in the comments at Lucia's:

    For whats its worth, my scores have been under 4, but over 3. That would mean the literature I reviewed, was lukewarm.

    What it means is that, on average, the papers he reviewed were somewhere between "not mentioning what's causing global warming" and "implying humans are causing global warming".

    Somehow this got converted, in his mind, into a statement about climate sensitivity.

  29. jyyh@21

    "anyway a nice way to educate people of the research done"

    Aha!

    So this is nothing but an insidious plot by the warmistas to force the true sceptics to read the propaganda created by the industrial climate complex.

    In other words, nothing but a brute attempt to indoctrinate the opponents.

    </troll>

  30. JasonB @27, the probability that any one of the ten papers should be identical to any of the papers in a prior selection given that the papers are randomly selected from 12,000 papers is just 0.458%, or just under 1 in 200; which is fairly remote, but not ridiculously so. The probability that you would get an identical paper in two of four retries rises to 1.366%, or just under 1 in 73. Certainly nothing to write home about. The odds that one of 100 readers should have an identical paper to one reported by a previous reader rises to 36.782% (if they report all ten titles) and is hardly surprising at all. Hence I consider the evidence adduced that the paper selection is not random to be decidedly lacking, and consisting mostly of a failure to differentiate between the odds of a certain result with a single trial, and the odds with multiple trials.
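    These figures can be sanity-checked with a minimal model: each reader's 10 abstracts are drawn independently and uniformly at random from the full set of 12,000, and the pairwise overlaps between readers are treated as independent (an approximation). Note the exact per-pair probability used above isn't derived in the comment; under this sketch it comes out slightly higher than 0.458%, which only strengthens the conclusion that repeats among many readers are unsurprising.

```python
from math import comb

def p_pair(pool, k=10):
    """Probability that two independent draws of k abstracts from the
    same pool share at least one abstract."""
    return 1 - comb(pool - k, k) / comb(pool, k)

def p_any(n_comparisons, pool, k=10):
    """Probability that at least one of n pairwise comparisons shares
    an abstract (treating the comparisons as independent)."""
    return 1 - (1 - p_pair(pool, k)) ** n_comparisons

print(round(p_pair(12_000), 4))      # one pair of readers: ~0.0083
print(round(p_any(3, 12_000), 4))    # a repeat across three later sessions: ~0.025
print(round(p_any(100, 12_000), 2))  # someone among 100 readers: ~0.57
```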

    Having said that, the 12,000 papers are those surveyed in the original paper already accepted for publication.  The specific claim is:

    "This survey mirrors a paper, Quantifying the consensus on anthropogenic global warming in the scientific literature, to be published soon in Environmental Research Letters, that analysed over 12,000 climate papers published between 1991 to 2011."

    So, claims of non-random sampling not only fail at mathematics, but also fail at reading comprehension.

    As you surmise, it is likely that the abstracts used in the current survey in which you participated are a subset of the 12,000 used in the original paper. Specifically, they are probably drawn from those which actually received an author rating.

    Speaking of which, it is my understanding that the author rating of a paper consists of the rating given to the paper by the lead author of the paper, rather than being the mean rating of all authors of the paper.

    Finally, I am concerned that you have seen four sets given that we are asked to only take the survey once.

  31. Ah. Now I understand that "The average rating of the 10 papers by the authors of the papers was 2.7." is in fact worded correctly, since the random draws were not from the full 12,000+ but rather from the subset that had ratings by the lead author of the paper itself.

     

    This might also explain why most people here are getting higher average ratings than the ratings given by the authors - the authors might be drawing on their full knowledge of the paper, while we have to depend on just what is contained within the abstract itself...

  32. Tom Curtis @ 30,

    I did use the words "pretty remote" rather than "ridiculously remote" and "something to write home about", so I stand by my comment. :-)

    Also, I didn't make the claim that the sample set were supposed to be from the entire set of 12,000, merely commented on the statements of those that did. It had already occurred to me that the original wording made no mention of the sample size for this exercise.

    Finally, speaking of reading comprehension:

    Finally, I am concerned that you have seen four sets given that we are asked to only take the survey once.

    I never said I took the survey more than once. In fact I only took the survey exactly one time, the results of which I reported above. Since "results are only recorded if the survey is completed" it seemed harmless to check to see if I could reproduce the behaviour being reported on the fake skeptic blogs, and the last time was just so I could double-check the exact wording of the different categories.

  33. MMM @ 31,

    That's exactly what I meant by "just goes to show how deep the level of acceptance of the consensus is" above. It's much harder for the original authors to assess the abstract in isolation when they know what they feel and what they were trying to get across.

  34. Very interesting survey - well done for thinking this through - there are some fascinating elements of psychology and the nature of scientific understanding and knowledge in this.

    Like most of the respondents on this thread I also slightly underscored relative to the authors' consideration of their endorsement of AGW in their papers (I scored 3.0 on average; the authors scored 2.5 in my selection of 10).

    I suspect this relates to two things. The first already mentioned here is that the authors have a conceptual framework outside of the specific element of the study in their paper and this influences their own perception of the level of endorsement in their study. The second is that an abstract doesn't give a full account of the level of endorsement that might pertain in the paper as a whole. The authors have access to this - those of us rating on the basis of abstracts alone don't.

    In support of the above I had a look at a few of the papers in my ten in their entirety, after doing the survey. In all of the examples I looked at I would have scored the paper one step higher (i.e. towards endorsement) had I known what was in the full paper.

  35. From the article that dealt with crowd funding of this project;

    Over the past year volunteers here at Skeptical Science have been quietly engaged in a landmark citizen science project. We have completed the most comprehensive analysis of peer-reviewed climate science papers ever done. Some 21 years worth of climate papers – more than 12,000 in all – have been carefully ranked by their level of endorsement of human-caused global warming. We also invited thousands of the authors of these papers to rate their own papers.


    https://skepticalscience.com/Be-part-of-landmark-citizen-science-paper-on-consensus.html

    Seemed pretty clear to me that many papers had been rated by the authors of those papers.

  36. @cRR Kampen at 00:48 AM on 3 May, 2013


    I believe it's anything but about the 97%. I suppose it's all about how ordinary people are evaluating the findings of scientific studies.

  37. #36 SirCharles, you're right. That measure is inappropriate.


    Great conspiracy theorizing at WUWT. They should develop that, then tomorrow they can go into the trash cans of hollow earthists, chemtrail believers, UFO nutcases and creationists - or have they already arrived there?

