
Who's your expert? The difference between peer review and rhetoric

Posted on 17 June 2011 by John Cook

Reposted from The Conversation. This is the fifth part in a two-week series, Clearing up the Climate Debate.

CLEARING UP THE CLIMATE DEBATE: Director of the Global Change Institute, Ove Hoegh-Guldberg, submits some climate “sceptics” to peer review and finds them wanting.

Peer review is the basis of modern scientific endeavour. It underpins research and validates findings, theories and data.

Submitting scientists' claims to peer review is a straightforward way to assess their credibility.

The Climate Commission was established by the Australian government to help build consensus around climate change.

Chief Commissioner Professor Tim Flannery handed its first major report, The Critical Decade, to Prime Minister Julia Gillard on May 23.

Peer-reviewed by internationally respected scientists, the report summarises key evidence and conclusions regarding climate change for Australia and the world.

Rising temperatures, changing rainfall, threats to human health and agriculture, and deteriorating ecosystems are carefully documented from the scientific literature. The report makes compelling reading and a solid case for rapid action on greenhouse gases such as CO2.

But are all experts really in agreement with the Climate Commission’s report?

Enter an alternative group of experts.

Writing in Quadrant Online Bob Carter, David Evans, Stewart Franks and Bill Kininmonth stated, “The scientific advice contained within The Critical Decade is an inadequate, flawed and misleading basis on which to set national policy.”

Carter and his colleagues dispute the major findings and assert that “independent scientists are confident overall that there is no evidence of global warming” or unusual “sea-level rise”.

According to them “there is nothing unusual about the behaviour of mountain glaciers, Arctic sea ice or the Greenland or West Antarctic ice sheets.”

You would be forgiven for concluding that firm action on carbon dioxide might not be warranted if the experts can’t agree.

But is there really so much scientific dispute over the facts of climate change?

One way to resolve this is to ask a simple question. If Carter and company hold different views to those expressed in the majority of the peer-reviewed, scientific literature, then have they submitted their ideas to independent and objective peer-review?

This is a critical process that sorts opinion and rhetoric from scientific knowledge and consensus.

If the answer is “yes”, there are legitimate grounds for concern over the report’s conclusion.

If the answer is “no”, the arguments against the Climate Commission’s report fall away as unsubstantiated opinion.

The Web of Science is maintained by Thomson Reuters and covers 10,000 journals across the sciences, social sciences, arts and humanities.

You can search this database for papers by different authors within reputable, peer-reviewed journals.
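A rough sketch of this kind of author-level publication search is shown below. It is illustrative only: the searches described in this article used the subscription-based Web of Science, whereas the sketch queries the free Crossref REST API, so the counts will differ; the author name and year cutoff are placeholder values, and simple name matching will lump together authors who share a name.

    # Illustrative sketch only, not the method used for the article's searches.
    # Queries the free Crossref REST API rather than Web of Science; the author
    # name and year cutoff are placeholders, and name matching is fuzzy.
    import requests

    def count_journal_articles(author_name, from_year=2000):
        """Return a rough count of indexed journal articles matching an author name."""
        response = requests.get(
            "https://api.crossref.org/works",
            params={
                "query.author": author_name,
                "filter": f"type:journal-article,from-pub-date:{from_year}-01-01",
                "rows": 0,  # only the total count is needed, not the records
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["message"]["total-results"]

    if __name__ == "__main__":
        # Placeholder query; treat the result as indicative only.
        print(count_journal_articles("Lesley Hughes"))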

I used the Web of Science to see if Carter, Evans, Franks and Kininmonth were legitimate experts in the areas in which they claim superior knowledge.

Given such strong opinions, you would expect that the four individuals would have published extensively in the peer-reviewed, scientific literature on subjects like climate change, oceanography, and atmospheric physics.

After all, if they have such strong opinions, then surely these ideas have been treated like all other valid scientific ideas?

The Climate Commission and its scientific advisory panel survive this type of scrutiny extremely well. For example, Climate Commissioner Professor Lesley Hughes has at least 39 peer-reviewed publications since 2000.

Many of these articles focus on the impacts of climate change on terrestrial ecosystems, an area for which Professor Hughes is internationally recognised.

Similar conclusions can be made for Professors Will Steffen, Matt England, David Karoly, Andrew Pitman and the others associated with the Climate Commission.

Searching for peer-reviewed articles by “R. M. Carter”, however, revealed plenty of peer-reviewed articles on unrelated topics within geology.

Only one paper turns up that could be remotely related to climate change.

This paper, however, was found to be seriously flawed by an internationally recognised group of Earth scientists.

This brings us back to zero for the number of credible papers published by Carter on climate change in the Web of Science.

Searching for articles by David Evans and William Kininmonth revealed no peer-reviewed scientific literature that tests their claim that climate change is not happening.

Lastly, searching for peer-reviewed papers from Stewart Franks yielded a number of articles (>50) on hydrology and climate variability since 2000.

None of these peer-reviewed articles presented data or tested the idea that climate change is or is not happening, or any of the other “errors” that Carter and his co-authors claim are associated with the conclusions of the Climate Commission.

The number of articles by Franks since 2000 that involve peer review of his claims that climate change is not happening is also zero.

So the number of peer-reviewed papers that adequately expose the ideas of Carter and co-authors to the scientific peer-review system on the climate change issue is 0, 0, 0 and 0.

We are left, then, with the observation that the Climate Commission’s report, peer-reviewed and assessed by scientists with appropriate expertise, is being challenged by four individuals who refer to websites and blogs and who have not had their core claims about climate change tested in the peer-reviewed scientific literature.

Don’t get me wrong, discussion is important, but on a serious matter such as climate change, let us hope we listen carefully to the experts and not to the unsubstantiated opinions of those who are not.


Comments

Comments 1 to 34:

  1. Sorry, this is off-topic, but... Can anyone link me to an article that discusses why ground-based temperature measuring stations have been disappearing? (-Snip-)
    Response:

    [DB] A quick perusal of the Search function yields this, this and this.

  2. Mags like Forbes need an ass kicking... http://blogs.forbes.com/patrickmichaels/2011/06/16/peer-review-and-pal-review-in-climate-science/
  3. One nice thing about the peer review system is that, in order to be published, you just have to convince three to five people who have the knowledge and skills to assess your argument that your argument is worth considering. You don't have to convince them that you are right; only that the case you make is cogent enough to be worth further thought. A second nice thing is that if you can't convince the first group, you can always try again at another journal. Or improve your argument and/or evidence and try again at the same journal. That means that if your argument really is worth considering, it is very unlikely to go unpublished. Conversely, if you can't get your paper through the peer review system, that means that, from a wide selection of people well informed and skilled enough to understand your argument, you can't get even a few to agree that it is worth considering. Worth thinking about.
  4. Comments to this article on The Conversation are going quite well at the moment. Those opposed to the peer review process are not acquitting themselves well. Makes for a pleasant change.
  5. Hear, hear. It's time these jokers were taken for what they are.
  6. I'd suspect it's just economic rationalism. Maintaining and monitoring a ground-based weather station in remote country is labour-intensive work.
  7. I'm wary of idolising peer-review as the metric of truth. After all, many peer-reviewed papers are wrong. The papers which open up new areas of research are frequently both ground-breaking and largely wrong. And taking this approach is just setting ourselves up for a fall whenever a wrong paper, especially a paper which is wrong on climate, makes it into the peer-reviewed literature. The real test of a theory is its reception in the wider community over years and decades. I would like to broaden the term 'peer-review' to mean this, but it is not the common usage. 'Scientific consensus' fills roughly the right role, but creates problems for the public who see opposing talking heads in the media and conclude that no consensus (in the common rather than scientific usage) exists. There are interesting developments in evolutionary psychology (beware - not consensus) on why individual and small group reasoning is necessarily suspect, with the implication that science must by necessity rely on something like 'consensus'. See: http://sites.google.com/site/hugomercier/theargumentativetheoryofreasoning http://www.dan.sperber.fr/wp-content/uploads/2009/10/MercierSperberWhydohumansreason.pdf
  8. In a way, Kevin C is both right and wrong. If you look at the background history of the vast majority of quotidian papers published, there are many levels of peer review. On average, the academics in most universities will have been recruited, will apply for grants, will present material at seminars, conferences and workshops, will have discussed results with collaborators and students and so on. Often a journal editor will take a pro-active role in the paper as well as the reviewers... The implicit objective of all this is not to get consensus but robustness. IMHO, problems come in two places. First, with results which are not controversial, as people tend not to question them as much. With controversial results, folks tend to look for higher levels of robust argumentation. Second, with publishers who are outside the establishment... not because mavericks are bad, not because The Establishment should be preserved or whatever - but because they are not subject to all the other peer-review events that occur way before a paper is actually submitted. Sure, it works sometimes - but that can be a selection bias; it really, really doesn't work 99% of the time - but then, very few people get to see the vast swathes of rubbish that are rejected from journals, except for bits here and there which end up as blogs or in fourth-rate publications.
  9. Kevin, I see peer review as a gate-keeping exercise. The term I think you're looking for is "peer response". After all a bare handful of people, highly qualified though they may be, is all that's required for peer review. Good peer response - which usually means lots of citations as well as further work on the topic - is what leads to scientific consensus. A bad peer response, or if the paper in question is simply ignored and never cited, does not mean that the initial gate-keeping exercise failed (usually). It just means that more people, perhaps more highly qualified, can find errors of fact or reasoning in a paper which looked OK to start with.
  10. Well, peer review has to be the starting point. It doesn't make a paper right, but compared to a blog post? The most important bit, though, is that it creates an audit trail. If the community likes it and builds on it, then it gets cited. If something doesn't gel with current theory, then you can follow the trail back and re-examine the paper and its methods. By contrast, if it's not peer reviewed, and not in a field where you have enough domain knowledge to evaluate it yourself, then you are always left asking what real peers would make of it.
  11. Kevin C: I think adelady has it right - peer review is a gate-keeping exercise that is less about establishing whether a paper is *correct* than about establishing whether it is obviously *incorrect* - in which case, it doesn't get published until the obvious errors are fixed. Less obvious errors will usually end up with comments, letters, or further papers being published to point them out - sometimes by the original authors. As I understand it, though, it takes more than errors to prompt an actual retraction. I've been responding to some comments on the cross-post of this at ClimateSpectator. I've seen the "it's water vapour" one again, and "Phil Jones conspired to block critics' papers", and "it's not warming", and "sea levels aren't rising", and "CO2 is plant food". On a related thread on BusinessSpectator, I even had someone tell me that a 1.5% annual increase in evaporation & precipitation was a slow-down of the hydrological cycle... (I had to read that one three times to check I hadn't misunderstood!)
  12. "I'm wary of idolising peer-review as the metric of truth." Agreed: this article asserts, it doesn't explain. To anyone who'd never heard of peer-review, what would they think? Sounds like appeal to authority (your peers, in fact!) What I'd like to read: an explanation of how the peer review process manages to (over time) produce a good approximation to knowledge. On the face of it, there's no reason to suspect it would: peer review has to exist in a system that incentivises those peers to *want* to tear down any mistakes they find. It's quite possible to imagine `peer review' that does nothing of the sort - and indeed journals exist where that's the case. So what distinguishes good from bad peer review? Is it actually anything to do with `peer review' itself, or is it to do with the structure of the disciplines they're embedded in? Or what?
  13. Generally speaking, only peer-reviewed papers are cited in the scientific literature. Thus, IMO the primary way that peer-review impacts the progress of science is by determining which materials make it into the ongoing 'discussion' and which do not. That said, if utter garbage is let through a fake 'peer-review' process (E&E anyone?) the scientific community will just reject it in the wider 'peer-review' that is ongoing research. Thus, passing peer-review is USUALLY enough to identify something as valid science, but there are occasional mistakes and a few cases of outright fraud... but those quickly fall by the wayside while valid papers are referenced and built upon in further research.
  14. DB, thanks for the links.
  15. Overall, peer review is a good process for evaluating the merits of a paper. However, it should not be idolized as a grand metric differentiating between acceptable and non-acceptable work. Peer review is just that, a review by your peers. Most times "correct" papers get published, other times not. After all, if your peers are doing similar work, they may see similar results. Sometimes articles appear in scientific journals or magazines which contain preliminary results which the researcher deems significant enough to present to others. Oftentimes, this will lead to a research report submitted at a later date. That does not automatically relegate the original article to an invalid state; it just means it has not gone through the peer-review process to check for potential errors. Since peer review is a human process, it is governed by human limitations. Consequently, the reputation(s) of the author(s) does influence the process. The biases of the reviewers can also play a role. Papers are occasionally passed through to fill sessions in conferences, and later published with the proceedings. Having taken part in peer review from both sides, I can see many of the advantages and disadvantages, and the advantages far outweigh the disadvantages. I will reiterate that it is a good process to determine the merits of research, but not the end-all barometer.
  16. Good discussion - I like the term 'gate keeper' (I've tended to use 'junk filter') and especially 'peer response'. CBDunkerson@13:
    Thus, passing peer-review is USUALLY enough to identify something as valid science, but there are occasional mistakes and a few cases of outright fraud... but those quickly fall by the wayside while valid papers are referenced and built upon in further research.
    Ah, now I think you've just picked out something particularly important. There are two separate questions we can ask about a paper: 1. Is it valid science? 2. Is it right? A paper may be neither, it may be valid science but wrong, or it may be valid science and right. (I suppose you can also have a paper that is right but not valid science, if it is right by chance.) The journal peer review process should certainly pick out invalid science, but it will also pick out some egregiously wrong science as well. As for what gets published, I guess one of the key questions in determining to what extent we can rely on a paper simply because it is peer reviewed is what proportion of peer-reviewed papers are valid science but wrong. I would like to suggest that the proportion is significant, even in real journals and in articles from reputable authors. One piece of evidence for this lies in the fact that many papers cite at least one previous work in order to disagree with it. Does this invalidate science? No; one way science progresses is through a conversation between papers in which the errors are gradually weeded out. But it does mean that being peer-reviewed is not a sufficient reason on its own to trust a paper. That's where adelady's peer-response metric fits in.
  17. Tom C: "One nice thing about the peer review system is that in, in order to be published you just have to convince three to five people who have the knowledge and skills to assess your argument that your argument is worth considering. You don't have to convince them that you are right; only that the case you make is cogent enough to be worth further thought." That can't be said enough. Part of my work is in peer review, though in a slightly different context. Peer review should be a part of any publication process. Peer reviewers should work with the advancement of knowledge in mind. That's all easily said. It is not easily done. For example, there are a number of areas of study where knowledge is saturated within self-imposed bounds, yet publication is still necessary in order to remain competitive in the job market. The journals in these areas (to some degree the bound-setters) become complicit with the interests of the desperate researchers, and they allow all sorts of esoterica and bizarre speculation to enter the larger conversation. It's only bizarre to outsiders, though. There is no mechanical, objective test for the validity and usefulness of a study (beyond the initial check for proper math) other than time, the scientific method, and the social production of science. Good, objective (within human limits) peer review puts an arm around an idea and introduces it to the public. Without peer review, ideas would stumble into the conversation, and we'd spend half our time picking them up off the ground and brushing the dirt off (and dealing with the subsequent psychological issues of having stumbled in public--something with which posters on this website have no experience (snort)).
  18. The "skeptic" paper that did get submitted to peer-review is an even better way to show the lack of substance of their claims. They should do that more often.
  19. I want to echo and amplify Tom P and DSL. Peer review is: 1) definitely *not* about whether the conclusions the authors draw about their results are correct 2) not really about whether their results themselves are correct either; more about weeding out the obvious mistakes 3) no guarantee that the procedures and analysis are even valid. Again, the process should filter out obvious severe problems, but reviewers can and do approve papers for publication when they suspect the methods are problematic--but promising for the science in some way. For example the analysis may seem innovative even if currently flawed. If the flaws are suspected rather than demonstrable, or otherwise not something feasibly addressed through revision, the reviewer may recommend publishing the paper as a conversation-starter, rather than simply rejecting it. Future refinements may address the perceived problems. I think Kevin C's "junk filter" is even better than "gatekeeper." But even that doesn't seem to quite capture it for me. Peer review keeps out stuff that appears to have no hope of adding value to the scientific discourse. Sometimes even things that reviewers can tell are probably wrong...are still *interesting*.
  20. Thanks all for the insightful posts. Funny that just when the main post titled "Who's your expert? The difference between peer review and rhetoric" was published, a renowned "skeptic" (Pat Michaels) came out with his rather inane (and at times juvenile) opinion piece in Forbes magazine (linked to @2 above). His diatribe shows that Pat Michaels is an expert in rhetoric and deception, and to that end is doing his utmost to sabotage the peer-review process by fabricating controversy and deliberately misinforming the public. One thing people ought to keep in mind about peer review is that it is not perfect, never was and never will be, and open review has its own issues (I have seen papers in open review that resembled food fights rather than constructive discussion). The beauty of peer review is that it represents part of a continuum. Once the work is published it is then subjected to review by all those who read it, and anyone can submit a challenge. In this way, errors missed by the reviewers can be identified and rectified, or, if the critique is without merit, the authors can defend their work. So time is the ultimate test, and thus far the physics underlying the theory of AGW has withstood scrutiny (aside from some bumps in the early days) going back to 1896, possibly even further back to the days of Fourier in the 1820s. The same cannot be said for papers published by 'skeptics' like Michaels: witness their sub-par science and the fiasco at the journal Climate Research (when 'skeptics' were engaged in pal review; funny how Michaels "forgets" that). In recent years 'skeptics' have had quite a few papers (or the data and methodologies used) overturned or refuted, some examples being Gerlich and Tscheuschner (2009), McLean et al. (2009), Douglass et al. (2007), Lindzen and Choi (2009), McKitrick and Michaels (2007) (yes, Michaels again), and Spencer and Braswell (2008, 2009). I discuss this issue in more detail here, with embedded links. Ari has a long list of refuted "skeptic" papers here.
  21. The comments above are interesting. But I note the article is not talking about the 'correctness' of peer reviewed papers or even the process. What it is pointing out is that if a person is claiming to be an expert, is writing stuff contrary to mainstream science and has not published on the subject matter at all, then what they say has to be treated with extreme caution. The Quadrant series of articles are at the extreme of nonsense of course but a good example nonetheless because the authors are promoting themselves as experts when they are not. Anyone reading even casually will see the contradictions and inconsistencies in the Quadrant articles and recognise them for utter nonsense, even if they've never heard of climate science before. However the authors are often referred to as experts by the media and the layperson might not realise the falsehood unless they checked their publication history.
  22. sout@21 "Anyone reading even casually will see the contradictions and inconsistencies in the Quadrant articles and recognise them for utter nonsense..." I think you are overly optimistic. Anyone reading for actual content might, but most people will read it to reinforce what they already think. No analysis at all.
  23. I have had several “debates” with my brother, a climate denier, on his ideas about climate science and also peer review. As a PhD immunologist, who has a number of reviewed papers and has reviewed many more, I think I have a pretty clear idea of what peer review is, and so I once asked him if he had ever peer reviewed anything. His reply was that he had, because his definition is that “Peer review is the evaluation of creative work or performance by other people in the same field in order to maintain or enhance the quality of the work or performance in that field.” It’s a clever little definition that comes from the Linux Information Project; he’s in the IT industry so it’s not surprising that he uses it. But his next statement was the kicker - “Nowhere does that definition deem publication in a peer reviewed journal as a primary vehicle for peer review.” When I read that, I knew that this was the avenue he was using for all his bluster about the IPCC reports and pretty much any peer-reviewed climate science. Of course it’s naïve in terms of science, because publication is our bread and butter, but it serves the climate denier so well. I’d bet a dollar that there are a few over at WUWT who believe that reading and commenting on blog posts serves as a valid form of peer review.
  24. les at #8 has highlighted one fundamental "extra" element of peer review; i.e. the fact that most studies submitted for publication have already been "tested" by extended peer review at colleague, group, departmental and conference presentations. Most submitted papers of value have already been honed by passing through this process, and are submitted in the expectation that they are already broadly suitable for publication. However, the most important element of peer review, and the one most deficient in the dreary antics of denialist efforts at publication (we could list the tiny set of dismal papers that have resulted), is "self peer review". The vast majority of scientists have a burning desire to find out stuff about the natural world, and have high personal standards of quality and integrity. They want to get it right. So in the vast majority of cases, one can review a paper with the expectation that the submitting authors are doing so in good faith. They're not trying to sneak substandard analyses and interpretations into the scientific literature to serve tedious agendas. Sadly, that doesn't apply to a very, very tiny group of individuals who try to pervert the peer review process.
  25. P Kelly, you mean all the stuff that goes with humans trying to engage in the social production of anything? Yah, it exists. Yah, the people who are privileged by it keep their mouths shut. Yah, the people who are misinformed as a result complain about it. Given the human conditions in which the process takes place, and knowing that no publication venue is free from human error, what publication venue do you most highly respect? E&E? You'd do better by examining alleged corruption on a case-by-case basis, and then drawing conclusions based on the compiled case analyses. Just complaining about corruption in the peer review process is akin to complaining of corruption in national politics. Yah, duh--what's your point? Oh good grief, mods - I really wanted an answer to those questions (even though one was slightly rhetorical).
    Response:

    [DB] Apologies, but when someone says "incestuous peer reviewing" that is beyond the pale.

    Feel free to reply to the rest, but be aware that this line of discussion treads thin ice with the comment policy.

  26. Patrick Kelly at 05:28 AM on 18 June, 2011 That's a dismal list of unsubstantiated stuff, Patrick. The peer review process can be frustrating but it works pretty well in my experience. Of course it relies on the essential scientific integrity of those involved, and that may be something you've failed to consider. You really need to give us some more specific information to flesh out your unsubstantiated assertions. For example, why do we need "responses to alleged corruption of the peer review process"? What "corruption", specifically? If it's only "alleged", why should we care? After all, there are well substantiated examples of the real or attempted "corruption of the peer review process", which help us to understand the motives of the "corruptors".
    Response:

    [DB] As I told DSL, this nears the line of acceptability.  The person you are engaging is likely unable to sustain comments that comply with the Comments Policy.

  27. Don't see the problem, moderator (DB). I'm suggesting that Patrick needs to supply specific examples if we are to consider his assertions/insinuations seriously. I've given two specific examples of real and attempted efforts to bypass proper peer review. If we're discussing a particular issue (Patrick raised "alleged corruption of the peer review process"), surely it's important to address this with respect to specifics. Incidentally, I don't actually understand what "The person you are engaging is likely unable to sustain comments that comply with the Comments Policy" means. Are you suggesting that my post will force him to write an inappropriate response?! Can you clarify please...
  28. O.K. I get it... you've deleted Patrick Kelly's post. I do think there are some really useful things to be considered in relation to the sort of insinuations that Patrick posted. They bear on the very nature of scientific knowledge, and its compilation in the scientific literature, a process which has sustained the scientific effort since the 17th century. The processes of scientific publishing are evolving (electronic publishing, open access, open peer review etc), but the essential elements are largely maintained. However, there's no question that some individuals with dubious agendas consider the peer review process a potentially soft target, in two ways. The first manifests in attempts to bypass proper peer review by sneaking sub-standard papers into the scientific literature, and thus further dodgy agendas by giving the pretence of "scientific validity" to scientifically unjustified notions. The second (which relates to the first) is the insinuation that the peer review process is tilted against the efforts of particular groups to get their work published. Richard Lindzen (much discussed here recently) provides an example of each of these [Lindzen and Choi, Geophys. Res. Lett. (2009) is an objectively flawed paper that really shouldn't be in the scientific literature, although it's not a big deal scientifically speaking; Lindzen and Choi (2011) is a flawed paper that is being used blogospherically to insinuate "unfair" peer review processes]. Likewise Soon and Baliunas, Climate Research (2003) [editors resigned; the publisher issued a statement that the paper shouldn't have been published in the form that it was], or Said, Wegman et al. (2008) [paper retracted by the publishers due to plagiarism] are examples of agenda-led papers that found their way into the literature by bypassing proper peer review. It's very useful to address specific examples to understand some of the processes of anti-science that cloud the abilities of citizens to make informed decisions...
  29. For reference's sake, here are published allegations by authors... I found these while spending some time trying to research Patrick Kelly's post. I'm NOT posting this to agree; as you know, I applaud John Mashey's work (http://www.sciencemag.org/content/332/6035/1250.summary) in countering Wegman et al. McLean's complaint is here: http://scienceandpublicpolicy.org/images/stories/papers/originals/agu_censorship.pdf and Douglass and Christy's is here: http://www.americanthinker.com/2009/12/a_climatology_conspiracy.html I didn't get further than Douglass and Christy in going down the list of debunked skeptical papers. I do think that the papers did get published, that being late doesn't matter in the larger scheme of things, and that what a paper says is ultimately more important than problems with the peer review process. Peer review isn't perfect... I know, having been a reviewer many, many, many years ago, that the temptation to be overly critical in order to score points is high (just out of grad school). It's an art, an imperfect one. There are justified criticisms of it in a general way, and illustrations of individual failings. There isn't a good alternative.
  30. Whoops, wrong button. I still say that if we can agree to consider anything viewed by a peer (Lord Monckton, perhaps?) as "peer reviewed", we'll really be able to help a bunch of denialists out. Who needs those pesky scientists anyhow?
  31. A measure of the impact of a research article is the number of citations it receives in the peer-reviewed scientific literature. These are imperfect because of the difficulty of separating positive and negative citations (self-citation can be more easily filtered), but such numbers are routinely used to estimate the quality of one's work when making a case for promotion or in other research assessment exercises. Citations are also used as a measure of the "impact factor" of various scientific journals.
  32. Peer review is gatekeeping, but does it keep out error? Peer review is review by a few people selected (by whom?) for their expertise. Only if published is a paper exposed to wider peer review. Above all, peer review is the very foundation on which science moves forward and, without being able to rely on it, science in its many areas of research could not be accepted as being error-free based on current knowledge. Little wonder then that those who do not expose their work to peer review, or could not pass it if they did, should attack it – and be resisted. But does this mean that it is always right or, for science, the best of all possible worlds?
  33. With all due credit to Churchill: peer-review is the worst form of scientific gatekeeping except all the others that have been tried.
    Moderator Response: [Dikran Marsupial] c.f. Newtongate ;o)
  34. 28 - Chris: Regarding the changes in how publishing works etc. It's interesting to note that in HEP, ePublishing started way ahead of many other disciplines, for various reasons, not least of all access to cutting-edge computing in the '80s. The result of that was - apart from the invention of the web - a vast increase in community-based peer review. Almost every paper published in HEP was/is issued as a pre-print and widely circulated within and between labs and departments. I think folks are kind of missing a big part of the point of a) publishing and b) peer review. An acceptable paper isn't, in any way, shape or form, required to be definitively true. Papers are expected to advance knowledge through the introduction of new data or the application of new methods. Peer review is, principally, a check that prior art has been considered reasonably completely, that there is originality, and that results which conflict with prior published work are addressed satisfactorily. So, 'peers' are people who are well versed in the relevant field and would know the state of the art - or at least be able to competently check the facts as required. It's not hard. In the climate world I see two problems. First, it is widely multidisciplinary. That makes it hard for someone up to their elbows in ice to know the full state of the art among the sun-gazers. That's not new in academia and seems pretty well under control. However, it is from this fact that we get a lot of "it's the sun", "it's natural cycles" etc. pronouncements. The other issue is the wide range of material being hoofed around as 'new' which contains neither new methods nor new data; which has not originated from within a world of seminars, coffee-hour chats, conferences, pre-prints and publications; and which has not even been subject to (narrowly speaking) peer review. i.e. rubbish.
