
Are surface temperature records reliable?

What the science says...


The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.

Climate Myth...

Temp record is unreliable

"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)

Surveys of weather stations in the USA have indicated that some of them are not sited as well as they could be. This calls into question the quality of their readings.

However, when processing their data, the organisations which collect the readings take into account any local heating or cooling effects, such as might be caused by a weather station being located near buildings or large areas of tarmac. This is done, for instance, by weighting (adjusting) readings after comparing them against those from more rural weather stations nearby.
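
To make the idea concrete, here is a minimal, hypothetical sketch of that kind of adjustment: compare a station's readings against the average of nearby rural stations and remove any persistent local offset. Real homogenisation algorithms are considerably more sophisticated; the function and data layout below are illustrative assumptions only, not the procedure any particular agency uses.

import numpy as np

def adjust_against_rural_neighbours(station, rural_neighbours):
    """Remove a station's persistent local offset (e.g. urban heating) by
    subtracting its mean difference from the average of nearby rural stations.
    `station` and each neighbour are equal-length annual mean series (deg C)."""
    station = np.asarray(station, float)
    rural_mean = np.mean(np.asarray(rural_neighbours, float), axis=0)
    offset = np.mean(station - rural_mean)   # persistent local warm/cool bias
    return station - offset                  # offset removed; the trend is unchanged

Removing a constant offset in this way changes the station's level but not its trend, which is why well-sited and poorly-sited stations can still tell the same story about warming.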

More importantly, for the purpose of establishing a temperature trend, the relative level of single readings is less important than whether the pattern of all readings from all stations taken together is increasing, decreasing or staying the same from year to year. Furthermore, since this question was first raised, research has established that any error that can be attributed to poor siting of weather stations is not enough to produce a significant variation in the overall warming trend being observed.

It's also vital to realise that warnings of a warming trend -- and hence Climate Change -- are not based simply on ground level temperature records. Other completely independent temperature data compiled from weather balloons, satellite measurements, and from sea and ocean temperature records, also tell a remarkably similar warming story.

For example, a study by Anderson et al. (2012) created a new global surface temperature record reconstruction using 173 records with some type of physical or biological link to global surface temperatures (corals, ice cores, speleothems, lake and ocean sediments, and historical documents).  The study compared their reconstruction to the instrumental temperature record and found a strong correlation between the two:

Figure 1: Temperature reconstruction based on natural physical and biological measurements (Paleo, solid) and the instrumental temperature record (MLOST, dashed), relative to 1901-2000. The range of the paleo index values is coincidentally nearly the same as that of the instrumental global surface temperature record, although the quantities are different (index values versus temperature anomalies in °C).

Confidence in climate science depends on the correlation of many sets of these data from many different sources in order to produce conclusive evidence of a global trend.

Last updated on 13 January 2013 by dana1981.


Comments


Comments 51 to 100 out of 322:

  1. Gavin Schmidt has a brief response to the bizarre claims by Smith & Coleman that the "real" temperature stations' data have been replaced by averages of unrepresentative stations' data, and that data have been destroyed. Gavin's response is to Leanan's comment #9 on 17 January 2010 in the comments on the RealClimate post 2009 Temperatures by Jim Hansen.
  2. #53, have a look at http://www.uoguelph.ca/~rmckitri/research/nvst.html
    which graphs temperature against station numbers. You can also access the University of Delaware mpeg file, which animates the global station numbers from 1950-1999. Watch China & the Soviet Union. If you really want to spend the time, go to GISS and check out the temperature graphs for stations in the SU...you will find 'most' of them stopped sending data after 1990.
  3. Re: #72 Jeff Freymueller at 17:00 PM on 27 February, 2010
    (in: Senator Inhofe's attempt to distract us from the scientific realities of global warming)
    http://skepticalscience.com/news.php?p=2&t=76&&n=147#9477
    "Anyone can download the original data and reanalyze it"

    Jeff,

    I am trying to do that, but the only source I know of is GHCN-Monthly Version 2 at the NCDC site:

    http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php

    Do you know a better source? If so, a pointer is welcome, because this particular dataset is a complete mess. It is dominated by USHCN, poorly documented, metadata are insufficient, the adjustment procedure is arbitrary, and coverage AFTER 1990 is deteriorating rapidly.

    Look into it and you'll see.

    In a post deleted by John I have provided some details. Suffice to say the NCDC adjustment algorithm has at least four outstanding break points at 1905, 1920, 1990 & 2006.

    On average for the 1920-1990 period they have applied a 0.36°C/century warming adjustment to the entire dataset. It is essentially the same for sites flagged "rural" and the rest (urban & suburban); there is no statistically significant difference in adjustment slopes (perhaps based on the counterfactual assumption of no urbanization in this period).

    The dataset does not meet any reasonable open source standard.

    Still, NCDC at its site says it was "employed" in IPCC AR4 20th century temperature reconstruction.

    http://ber.parawag.net/images/GHCN_adjustments.jpg

    Looks like it is high time for a transparent open source community project to recollect worldwide temperature histories along with site assessments and ample metadata.
  4. #57, Berényi Péter

    The details are outside my specialty so I can offer only very limited help here. I went to the ftp site and poked around for a minute or two and found this readme file:

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt

    The ftp site looks like it is designed to make it simple for people to write automated scripts to grab all the data or updates, which is what I would be doing if I used this data. I do know from reading other blogs that the raw data is also available in addition to adjusted data, so you will have to poke around a bit, or send a question to the email address in the readme file if you can't find what you need after reading the documentation.

    GISS has the source code for its software online, so you can look into that for examples of reading the files and so on.
  5. Thanks, Jeff. It's GHCN-DAILY Version 2.1, I'll look into it.

    $ wget -r ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/
  6. To the "other lines of evidence for rising temperatures" we can also add the indirect evidence you mentioned elsewhere:

    - Greenland and Antarctica show net ice loss
    - Acceleration of glaciers in Greenland and Antarctica, particularly within the last few years.
    - Sea-ice loss in the Arctic is dramatically accelerating
    - Accelerating decline of glaciers throughout the world.
    - Rapid expansion of thermokarst lakes throughout parts of Siberia, Canada and Alaska
    - Disintegration of permafrost coastlines in the arctic
    - Poleward migration of species
    - Poleward movement of the jet streams (Archer 2008, Seidel 2007, Fu 2006)
    - Widening of the tropical belt
  7. Some recent analysis of USA surface temperatures (16th March 2010) by Dr. Roy Spencer (http://www.drroyspencer.com/category/blogarticle/) suggests that there is sufficient doubt, and perhaps significant differences to be found when closely examining the published data, to warrant a closer examination in order to accurately quantify the UHI effect, how it impacts the accepted trends, and how it affects other data that was calibrated against accepted surface measurements.
  8. johnd,
    it's always a good thing when other scientists come out with a different analysis. But it needs to be done at the same quality level. The dataset Spencer uses did not go through the same quality control as GHCN; also, the data are not homogenized. Spencer corrected the raw data just for altitude and checked for water coverage.
    Given that similar and more comprehensive analysis of the link between population and UHI has already been performed and accounted for, I'd be more careful before claiming that "there is sufficient doubt".

    There are other things that I think need to be clarified. For example, Spencer found large UHI warming-population density differences for different years, which I find hard to explain. Even larger differences are found between the USA and the rest of the world.
    Also, there's a sharp increase in the warming bias already between population densities of 5 and 20 per km², which again I find hard to understand. And it's worth noticing that the whole claim is based on the data for population densities below 200 per km², above which Spencer's results agree with CRUTem3. Spencer should also explain how satellite-based temperatures can be fooled by population density.
    One last remark: the ISH dataset is released by NOAA, which uses the GHCN dataset for its analysis of temperature trends, I'm sure for good reasons.
  9. Riccardo, Spencer wasn't presenting his analysis as complete, but believed what differences he found are sufficient to justify a more complete independent analysis. Given that there are few stations where one can be confident of the data not being biased by the UHI effect, perhaps it does warrant careful analysis. Aren't satellite based temperature measurement equipment calibrated against "known" conventional temperature measurements? If not what are they calibrated against? The accuracy of satellite measurements despite the sophisticated instrumentation, will only be as accurate as the standard used to calibrate them.
  10. johnd,
    I know what Spencer and you think. I was giving reasons to think differently.
  11. johnd writes: Aren't satellite based temperature measurement equipment calibrated against "known" conventional temperature measurements? If not what are they calibrated against? The accuracy of satellite measurements despite the sophisticated instrumentation, will only be as accurate as the standard used to calibrate them.

    The AMSU temperature measurements are calibrated against two targets. There's a "hot" target located on the satellite itself (whose temperature is directly monitored using high-precision platinum resistance thermometers). For a "cold" target the sensor turns to measure the cosmic background radiation in open space (a very cold 3 K). Real earth temperatures will fall between these two targets.

    The close agreement between satellite and surface temperatures is a bit of a problem for those skeptics who believe that the surface record is hopelessly contaminated by UHI effects. I've seen many commenters on other sites try to reconcile this by assuming that the satellite record is somehow "tuned" to match the surface trend, or surface stations are used to "calibrate" the AMSU satellite temperatures. But no such adjustment is actually used, and the close agreement between satellite and surface temperatures is real.
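
    For illustration, here is a minimal sketch of the two-point (hot target / cold space) calibration idea described above. The numbers, the function name and the simple linear interpolation are assumptions for illustration; the real AMSU processing also applies nonlinearity and antenna-pattern corrections.

    def counts_to_brightness_temp(counts, counts_cold, counts_hot, t_hot, t_cold=2.725):
        """Two-point calibration: map raw radiometer counts to a brightness
        temperature by linear interpolation between the cold-space view and
        the onboard hot target (monitored by platinum resistance thermometers)."""
        gain = (t_hot - t_cold) / (counts_hot - counts_cold)   # kelvin per count
        return t_cold + gain * (counts - counts_cold)

    # Hypothetical counts; a hot target at 290 K and cold space at ~2.7 K give a
    # mid-troposphere-like brightness temperature of roughly 255 K.
    print(round(counts_to_brightness_temp(14200, 1200, 16000, 290.0), 1))
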
  12. #61 Ned at 00:20 AM on 5 April, 2010
    The close agreement between satellite and surface temperatures is a bit of a problem for those skeptics who believe that the surface record is hopelessly contaminated by UHI effect

    Ned, the problem with satellite "temperatures" is that satellites do not measure temperature, not even color temperature, but for a specific layer of atmosphere (e.g. lower troposphere) brightness temperature is measured in a single narrow IR band. This measurement may be accurate and precise, but it is insufficient in itself to recover proper atmospheric temperatures. In order to make that transition, you need an atmospheric model. With the model atmosphere you can calculate the brightness temperature backwards and tune parameters until a match is accomplished with satellite brightness temperature data. Then you can look at the lower troposphere temperature of the model and call it temperature.

    However, with no further assumptions, the relation is not reversible, i.e. many different atmospheric states yield the same brightness temperature as seen from above.

    The very assumptions in the model that make reverse calculation possible are the hidden link back to surface temperature data. For there is no other way to verify model reliability than to compare it to actual in situ measurements. Therefore if the surface temperature record is unreliable, so are the atmospheric models used to transform satellite-measured brightness temperatures into atmospheric temperatures. That makes the whole satellite thing dependent on surface data, in spite of independent sensor calibration methods.
  13. BP #62, if what you say were true then would not the satellite and surface temperature results diverge more and more as time goes by?

    After all, your argument is essentially that the satellite temperature data was 'set' to conform to surface results (though in fact UAH originally came up with results significantly different from the surface results and only later came to line up after several errors were identified). However, now that those 'assumptions' needed to match the satellite record up are in place they are fixed. If the surface temperature continued to change, per the 'UHI error theory' for instance, then it should diverge from the satellite record which is still based on the assumptions needed to match up to the older temperatures.

    Yet we aren't seeing this sort of growing divergence. I believe that is because you are simply incorrect about the satellite record being deliberately 'set equal' to the surface record... as also demonstrated by the fact that they originally did not match and the primary adjustments made since then have had to do with correcting for sensor drift rather than baselining to the surface data.
  14. #63 CBDunkerson at 20:40 PM on 27 June, 2010
    though in fact UAH originally came up with results significantly different from the surface results and only later came to line up after several errors were identified

    Yes. And the motivation for debugging was the discrepancy. But the thing about conversion of brightness temperatures to proper temperatures using an atmospheric model was just a guess; I have not looked into the issue deeply enough yet.

    However, I am pretty sure the surface database is tampered with.

    I have downloaded both v2.mean.Z and v2.mean_adj.Z from the GHCN v2 ftp site. According to the readme file data in the latter one are "adjusted to account for various non-climatic inhomogeneities".

    Then I selected pairs of temperature values where, for a given 12-character station ID (which includes country code, nearest WMO station number, modifier and duplicate number) and a specific year and month, both files contained valid temperatures (4,864,014 pairs for 1835-2010).

    For each pair I have calculated the adjustment as the difference of the value found in v2.mean_adj and v2.mean. Having done that, I have taken the average of adjustments for each year. It looks like this:



    It is really hard to come up with an error model that would justify this particular pattern of adjustments. One is inclined to think it's impossible.

    Note that for the last ninety years adjustments for various non-climatic inhomogeneities alone add about 0.26°C to the warming trend. If we also take into account the UHI effect which is not adjusted for properly, not much warming is left. Without soot pollution on ice and snow, we probably would have severe cooling.
  15. BP - a script and post on this: do climatologists falsify data?

    I believe the papers used for homogenization are listed here: USHCN. Do you have a problem with the methodology used there?
  16. "without soot pollution on ice & snow" - you mean you think that you can explain warming in ocean, satellite, and surface record away as "anomalies" as poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sealevel rise as by soot-induced melting alone without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOS and at surface. That is drawing one very long bow, BP. You could be right but I will stick with the simpler explanation - we ARE warming and our emissions are the major cause of it.
  17. BP, what act of contrition will you offer should your remark of "tampered with" prove faulty?

    Perhaps a more careful choice of words would be better?

    Also, this isn't some kind of fad thing you're bringing from elsewhere, is it?

    I'm not being nasty, just am bothered with words smacking of fraud and am really bored with impressionist fads. As I've said before, you make an effort but that makes it -more- disappointing when you succumb to the freshly-revealed-climate-science-conspiracy-of-the-week.

    Anyway how about explicitly publishing your (admittedly simple sounding but I'm a simpleton) arithmetic method you're using to produce your datapoints?
  18. I used to be impressed with the allowances you grant to BP and his wild (and long) meanderings laced with accusations (such as 'tampering') and insinuations, but it is starting to get very boring and frustrating. How many times can such accusations be allowed without proof, even if followed by apologies - although the apologies are never (as far as I can see) related to the accusations made, as can be seen for his 'apology' on the Ocean acidification thread, where he apologised for getting angry but not for the general accusations against 'climate science'.
  19. Not to swerve completely off-topic JMurphy, but I'm not sure I can think of a single other skeptic I've witnessed actually admitting an error other than BP, here, though I can't remember exactly what it was about or where, just that it was striking in its very novelty.

    I'm bothered by the fraud thing, very much so because it's hard to talk with somebody who starts with an assumption that data is cooked and I have to wonder how virtually all of our instrumental records could be either hopelessly flawed or run by the Mafia but on the other hand we're also sometimes treated to interesting little essays like this.

    I've spent (wasted according to some people) a lot of time in the past 3 years hanging out on climate blogs and Berényi Péter is quite unlike any other doubter I've run across.
  20. doug_bostrom wrote :

    "...but on the other hand we're also sometimes treated to interesting little essays like this".

    And to add one last comment on this diversion: that comment just proves my point. Anywhere else you care to research the subject of the Dialogo, you will find Simplicio described as a combination of two contemporary philosophers: Cesare Cremonini, who famously refused to look through the telescope; and Ludovico delle Colombe, one of Galileo's main detractors. You will also find evidence of Galileo's good connections with Maffeo Barberini (later Urban VIII), who had written a poem in praise of Galileo's telescopic discoveries and actually agreed to the publication of the Dialogo. Why, then, would Simplicio be a parody of Urban?
    It's all part of a pattern : BP finds the evidence he likes and agrees with and everything else (and everyone else) is wrong, fraudulent or part of the conspiracy.
  21. #65 scaddenp at 12:39 PM on 29 June, 2010
    I believe the papers used for homogenization are listed here.USHCN Do you have a problem with the methodology used here?

    According to the USHCN page you have linked they do adjustments in 6 steps. The first one "with the development at the NCDC of more sophisticated QC procedures [...] has been found to be unnecessary"

    Otherwise the procedure goes like this:

    RAW -> TOBS -> MMTS -> SHAP -> FILNET -> FINAL

    Proper process audit is impossible, because
    1. unified documentation of the procedure, including scientific justification and specification of algorithms applied is not available
    2. for step 2, 3, 4 & 6 at least references to papers are provided, for step 5 not even that
    3. neither executables nor source code and program documentation is provided for programs TOBS, MMTS, SHAP & FILNET.
    4. metadata used by the programs above to do their job is missing and/or unspecified
    5. a clear statement of whether the same automatic procedure was applied to GHCN v2, which is hinted at on the USHCN Version 1 site, is missing (if the arcane wording "GHCN Global Gridded Data" in the HTML header of that page is dismissed)

    Well, I have found something, not referenced on either the USHCN or GHCN pages. It is ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/USHCN_v52d.20100217.tar.gz. There is software there (written in Fortran 77) and some rather messy documentation, including an MS Word DOC file titled:

    USHCN Version 2.0 Update v1.0
    Processing System
    Documentation (another version number here?)
    Draft
    August 46, 2009

    Claude Williams
    Matthew Menne

    I do not know how authoritative it is. But I do know much better documentation is needed even on low-budget projects, not to mention one that multi-trillion-dollar policy decisions are supposed to be based on.

    The "Pairwise Homogeneity Algorithm (PHA)" promoted (but not specified) in this document is not referenced on any other USHCN or GHCN page. Google search "Pairwise Homogeneity Algorithm" site:gov returns empty.

    It would be a major job to do the usual software audit on this thing. One has to hire & pay people with the right expertise for it, then publish the report along with data.

    However, any scientist would run away screaming upon seeing a calibration curve like this, wouldn't she? It is V-shaped, with clear trends and multiple step-like changes. One would think that with 6736 stations spread all over the world and 176 years in time, providing 4,864,014 individual data points, the errors would be a little more independent, allowing the central limit theorem to kick in.

    At the least, some very detailed explanation is needed of why there are unmistakable trends in the adjustments commensurate with the effect to be uncovered, and why this trend has a steep downward slope for the first half of the epoch while just the opposite is true for the second half.

    BTW, the situation with USHCN is a little bit worse. Adjustment for 1934 is -0.465°C relative to those applied to 2007-2010 (like 0.6°C/century?). I'll post the USHCN graph later.

    #66 scaddenp at 12:39 PM on 29 June, 2010
    you think that you can explain away the warming in the ocean, satellite, and surface records as "anomalies" of poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sea-level rise by soot-induced melting alone, without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOS and at the surface. That is drawing one very long bow

    One thing at a time, please. Let's focus on the problem at hand first, the rest can wait.
  22. BP #64

    I suggest you have a look at what the Clear Climate Code project has to say about the ghcn data you've examined.

    Scientific code is almost never pretty - the goals are very different to what commercial programmers would expect, and technical debt accumulates at far faster rates compared even to poorly managed commercial projects. This is caused by the two camps having distinctly different goals (technical incompetence on the side of the scientists, and scientific incompetence on the side of the programmers, to be uncharitable).
  23. How depressing. In post #62 BP makes a whole series of very specific claims about the satellite temperature record, all of which are stated as factual with no qualifiers or caveats. Then, two posts later in #64, he casually mentions that those earlier statements were actually "just a guess, I have not looked into the issue deeply enough, yet."

    Then, to compound this, BP proceeds to claim to have discovered evidence of "tampering" (his own word) with the GHCN data set, based on comparing the raw and adjusted GHCN data sets using a naive unspatial averaging of all station data.

    I have pointed out to BP previously that you cannot compare the results of a simple global average of all stations to the gridded global temperature data sets because the stations are not distributed uniformly. Given that many people have looked into this in vastly more detail than BP, and have done it right instead of doing it wrong, I cannot fathom why BP thinks his analysis adds any value, let alone why it would justify sweeping claims about "tampering".

    Here's a comparison of the gridded global land temperature trend, showing the negligible difference between GHCN raw and adjusted data:



    This is based on results from Zeke Hausfather, one of a large and growing number of people who have done independent, gridded analyses of global temperature using open-source data and software.

    BP claims that the GHCN adjustment added "0.26 C" to the warming trend over the last 90 years (is that 0.26 C per 90 years, or is it 0.26 C per century over the last 90 years? "0.26 C" is not a trend).

    Using a gridded analysis, the actual difference in the trends is 0.04 C/century over the last 90 years. Over the last 30 years, the difference in trend between the raw and adjusted data is 0.48 C/century ... with the adjusted trend being lower than the raw trend. In other words, the "tampering" that BP has detected is, over the past 30 years, reducing the magnitude of the warming trend.

    Then, of course, there's the issue that land is only about 30% of the earth's surface. Presumably the effect of any adjustment to the land data needs to be divided by 3.33 to compare its magnitude to the global temperature trend.

    Once again, BP has drawn extreme and completely unjustified conclusions ("tampering") based on a very weak analysis. Personally, I am getting really tired of seeing this here.
  24. #67 doug_bostrom at 12:44 PM on 29 June, 2010
    how about explicitly publishing your (admittedly simple sounding but I'm a simpleton) arithmetic method you're using to produce your datapoints?

    Listen, I think the description of the procedure followed is clear enough; anyone can replicate it. I am not into "publishing" either, it is not my job. It is not science proper. That would require far more resources and time. I am just trying to show you the gaps where PhD candidates could find their treasure. The trails can be followed, and if anyone is concerned about it, things I write here are published under the GNU Free Documentation License.

    I have not written proper software either, just used quick-and-dirty one-liners in a terminal window. Anyway, here you go. This is what I did for USHCN as recovered from the .bash_history file.

    [Oops. Pressed the wrong button first]

    # Download the GHCN v2 raw (v2.mean) and adjusted (v2.mean_adj) monthly files
    $ wget ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean*
    # Keep only US stations (GHCN country code 425)
    $ grep '^425' v2.mean > ushcn.mean
    $ grep '^425' v2.mean_adj > ushcn.adj
    # Unpack each fixed-width station/year record into one "ID_year_month value"
    # line per month, dropping the -9999 missing-value flags
    $ cat ushcn.mean|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.mean_monthly
    $ cat ushcn.adj|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.adj_monthly
    # Build sorted key lists (first 20 characters = station_year_month) and verify
    # there are no duplicate keys
    $ cut -c-20 ushcn.mean_monthly | sort > ushcn.mean_monthly_id
    $ cut -c-20 ushcn.adj_monthly | sort > ushcn.adj_monthly_id
    $ uniq -d ushcn.mean_monthly_id
    $ uniq -d ushcn.adj_monthly_id
    # Keep the keys present in both the raw and the adjusted file
    $ sort ushcn.mean_monthly_id ushcn.adj_monthly_id | uniq -d > ushcn.common_monthly_id
    # Interleave raw (prefix 0), adjusted (prefix 1) and common-key (prefix 2) lines, sorted by key
    $ (sed -e 's/^/0 /g' ushcn.mean_monthly; sed -e 's/^/1 /g' ushcn.adj_monthly; sed -e 's/^/2 /g' ushcn.common_monthly_id;)|sort +1 -2 +0 -1 > ushcn.composite_list
    # For every common key, adjustment = adjusted value minus raw value (tenths of °C)
    $ sed -e 's/ */ /g' ushcn.composite_list|perl -e 'while (<>) {chomp; ($i,$id,$t)=split; if ($i==2 && $id eq $iid && $id eq $iiid) {$d=$tt-$ttt; printf "%s %d\n",$id,$d;} $iiid=$iid; $iid=$id; $ttt=$tt; $tt=$t;}' > ushcn.adjustments_monthly_by_station
    # Strip the station ID and month, leaving "year adjustment" pairs
    $ sed -e 's/^............_//g' -e 's/_.. / /g' ushcn.adjustments_monthly_by_station | sort > ushcn.adjustments_annual_list
    # Sentinel so the averaging loop flushes the last year
    $ echo '#' >> ushcn.adjustments_annual_list
    # Average the adjustments for each year and convert tenths of °C to °C
    $ cat ushcn.adjustments_annual_list | perl -e 'while (<>) {chomp; ($d,$t)=split; if ($d ne $dd && $dd ne "") {$x/=$n*10; printf "%s\t%.3f\n",$dd,$x; $n=0; $x=0;} $n++; $x+=$t; $dd=$d;}' > ushcn.adjustments_annual.txt
    # Inspect/plot the result
    $ openoffice -calc ushcn.adjustments_annual.txt
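
    For readers who find the one-liners hard to follow, here is a rough Python sketch of the same computation (the field offsets follow the GHCN v2 fixed-width layout used above and the file names are the same; this is an illustration, not the original analysis code):

    from collections import defaultdict

    def read_monthly(path, country_prefix="425"):
        """Return {(station_id, year, month): temperature in tenths of deg C},
        skipping the -9999 missing-value flags."""
        values = {}
        with open(path) as f:
            for line in f:
                if not line.startswith(country_prefix):
                    continue          # keep US stations only
                sid, year = line[0:12], int(line[12:16])
                for m in range(12):
                    t = int(line[16 + 5 * m : 21 + 5 * m])
                    if t != -9999:
                        values[(sid, year, m + 1)] = t
        return values

    raw = read_monthly("v2.mean")
    adj = read_monthly("v2.mean_adj")

    # Adjustment = adjusted minus raw, for station/months present in both files,
    # averaged by year and converted from tenths of a degree to deg C
    per_year = defaultdict(list)
    for key in raw.keys() & adj.keys():
        per_year[key[1]].append(adj[key] - raw[key])
    for year in sorted(per_year):
        print(year, round(sum(per_year[year]) / len(per_year[year]) / 10.0, 3))
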
  25. In my comment above, the link associated with the phrase "many people have looked into this in vastly more detail than BP" is sub-optimal. The link there is to a cached page at google, when it should be to Zeke Hausfather's comparison of GHCN analyses.
  26. BP writes: I have not written proper software either, just used quick-and-dirty one-liners in a terminal window.

    Maybe you should stop making allegations of fraud based on "quick-and-dirty one-liners"? Especially on a topic where many people have invested huge amounts of their own time on far more sophisticated analyses?
  27. #73 Ned at 21:52 PM on 29 June, 2010
    Here's a comparison of the gridded global land temperature trend, showing the negligible difference between GHCN raw and adjusted data

    What you call negligible is in fact a 0.35°C difference between adjustments for 1934 and 1994 in your graph. If it is based on Zeke Hausfather, then it's his assessment.

    Now, 0.35°C in sixty years makes a 0.58°C/century boost for that period. Hardly negligible. It is actually twice as much as the adjustment trend I have calculated above (0.26°C in ninety years, 0.29°C/century). About the same order of magnitude effect is seen for USHCN.



    It is a 0.56°F (0.31°C) difference between 1934 and 1994, which makes a 0.52°C/century increase in trend for this period.

    Therefore, if anything, my calculation was rather conservative relative to more careful calculations. It should also be clear it has nothing to do with the grid, so stop repeating that, please.

    What is not shown by more complicated approaches is the curious temporal pattern of adjustments to primary data (because they tend to blur it).

    Finally dear Ned, would you be so kind as to understand first what is said, then you may post a reply.
  28. If anyone is uncertain about what to make of the conflicting claims from BP and me, please do the following:

    (1) Go here and download the spreadsheet that Zeke Hausfather compiled, showing annual global temperature reconstructions from many different analyses by many different people. The data are described in Zeke's post here.

    (2) Click on the tab labeled "Land Temp Reconstructions". If you want to see the effects of the GHCN adjustment, select columns A (year), E (v2.mean) and F (v2.mean_adj).

    (3) To see the effect of the adjustment, subtract column E from column F. To determine the trend in this adjustment over any period of time (like BP's weird choice of 1934-1994) select that range of years and fit a linear model to the differences, as a function of year (and multiply the slope by 100 to convert from degrees C/year to degrees C/century). For 1934-1994 the slope is 0.19 C/century. As stated above, over the past 30 years the adjustments have actually reduced the trend (slope -0.48 C/century).
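
    As an illustration of step (3), here is a minimal sketch of the slope calculation and the unit conversion; the arrays are placeholders standing in for the year and temperature columns of the spreadsheet:

    import numpy as np

    def adjustment_trend(years, raw, adjusted):
        """Fit a linear model to (adjusted - raw) versus year and return the
        slope in degrees C per century (slope per year times 100)."""
        diff = np.asarray(adjusted, float) - np.asarray(raw, float)
        return np.polyfit(np.asarray(years, float), diff, 1)[0] * 100.0

    # Made-up numbers, only to show the units: a 0.002 C/yr drift in the
    # adjustment comes out as 0.2 C/century.
    yrs = np.arange(1934, 1995)
    print(adjustment_trend(yrs, np.zeros(yrs.size), 0.002 * (yrs - 1934)))  # ~0.2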

    Contrary to what BP claims, it's really important to use some form of spatial weighting (e.g., gridding) when doing this, because the stations are not uniformly distributed. Taking a simple average would only be appropriate if there were no spatial autocorrelation in the adjustments. Given that stations in different countries have been administered differently, this seems like an extremely unlikely assumption.

    BP's claim that "more complicated" (that is, more correct) methods don't show the temporal evolution of adjustments is likewise inexplicable. All of the reconstructions produce annual temperature estimates.
  29. #78 Ned at 01:39 AM on 30 June, 2010
    Contrary to what BP claims, it's really important to use some form of spatial weighting (e.g., gridding) when doing this, because the stations are not uniformly distributed.

    OK, I have understood what's going on. In a sense you are right, but for a different reason than the one you think you are right for.

    You do not have to do any gridwork, just treat the US and the rest of the world separately.

    1. Until about 2005 the USHCN used to be heavily overrepresented in GHCN (since then it is getting underrepresented, 4.16% in 2010).
    2. Between about 1992 and 2005 up to 90% of GHCN readings came from USHCN, before that time it was 20-40%.
    3. Since 2006 there has been no adjustment in USHCN, and none since 1989 for the rest of the world.
    4. Adjustments for USHCN are much bigger than for the rest of the world. They also follow a different pattern.

    It looks like two different adjustment procedures were applied to the US data and the rest of the world and the results were only put together after that.

    US land area is 9,158,960 km², the world's land area is 148,940,000 km², so the US has 6.1% of the land. If the world is divided into two "regions", the US and the rest, and an area-weighted average is calculated, the trend in the global adjustment is 0.1°C/century for 1900-2010 (the same figure is 0.39°C/century for USHCN).
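
    As a minimal sketch of that area weighting (BP gives the US figure; the rest-of-world trend is not stated explicitly, so the value used below is only a hypothetical input chosen to reproduce a ~0.1°C/century total):

    def area_weighted_trend(trend_us, trend_rest, us_land_fraction=9_158_960 / 148_940_000):
        """Combine two regional adjustment trends into a global figure, weighting
        each region by its share of the world's land area (US: about 6.1%)."""
        return us_land_fraction * trend_us + (1.0 - us_land_fraction) * trend_rest

    # 0.39 C/century for USHCN plus a hypothetical 0.08 C/century for the rest
    # of the world gives roughly 0.1 C/century overall.
    print(round(area_weighted_trend(0.39, 0.08), 2))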

    Unfortunately this peculiar feature of GHCN is undocumented.
  30. Berényi Péter, you're part way there. Handling the US and the rest of the world separately is a good start, and for a simple order-of-magnitude guesstimate it might be enough.

    But as with everything in statistics you need to understand your assumptions. Treating the rest of the world as homogeneous will not yield an unbiased estimate of the global mean adjustment unless either (a) the stations are distributed approximately uniformly in space or time, or else (b) the expected value of the adjustment for station X in year Y is independent of that station's location. (For that matter, this also applies to treating the US as homogeneous).

    We know that (a) is untrue. So the question is whether (b) is true, or close enough to true that you can live with the resulting bias. (As an aside, the existence of nonstationarity in the expected value of the adjustment is not evidence of "tampering" ... there are many valid reasons why stations in country 1 or state 1 would require different types of adjustments than stations in country 2 or state 2).

    Again, you can just assume that the impact of any spatial heterogeneity will be small, and ignore the bias in your calculations. That's essentially what you did above. Alternatively, you can weight the data spatially, e.g. by gridding, and remove the problem.
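
    For concreteness, here is a minimal sketch of the kind of spatial weighting meant by "gridding": average stations within 5°x5° latitude/longitude cells, then combine the cell means with cos(latitude) area weights. The cell size and the data layout are assumptions for illustration.

    import math
    from collections import defaultdict

    def gridded_mean(stations, cell_deg=5.0):
        """Average station values within lat/lon grid cells, then combine the
        cell means with cos(latitude) weights, so that densely sampled regions
        do not dominate the global figure. `stations` is an iterable of
        (lat, lon, value) tuples."""
        cells = defaultdict(list)
        for lat, lon, value in stations:
            cells[(math.floor(lat / cell_deg), math.floor(lon / cell_deg))].append(value)
        weighted_sum = total_weight = 0.0
        for (ilat, _), values in cells.items():
            weight = math.cos(math.radians((ilat + 0.5) * cell_deg))  # cell-centre latitude
            weighted_sum += weight * (sum(values) / len(values))
            total_weight += weight
        return weighted_sum / total_weight

    # Two stations crowded into one cell carry no more weight than one lone
    # station in another cell of the same latitude band:
    print(gridded_mean([(40.1, -74.0, 1.0), (40.2, -74.5, 1.0), (42.0, 30.0, 0.0)]))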

    Does this help?
  31. It seems to me that a lot of the questions people ask about the surface temperature data have been answered, at least in part, by the various "independent" (i.e., non-official) reconstruction tools that have been developed over the past six months.

    For example, all of the following questions have been addressed:

    (1) Can the "official" temperature records (GISSTEMP, HADCRU, NCDC) be replicated? [Yes]

    (2) Does the GHCN adjustment process have a large effect on the surface temperature trend? [Generally no]

    (3) Does the decrease in high latitude (or high altitude, or rural) stations have a large effect on the temperature trend? [No]

    (4) Does the location of stations at airports have a large effect on the temperature trend? [No]

    (5) Does the overall decline in station numbers have an effect? Don't you need thousands or tens of thousands of stations to compute an accurate global temperature trend? [No, it can actually be done with fewer than 100 stations]

    There are probably other questions that I'm forgetting. Anyway, here are some handy links to tools that people have put together for do-it-yourself global temperature reconstruction. Many (but not all) of these are open-source, and many are very flexible, so that you can create reconstructions using different combinations of stations to test particular hypotheses.

    * Clear Climate Code (exact replication of GISSTEMP using Python).

    * Ron Broberg's blog "The Whiteboard" (replication of GISSTEMP and CRUTEMP)

    * Nick Stokes's GHCN processor

    * GHCN Processor by Joseph at Residual Analysis

    * Zeke Hausfather's temperature reconstructions (no single link, but see here and here)

    * Tamino

    * RomanM and Jeff Id

    * Chad at "Trees for the Forest"

    If there are others that I'm missing, maybe someone could add links in this thread.
  32. #80 Ned at 22:29 PM on 30 June, 2010
    as with everything in statistics you need to understand your assumptions

    Exactly. But you still don't get it. The adjustment algorithm applied by GHCN v2 is not the same for the US as for the rest of the world. And this fact is not documented.



    The overall effect of the adjustment on the trend may be small (0.1 K/century), but the adjustment procedure itself can't be correct.
  33. BP, the adjustments are not the same anywhere, because the adjustments are peculiar to the individual circumstances of those cases. The adjustment algorithm is not just a formula, because it needs to accommodate events such as a station getting run over by a bulldozer and being repaired.
  34. In other words, BP, if you take any two subsets of the stations, you will see the adjustments differ. Even though the same adjustment algorithm was applied.
  35. #84 Tom Dayton at 06:58 AM on 1 July, 2010
    if you take any two subsets of the stations, you will see the adjustments differ

    You must be kidding. It is not just any two subsets. What kind of algorithm can have this particular effect? I mean US data were adjusted upward by 0.27°C during the last 35 years while there was no adjustment at all for the rest of the world.

    Also, between 1870 and 1920 US trend was adjusted downward by 0.4°C while the rest of the world was adjusted slightly upwards.

    One should be able to tell what makes US weather stations so special.

    Anyway, I am just checking if there's any other pair of complementary subsets with such a weird behavior.
  36. BP, an example is an adjustment for the time of day at which a temperature was measured at a station. At least in the U.S., temperatures at many stations originally were measured at the same time every morning. Then many of the stations (all?) changed to measure at the same time every afternoon. Those stations' temperatures from before the time-of-measurement change had to be adjusted to eliminate the difference that was due to the time-of-day change.
  37. BP, not to barge in again but perhaps you could simply ask the folks responsible for an explanation of what you think you see?

    They seem to invite this:

    For all climate questions, please contact the National Climatic Data Center's Climate Services Division:
    Climate Services and Monitoring Division
    NOAA/National Climatic Data Center
    151 Patton Avenue
    Asheville, NC 28801-5001
    fax: +1-828-271-4876
    phone: +1-828-271-4800
    e-mail: ncdc.info@noaa.gov

    To request climate data, please e-mail: ncdc.orders@noaa.gov
  38. BP writes: But you still don't get it. The adjustment algorithm applied by GHCN v2 is not the same for the US as for the rest of the world. And this fact is not documented.

    How can I say this politely? You seem not to have read even the most introductory literature about the GHCN data. You might want to start with:

    Peterson, T. and R. Vose. 1997. An Overview of the Global Historical Climatology Network Temperature Database. Bulletin of the American Meteorological Society, 78(12): 2837-2849.

    Section 6 describes the adjustment process and points out explicitly that one adjustment process is used for data from the USHCN network, and a different process for the rest of the world.

    It is frankly stunning that you would not have read even the single most basic paper about the GHCN data set before leaping to the conclusion that the data have been "tampered with". It's especially ironic that you are apparently under the impression that you've discovered something new and that I don't understand it.

    So. Yes, there is a difference between the US and the rest of the world. But as I said above, that's only the first step. You are still better off using a gridded analysis rather than naively assuming that the expected value of the adjustment is stationary across the whole rest of the world.
  39. There are quite a few reasons to believe that the surface temperature record – which shows a warming of approximately 0.6°-0.8°C over the last century (depending on precisely how the warming trend is defined) – is essentially uncontaminated by the effects of urban growth and the Urban Heat Island (UHI) effect. These include that the land, borehole and marine records substantially agree; and the fact that there is little difference between the long-term (1880 to 1998) rural (0.70°C/century) and full set of station temperature trends (actually less at 0.65°C/century). This and other information lead the IPCC to conclude that the UHI effect makes at most a contribution of 0.05°C to the warming observed over the past century.
    http://www.globalwarmingsurvivalcenter.com
  40. Ron Broberg and Nick Stokes have created an entirely new gridded global surface temperature analysis that is independent of GHCN. It is based on the Global Summary of the Day (GSOD) records for a very large number of stations, available here.

    The main advantage of this is that it provides a semi-independent confirmation of the GHCN-based analysis that has been used for most of the surface temperature reconstructions up to this point. Other advantages include a larger number of stations, more stations in the Arctic and other remote locations, and no decrease in station numbers in recent years.

    Ron developed tools to acquire and reformat the GSOD data, and Nick then ran it through TempLS, his global temperature analysis program.

    The results are very similar to those from previous reconstructions using GHCN:



    Over the past three decades, both data sets (GHCN and GSOD) show similar trends (+2.5C/century) in Nick's analysis.

    If you find this all a bit confusing, the bottom line is that this is a radically new way of confirming the reliability of the existing surface station temperature analyses from GISSTEMP, HADCRU, etc.
  41. Ned,
    sometimes I think that whatever amount of data one can provide, there's no way to convince some guys, hopefully just a few. Nevertheless, it's always worth trying and keeping all of us up to date with these new findings. Thanks.
  42. Moderator Response (to #50 Berényi Péter at 22:58 PM on 29 July, 2010 under 10 key climate indicators all point to the same finding: global warming is unmistakable): This level of detail and sheer space consumption does not belong on this thread. Put future such comments in the Temp Record Is Unreliable thread. But if you post too many individual station records, I will insist that you instead post summary statistics.

    Understood. However, it is much more work to produce correct summary statistics and it is also harder for third parties to check them. I would like to make it as transparent as possible.

    At this time I am only at the beginning of this job and just trying to assess the approximate width and depth of the issue. So let me show you just one more station.



    It's Baker Lake, Canada, Nunavut.

    It is pretty interesting, because at this site GHCN v2 has common coverage with Weather Underground for two periods: from November 1996 to March 2004, and from December 2005 to May 2010, with very few breakpoints in each.

    The difference in adjustment to GHCN raw data relative to the Weather Underground archive before and after 2005 is 0.91°C.

    What is the problem with this site?



    As we can see, temperature is decreasing sharply, even if the +0.91°C correction after 2005 is added (yellow line).

    Therefore it was best to remove it from v2.mean_adj after 1991 altogether. The extreme cold snap of 2004/2005 is removed even from the raw dataset.
  43. BP #92

    You've set yourself a massive job there. Your best bet to make it manageable is to take a random sample of about 10% of the available weather stations, and then examine the appropriate data at each of them to see what proportion of the surface station record might be problematic. The random sampling is important (something you do not appear to have done yet), as is properly assessing the statistical significance of the difference between the records (for which you will have to correct for autocorrelation, thus reducing statistical power).
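
    A hedged sketch of the sampling-and-significance idea above (the station list and annual series are placeholders; the effective-sample-size formula is the usual first-order AR(1) correction, and the standard error is only approximate):

    import random
    import numpy as np

    def sample_stations(station_ids, fraction=0.10, seed=0):
        """Draw a simple random sample of station IDs (about 10% by default)."""
        rng = random.Random(seed)
        return rng.sample(list(station_ids), max(1, int(len(station_ids) * fraction)))

    def trend_with_ar1_correction(years, series):
        """OLS trend of an annual series, with the standard error inflated for
        lag-1 autocorrelation via the effective sample size n_eff = n*(1-r)/(1+r)."""
        years, series = np.asarray(years, float), np.asarray(series, float)
        slope, intercept = np.polyfit(years, series, 1)
        resid = series - (slope * years + intercept)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]            # lag-1 autocorrelation
        n_eff = max(2.0, len(series) * (1 - r1) / (1 + r1))      # reduced statistical power
        se = resid.std(ddof=2) / (years.std() * np.sqrt(n_eff))  # approximate slope error
        return slope, se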

    On the other hand, you could be satisfied that the satellite record is an independent record of temperature that does not show a statistically significantly different trend to the surface record over the same period.
  44. Berényi - it's pretty obvious that you are searching for problems with the temperature records. However, in your search for problems of any kind, you are really ignoring the full data, the statistics.

    There are (as far as I can put it together) three completely independent data sets for surface temps: the GHCN stations, the GSOD data put together recently, and the satellite data streams (two major analyses of that).

    All three data streams, and all the numerous analysis techniques applied to them, agree on the trends. Multiple analyses of the GHCN data set alone by multiple investigators demonstrate that dropouts, station subsets, UHI adjustments or lack thereof - none of these affect the trend significantly.

    Analysis in detail of singular stations (which is what you have provided as far as I can see) fails to incorporate the statistical support of multiple data points, and the resulting reduction in error ranges. Are you selecting individual stations that have large corrections? Or what you see as large errors? If so then you are cherry-picking your data and invalidating your argument!

    If you can demonstrate a problem using a significant portion of the GHCN data set, randomly chosen and adjusted for area coverage, then you may have a point worth making. For that data set. And that data set only. But you have not done that. And you have certainly not invalidated either the satellite data or the (less adjusted) GSOD data indicating the same trends.

    Even if you prove some problem with the GHCN data (which I don't expect to happen), there are multiple independent reinforcing lines of evidence for the same trend data. That's data worth considering - robust and reliable.
  45. A small meta-note on proof and disproof - take it for what you will.

    A common tactic used by people who don't agree with a particular theory is to try to point out errors in portions of the supporting data. Unfortunately, what that does (if that person is correct) is to disprove a particular line of data, but with little or no effect on the theory. Invalidating a particular line of data does just, and only, that. If there are multiple supporting lines of data for a theory, this only means that a particular data set has some issues, and should be reconsidered as to its validity or provenance.

    On the other hand, if you have reproducible, reliable data that contradicts a theory, then you may have something. Data that is solid, reproducible by others, and not consistent with the prevailing theory, points out issues with that theory.

    An excellent example of this can be found in the Michelson–Morley experiment of 1887. Michelson had expected to find reinforcing data for the Aether theory, but his experiment failed to find any evidence for an Aether background to the universe. This was reproducible, consistent, and contrary to the Aether theory - and one of the nails in its coffin as a theory of the universe.

    Pointing out an issue with a singular data set (of many) doesn't do much to the theory that it supports - there are lots of data streams that support AGW. But if there is a solid, reproducible, contradictory data set - I personally would love to see it, I personally would like this to not be a problem. But I haven't, yet.

    Summary:

    - Reproducible, solid, contradictory data sets provide counterexamples to a theory, and may indicate that the theory is flawed.
    - Problems with individual data sets indicate just that, not invalidation of larger, multiply supported, theories.
  46. #93 KR at 12:48 PM on 2 August, 2010
    it's pretty obvious that you are searching for problems with the temperature records. However, in your search for problems of any kind, you are really ignoring the full data, the statistics

    Yes, it is obvious, the more so because I've told you. And it's also pretty important to get acquainted with individual cases, otherwise you don't even know what to look for.

    BTW, you are perfectly right in stating that the full dataset has to be taken into account, and that's what I am trying to do. It just can't be done in a single step; not even Rome was built in a day.

    Even so, I am happy to announce there is something I can already show you, related to the structure of the entire GHCN.

    I have downloaded v2.mean, and wherever there were multiple records for a year/month pair at a site identified by an 11-digit number in v2.temperature.inv, I took their average. Then I computed monthly average temperatures for each site and got temperature anomalies relative to these values.

    A 5 year running average of these anomalies for all the sites in GHCN at any given time looks familiar:



    More than a 0.8°C increase is seen in four decades. However, the standard deviation is huge; it varies between 1.6°C and 1.9°C. That is, the trend is all but lost in the noise, a fact that is seldom mentioned.



    But it gets worse. Skewness of temperature anomaly distribution can also be computed.



    It is really surprising. I put the two measures into the same figure, because the similarity in trends is striking.



    Skewness is the lack of symmetry in a distribution. In GHCN it has changed from strongly negative to strongly positive in four decades.



    In the sixties the temperature anomaly distribution used to have a long low-temperature tail, while currently that tail is vanishing and changing into a long high-temperature tail.

    Temperature anomaly and skewness do not always go together. The transient warming of 1934-39 did just the opposite.



    Now, changes in skewness of temperature anomaly distribution are either real or not.

    In the first case it begs for an explanation. As the warming in the thirties was certainly not caused by CO2, it can even turn out to be a unique fingerprint of this trace gas.

    However, it can also be a station selection bias and that's what I'd bet on.

    Kurtosis of the GHCN temperature anomaly distribution is also interesting. If this distribution were normal, its kurtosis would be zero. But it is not, and it is changing wildly.
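
    A minimal sketch of the statistics described in this comment (anomalies relative to each station's own monthly climatology, then per-year mean, standard deviation, skewness and excess kurtosis); the data structures are assumptions for illustration, not BP's actual code:

    import numpy as np
    from collections import defaultdict
    from scipy.stats import skew, kurtosis

    def station_anomalies(monthly_temps):
        """Anomalies relative to each station's own monthly climatology.
        `monthly_temps` maps (station, year, month) -> temperature (deg C)."""
        clim = defaultdict(list)
        for (sid, _year, month), t in monthly_temps.items():
            clim[(sid, month)].append(t)
        clim = {k: sum(v) / len(v) for k, v in clim.items()}
        return {k: t - clim[(k[0], k[2])] for k, t in monthly_temps.items()}

    def yearly_distribution_stats(anomalies):
        """Group the anomalies by year and return, per year, the mean, standard
        deviation, skewness and excess kurtosis (zero for a normal distribution)."""
        by_year = defaultdict(list)
        for (_sid, year, _month), a in anomalies.items():
            by_year[year].append(a)
        return {y: (np.mean(v), np.std(v), skew(v), kurtosis(v))
                for y, v in sorted(by_year.items())}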



    The last thing I'd like to show you for today is a temperature-kurtosis phase graph. After 1993 it turned in an entirely new direction and walked out into uncharted territory.



    This happened just after the dramatic decrease in GHCN station numbers. Obviously there was a selection procedure involved in determining which stations should be dropped and it's unlikely it was a random one.

  47. BP #96

    You can save a lot of bother first by computing the same set of statistics on the other temperature records, and seeing if the summary distribution statistics that you're observing are different for each dataset.

    In fact this would improve the rigour of your analysis, because you could then demonstrate that you're not jumping in with the preconceived notion that the temperature record is a dud.

    Your use of words like "it gets worse" also detracts from the quality of your reporting. "Another interesting feature of this data" is a perfectly good phrase that helps to de-emphasise your preconceived notion of what's happening with the dataset.

    Also, are you correcting for the different sample size at each point in time with your measurements? If you haven't, this casts strong doubt on the validity of your analysis to date.
  48. BP - an excellent and very interesting posting you present here.

    Kurtosis of the GHCN data set may be due to any number of things - I would put "weather" at the top of those. I'm not terribly surprised to see the temp kurtosis varying considerably over time, albeit with a rather constrained distribution (your figure 7); simple changes in year-to-year variability (tight means and lacks thereof) might account for that. If it were driven by station number reduction I would expect to see a trend in it, which I don't from your graphs. As to the temperature shift - that appears independent of kurtosis in your figure 7.

    Your standard deviation graph is much smaller than the rest - only 1967 to present. I would love to see it over the entire course of the data. As it is I would hesitate to draw any interpretations from it.

    The skewness, on the other hand, is extremely interesting. Smaller station counts should increase the variability of skewness - I'm not seeing that in the post-1993 data, but we may not have enough data yet. An upwards trend, a shift towards positive skewness, on the other hand, indicates more high temperature events than cold temperature events, which is exactly what I would expect (Figure 2) from an increasing temperature trend, matching the temperature anomalies.

    The skewness seems to me to be more related to warming trends than station bias, considering other analyses of station dropout which apparently bias temperature estimates slightly lower, not higher. The station dropout would therefore operate counter to the trend you see in skewness - it has to be stronger than the station dropout.

    Again, thanks for the analysis, BP. I do believe it supports the increasing temperature trends (which you might not like) - but a heck of a lot of work.
  49. As a reminder, BP's figures (like the first one in his comment above) are not particularly useful as long as he continues to use simple averages of the GHCN data set. That choice of method implicitly assumes that either (a) there is no spatial dependency in the climate statistics being examined, or (b) in every year the spatial distribution of stations is uniform. Since we know that both of these assumptions are invalid, one can't really draw any conclusions from his figures.

    In addition, BP writes: Obviously there was a selection procedure involved in determining which stations should be dropped and it's unlikely it was a random one.

    "Random" has a very specific meaning. It is unlikely that the probability of a given station dropping out of the GHCN record in a given year is random. That is not a problem, however. Statisticians and scientists work with data produced by systems with elements of non-randomness all the time.

    A more useful question is whether the change in numbers of stations has any impact on the derived global temperature trends. As has been emphasized many times here, it clearly does not have any meaningful impact.
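
    One way to make that question concrete, sketched here under assumed file and column names (not a reproduction of any published analysis): compute the trend from the full station set and from only the stations still reporting at the end of the record, and compare.

```python
import pandas as pd
from scipy import stats

# Hypothetical table of station anomalies: columns year, station_id, anomaly.
data = pd.read_csv("station_anomalies.csv")

def trend_per_year(df):
    """Slope of a least-squares fit through the yearly mean anomalies."""
    yearly = df.groupby("year")["anomaly"].mean()
    return stats.linregress(yearly.index, yearly.values).slope

# Stations that still report in the most recent year of the record.
latest = data["year"].max()
survivors = data.loc[data["year"] == latest, "station_id"].unique()

print("all stations:       %+.4f degC/yr" % trend_per_year(data))
print("surviving stations: %+.4f degC/yr" % trend_per_year(data[data["station_id"].isin(survivors)]))
```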
  50. #99 Ned at 21:31 PM on 4 August, 2010
    As a reminder, BP's figures (like the first one in his comment above) are not particularly useful as long as he continues to use simple averages of the GHCN data set.

    In a sense that's true. But it is good for getting an overview, the big picture if you like. Also, if adjustment procedures are supposed to be homogeneous over the entire GHCN, this approach tells us something about the algorithms applied, if not about the actual temperature trends themselves.

    However, with a closer look it turns out there are multiple, poorly documented adjustment strategies varying both over time and regions. Some adjustments are done to the raw dataset, some are only applied later, some only to US data, some exclusively outside the US, but even then different things are done to data in different regions and epochs.

    A gridded presentation is indeed an efficient way to smear out these features. On the other hand, it is still a good idea to take a closer look at intermediate regional and temporal scales if one is to attempt to identify some of the adjustment strategies applied.

    For example here is the history of temperature anomalies over Canada for a bit more than three decades according to three independent datasets (click on the image for a larger version).



    I have chosen Canada, because of data availability and also because this country has considerable expanses in the Arctic where most of the recent warming is supposed to happen. The three datasets used were
    1. GHCN (Global Historical Climatology Network)
    2. The National Climate Data and Information Archive of Environment Canada
    3. and Weather Underground, an independent weather portal company (a spinoff from the University of Michigan)

    The three curves have some family resemblance, but beyond that their physical content is radically different. Weather Underground shows an almost steady decline since 1989 (that is, a 0.8°C cooling), GHCN a huge warming (more than a degree Celsius in three decades, almost 1.27°C in the five years between 1994 and 1999), while Environment Canada shows something in between, with practically no trend since 1985.

    Up to about 1995 the three curves go together nicely. With some offset correction (which has no effect on temperature anomaly trends) they could be brought even closer. The same is true after 1998. Therefore in this particular case most of the action happened in just four years between 1995 and 1998.

    In this period the divergence is very noticeable, so the next thing to do is to have a closer look at these years in the Canadian datasets and to determine the exact cause(s) of the discrepancy.
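
    A hypothetical sketch of that closer look: put the three series on a common baseline (constant offsets do not affect trends) and inspect their pairwise differences year by year, so that any step change between 1995 and 1998 stands out. The file and series names are placeholders:

```python
import pandas as pd

# One column of annual anomalies per dataset, indexed by year.
canada = pd.read_csv("canada_anomalies.csv", index_col="year")
series = ["GHCN", "EnvCanada", "Wunderground"]

# Re-baseline each series to its 1985-1994 mean to remove constant offsets.
baselined = canada[series] - canada.loc[1985:1994, series].mean()

# Pairwise differences: a step in one of these around 1995-1998 would point
# to where (and in which dataset) the divergence arises.
diffs = pd.DataFrame({
    "GHCN - EnvCanada":         baselined["GHCN"] - baselined["EnvCanada"],
    "GHCN - Wunderground":      baselined["GHCN"] - baselined["Wunderground"],
    "EnvCanada - Wunderground": baselined["EnvCanada"] - baselined["Wunderground"],
})
print(diffs.loc[1993:2000])
```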

    Now, I do have all the data necessary for the analysis at my fingertips. Unfortunately I do not have much time for this job, so you may have to wait a bit. Nor was it always easy to collect the data. My IP even got banned from Weather Underground for a while, presumably because they noticed what the download script had been doing. Anyway, I have no intention of publishing their dataset (as long as it stays put on their website); I just use it for statistical purposes.

    The spatio-temporal coverage patterns of the three datasets differ within Canada. Weather Underground, understandably, has excellent recent coverage, getting sparser as we go back in time. Fortunately, for some sites their archive goes back to January 1973 (e.g. Churchill Falls, Newfoundland). They also use WMO station numbers (at least in Canada), which is convenient (the connection between four-letter airport identifiers and WMO numbers can get obscure in some cases).

    It is just the opposite with Environment Canada. Their early coverage is even better than the previous dataset's (they go back to March 1840), but it gets sparser as we approach the present (unfortunately their station identifiers differ from those used by either GHCN or Weather Underground).

    This tendency of station death is even more pronounced in GHCN, and it is not easy to understand why. GHCN has particularly poor recent coverage in the Canadian Arctic, although this area is supposed to be very important for the verification of computational climate models (Arctic amplification and all).
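
    The coverage comparison described above can be made explicit with a small sketch like the following, again with hypothetical file layouts: count the distinct stations reporting in each year of each dataset.

```python
import pandas as pd

def stations_per_year(path, station_col="station_id"):
    """Number of distinct stations reporting in each year of one dataset."""
    df = pd.read_csv(path)               # expects 'year' and a station id column
    return df.groupby("year")[station_col].nunique()

coverage = pd.DataFrame({
    "GHCN":         stations_per_year("ghcn_canada.csv"),
    "EnvCanada":    stations_per_year("env_canada.csv"),
    "Wunderground": stations_per_year("wunderground_canada.csv"),
})
print(coverage.loc[1973:])                # Weather Underground archive starts in 1973
```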



    It is funny that even the raw map used by GISS misses a fair number of the Arctic islands that belong to Canada and shows sea in their place. At the same time, Environment Canada's Arctic coverage is excellent. Their data are also said to be quality controlled and of course digitized. Why can't they be fed into GHCN? It looks like a mystery (I know there used to be a war between the two countries back in 1812, when seaborne British terrorists ate the President's dinner and set the White House aflame, but I thought it was over some time ago).

    Anyway, the very practice of making adjustments to a raw dataset prior to publication is a strange one, which would be considered questionable in any other branch of science. And if adjustments are made either way, with an overall magnitude comparable to the long-term trend, then anything is being measured but the trend itself.

    The double adjustment to raw Canadian data also makes it understandable why USHCN has got a different treatment than the rest of the world. It would be pretty venturesome to meddle with US raw data directly, for the US (despite the recent legislative efforts of both major parties to put an end to this preposterous situation) is still an open society, more so than most other countries of the world. Therefore it was advisable to introduce US adjustments only in v2.mean_adj, a unique feature not applied to the rest.

    As the US is only a tiny fraction of the globe, at first sight it does not make much sense to go to such pains. But without the 0.52°C upward adjustment of the US trend, data from there would be inconsistent with neighboring Canadian data. What is more, it would be somewhat inconvenient to explain why the US does not have this warming thing but still needs cap & trade.

    It is also noticeable that this strange divergence, if global, does not increase one's confidence in computational climate models parametrized on this very dataset.
