







Of Averages & Anomalies - Part 2A. Why Surface Temperature records are more robust than we think

Posted on 4 June 2011 by Glenn Tamblyn

In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using Temperature Anomalies as your starting point before doing any averaging and why this can make our temperature record more robust.

In this Part 2A and a later Part 2B we will look at a number of the claims made about ‘problems’ in the record and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.

If you hear a statement such as ‘They have dropped stations from cold locations so the result now gives a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.

Part 2A focuses on issues of broader station location. Part 2B will focus on issues related to the immediate station locale.

Now to the issues. What are the possible problems?

Urban Heat Island Effect

This is the first possible issue we will consider. The Urban Heat Island Effect (UHI) is a real phenomenon. In built-up urban areas, the concentration of heat-storing materials in buildings, roads, etc. such as concrete, bitumen, bricks and so on, and heat sources such as heaters, air-conditioners, lighting, cars, etc. all combine to produce a local ‘heat island’: a region where temperatures tend to be warmer than the surrounding rural land. This is well known, and you can even see its effects just by looking at reports of daily temperatures. If we have weather stations inside such a heat island, they will record higher temperatures than they would if they were in the surrounding countryside. If we don’t make some sort of compensation for this, then it could be a real source of bias in our result, and since we never see ‘cool islands’, the bias would be towards warming.

This is why the major temperature records include some method for compensating for it, either by applying a compensating adjustment to the broad result they calculate, or by trying to identify stations that have such an issue and adjusting them. GISTemp for example seeks to identify such urban stations and then adjust them so that the urban station’s long-term trend is brought into line with adjacent rural stations. There is also the question of identifying which stations are ‘urban’. Previous methods relied on records of how stations were classified, but this can change over time as cities grow out into the country, for example. GISTemp recently started using satellite observations of lights at night to identify urban regions – more light means more urban.

What other factors might limit or exaggerate the impact a heat island might have?

Is the UHI effect at the station growing or changing over time? Has the heat island in a city grown steadily warmer, or has the area affected by it expanded while the magnitude of the effect stayed the same? This will depend on things like how the density of the city changes, what sorts of activities happen where, etc. A station that has always been inside a city, say at an inner-city university, will only be affected if the magnitude of the heat island effect increases. If the UHI at a site stays constant, then it isn't a bias to the trend.
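A quick numerical sketch of that last point, using made-up numbers: a constant heat-island offset shifts every reading a station makes, but leaves its trend (and hence its anomaly trend) unchanged.

```python
# A sketch with made-up numbers: a constant UHI offset shifts every reading
# but leaves the station's trend (and hence its anomaly trend) unchanged.

def linear_trend(series):
    """Least-squares slope of a series against its index (degC per step)."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# A rural station warming 0.02 degC per year over 50 years (illustrative).
rural = [10.0 + 0.02 * year for year in range(50)]
# The same station inside a steady heat island: every reading 2 degC warmer.
urban_constant_uhi = [t + 2.0 for t in rural]

print(linear_trend(rural))
print(linear_trend(urban_constant_uhi))  # same trend: a constant offset is no bias
```

Only a *growing* heat island shows up as a spurious trend, which is exactly the distinction drawn above.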

On the other hand, a previously rural station that has been engulfed by an expanding city will most definitely feel some warming, and will show a trend during the period of its engulfment, although how much will depend on circumstances. If it has been engulfed by low-density suburbia and its piece of ‘country’ has been preserved as a large park around it, the impact will be lower than if a complete satellite city has sprung up around it and it now sits on the pavement next to a 6-lane expressway.

But remember: the existing products include a compensation to try to remove UHI; UHI only impacts our long-term temperature results if the magnitude of the effect is growing; and each station’s data still has to be added to the results for all other stations using Area Weighted Averaging. Since the vast majority of the Earth's land area isn’t urban, UHI can only have a limited impact on the final result anyway. And the oceans, 70% of the Earth's surface, aren’t affected by UHI at all.

Airports

One particular example sometimes cited is the number of stations located at airports, with images painted of ‘all those hot jet exhausts’ distorting the results. Firstly, we are interested in daily average temperatures, not instantaneous values. So the station would need to get hit by a lot of jets.

Think about a medium-sized airport. At its busiest it might have one aircraft movement (take-off or landing) per minute. Each take-off involves less than a minute at full power, while the rest of the take-off and landing, 10 minutes or so of taxiing, is at relatively low power. For the rest of the one to several hours that the aircraft is on the ground, its engines are off. So for each jet at the airport, its average power output over its entire stay is a very tiny percentage of its full power. And many airports have night-time curfews when no aircraft are flying. So how much do the jets contribute to any bias?
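That back-of-envelope arithmetic can be sketched in a few lines (all figures here are illustrative assumptions, not measured airport data):

```python
# Back-of-envelope sketch with illustrative (hypothetical) figures: what
# fraction of full engine power does a jet average over its time on the ground?
full_power_minutes = 1.0      # under a minute at take-off thrust
taxi_minutes = 10.0           # taxiing, assumed here at ~7% of full thrust
ground_stay_minutes = 120.0   # a two-hour turnaround, engines otherwise off

full_power_equivalent = full_power_minutes * 1.0 + taxi_minutes * 0.07
duty_cycle = full_power_equivalent / ground_stay_minutes
print(f"{duty_cycle:.1%}")  # a very tiny percentage of full power
```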

Consider instead that the airport is like a mini-city – buildings and lots of concrete and bitumen tarmac. But also lots of grassed land in between. So the real impact of an airport on any station located there will be more like a mini-UHI effect. But how much does an airport grow? Usually they have a fixed area of land set aside for them. The number of runways and taxiways doesn’t change much. And the area of apron around the terminal buildings doesn’t change that much over time. So the magnitude of this UHI effect is unlikely to change greatly over time unless the airport is growing rapidly.

If an airport is located in a rural area, then any changes to the climate at the airport are going to be moderated by effects from the surrounding countryside, since it is, after all, a mini-city, not a city. If an airport has always been inside an urban area, such as La Guardia in New York, then it is going to be adjusted for by the UHI compensations described above. And a rural airport that has been enveloped by its city will eventually have a UHI compensation applied. So the airports that are most likely to have a significant impact need to be and remain rural, be so big that moderating effects from the surrounding countryside don’t have much effect, and be expanding so that their bias keeps growing and thus isn’t compensated out by the analysis method. Then they need to dominate the temperature record for large areas, with few other adjacent stations. And there are no airports on the oceans. So any airport that is likely to have an impact needs to be near a large growing city (to generate the large and increasing traffic volumes that make the airport large and growing) in a region that is otherwise sparsely populated, so there are few other stations. And most large growing cities tend to be near other such cities.

Islands

There is one special case sometimes cited in relation to GISTemp: islands. If the only station on an island in the ocean is at an airport or has ‘problems’, that island's data will then supposedly be used for the temperature of the ocean up to 1200 km away in all directions, extending any problems over a large area. This claim misses one key point: the temperature series used to determine global warming trends is the combined Land and Ocean series. And where land data isn’t available, such as around an island, ocean data is substituted instead.

Here is some data from a patch of ocean in the South Pacific (OK, it's from around Tahiti; I’m a sucker for exotic locations). I calculated this by using GISTemp to calculate temperature anomalies for grids around the world for 1900 to 2010, using in turn land-only data, ocean-only data, and combined land & ocean data. From the three values obtained at each grid point, I then calculated the percentage contribution of each of the two sources to the combined land/ocean data. The following graph shows the percentage contribution of the land data at each grid point, and for reference I have listed the temperature stations in the area with their Lat/Long. Obviously this isn’t coming just from land-only data, and in grids too far from land the percentage contribution of land data falls to zero. Each 2º by 2º grid is approximately 200 x 200 km, much less than the 1200 km averaging radius used by GISTemp.

% Land Contribution around Tahiti
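For readers curious how such a percentage can be backed out, here is a hedged sketch. It assumes the combined anomaly at a grid cell is a simple weighted blend of the land-only and ocean-only anomalies (an assumption for illustration; the actual GISTemp combination is more involved), and the function name and numbers are invented:

```python
# Hypothetical sketch: if a grid cell's combined anomaly were a simple
# weighted blend of its land-only and ocean-only anomalies,
#   combined = w * land + (1 - w) * ocean,
# the land weight w could be backed out from the three computed values.

def land_contribution(land_anom, ocean_anom, combined_anom):
    """Fraction of the combined anomaly attributable to the land data."""
    if land_anom == ocean_anom:
        return 0.0  # degenerate case: the weight is indeterminate
    return (combined_anom - ocean_anom) / (land_anom - ocean_anom)

# A cell near an island: the combined value sits closer to the ocean value.
print(round(land_contribution(0.80, 0.40, 0.50), 2))   # 25% land contribution
# A cell far from any land: combined equals ocean-only, so 0% land.
print(round(land_contribution(0.80, 0.40, 0.40), 2))   # 0% land contribution
```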

There aren’t enough stations

A common criticism is that there aren’t enough temperature stations to produce a good-quality temperature record. A related criticism is that the number of stations being used in the analysis has dropped off in recent decades and that this might be distorting the result. On the Internet, comments such as ‘Do you know how many stations they have in California?’ (by implication, not enough) are not uncommon. This seems to reflect a common misperception that you need large numbers of stations to adequately capture the temperature signal with all its local variability.

However, as I discussed in Part 1A, the combination of calculating based on Anomalies and the climatological concept of Teleconnection means that we need far fewer stations than most people realise to capture the long-term temperature signal. If this isn’t clear, perhaps re-read Part 1A.

So how few stations do we need to still get an adequate result? Nick Stokes ran an interesting analysis using just 61 stations with long reporting histories from around the world. His results, plotted against CRUTEM3, although obviously much noisier than the full global record, still produced a recognisably similar temperature curve, even with just 61 stations worldwide!

Just 61 Stations!

Just 61 Stations - Smoothed!

So even a handful of stations gets you quite close. What reducing station numbers does is diminish the smoothing effect that lots of stations give. But the underlying trend remains quite robust even with far fewer stations. What is perhaps more important is whether the reduction in station numbers reduces ‘station coverage’: the percentage of the land surface with at least one station within ‘x’ kilometres of that location. But as we discussed in Part 1A, Teleconnection means that ‘x’ can be surprisingly large and still give meaningful results. And with Anomaly-based calculations, the absolute temperature at the station isn’t relevant; it is the long-term change in the station we are working with.
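You can see this robustness in a toy simulation (synthetic data, not real station records): many stations share one regional warming signal plus independent local weather noise, and averaging the anomalies of even a small subsample still recovers the underlying trend.

```python
# Toy simulation (synthetic data, not real records): 500 stations share one
# regional warming signal of 0.01 degC/yr plus independent local noise.
# A small subsample of them still recovers the underlying trend.
import random

random.seed(0)
YEARS = 100
signal = [0.01 * year for year in range(YEARS)]  # the common climate signal

def make_station():
    # Each station sees the shared signal plus its own weather noise.
    return [s + random.gauss(0.0, 0.3) for s in signal]

def mean_series(stations):
    return [sum(vals) / len(vals) for vals in zip(*stations)]

def linear_trend(series):
    """Least-squares slope of a series against its index (degC per year)."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

all_stations = [make_station() for _ in range(500)]
few_stations = all_stations[:10]

print(linear_trend(mean_series(all_stations)))  # very close to 0.01
print(linear_trend(mean_series(few_stations)))  # noisier, but still close to 0.01
```

More stations mainly buy you less noise; the trend itself survives heavy subsampling, just as in Nick Stokes' 61-station exercise.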


The Thermometers are Marching!

A related criticism is that the decline in the number of stations used has disproportionately removed stations from colder climates and thus introduced a false warming bias to the record. This has been labelled "The March of the Thermometers", with the secondary ‘conspiracy theory’ claim that this is intentional, all part of the ‘fudging’ of the data. This can seem intuitively reasonable: surely if you remove cold data from your calculations the result will look warmer. And if that is the result then, hey, that could be deliberate.

But the apparent reasonableness of this idea rests on a mathematical misconception which we discussed in detail in Part 1A. If we averaged together the absolute temperatures from all the sites, then most certainly removing colder stations would produce a warm bias. Which is one of the most important reasons why it isn’t done that way! Using that approach (what I called the Anomaly of Averages method) would produce a very unstable, unreliable temperature record indeed.

Instead what is done is to calculate the Anomaly for each station relative to its own history then average these anomalies (what I called the Average of Anomalies method).

Since we are interested in how much each station has changed compared to itself, removing a cold station will not cause a warming bias. Removing a cooling station would! The hottest station in the world could still be a cooling station if its long-term average was dropping; 50 °C down to 49 °C is still cooling. Removing that station would add a warming bias. However, removing a station whose average has gone from -5 °C up to -4 °C would add a cooling bias, since you have removed a warming station.
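The cold-versus-cooling distinction can be checked with toy numbers contrasting the two methods from Part 1A:

```python
# Toy numbers contrasting the two methods from Part 1A when a cold station
# drops out. Averaging absolute temperatures fakes a jump; averaging each
# station's anomalies does not, because both stations warmed identically.

def anomalies(series):
    baseline = sum(series) / len(series)   # each station vs its own history
    return [t - baseline for t in series]

def average_series(stations):
    return [sum(vals) / len(vals) for vals in zip(*stations)]

warm_station = [20.0, 20.1, 20.2, 20.3]   # warm, warming 0.1 degC per step
cold_station = [-5.0, -4.9, -4.8, -4.7]   # cold, warming at the same rate

# Method 1: average the absolute temperatures ('Anomaly of Averages').
abs_with = average_series([warm_station, cold_station])
abs_without = average_series([warm_station])
print(abs_with[0], abs_without[0])   # dropping the cold station jumps the level

# Method 2: average each station's anomalies ('Average of Anomalies').
anom_with = average_series([anomalies(warm_station), anomalies(cold_station)])
anom_without = average_series([anomalies(warm_station)])
print(anom_with)
print(anom_without)   # identical: removing the cold station added no bias
```

Only if the removed station's own trend differed from its neighbours' would dropping it shift the anomaly average at all.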

We are averaging the changes in the stations, not their absolute values. And remember that Teleconnection means that stations relatively close to each other tend to have climates that follow each other. So removing one station won’t have much effect if ‘adjacent’ stations are showing similar long term changes. So for station removals to add a definite warming bias we would need to remove stations that have or are showing less warming, remove other adjacent stations that might be doing the same, but leave any stations that are showing more warming. If this station removal was happening randomly, there is no reason to think that any effect from this would be anything other than random, not a bias.

If this were part of some ‘wicked scheme’, then the schemers would need to carefully analyse all the world's stations, look for the patterns of warming so they could cherry-pick the stations whose removal would have the best impact for their scheme, and then ‘arrange’ for those stations to become ‘unavailable’ from the supplier countries, while leaving the stations that support their scheme in place. And why would anyone want to remove stations in the Canadian Arctic, for example, as part of their ‘scheme’? Some of the highest rates of warming in the world are happening up there. Removing them would make the warming look lower, not higher. Maybe someone is scheming. I’ll let you think about how likely that is.

But what if the pattern of station removals is driven by other factors: physical accessibility of the stations, the operating budgets needed to keep them running, etc.? Wouldn’t the stations most likely to be dropped be the ones in remote, difficult-to-reach, and thus expensive locations? Like mountains, Arctic regions, poorer countries? Which are substantially where the ‘biasing’ stations are alleged to have disappeared from. If you drop ‘difficult’ stations you are very likely to remove Arctic and mountain stations.

Could it also be that the people responsible for the ongoing temperature record realise that you don’t need that many stations for a reliable result and thus aren’t concerned about the decline in station numbers – why keep using stations that aren’t needed if they are harder to work with?

For example, here are the stats on stations used by GISTemp. The number of stations rose during the '60s and dropped off during the '90s, but percentage coverage of the land surface only dropped off slightly. Where coverage is concerned, it's not quantity that counts but quality.

Coverage from GISS

Station coverage from GISTemp

GISTemp 'extrapolates' 1200 kilometers

One particular criticism made of the GISTemp method is that 'they use temperatures from 1200 km away', usually spoken with a tone of incredulity and some suggestion that this number was plucked out of thin air.

Station Correlation Scatter Plots (HL87)

As explained in Part 1A and Part 1B, the 1200 km area weighting scheme used by GISTemp is based on the known and observed phenomenon of Teleconnection: that climates are connected over surprisingly long distances. The 1200 km range used by GISTemp was determined empirically to give the best balance between correlation between stations and area of coverage.

Figure 3, from Hansen & Lebedeff 1987 (apologies for the poor quality; this is an older paper), plots the correlation coefficients versus separation for the annual mean temperature changes between randomly selected pairs of stations with at least 50 common years in their records. Each dot represents one station pair. They are plotted according to latitude zones: 64.2-90N, 44.4-64.2N, 23.6-44.4N, 0-23.6N, 0-23.6S, 23.6-44.4S, 44.4-64.2S.


When multiple stations are located within 1200 km of the centre of a grid point, the value calculated is the weighted average of their individual anomalies, with stations nearer the centre counting more heavily: the weighting declines linearly from 1 at the grid point to zero at 1200 km. And as discussed in the section on islands previously, for small islands the ocean data predominates, not the land data.
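A minimal sketch of this kind of weighting, assuming the linear distance weighting described in Hansen & Lebedeff 1987 (the station distances and anomalies below are illustrative, not real data):

```python
# Minimal sketch of distance weighting around a grid point, assuming the
# linear scheme of Hansen & Lebedeff 1987: weight 1 at the grid point,
# falling to 0 at 1200 km. Station distances and anomalies are illustrative.

RADIUS_KM = 1200.0

def weight(distance_km):
    return max(0.0, 1.0 - distance_km / RADIUS_KM)

def grid_anomaly(stations):
    """stations: list of (distance_from_grid_point_km, anomaly_degC) pairs."""
    total_weight = sum(weight(d) for d, _ in stations)
    if total_weight == 0.0:
        return None   # no station within 1200 km of this grid point
    return sum(weight(d) * a for d, a in stations) / total_weight

# Two stations in range; the third, at 1500 km, gets zero weight.
stations = [(10.0, 0.5), (1000.0, 1.1), (1500.0, 3.0)]
print(round(grid_anomaly(stations), 4))  # dominated by the nearby station
```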

One area of some contention is temperatures in the Arctic Ocean. Unlike the Antarctic, the Arctic does not have temperature stations out on the ice. So the nearest temperature stations are on the coasts around the Arctic Ocean, Greenland and some islands. And ocean temperature data can't be used instead, since this is not available for the Arctic Ocean.

Other temperature products don't calculate a result for the Arctic Ocean at all. The result is that when compiling the global trend, the headline figure most people are interested in, this method effectively assumes that the Arctic Ocean is warming at the same rate as the global average. Yet we know the land around the Arctic is warming faster than the global average, so it seems unreasonable to suggest that the ocean isn't. Satellite temperature measurements up to 82.5°N support this, as does the decline of Arctic sea ice here, here & here.
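A toy calculation (illustrative numbers only, not real trends or areas) shows why leaving the Arctic Ocean out amounts to assuming it warms at the global-average rate:

```python
# Toy calculation, illustrative numbers only: excluding a fast-warming region
# from a global average is the same as assuming it warms at the average rate.
arctic_area = 0.03            # rough fraction of the Earth's surface (invented)
rest_area = 1.0 - arctic_area
rest_trend = 0.15             # degC/decade elsewhere (invented)
arctic_trend = 0.45           # degC/decade if the Arctic warms ~3x faster

excluded = rest_trend  # leaving the Arctic out: it simply doesn't contribute
included = rest_area * rest_trend + arctic_area * arctic_trend

print(excluded, round(included, 3))  # the Arctic-free figure runs cooler
```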

So it seems reasonable that the Arctic Ocean would be warming at a rate comparable to the land. Since the GISTemp method is based on empirical data regarding teleconnection, projecting this out seems to me the better option since we know the alternative method will produce an underestimate. Many parts of the Arctic Ocean are significantly less than 1200 km from land, with the main region where this isn't the case being between Alaska & East Siberia and the North Pole.

Certainly the implied suggestion that GISTemp's estimates of Arctic Ocean anomalies are false isn't justified. It may not be perfect but it is better than any of the alternatives.

In this post we have looked at some of the reasons why the temperature trend may be more robust with respect to factors affecting the broader region in which stations are located than might seem the case. The method used to calculate temperature trends does seem to provide good protection against these kinds of problems.

In Part 2B we will continue, looking at issues very local to a station and why these aren't as serious as many might think...



Comments

Comments 1 to 29:

  1. An important paper about long-range correlations in the Arctic: Variations in Surface Air Temperature Observations in the Arctic, 1979–97. Rigor et al. 1999. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.167.1697&rep=rep1&type=pdf See fig. 5 and table 2
  2. Very interesting warm, While winter temperatures are somewhat correlated between the land and ocean, in the summer they found, "no correlation between the interior land and ocean observations."
  3. Yes. Surface temperature trends (CRU: 1.5 K/century, GISS: 1.7K/century) are pretty close to (RSS LT: 1.4K/century, UAH LT: 1.4K/century, and sea surface: 1.3K/century) So we have some confidence in the trends, which we then observe are less than the IPCC best estimate for the 'Low Scenario' of 1.8K/century.
    Moderator Response:

    [DB] What is the paternity of your furnished graphic?

    [Dikran Marsupial] IIRC the projections have temperatures rising at an increasing rate, so an increase of 0.14 to 0.17 K/decade at the present time is perfectly consistent with a projected rise of 1.8 K/century. I suspect the projections have temperatures rising faster than 0.18K/decade towards the end of the century.
  4. @Eric, The explanation: "Note that an isothermal melt period can be observed in the time series for each dataset when the SAT reaches the ice melt point. During this period the SAT is maintained at about 0C until all the snow and ice in an area have melted." The solution: "During most seasons, SAT trends can be studied by simple statistical methods, but during summer, because these masses hold the SAT to the melting point of sea ice, detection of changes in SAT must rely on other, less direct indicators such as the length of the melt season."
  5. Just a typo, search for "esplained in Part 1A" and fix the s. ...Unless Ricky Recardo is being quoted...
  6. #3 ClimateWatcher: isn't it a bit silly to just put a linear trend on the IPCC expectations and assume the trend now is the same as the average? ...they aren't. The trend now is expected to be lower than the trend later.
  7. Another excellent and clear article Glenn. I like the way you can see all the cherry-pick arguments falling apart as you read through it. One quick typo, in the 4th paragraph from the end "... are less significantly less ..." maybe lose the first 'less'. As there seems to be some discussion about it, and I don't actually know the answer ... how does GISS treat summer Arctic temperatures? The holding of surface temperatures in summer to about freezing so long as there is ice cover is well known, and I might assume that GISS accounts for this, but maybe I'm wrong? It would not seem right if summer surface temperatures were extrapolated to be well above freezing far out over the melting ice.
  8. I'm glad our readers are reading the article so carefully! I fixed the typos noted by skywatcher and paulgrace - thanks guys. MarkR - yes, I would say "silly" is an appropriate description for somebody who averages out an exponential increase and pretends it's linear. I could think of a few other descriptive words as well!
  9. Actually, in the Arctic there is a vast array of temperature metrics available to be used. They are not presently used by GISTemp, but I do expect them to be added in the near future to improve the quality of the anomalies. http://nsidc.org/cgi-bin/get_metadata.pl?id=g00791
  10. It would be interesting to compare the anomaly of GISS to the DMI temperature data sets. http://ocean.dmi.dk/arctic/meant80n.uk.php
    Response:

    [DB] Digging a little deeper gives an answer to that:

    "The temperature graphs are made from numerical weather prediction (NWP) "analysis" data. Analyses are the model fields used to start NWP models. They represent the statistically most likely state of the atmosphere, given the information available to make the analysis. Since the data are gridded, it is straight forward to deduce the average temperature North of 80 degree North.

    However, since the model is gridded in a regular 0.5 degree grid, the mean temperature values are strongly biased towards the temperature in the most northern part of the Arctic! Therefore, do NOT use this measure as an actual physical mean temperature of the arctic. The 'plus 80 North mean temperature' graphs can be used for comparing one year to an other."

    DMI presents us a tool; like any tool, it can be put to purposes other than intended.  Like a direct comparison to GISS...

  11. "An NWP analysis is based on vastly more information than is available from any single observing system. Data from ground, aircraft, buoys, ships, satellites, radiosondes, etc. are all combined to adjust the first-guess field. As a consequence the quality of an analysis is much better than what can be obtained from gridding, or treating in other ways, data from a single or a few observing systems." To quote the 2nd paragraph of that link. So, given that the quality is outstanding, we can compare year to year using the DMI data. From that one could get an anomaly to see if the warming/cooling is close to GISS data. As long as we are comparing the anomaly from 80 degrees north.
  12. Very nice post. The land/ocean proportion for the islands was a particularly nice piece of work. Nick Stokes revisited the 60-station calculation after he implemented area weighting in TempLS. Instead of picking them somewhat arbitrarily, he picked them to give optimal global coverage on the basis of the area weightings. The result is even more striking: (The post is here.)
  13. Thanks for the typo fixes, one and all. Warm, ETR: interesting paper by Rigor et al. When you think about the result, it makes some sense. In winter up there everything is ice & snow; in one sense it is ALL 'land'. So you are far more likely to see correlation between 'land' and 'sea', particularly when you factor in the known phenomenon that air temps immediately above snow/ice are largely set by the snow and ice. In summer, when the land is dark and ice free, and at least some of the sea is open water, the more usual relationships between land and sea re-establish themselves. Skywatcher: to my knowledge, based on the data that GISTemp put up, they rely on the available SST data for oceans, and this doesn't cover the Arctic Ocean since it isn't available where there is ice cover. And using data for only part of the year would be a big no-no. The phenomenon you describe is real and has an impact, although there will be a limit to how high above the ice this extends. Conversely, this will not have the same effect in winter. Arctic warming can quite easily be extreme if winter temps moderate significantly even if summer ones don't. Less cold in winter impacts sea ice thickness, snow cover thickness, etc., which then shows its results in the melt season even if the summer temps haven't changed as much. Climatewatcher: yep, most of the metrics for temp change are pretty much in sync. You might be interested in this earlier post I did some time ago on satellite temperature products. My take-home from putting all this together is this: the method used by HadCRUT (and NCDC, JMA) will underestimate global temp changes in a world where the Arctic is warming more than the average. GISS will be closer to the reality, although whether they over- or under-estimate is hard to say. WRT the RSS & UAH Lower Troposphere satellite temps (LT), they are converging as processing errors are resolved. Also there is a residual impact from the effect of a short overlap time between the NOAA 9 & NOAA 10 satellites that is playing through their differing analysis methods. As I discuss in the earlier post, there is reason to think, from other analyses, that RSS & UAH may be underestimating the trend somewhat. Your graph also includes MT series from UAH & RSS. As I discuss in the earlier post, these trends are unphysical, being influenced by an impact from stratospheric cooling on the results. The only satellite temp products that are close to being useful as presented are the LT series and the LS (Lower Stratosphere) series. So my take-home from the take-home: the temp records show quite amazing agreement given the complexities of generating them. Kevin C: you have to LOVE those names. Instead of stations with names like Lower Smithhampton, somethingorothergorsk, someonesbridge... Tahiti FAAA, Bora Bora, Hereheretue. Gauguin, RL Stevenson, that lovely woman from those cheap Tahiti tourism ads. Where are you now? You need someone to study the 'SLOW climatological changes of Foraminifera organisms in the benthic environment of a lagoon-encircled tropical island, one with palm-tree-fertilisation-influenced reef organism biosystems, and with particular focus on the impacts upon culinary practices wrt sustainable harvesting of molluscs and cultural beliefs regarding fertility concomitant with said dietary practices. And the potential psychological benefits to 'the great literary enterprise' of such dietary and fertility-promoting practices.' Where do I 'volunteer'?
  14. Hi Glenn, thanks for the informative reply. You're right about the importance of the winter moderation vs summer melting, and of course it's important for the series to be consistent in its methods. An analysis well worth reading that has a lot of bearing on the comparison between different temperature series, in case nobody's linked to it already: Tamino's "How Fast is Earth Warming?". So far as I'm aware he's submitted it for publication and it's in press, but am not sure where. Here's the key figure: The take-home message is that when corrected for the exogenous factors (solar, volcanic, ENSO), all temperature series agree exceptionally well on both the rate of warming and the year-to-year variability. Different series respond differently to the exogenous factors, e.g. the solar and ENSO contribution is twice as strong for lower troposphere measurements RSS and UAH compared to surface temperature datasets. It gives us both confidence in the magnitude of underlying warming (about 1.7C per century), and that this underlying warming rate has not slowed at all this last decade, despite the best efforts of the exogenous factors. The key issue, similar to what Glenn's showing here, is that not all the temperature datasets are measuring exactly the same thing, and Tamino's showing that the rate of warming in all datasets is comparable.
  15. " If this station removal was happening randomly, there is no reason to think that any effect from this would be anything other than random, not a bias." It was not random. The opposite of random is not a "wicked scheme" to introduce artificial warming, rather it is nonrandom which will introduce a mix of artificial warming and artificial cooling. Removal of rural stations can introduce a warming bias and in some cases station removal tended to be rural (e.g. the end of the Soviet Union which postdates Hansen's paper). On the flip side, rural station removal could also introduce cooling caused by local aerosols (see http://academic.engr.arizona.edu/HWR/Brooks/GC572-2004/readings/charlson.pdf)
  16. Especially damning is the removal of those rural satellites, a necessary step to ensure that the fake warming trend computed by GISS is reflected by an equally fake warming trend computed by UAH and RSS.
  17. @Eric People have played with the dataset: with and without rural, with and without mountain, and with and without Arctic stations. The conclusion of all those studies is that removing those stations lowers the warming rate rather than increasing it.
  18. I plotted UAH and GISS after adding 0.25 to the UAH anomaly. Starting in 1979:

    And around 96 to the present:

    It's not entirely clear what to use for an offset (I used 0.25), but obviously GISS ran colder in the early 80's and/or warmer recently. Also GISS shows monthly spikes except for El Nino where UAH tends to spike above GISS.

  19. Eric (skeptic) it would help if you actually read other posts, such as skywatcher's, and even better it would help if you actually read serious analysis such as that offered by Tamino (a professional statistician who specializes in time-series analysis) and linked by skywatcher.
20. dhogaza, I posted those graphs to find out if someone could explain why GISS is lower in the early 80s (and/or higher at present), and could explain the monthly spikes in GISS.
21. Eric. When comparing surface vs satellite records it is important to remember that the satellites aren't measuring a surface warming trend. The LT series cover a band in the lower troposphere centred at around 2.5 km altitude. As to removal of stations introducing a bias, this can only happen if the net of all station removals is of stations that are COOLING, not just COOLER. If the rural aerosol effect is real (I hadn't heard of it before), is it a fixed bias or one that changes over time? If it is a fixed bias for a station, then removing that station doesn't bias the whole record. Biases at any station will only influence the record if they change over time.
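The fixed-vs-changing bias point can be shown in a couple of lines of arithmetic. A station that always reads, say, 1.5 C too warm still contributes the correct trend; only a bias that drifts over time distorts it (numbers illustrative):

```python
import numpy as np

years = np.arange(1980, 2011).astype(float)
true_temps = 0.02 * (years - 1980)  # real warming: 0.02 C/yr

fixed_bias = true_temps + 1.5                       # constant 1.5 C offset
drifting_bias = true_temps + 0.01 * (years - 1980)  # offset grows 0.01 C/yr

slope = lambda y: np.polyfit(years, y, 1)[0]
# The constant offset leaves the slope at 0.02 C/yr; the drifting
# offset inflates it to 0.03 C/yr
```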
22. Glenn, I understand your point about the change in bias, not the bias itself. The aerosol effect would likely be urban, so removing rural stations would create an artificial cooling (provided the aerosols increased over the period in question). I gave it as a counterexample. All such effects need to be analyzed; we can't simply claim that stations were randomly removed (not true) or that the bias change was random (unknown).
23. Eric (skeptic) @20, the interesting question is not why the UAH trend is lower than the GISS trend, it is why it is lower than the RSS trend. Given the close agreement between the RSS satellite-measured trend and the Hadley/CRU surface trend, it is likely that UAH is understating the trend for some as-yet-unknown reason. We know the GISS trend is slightly greater than the RSS and HadCRUT trends because it includes the polar regions, which are excluded from the others and which on average are warming faster (much faster in the NH, slightly slower in the SH) than the non-polar regions of the globe.
    Response:

    [dana1981] I wouldn't say "unknown", necessarily.  There have been several studies suggesting changes to the satellite data analysis, which would bring it more in line with what we expect as compared to the surface warming trend.  Fu et al. (2004) is one, and Vinnikov & Grody (2006) was another.  Tamino had a good discussion on this.

24. Eric #20: You do realise that measurements of surface temperature and lower-tropospheric temperature are not measuring exactly the same thing? Tamino has lots of info on this, but it's safe to say your simplistic analysis is not up to the task. Are you considering that the satellite analyses begin around 1979, at high solar activity, while they end in the recent deep solar minimum, introducing a skew? As the lower-tropospheric measures are about twice as responsive to the small changes in solar activity, you would expect that skew in the temperature trend to be larger for RSS and UAH than for the surface temperature measurements. That's just one of several things to consider, as Glenn's posts should make clear, as should Tom's comment above - there's nothing desperately mysterious about the different trends of the different datasets.
25. Eric @22 In your example, if the aerosol effect is urban-based then it is likely to be removed by UHI processing. And even if a rural station is removed, what matters is the station density in the neighbourhood of the UHI-affected station. If there are enough other rural stations remaining, coverage is still sufficient. Station coverage is the issue, not absolute station count. The theme of this series is that, for various reasons, the methods used to calculate the temperature record are far more robust than is commonly portrayed. Not that the record is perfect, which it can't be. The key question is therefore whether any residual errors, biases etc. make any meaningful contribution to distorting the record, which is the oft-cited claim elsewhere. So while we may need to look for further factors to improve the quality of the record - which the teams who work on this do as their day job - the question is whether the record is already good enough to be relied upon for general understanding. Sure, we might squeeze the error margins down a bit more, but is it 'good enough' already? Since a range of analyses have all reached the same basic result, I suggest it is. Others have already shown from the data that it is. In this series I am trying to focus on why the record is robust.
26. Eric, Tom, skywatcher. You might like to take a look at my earlier post on satellites here. UAH & RSS began to drift apart around 1987, when NOAA 9 & NOAA 10 failed to have a long enough overlap period to give a good correlation to keep the records in sync. There have been a number of criticisms of the processing methods used by the two teams, with, it appears, UAH having to make more corrections.
27. skywatcher (#24), looking at Glenn's link to his piece on satellite measurement, your point about trend differences is valid, and not just due to solar but to the other factors discussed on that thread. For this thread's topic, I still haven't seen an explanation of the large monthly outliers in the GISS record compared to UAH. Perhaps the "damping" of those outliers can be explained in the satellite thread, or the GISS peaks can be explained here. They affect the trend somewhat, but also show up in the 80s.
28. With regard to CRU, it's good to see Phil Jones finally clearing up the 'not significant' comment so (wilfully) misunderstood by the so-called skeptics: Global warming since 1995 'now significant'. I'm sure we will now no longer see this 'misunderstanding' in such a prominent position all over the Denialosphere... (Oh, is that a flying pig?)
29. JMurphy An especially good point was that Prof. Jones highlighted the need for trends over longer timescales than the 16 years needed to achieve statistical significance (cherry-picking start and/or end dates to get the result you want invalidates the hypothesis test anyway - at least unless you account for the implicit multiple hypothesis testing).
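The window-length point is easy to demonstrate with a rough sketch on a synthetic record (plain least squares, ignoring autocorrelation, which in real temperature data makes the significance threshold stricter still; trend and noise values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1970, 2011).astype(float)
# Synthetic record: ~1.7 C/century underlying trend plus annual noise
temps = 0.017 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

def trend_and_tstat(x, y):
    """OLS slope and its t-statistic (no autocorrelation correction)."""
    n = x.size
    xc = x - x.mean()
    slope = (xc * y).sum() / (xc ** 2).sum()
    resid = y - y.mean() - slope * xc
    se = np.sqrt(resid.var(ddof=0) * n / (n - 2) / (xc ** 2).sum())
    return slope, slope / se

slope_long, t_long = trend_and_tstat(years, temps)
slope_short, t_short = trend_and_tstat(years[-10:], temps[-10:])
# |t| > ~2 is roughly the 5% significance threshold: the 41-year window
# clears it easily, while the same warming over only 10 noisy years
# usually does not
```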

© Copyright 2024 John Cook