In 1998 and 1999, American scientists Michael Mann, Raymond Bradley and Malcolm Hughes published two papers that reconstructed the average temperatures of the northern hemisphere back to the year 1000. The articles showed a temperature profile that gently declined from 1000 to 1850, fluctuating a little along the way, with a sudden increase in the late nineteenth and the twentieth centuries. The graph was nicknamed "the Hockey Stick", with its long, relatively straight handle showing the stable pre-industrial climate and the blade representing the sudden uptick in the last 150 years.
The diagram was a striking depiction of the abrupt warming that had occurred since the Industrial Revolution compared to what happened before. For those opposed to the scientific consensus on Anthropogenic Global Warming (AGW), the Hockey Stick posed a threat and had to be broken.
Mann and other scientists were subjected to numerous investigations, all of which exonerated the Hockey Stick authors. Most importantly, other researchers, using alternative methods and new data, produced additional temperature curves that closely matched the original results of Mann et al. Nevertheless, the attacks on the original Hockey Stick have continued, as has the harassment of Mann by right-wing pundits. If you need to deny the consensus on AGW, you have to keep repeating that the "Hockey Stick is Broken". Never mind that it is intact and that there are enough new sticks to equip an NHL team.
Powell makes many of the same errors that contrarian critics make: ignoring the ratings that the original authors gave their own papers, and making unwarranted assumptions about what the "no-position" abstracts and papers mean.
Powell’s methodology was to search the Web of Science to review abstracts from 2013 and 2014. He added the search term “climate change” to the terms “global climate change” and “global warming” that were used by C13. He examined 24,210 papers co-authored by 69,406 scientists and found only five papers written by four authors that explicitly reject AGW. Assuming the rest of the abstracts endorsed AGW, this gives consensus figures of 99.98% (by abstract) and 99.99% (by author).
His definition of explicit rejection would align roughly with the seventh level of endorsement used in C13: "Explicitly states that humans are causing less than half of global warming". In the abstracts from 1991-2011, C13 found 9 out of 11,914 that fit level 7, which, using Powell's consensus calculation assumptions, would yield 99.92%. So, there is probably not much difference between the two approaches when it comes to identifying an outright rejection paper. It's what you assume the other abstracts say—or do not say—that is the problem.
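The arithmetic behind all of these consensus figures is the same; a minimal sketch, using only the paper and author counts quoted above, shows how the assumption that every non-rejecting paper counts as an endorsement drives the percentages:

```python
# Consensus percentage under the assumption (Powell's) that every paper
# not explicitly rejecting AGW is counted as an endorsement.

def consensus_pct(total, rejections):
    """Percentage of the sample counted as endorsing AGW."""
    return 100.0 * (total - rejections) / total

# Powell's 2013-2014 Web of Science sample
print(round(consensus_pct(24210, 5), 2))   # by abstract: 99.98
print(round(consensus_pct(69406, 4), 2))   # by author: 99.99

# C13's 1991-2011 sample, counting only the 9 level-7 explicit rejections
print(round(consensus_pct(11914, 9), 2))   # 99.92
```

The sketch makes the point of the paragraph above concrete: the two studies barely differ in how many outright rejections they find; the large spread in published consensus numbers comes from what is assumed about everything else.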
C13 also counted as “reject AGW” abstracts that: “Implies humans have had a minimal impact on global warming without saying so explicitly, e.g., proposing a natural mechanism is the main cause of global warming”. These are more numerous than the explicit rejections and include papers by scientists who consider that natural causes are more important than human causes in recent warming, but who do not outright reject some small human contribution.
Competing Climate Consensus Pacmen. Cook on the left, Powell on the right.
Posted on 24 March 2016 by Andy Skuce
Originally published at Corporate Knights on March 17, 2016.
Sorry, Ted Cruz. There's no conspiracy among scientists to exaggerate global warming by fudging the numbers.
Last year was the warmest recorded since the measurement of global surface temperatures began in the nineteenth century. The second-warmest year was 2014. Moreover, because of the persisting effects of the equatorial Pacific Ocean phenomenon known as El Niño, many experts are predicting that 2016 could set a new annual record. January and February have already set new monthly records, with February half a degree Celsius warmer than any previous February on record.
This news is deeply unsettling for those who care about the future of the planet. But it is even more upsetting for people opposed to climate mitigation, since it refutes their favourite talking point – that global warming has stalled in recent years.
U.S. Congressman Lamar Smith claims there has been a conspiracy among scientists to fudge the surface temperature records upwards and has demanded, by subpoena, to have scientists’ emails released.
Senator and presidential candidate Ted Cruz recently organized a Senate hearing on the temperature record in which he called upon carefully selected witnesses to testify that calculations of temperature made by satellite observations of the upper atmosphere are superior to measurements made by thermometers at the Earth’s surface.
It’s easy to cherry-pick data in order to bamboozle people. The process of making consistent temperature records from surface measurements and satellite observations is complicated and is easy to misrepresent.
But the fact remains that there are no conspiracies afoot. Here’s why.
Posted on 13 January 2016 by Andy Skuce
This article was originally published online at Corporate Knights and will appear in the hard copy Winter 2016 Edition of the Corporate Knights Magazine, which is to be included as a supplement to the Globe and Mail and Washington Post later in January 2016. The photograph used in the original was changed for copyright reasons.
Human civilization developed over a period of 10,000 years during which global average surface temperatures remained remarkably stable, hovering within one degree Celsius of where they are today.
If we are to keep future temperatures from getting far outside that range, humanity will be forced to reduce fossil fuel emissions to zero by 2050. Halving our emissions is not good enough: we need to get down to zero to stay under the 2 C target that scientists and policy makers have identified as the limit beyond which global warming becomes dangerous.
Shell boasting about its government-funded Quest CCS project, on a Toronto bus. (Photo: rustneversleeps) "Shell Quest captures over one-third of our oil sands upgrader emissions"
Many scenarios have been proposed to get us there. Some of these involve rapid deployment of solar and wind power in conjunction with significant reductions in the amount of energy we consume.
However, many of the economists and experts who have developed scenarios for the Intergovernmental Panel on Climate Change (IPCC) believe that the only way to achieve the two-degree goal in a growing world economy is to invest in large-scale carbon capture and storage (CCS) projects. These technologies capture carbon dioxide from the exhausts of power stations and industrial plants and then permanently store it, usually by injecting it into underground rock layers.
Even with massive deployment of CCS over coming decades, most scenarios modelled by the IPCC overshoot the carbon budget and require that in the latter part of the century, we actually take more carbon out of the atmosphere than we put into it. Climate expert Kevin Anderson of the Tyndall Centre for Climate Change Research at the University of Manchester recently reported in Nature Geoscience that, of the 400 IPCC emissions scenarios used in the 2014 Working Group report to keep warming below two degrees, some 344 require the deployment of negative emissions technologies after 2050. The other 56 scenarios assumed that we would start rapidly reducing emissions in 2010 (which, of course, did not happen). In other words, negative emissions are required in all of the IPCC scenarios that are still current.
One favoured negative emissions technology is bioenergy with carbon capture and storage (BECCS). This involves burning biomass – such as wood pellets – in power stations, then capturing the carbon dioxide and burying it deep in the earth. The technology has not yet been demonstrated at an industrial scale. Using the large amounts of bioenergy envisioned in such scenarios will place huge demands on land use and will conflict with agriculture and biodiversity needs.
Posted on 31 December 2015 by Andy Skuce
On Sunday November 22nd, 2015, Alberta's new centre-left Premier, Rachel Notley, announced that the province would be introducing an economy-wide carbon tax priced at $30 per tonne of CO2 equivalent, to be phased in during 2016 and 2017. Observers had been expecting new efforts to mitigate emissions since Notley's election in May 2015, but the scope and ambition of this policy took many by surprise.
Alberta, of course, is the home of the Athabasca oil sands and is one of the largest per-capita GHG emitters of any jurisdiction in the world. The new plan was nevertheless endorsed by environmental groups, First Nations and by the biggest oil companies, an extraordinary consensus that many would not have thought possible.
How was this done? I will try to explain the new policy as far as I can (the details are not all available yet), but the short answer is that a huge amount of credit is due to the panel of experts led by University of Alberta energy economist Andrew Leach and his fellow panelists. Not only did they listen to what all Albertans had to say, but they were thoughtful in framing a policy that is acceptable to almost everyone.
Alberta is the wealthiest province in Canada, with a population of 4.1 million. In 2013, greenhouse gas emissions were 267 Mt CO2 equivalent, about 65 tonnes per capita, which compares with the average for the rest of Canada of about 15 tonnes. Among US states only North Dakota and Wyoming are worse. Alberta's fugitive emissions of methane alone amount to 29 Mt CO2e, about 7 tonnes per person, a little more than the world's average per-capita emissions from all greenhouse gases combined.
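The per-capita figures above follow directly from the provincial totals; here is the back-of-envelope arithmetic, using the 2013 numbers quoted in the paragraph:

```python
# Per-capita emissions for Alberta, from the 2013 totals quoted above.

population = 4.1e6      # Alberta residents
total_ghg = 267e6       # tonnes CO2e, all greenhouse gases
fugitive_ch4 = 29e6     # tonnes CO2e from fugitive methane alone

print(round(total_ghg / population))      # ~65 tonnes per person
print(round(fugitive_ch4 / population))   # ~7 tonnes per person
```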
From the Climate Leadership Report. The 2030 emissions do not consider savings from the new policy.
Posted on 9 December 2015 by Andy Skuce
In the first part of this series, I examined the implications of relying on CCS and BECCS to get us to the two degree target. In the second part, I took a detailed look at Kevin Anderson's arguments that IPCC mitigation scenarios aimed at two degrees are biased towards unproven negative-emissions technologies and that they consequently downplay the revolutionary changes to our energy systems and economy that we must make very soon. In this last part, I'm going to look at the challenges that the world faces in fairly allocating future emissions from our remaining carbon budget and raising the money needed for climate adaptation funds, taking account of the very unequal past and present.
Until now, economic growth has been driven and sustained largely by fossil fuels. Europe and North America started early with industrialization and, from 1800 up to around 1945, this growth was driven mainly by coal. After the Second World War there was a period of rapid (~4% per year) economic growth in Europe, North America and Japan, lasting about thirty years, that the French refer to as Les Trente Glorieuses, The Glorious Thirty. This expansion was accompanied by a huge rise in the consumption of oil, coal and natural gas. After this there was a thirty-year period of slower growth (~2%) in the developed economies, with consumption fluctuations caused by oil-price shocks and the collapse of the Soviet Union. During this time, oil and coal consumption continued to grow, but not as steadily as before. Then, at the end of the twentieth century, economic growth took off in China, with a huge increase in the consumption of coal.
The emissions data are from the CDIAC. See the SkS post The History of Emissions and the Great Acceleration for further details.
If we are to achieve a stable climate, we will need to reverse this growth in emissions over a much shorter time period, while maintaining the economies of the developed world and, crucially, allowing the possibility of economic growth for the majority of humanity that has not yet experienced the benefits of a developed-country middle-class lifestyle.
Here are the annual emissions sorted by country and region:
Posted on 26 November 2015 by Andy Skuce
The first part of this three-part series looked at the staggering magnitude and the daunting deployment timescale available for the fossil fuel and bioenergy carbon capture and storage technologies that many 2°C mitigation scenarios assume. In this second part, I outline Kevin Anderson's argument that climate experts are failing to acknowledge the near-impossibility of avoiding dangerous climate change under current assumptions of the political and economic status quo, combined with unrealistic expectations of untested negative-emissions technologies.
In plain language, the complete set of 400 IPCC scenarios for a 50% or better chance of meeting the 2 °C target work on the basis of either an ability to change the past, or the successful and large-scale uptake of negative-emission technologies. A significant proportion of the scenarios are dependent on both. (Kevin Anderson)
Kevin Anderson has just written a provocative article titled "Duality in climate science", published in Nature Geoscience (open access text available here). He contrasts the up-beat pronouncements in the run-up to the Paris climate conference in December 2015 (e.g. "warming to less than 2°C" is "economically feasible" and "cost effective"; "global economic growth would not be strongly affected") with what he sees as the reality that meeting the 2°C target cannot be reconciled with continued economic growth in rich societies at the same time as the rapid development of poor societies. He concludes that: "the carbon budgets associated with a 2 °C threshold demand profound and immediate changes to the consumption and production of energy".
His argument runs as follows: Integrated Assessment Models, which attempt to bring together physics, economics and policy, rely on highly optimistic assumptions, specifically:
- Unrealistic early peaks in global emissions;
- Massive deployment of negative-emissions technologies.
He notes that of the 400 scenarios that have a 50% or better chance of meeting the 2 °C target, 344 of them assume the large-scale uptake of negative emissions technologies and, in the 56 scenarios that do not, global emissions peak around 2010, which, as he notes, is contrary to the historical data.
I covered the problems of the scalability and timing of carbon capture and storage and negative emissions technologies in a previous article.
From Robbie Andrew, adjusted for non-CO2 and land-use emissions. Note that these mitigation curves assume no net-negative emissions technologies deployed in the latter part of the century.
Posted on 16 November 2015 by Andy Skuce
This post looks at the feasibility of the massive and rapid deployment of Carbon Capture and Storage and negative-emissions Bioenergy Carbon Capture and Storage technologies in the majority of IPCC scenarios that avoid dangerous global warming. Some observers question whether the deployment of these technologies at these scales and within the required time frames is achievable. This is Part One of a three-part series on the challenge of keeping global warming under 2 °C.
The various emissions models that have been used to produce greenhouse gas concentration pathways consistent with 2 °C vary considerably, but the majority of them require huge deployment of Carbon Capture and Storage (CCS) as well as net-negative global emissions in the latter part of the twenty-first century. The only negative emissions methods generally considered in these scenarios are bioenergy with carbon capture and storage (BECCS) and land-use changes, such as afforestation. For there to be net-negative emissions, positive emissions have to be smaller than the negative emissions.
Kevin Anderson (2015) (open-access text) reports that of the 400 scenarios that have a 50% chance or greater of no more than 2 °C of warming, 344 assume large-scale negative emissions technologies. The remaining 56 scenarios have emissions peaking in 2010, which, as we know, did not happen.
Sabine Fuss et al. (2014) (pdf) demonstrate that of the 116 scenarios that lead to concentrations of 430-480 ppm of CO2 equivalent, 101 of them require net negative emissions. Most scenarios that have net-negative emissions have BECCS providing 10-30% of the world’s primary energy in 2100.
From Fuss et al. (2014), showing the historical emissions (black), the four RCPs (heavy coloured lines) and 1089 scenarios assigned to one of the RCPs (light coloured lines).
Posted on 6 November 2015 by Andy Skuce
Rising temperatures may inflict much more damage on already warm countries than conventional economic models predict. In the latter part of the twenty-first Century, global warming might even reduce or reverse any earlier economic progress made by poor nations. This would increase global wealth inequality over the century. (This is a repost from Critical Angle.)
A recent paper published in Nature by Marshall Burke, Solomon M. Hsiang and Edward Miguel, "Global non-linear effect of temperature on economic production", argues that increasing temperatures will cause much greater damage to economies than previously predicted. Furthermore, this effect will be distributed very unequally, with tropical countries getting hit very hard and some northern countries actually benefitting.
Let me attempt a highly simplified summary of what they did. I’m not an economist and this analysis is not straightforward, so beware. If I confuse you, try Dana Nuccitelli’s take or Seth Borenstein’s or Bloomberg’s or The Economist's.
Firstly, Burke et al. looked at factors like labour supply, labour performance and crop yields and how they relate to daily temperature exposure. Generally these show little variation up to temperatures in the high twenties Celsius, at which point they fall off quickly. Secondly, those trends were aggregated to predict the relationship between annual average temperatures and the annual impact on economic output. Thirdly, they looked at annual economic output and average annual temperatures for individual countries for the period 1960-2010. Note that they only compared the economic effects of temperature change within individual countries; they did not correlate one country with another. Using these observations they were able to see how the observations compared with their predicted aggregate curve.
All figures from Burke et al. (2015).
This work showed that the GDP of countries with an annual average temperature of 13°C were the least sensitive to temperature changes. Colder countries on the left side of the hump would benefit from an increase in temperature, whereas warmer countries would see their output suffer as temperature increases. Note that the figure does not show that a certain temperature predetermines the level of wealth of a country (China, despite recent rapid growth, is poorer than the US and Japan even though average annual temperatures are similar). Rather, it illustrates how susceptible countries are to increases or decreases in productivity relative to their annual average temperature.
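The hump-shaped relationship described above can be sketched as a simple quadratic with its peak at 13°C. This is a purely illustrative toy, not Burke et al.'s fitted model; the curvature value below is invented for demonstration only:

```python
# Illustrative sketch of a hump-shaped temperature/output response with
# its optimum at 13 C. The curvature coefficient is an invented value
# for demonstration, NOT Burke et al.'s fitted coefficient.

T_OPT = 13.0  # annual average temperature of the least-sensitive countries

def growth_effect(temp_c, curvature=0.0005):
    """Relative effect on output: zero at the optimum, negative away from it."""
    return -curvature * (temp_c - T_OPT) ** 2

# A cold country moving toward the optimum gains from warming;
# a hot country moving further from it loses.
print(growth_effect(6.0) < growth_effect(7.0))    # True: colder country benefits
print(growth_effect(25.0) > growth_effect(26.0))  # True: hotter country is hurt
```

The asymmetry in the two comparisons is the whole argument in miniature: the same one-degree increase helps countries below the optimum and hurts those above it.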
There is some evidence that rich countries are slightly less affected by changes in temperature (the curve is a little flatter for them). There are few hot and wealthy countries examined in the study, so any general conclusions about them cannot be certain, but the evidence still points to them being more prone to damage from rising temperature than rich, cooler countries. No matter how rich you are, extra heat hurts the warm lands more than it does the temperate and the cool. You can’t buy your way out of the effects of global warming, except by moving away from the Equator or up into the highlands.
Posted on 8 October 2015 by Andy Skuce
This article was first published in the Corporate Knights Magazine.
There is a supplementary article at my blog Critical Angle, that has more detail, links and references, along with an estimation of the GHG emissions (excluding end-use) associated with one liquefied natural gas project and the effect this will have on the feasibility of BC reaching its emissions targets.
There is a further piece at DeSmog Canada, where I compare the situation in BC's gas industry with the Volkswagen emissions reporting scandal, in which a corporation cheats on its emissions tests, with the tacit approval of industry-friendly regulators and governments, only to be exposed by independent researchers performing tests in real-world situations.
The push by British Columbia to develop a new liquefied natural gas (LNG) export industry raises questions about the impact such activities would have on greenhouse gas emissions, both within the province and globally.
One of the most important factors is the amount of methane and carbon dioxide that gets released into the atmosphere, either deliberately through venting or unintentionally as so-called fugitive emissions. Some fugitive emissions are the result of valves and meters that release, by design, small quantities of gas. But they can also come from faulty equipment and from operators that fail to follow regulations.
Photo by Jesús Rodríguez Fernández (creative commons)
According to the B.C. Greenhouse Gas Inventory Report 2012, there were 78,000 tonnes of fugitive methane emissions from the oil and natural gas industry that year. B.C. produced 41 billion cubic metres of gas in 2012. This means about 0.28 per cent of the gas produced was released into the atmosphere.
By North American standards, this is a very low estimate. The U.S. Environmental Protection Agency (EPA) uses a figure of 1.5 per cent leakage, more than five times higher. Recent research led by the U.S. non-profit group, Environmental Defense Fund (EDF), shows that even the EPA estimates may be too low by a factor of 1.5. B.C.’s estimate, in other words, would be about one-eighth of what has been estimated for the American gas industry.
Although the amounts of methane released are small compared to carbon dioxide emissions, methane matters because it packs a much bigger global warming punch. Determining the effect of methane emissions is complicated because molecules of methane only last in the atmosphere for a decade or so and the warming effect from its release depends on the time interval it is measured over. Compared to a given mass of carbon dioxide, the same mass of methane will produce 34 times as much warming over 100 years, or 86 times as much over 20 years.
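The leakage rate and the warming-equivalent figures above can be checked with simple arithmetic. The methane density used to convert produced gas volume to mass is an assumed standard-conditions value, roughly 0.68 kg per cubic metre:

```python
# Rough check of the B.C. fugitive-emissions figures quoted above.
# The methane density (~0.68 kg/m3 at standard conditions) is an
# assumption introduced here for the volume-to-mass conversion.

ch4_leaked_t = 78_000       # tonnes of fugitive methane (B.C., 2012)
gas_produced_m3 = 41e9      # cubic metres of gas produced in 2012
ch4_density = 0.68          # kg per cubic metre, assumed

gas_produced_t = gas_produced_m3 * ch4_density / 1000.0
leak_rate = 100.0 * ch4_leaked_t / gas_produced_t
print(round(leak_rate, 2))  # ~0.28 per cent

# Warming impact relative to the same mass of CO2, using the GWP
# factors quoted above (34 over 100 years, 86 over 20 years)
print(ch4_leaked_t * 34 / 1e6)  # ~2.65 Mt CO2e over 100 years
print(ch4_leaked_t * 86 / 1e6)  # ~6.71 Mt CO2e over 20 years
```

The second pair of numbers shows why the choice of time horizon matters so much in debates over natural gas: the same 78,000 tonnes of leaked methane looks more than twice as damaging on a 20-year view as on a 100-year view.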