The ‘ultimate objective’ of the 1992 Convention on Climate Change is to ‘prevent’ “dangerous anthropogenic interference with the climate system” through the ‘stabilisation’ of greenhouse gas concentrations.
The objective of the draft Paris treaty reads: “1. The objective of this agreement is to achieve net zero greenhouse gas emissions in line with the ultimate objective of the Convention …”
This is movement since 1992, but not very much, since true ‘stabilisation’ was always going to require ‘net zero’ emissions at some point. The critical question is: in order to ‘prevent’ something ‘dangerous’ from happening, how quickly does ‘net zero’ have to be reached?
Article 1 of the draft Paris text says that the ‘net zero’ objective is ‘in line’ with the ultimate objective of ‘preventing’ dangerous anthropogenic interference.
The preamble of the draft Paris text says that the Paris treaty is:
“In pursuit of the [ultimate] objective of the Convention as stated in its Article 2” and seeks “to achieve its objective as stated in its Article 2, so as to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system,”
(NB: the word ‘dangerous’ appears only three times in the draft Paris text; all three occurrences are in the Preamble, and all occur within the phrase “dangerous anthropogenic interference”.)
Consequently, the draft Paris Treaty speaks as if dangerous interference lies in the future.
This raises two important prior questions:
1 – what is dangerous interference? How is that decided?
2 – has ‘dangerous interference’ already occurred? And if so, does the draft text need amending to reflect this? And what are the implications for the 1992 convention? Does it also need amending?
This post looks at the first question.
It is well known that the IPCC has adopted a scheme of measuring ‘dangerous interference’ by reference to changes in global average temperature.
Article 1 of the Copenhagen Accord of 2009 provides:
“To achieve the ultimate objective of the Convention to stabilize greenhouse gas concentration in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system, we shall, recognizing the scientific view that the increase in global temperature should be below 2 degrees Celsius, on the basis of equity and in the context of sustainable development, enhance our long-term cooperative action to combat climate change.”
The draft Paris text refers to 2 degrees, but also 1.5 degrees. It is important to note that these global average temperature changes are referred to as ‘limits’ and not ‘targets’. The term ‘targets’ is reserved for emission reduction efforts. (This will be discussed in a separate post.)
But where did that idea of assessing ‘dangerous’ by reference to changes in global average temperature come from?
A recent influential article by IPCC adviser Petra Tschakert explains the origin of the 2 degree ‘target’:
“To date, the history of the 2°C target is well understood. In the 1960s and 1970s, doubling CO2 concentration scenarios estimated an approximate 2°C warming. Economist William Nordhaus, often cited as the source of the targets, used 2°C in his early cost-benefit analyses for emission reductions, albeit as a heuristic and not a normative policy prescription. Shortly thereafter, a reframing of the climate question shifted the discussion from emission reductions to risks of climate change at levels potentially tolerable or disruptive and harmful. In 1991, the first target-based approaches to climate policy emerged, including the so-called ‘traffic light system’ to delineate distinct levels of risk expressed in temperature rise per decade and associated sea level rise. They ranged from limited risk and damage (green) to extensive risk and damage (amber) and significant societal disruptions and possible tipping points (red). The boundary between green and amber was roughly associated with a 1°C increase while the boundary between amber and red approximated 2°C. Only 5 years later, the 1996 European Union declaration proposed the 2°C target as the maximum allowable global temperature above pre-industrial times by 2100, mainly to avoid major losses to threatened ecosystems such as coral reefs.
Consequently, the 2°C target became an anchor in mitigation debates, reaffirmed then in environmental circles and embraced in several high-level policy domains, stretching from Greenpeace in the early 1990s to the G8 meeting in 2005. At COP15 in Copenhagen in 2009, the 2°C target was officially sanctioned as essential policy guidance, with the hope that it may subsequently become a legal goal in a new climate agreement. ‘We agree that deep cuts in global emissions are required according to science, as documented in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report with a view to reduce global emissions so as to hold the increase in global temperature below 2 degrees Celsius…’. In Cancun, at COP16 in 2010, parties agreed to reduce global greenhouse gas (GHG) emissions to keep the global average temperature below 2°C above pre-industrial levels, which became the so-called long-term global goal.”
Carbon Brief notes that in 1990 researchers in Stockholm produced a 165-page report examining what the measures of ‘danger’ might be. They suggested setting limits for sea level rise, CO2 concentrations in the air, and changes in global average temperature.
On the global average temperature rise, they suggested a 1 or 2 degree target, but noted:
“Temperature increases beyond 1.0°C may elicit rapid, unpredictable, and non-linear responses that could lead to extensive ecosystem damage”.
To summarise, the use of projected rises in global average temperature as a measure of dangerousness:
- Originated from a single economist
- Has had very limited scientific discussion in comparison with other possible measures
- Has never been the subject of any formal discussion among nation states as to whether it is a desirable measure or should be used in combination with other measures.
That is all going to change in Paris.
That is in part because a clear geographical divide has opened up between developed and developing countries on whether the 2 degrees limit achieves the aims of the 1992 convention. Tschakert writes:
“Among parties to the United Nations Framework Convention on Climate Change (UNFCCC), many Caribbean states proclaimed already at COP15 that a 2°C temperature rise was unacceptable as a safe threshold for the protection of small island states and that even a 1.5°C increase would undermine the survival of their communities. At COP16 in Cancun 1 year later, the Alliance of Small Island States (AOSIS) reiterated this claim. Several least developed countries (LDCs) joined AOSIS insisting on a long-term goal that would lower rising global average temperatures to below 1.5°C warming, accounting jointly for more than 100 of all countries most vulnerable to the negative impacts of climate change. This majority (>70%) among the parties comprises, besides the low-lying small island states, essentially all low- and middle-income countries, with the exception of two lower middle-income countries (India, Indonesia) and a few upper-middle income countries such as China, Brazil, Argentina, and Mexico; the parties that support a 2°C target are all high-income countries and nine upper middle-income countries, the above four included. The latter evidently unite the high-emitting, high-income OECD nations.”
Unfortunately, this is a backwards way of approaching the more fundamental issues, namely: What is ‘dangerous’? How should it be measured? And what if things are already ‘dangerous’?
Image: Susanne Nilsson