This repository aims to organize a collective effort to bring GHG emissions and related data submitted by developing countries (non-Annex I) to the UNFCCC into a standardized machine-readable format. We focus on data not available through the UNFCCC DI interface, which is mostly data submitted in IPCC 2006 categories.
The code is based on national-inventory-submissions.
The repository is currently under initial development, so many things are still subject to change.
The repository is structured by folders. Here we list the folders in order of processing.
All data contained in the comma-separated values (CSV) files in this repository is formatted consistently with the PRIMAP2 interchange format.
The data contained in each column is as follows:
Name of the data source. For country-specific datasets it is \<ISO3\>-GHG-inventory, where \<ISO3\> is the ISO 3166 three-letter country code. Specifications for composite datasets covering several countries will be added when the datasets are available.
The scenario specifies the submission (e.g. BUR1, NC5, or Inventory_2021 for a non-UNFCCC inventory).
Provenance of the data. Here: "derived" as it is a composite source.
ISO 3166 three-letter country codes.
Gas categories using global warming potentials (GWP) from either the Second Assessment Report (SAR) or the Fourth Assessment Report (AR4).
| Code | Description |
|------|-------------|
| CH4 | Methane |
| CO2 | Carbon Dioxide |
| N2O | Nitrous Oxide |
| HFCS (SARGWP100) | Hydrofluorocarbons (SAR) |
| HFCS (AR4GWP100) | Hydrofluorocarbons (AR4) |
| PFCS (SARGWP100) | Perfluorocarbons (SAR) |
| PFCS (AR4GWP100) | Perfluorocarbons (AR4) |
| SF6 | Sulfur Hexafluoride |
| NF3 | Nitrogen Trifluoride |
| FGASES (SARGWP100) | Fluorinated Gases (SAR): HFCs, PFCs, SF$_6$, NF$_3$ |
| FGASES (AR4GWP100) | Fluorinated Gases (AR4): HFCs, PFCs, SF$_6$, NF$_3$ |
| KYOTOGHG (SARGWP100) | Kyoto greenhouse gases (SAR) |
| KYOTOGHG (AR4GWP100) | Kyoto greenhouse gases (AR4) |
Table: Gas categories and underlying global warming potentials
Units are of the form `Gg/Mt/... <substance> / yr`, where `<substance>` is the entity, or `Gg/Mt/... CO2 / yr` for CO$_2$-equivalent units. The CO$_2$ equivalent is calculated according to the global warming potential indicated by the entity (see above).
Categories for emissions as defined in the terminology <term>. Terminology names are those used in the climate_categories package. If the terminology name contains _PRIMAP, some (sub)categories have been added to the official IPCC category hierarchy. Added categories outside the hierarchy begin with the prefix M.
Original name of the category as presented in the submission.
Optional column. In some cases the original category names have been translated to English; these translations are stored in this column.
Years (depending on dataset)
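To illustrate how these wide-format CSV files can be consumed, here is a minimal sketch using pandas. The file name, the year-detection heuristic, and the final GWP conversion are illustrative assumptions (though the SAR GWP100 of 21 for CH4 is the official IPCC value); PRIMAP2 itself also provides dedicated functions for its interchange format.

```python
import pandas as pd

# Hypothetical file name; actual paths depend on the dataset
df = pd.read_csv("data/XYZ-GHG-inventory.csv")

# Metadata columns carry the dimensions described above; the remaining
# columns are years. Exact metadata labels vary with the terminology,
# so we split on whether a column name starts with a four-digit year.
meta_cols = [c for c in df.columns if not c[:4].isdigit()]
year_cols = [c for c in df.columns if c[:4].isdigit()]

# Reshape to long format: one row per time series and year
long = df.melt(id_vars=meta_cols, value_vars=year_cols,
               var_name="year", value_name="value")

# Example GWP conversion: CH4 in Gg CH4 / yr -> Gg CO2 / yr
# using the SAR GWP100 of 21
ch4_co2eq = long[long["entity"] == "CH4"].copy()
ch4_co2eq["value"] *= 21
```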
This guide is for contributors. If you are solely interested in using the resulting data, we refer to the releases of the data on Zenodo, which come with a DOI and are thus citable.
This repository is not a pure git repository. It is a datalad repository, which uses git for code and other small text files, and git-annex for data and binary files (for this repository, mainly pdf files). The files stored in git-annex are not part of this repository itself but are stored in a gin repository at gin.hemio.de.
To use the repository you need to have datalad installed. To clone the repository you can use the GitHub URL as well as the gin URL.
```
datalad clone git@github.com:JGuetschow/UNFCCC_non-AnnexI_data.git <directory_name>
```

clones the repository into the folder `<directory_name>`. You can also clone via `git clone`; this avoids error messages regarding git-annex. Cloning works from any sibling.
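If you prefer scripting over the command line, datalad also offers a Python API that performs the same clone; the target directory below is just an example.

```python
import datalad.api as dl

# Clone the repository (equivalent to the datalad clone command above)
dl.clone(
    source="git@github.com:JGuetschow/UNFCCC_non-AnnexI_data.git",
    path="UNFCCC_non-AnnexI_data",  # example target directory
)
```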
The data itself (meaning all binary and CSV files) is not downloaded automatically; only symlinks are created on clone. Needed files can be obtained using

```
datalad get <filename>
```
where `<filename>` can also be a folder, to get all files within that folder. Datalad will look for a sibling that is accessible to you and provides the necessary data. In general, that could also be the computer of another contributor, if that computer is accessible to you (which will normally not be the case). NOTE: If you push to the GitHub repository using datalad, your local clone will automatically become a sibling, and if your machine is accessible from the outside it will also serve data to others.
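The same retrieval works through datalad's Python API, which is convenient inside processing scripts; the paths below are hypothetical examples.

```python
import datalad.api as dl

# Fetch the annexed content of a single file ...
dl.get("downloaded_data/UNFCCC/XYZ/BUR1/XYZ_BUR1.pdf")

# ... or of a whole folder (fetches all files within)
dl.get("downloaded_data/UNFCCC/XYZ")
```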
For more detailed information on datalad, we refer to the datalad handbook.
The code is best run in a virtual environment. All Python dependencies will be installed automatically when building the virtual environment using

```
make venv
```

If you don't want to use a virtual environment, you can find the dependencies in the file `code/requirements.txt`. As external dependencies you need `firefox-geckodriver` and `git-annex` > XXX (2021 works, some 2020 versions also).
The code has not been tested on Windows or macOS.
The maintainers of this repository will update the list of submissions and the downloaded pdf files frequently. However, in some cases you might want to have the data early and do the download yourself. To avoid merge conflicts, please do this on a clean branch in your fork and make sure your branch is in sync with `main`.
Run

```
make update-bur
```

in the main project folder. This will create a new list of BUR submissions. To actually download the files, run

```
make download-bur
```

Likewise, run

```
make update-nc
```

in the main project folder to create a new list of NC submissions, and

```
make download-nc
```

to download the files. To download new NDC submissions, run

```
make download-ndc
```

All download scripts create files listing the new downloads in the folder `downloaded_data/UNFCCC`. The filenames use the format `00_new_downloads_<type>-YYYY-MM-DD.csv`, where `<type>` is `bur`, `nc`, or `ndc`. Currently, only one file per type and day is stored, so if you run a download script more than once on the same day you will overwrite your first file, likely with an empty file, as you have already downloaded everything (see also issue #2).
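These new-download lists are plain CSV files and can be inspected like any other tabular data; the date in the file name below is of course just an example.

```python
import pandas as pd

# Hypothetical example: BUR files downloaded on a given day
new_files = pd.read_csv(
    "downloaded_data/UNFCCC/00_new_downloads_bur-2023-01-15.csv"
)
print(new_files)
```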
All new submissions have to be added to country discussion pages (where they exist) so everyone can keep track of all submissions without having to check the data folder for updates.
See section [Contributing] below.
The idea behind this data package is that several people contribute to extracting the data from pdf files, such that each user does less work than with individual data reading, while at the same time data quality improves through institutionalized data checking. You can contribute in different ways.
The easiest way to contribute to the repository is via analysis of submissions for data coverage. Before selecting a submission for analysis, check that it is not yet listed as analyzed in the submission overview issues.
We usually read the data from the pdf submissions. However, the authors of a submission of course have the data in machine-readable format. It is of great help for the data reading process if the data is available in machine-readable format, as it minimizes errors and is much less work than pdf reading. So if you have good connections to the authors of country submissions or the underlying data, asking them to publish the data would be of great help. Publishing the data is the optimal solution, as it allows us to integrate it into this dataset. If you can obtain the data unofficially, it still helps, as it allows for easy checking of results read from pdfs. Datasets created from machine-readable data that is not publicly available can be added to the legacy_data folder.
Read data from pdfs (or machine-readable formats) in a reproducible way. We read data using tools like camelot. This enables a reproducible reading process where all needed parameters (page numbers, table boundaries, etc.) are defined in a script that reads the data from the pdf and saves it in the PRIMAP2 interchange and native formats. If you want to contribute through data reading, check out the country pages in the discussion section and the issues already created for submissions selected for reading. If you start data reading for a submission, please leave a comment in the corresponding issue and assign the issue to yourself. If there is no issue for the submission, please add one using the template (TODO create issue template). When reading the data, please consider the data requirements. A sketch of such a reading script is given below.
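As a sketch of what such a reproducible reading script can look like, assuming camelot is used: all extraction parameters are plain values in the script, so re-running it reproduces the output exactly. The pdf path, page number, and table area below are hypothetical.

```python
import camelot

# All parameters defining the extraction live in the script itself
tables = camelot.read_pdf(
    "downloaded_data/UNFCCC/XYZ/BUR1/XYZ_BUR1.pdf",  # hypothetical path
    pages="42",                      # page holding the inventory table
    flavor="stream",                 # assumed layout without ruling lines
    table_areas=["50,750,550,100"],  # x1,y1,x2,y2 in PDF points
)

# camelot returns pandas DataFrames that can then be cleaned up and
# converted to the PRIMAP2 interchange format
df = tables[0].df
```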
You can also contribute by checking data. For each submission we would like to have one person responsible for reading the data and another responsible for checking the results for completeness and correctness. Look out for issues with the tag "Needs review".
There are always open coding issues, some of them easy to resolve, some harder.
Contributing is of course not limited to the categories above. If you have ideas for improvements, just open an issue or a discussion page to discuss your idea with the community.
Optimally, all data that can be found in a submission should be read (emissions data, but also underlying activity data and socioeconomic data). However, the data is often scattered throughout the documents, and sometimes only single data points are available. Thus we have compiled a list of use cases and their data requirements as a basis for decisions on what to focus on. Emissions data is often presented in a similar tabular format repeated for each year. Sometimes sectoral time series are presented in tables for individual gases. In these cases it makes sense to read all the data, as the tables have to be read anyway and omitting sectoral detail does not save much time.
Which activity data is needed depends on the use case. We have listed some use cases and their requirements below.