Appendix C. Preparing the NCDC Weather Data

This appendix describes the steps taken to prepare the raw weather data files so that they are in a form amenable to analysis with Hadoop. If you want a copy of the data to process with Hadoop, you can get it by following the instructions on the website that accompanies this book, at http://www.hadoopbook.com/. The rest of this section explains how the raw files were prepared.

The raw data is provided as a collection of tar files, compressed with bzip2. Each year of readings comes in a separate file. Here’s a partial directory listing of the files:

1901.tar.bz2
1902.tar.bz2
1903.tar.bz2
...
2000.tar.bz2

Each tar file contains a file for each weather station’s readings for the year, compressed with gzip. (The fact that the files in the archive are compressed makes the bzip2 compression on the archive itself redundant.) For example:

% tar jxf 1901.tar.bz2
% ls 1901 | head
011990-99999-1950.gz
...
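Per year, the preparation amounts to "decompress each station file and append it to one output file." Here is a minimal sketch of that step, using stand-in data in place of the unpacked archive (the station IDs and readings below are made up for illustration):

```shell
set -e
year=1901
mkdir -p "$year"

# Stand-in for two gzipped station files, mimicking the layout of an
# unpacked year archive (these names and contents are hypothetical).
echo "reading from station A" | gzip > "$year/010010-99999-$year.gz"
echo "reading from station B" | gzip > "$year/010020-99999-$year.gz"

# The actual preparation step: decompress every station file for the
# year and concatenate the results into a single file named after it.
zcat "$year"/*.gz > "$year.all"
```

After this runs, `1901.all` contains all of the year's readings in one file, which is a much better fit for Hadoop than thousands of small gzipped files.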

Since there are tens of thousands of weather stations, the whole dataset is made up of a large number of relatively small files. It's generally easier and more efficient to process a smaller number of relatively large files in Hadoop (see Small files and CombineFileInputFormat), so in this case, I concatenated the decompressed files for a whole year into a single file, named by the year. I did this using a MapReduce program, to take advantage of its parallel processing capabilities.
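One way to parallelize the per-year work with MapReduce (a sketch under assumptions, not necessarily the exact program used for the book) is Hadoop Streaming with NLineInputFormat: give it an input file listing one year per line, so each map task runs the preparation script for a single year. The names `years.txt` and `process_year.sh` here are hypothetical placeholders:

```shell
# Sketch only: jar paths and property names vary between Hadoop
# releases (older versions use mapred.line.input.format.linespermap).
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
  -D mapreduce.input.lineinputformat.linespermap=1 \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -input years.txt \
  -output ncdc-prepared \
  -mapper process_year.sh \
  -file process_year.sh
```

With one line per split, each mapper receives a single year as its input record and can fetch, unpack, and concatenate that year's files independently of the others.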
