Using Free Geospatial Tools and Data Part 12: OpenStreetMap

For this installment, we will look at importing data from OpenStreetMap.org.  As I mentioned in an earlier post, OpenStreetMap is a crowd-sourced GIS dataset with the goal of producing a global map that anyone can use.  There are two ways to download this data: you can use BitTorrent to grab the entire planet from http://osm-torrent.torres.voyager.hr/, or you can download regional extracts from http://download.geofabrik.de/.  If you do not need the entire planet, I highly recommend Geofabrik.  It has a fast downlink, and they have finally added MD5 checksums so you can verify the integrity of your downloads.

Go to http://download.geofabrik.de/ and click on North America.  We will be using the .pbf format, so click the link near the top of the page named north-america-latest.osm.pbf.  It is about six gigabytes in size, and the MD5 sum is listed at the end of the paragraph.  Once the download finishes in your browser, you can check it with the md5sum command under a Linux shell, or with one of the many MD5 clients for Windows.  The output will look similar to the example below (it likely will not match exactly, as the MD5 value changes whenever the data is updated).

bmaddox@girls:~/Downloads/geodata$ md5sum north-america-latest.osm.pbf 
d2daa9c7d3ef4dead4a2b5f790523e6d north-america-latest.osm.pbf
bmaddox@girls:~/Downloads/geodata$

Next, go back to the main Geofabrik site and download the Central America file.  This will give you Mexico and the rest of Central America.  As above, once the download finishes in your browser, check it with md5sum.  If the values do not match, redownload the file and rerun md5sum until they do.
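If you prefer to stay on the command line, you can fetch an extract and its checksum file directly and let md5sum do the comparison.  This is just a sketch: the URLs below are assumed from Geofabrik's current layout and may change, so check the site if they do not work.

# Download the Central America extract and its published MD5 checksum
# (URLs assumed from Geofabrik's layout; adjust if the site changes)
wget http://download.geofabrik.de/central-america-latest.osm.pbf
wget http://download.geofabrik.de/central-america-latest.osm.pbf.md5
# Compare the file against the checksum; prints "OK" on success
md5sum -c central-america-latest.osm.pbf.md5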

There are several programs you can use to import OpenStreetMap data into PostGIS.  They mainly differ in the schema they use and in how they manipulate the data before it goes in.  For the purposes of this post, we will be using the imposm program found at http://imposm.org/docs/imposm/latest/.  If you are on Ubuntu, it should be a simple apt-get install imposm away.  For Windows or other distributions, you can download it directly from the imposm website.  The tutorial on how to import data using imposm can be found here: http://imposm.org/docs/imposm/latest/tutorial.html.
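On Ubuntu, that install is a one-liner (package name as referenced above; availability may vary by release):

# Install imposm from the Ubuntu repositories
sudo apt-get install imposm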

Using imposm is a multi-stage process.  The first stage has it read the data and combine the files into several intermediary files.  Before that, create a PostGIS database by running:

createdb -T gistemplate OSM
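If you do not have a gistemplate template database on your system, a minimal alternative (assuming PostGIS 2.x with extension support) is to create a plain database and enable PostGIS in it:

# Create the database without a spatial template
createdb OSM
# Enable the PostGIS extension in the new database (PostGIS 2.x+)
psql -d OSM -c "CREATE EXTENSION postgis;"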

Now have imposm read the data and convert it into its intermediary files.  To do this, run a command similar to this:

bmaddox@girls:/data/data/geo$ imposm --read --concurrency 2 --proj EPSG:4326 ~/Downloads/geodata/*.pbf
[16:29:15] ## reading /home/bmaddox/Downloads/geodata/central-america-latest.osm.pbf
[16:29:15] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[16:31:27] coords: 21524k nodes: 92k ways: 2464k relations: 5k
[16:31:28] ## reading /home/bmaddox/Downloads/geodata/north-america-latest.osm.pbf
[16:31:28] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[17:40:22] coords: 678992k nodes: 1347k ways: 44469k relations: 229k
[17:40:23] reading took 1 h 11m 7 s
[17:40:23] imposm took 1 h 11m 7 s
bmaddox@girls:/data/data/geo$

Here, I changed to a different drive and ran the imposm command to read from the drive where I downloaded the .pbf files.  I did this because reading is a disk-intensive process, and splitting it between drives helps speed things up a bit.  I also differed from the tutorial: my install of QGIS could not render OpenStreetMap data in its native EPSG:900913 projection alongside the EPSG:4326 Tiger data I loaded earlier, so I told imposm to use EPSG:4326.  Unless you have an extremely high-end workstation, this will take a while.  Once the process is done, you will have the following files in the output directory:

bmaddox@girls:~/Downloads/geodata/foo$ dir
imposm_coords.cache imposm_nodes.cache imposm_relations.cache imposm_ways.cache

The next step is to take the intermediary files, which were built from all of the .pbf files matched by the wildcard above, and write them into PostGIS.

bmaddox@girls:~/Downloads/geodata/foo$ imposm --write --database OSM --host localhost --user bmaddox --port 5432 --proj EPSG:4326
password for bmaddox at localhost:
[18:20:21] ## dropping/creating tables
[18:20:22] ## writing data
[2014-06-15 18:52:46,074] imposm.multipolygon - WARNING - building relation 1834172 with 8971 ways (10854.8ms) and 8843 rings (2293.0ms) took 426854.5ms
[2014-06-15 19:00:47,635] imposm.multipolygon - WARNING - building relation 2566179 with 4026 ways (4717.3ms) and 3828 rings (1115.6ms) took 89522.6ms
[19:15:20] relations: 244k/244k
[19:15:41] relations: total time 55m 18s for 244095 (73/s)
[00:35:28] ways: 46907k/46907k
[00:35:30] ways: total time 5 h 19m 49s for 46907462 (2444/s)
[00:40:21] nodes: 1437k/1437k
[00:40:22] nodes: total time 4 m 51s for 1437951 (4933/s)
[00:40:22] ## creating generalized tables
[01:44:47] generalizing tables took 1 h 4 m 24s
[01:44:47] ## creating union views
[01:44:48] creating views took 0 s
[01:44:48] ## creating geometry indexes
[02:15:02] creating indexes took 30m 14s
[02:15:02] writing took 7 h 54m 41s
[02:15:02] imposm took 7 h 54m 42s
bmaddox@girls:~/Downloads/geodata/foo$

As you can see from the above output, this took almost eight hours on my home server (a quad-core AMD with eight gigabytes of RAM).  This command loads all of the data from the intermediate files into PostGIS.  However, we are not done yet.  Looking at the output, all it did was load the data and create indexes.  It did not cluster the data or perform any other optimizations.  To do that, run the following imposm command:

bmaddox@girls:~/Downloads/geodata/foo$ imposm --optimize -d OSM --user bmaddox
password for bmaddox at localhost:
[17:18:12] ## optimizing tables
Clustering table osm_new_transport_areas
Clustering table osm_new_mainroads
Clustering table osm_new_buildings
Clustering table osm_new_mainroads_gen1
Clustering table osm_new_mainroads_gen0
Clustering table osm_new_amenities
Clustering table osm_new_waterareas_gen1
Clustering table osm_new_waterareas_gen0
Clustering table osm_new_motorways_gen0
Clustering table osm_new_aeroways
Clustering table osm_new_motorways
Clustering table osm_new_transport_points
Clustering table osm_new_railways_gen0
Clustering table osm_new_railways_gen1
Clustering table osm_new_landusages
Clustering table osm_new_waterways
Clustering table osm_new_railways
Clustering table osm_new_motorways_gen1
Clustering table osm_new_waterareas
Clustering table osm_new_places
Clustering table osm_new_admin
Clustering table osm_new_minorroads
Clustering table osm_new_landusages_gen1
Clustering table osm_new_landusages_gen0
Vacuum analyze
[19:24:38] optimizing took 2 h 6 m 25s
[19:24:38] imposm took 2 h 6 m 26s
bmaddox@girls:~/Downloads/geodata/foo$

On my system this took a couple of hours: it clustered all of the tables and then ran a VACUUM ANALYZE to update the database statistics.
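For reference, the same statistics refresh can be run manually from the shell at any time, for example after loading more data later:

# Refresh planner statistics for the whole OSM database
psql -d OSM -c "VACUUM ANALYZE;"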

The final step is to have imposm rename the tables to what they will be in “production mode”.  Run the following:

bmaddox@girls:~/Downloads/geodata/foo$ imposm -d OSM --user bmaddox --deploy-production-tables
password for bmaddox at localhost:
[11:00:06] imposm took 1 s
bmaddox@girls:~/Downloads/geodata/foo$

Your data should now be optimized and ready for use.  To test it, refer to the earlier post in this series on QGIS and load some of the OSM data into it.
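As a quick sanity check before loading anything into QGIS, you can count rows in one of the imported tables from the shell (table names are listed below):

# Quick sanity check: make sure the places table actually contains data
psql -d OSM -c "SELECT COUNT(*) FROM osm_places;"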

Your OSM database will have the following tables in it:

 List of relations
 Schema |         Name         | Type  |  Owner  
--------+----------------------+-------+---------
 public | osm_admin            | table | bmaddox
 public | osm_aeroways         | table | bmaddox
 public | osm_amenities        | table | bmaddox
 public | osm_buildings        | table | bmaddox
 public | osm_landusages       | table | bmaddox
 public | osm_landusages_gen0  | table | bmaddox
 public | osm_landusages_gen1  | table | bmaddox
 public | osm_mainroads        | table | bmaddox
 public | osm_mainroads_gen0   | table | bmaddox
 public | osm_mainroads_gen1   | table | bmaddox
 public | osm_minorroads       | table | bmaddox
 public | osm_motorways        | table | bmaddox
 public | osm_motorways_gen0   | table | bmaddox
 public | osm_motorways_gen1   | table | bmaddox
 public | osm_places           | table | bmaddox
 public | osm_railways         | table | bmaddox
 public | osm_railways_gen0    | table | bmaddox
 public | osm_railways_gen1    | table | bmaddox
 public | osm_transport_areas  | table | bmaddox
 public | osm_transport_points | table | bmaddox
 public | osm_waterareas       | table | bmaddox
 public | osm_waterareas_gen0  | table | bmaddox
 public | osm_waterareas_gen1  | table | bmaddox
 public | osm_waterways        | table | bmaddox
 public | spatial_ref_sys      | table | bmaddox
(25 rows)

The _gen0 and _gen1 tables are generalized and not as highly detailed as the other tables.  They are good for viewing data over large geographic areas (think national scale).  With areas that large, it would take a long time to render the full-resolution data, so the _gen0 and _gen1 tables hold simplified versions of the geometries for use at those scales.  You can use QGIS’s scale-dependent rendering to display these tables when zoomed out and switch to the high-resolution tables as you zoom in.
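To get a feel for how much smaller the generalized versions are, you can compare vertex counts between a full-resolution table and its _gen0 counterpart.  This is only a sketch and assumes imposm's default geometry column name of geometry:

# Total vertex count in the full-resolution water areas...
psql -d OSM -c "SELECT SUM(ST_NPoints(geometry)) FROM osm_waterareas;"
# ...versus the generalized version, which should be far smaller
psql -d OSM -c "SELECT SUM(ST_NPoints(geometry)) FROM osm_waterareas_gen0;"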

Go forth and play with the additional free geospatial data you now have in your database 🙂


The Lost Research Paper

Towards the end of my tenure at the US Geological Survey, I was the project manager and principal investigator of the Restoration of Data from Lossy Compression project.  The goal of the project was to find ways to restore fine detail that was lost during lossy compression processes such as JPEG.  I had submitted an Open-File Report through the review process, but I left the USGS in 2006 before the paper had completed review.  Since I was gone, it basically fell through the cracks and was never officially published.

I had forgotten about it until recently, when I was updating my resume.  So, without further ado, I have put the paper here.  I took out the USGS logo and whatnot since it was never officially published by them.  So, for a flashback into what I was doing in 2006, have fun reading it 🙂