Applying Deep Learning and Computer Vision to LiDAR Part 2: Training Data

In part one I described some of the issues I ran into on a recent project that applied deep learning to geographic feature recognition in LiDAR, particularly the sheer size of the files involved.  This time I want to talk about training data: how important it is, and how little of it exists for this type of problem.

One of the most important things in deep learning is having both quality training data and enough of it.  I have written a previous post about the importance of quality data that you can read here.  At a bare minimum, you typically want around one thousand samples of each object class you want a model to recognize.  Object classes in this case are the geographic feature types we want to detect.

Training Data Characteristics

These samples should mirror the characteristics of the data your model will encounter during classification.  With LiDAR in GeoTIFF format, the training data should match the test area in resolution (0.5 meters in this case) and bit depth (32-bit).  There should also be variability in the training data.  Convolutional neural networks are NOT rotationally invariant, meaning that unless you train your model on samples at different angles, it will not automatically recognize rotated versions of the same feature.  In this case, the training LiDAR features should be rotated to different angles to account for differences in projection or north direction.

Balanced Numbers of Features per Class

Your training data should also be balanced, meaning that each class should have roughly the same number of training images where possible.  There are techniques, which we will cover in a bit, that can help with this, but in general an imbalanced training set can cause the model to mistakenly “lean” towards one class more than the others.

Related to this, when generating your training data, make sure both your training and testing sets contain samples from every object class. Especially with imbalanced data, it is easy to use something like train_test_split from the sklearn library and have it generate a training set that misses some object classes. You should also shuffle your training data so that the samples from each class are sufficiently randomized and the model does not learn to expect features in a specific order. To handle both, make sure you pass in something like:

… = train_test_split(…, shuffle=True, stratify=labels)

where labels is the list of class labels for your samples. For most cases this ensures the order of your samples is sufficiently randomized and that your training and testing data both contain samples of each object class.
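
As a fuller sketch of what that call might look like (the array-loading lines are placeholders; substitute however you actually load your GeoTIFF tiles and class labels):

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder loaders -- swap in your own code that reads the GeoTIFF tiles.
samples = np.load("training_tiles.npy")    # e.g. shape (n, 128, 128, 1), 32-bit floats
labels = np.load("training_labels.npy")    # e.g. shape (n,), one class name per tile

X_train, X_test, y_train, y_test = train_test_split(
    samples,
    labels,
    test_size=0.2,      # hold out 20% for testing
    shuffle=True,       # randomize the sample order
    stratify=labels,    # keep the class proportions the same in both splits
    random_state=42,    # make the split reproducible
)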

Complexity

Geographic features also vary in how complex they appear in LiDAR.  The same feature can look different based on its size or other characteristics.  Humans always look like humans, so training on them is fairly simple.  Geographic features, on the other hand, can belong to the same class yet appear differently due to factors such as weathering, erosion, vegetation, or even whether a riverbed is dry for part of the year.  This added complexity means the samples need enough variability, even within the same object class, for proper training.

Lack of Training Data for This Project

With all of this out of the way, let us now talk about the issues that faced this particular project.  First of all, there is not a lot of labeled training data for these types of features.  At all.  I used various search engines, ChatGPT, and even bought a bucket of KFC so I could try throwing some bones to lead me to training data (although I don’t know voodoo, so I probably read them wrong).

There is a lot of data about these geographic features out there, but not data that is both labeled AND in LiDAR format.  There is an abundance of photographs of these features.  There are paper maps of them.  There are research papers with drawings of them.  I even found some GIS data with polygons of these features, but the matching elevation model was too low a resolution to be useful.

In the end I was only able to find a single dataset that matched the bit depth and spatial resolution of the test data.  There were a couple of problems with this dataset, though.  First, it only covered three feature classes out of a dozen or more.  Second, the number of samples in each of these three classes was heavily imbalanced.  It broke down like this:

  • Class 1 – 123 samples
  • Class 2 – 2,214 samples
  • Class 3 – 9,700 samples

Realistically we should have just tried for Classes 2 and 3, but we decided to use various techniques to help with the imbalance.  Plus, since this was a bit of a research project, we felt it would be interesting to see what would happen.

Data Augmentation

There are a few different methods of data augmentation you can use to add more training samples, especially with raster data.  Data augmentation is a technique where you generate new samples from existing data, both to improve your model’s generalization and to produce more data for training. The key is making sure that the methods you use do not change the object class of the training sample.

Geometric Transformations

The first thing you can do with raster data is apply geometric transformations (again, as long as they do not change the object class of your training sample).  Randomly rotating your training images helps with the lack of rotational invariance mentioned above.  You can also flip your images, change their scale, and even crop them, as long as the feature in the training sample remains.

You can gain several benefits from applying geometric transformations to your training data. If your features can appear at different sizes, scaling transforms help your model generalize across feature sizes. With LiDAR data, suppose someone did not generate the scene with north at the top; random rotations help the model generalize to rotation so it can detect features regardless of angle.
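
As a minimal sketch of what this can look like for a single-band 32-bit tile (numpy only; 90-degree rotations and flips just rearrange pixels, so the elevation values and the object class are untouched):

import numpy as np

def augment_tile(tile: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly rotate (in 90-degree steps) and flip a single-band LiDAR tile."""
    tile = np.rot90(tile, k=rng.integers(0, 4))   # 0, 90, 180, or 270 degrees
    if rng.random() < 0.5:
        tile = np.fliplr(tile)                    # horizontal flip
    if rng.random() < 0.5:
        tile = np.flipud(tile)                    # vertical flip
    return tile

# Example: build a few augmented copies of one (stand-in) training sample.
rng = np.random.default_rng(42)
sample = rng.random((128, 128)).astype(np.float32)
augmented = [augment_tile(sample, rng) for _ in range(4)]

Arbitrary-angle rotations are also possible, but they involve interpolation that resamples the elevation values, so it is worth checking that the rotated sample still represents the feature faithfully.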

Spatial Relationships

Regardless of what type of augmentation you apply to LiDAR, you have to be careful not to change the spatial characteristics of the data. Consider color-space augmentation, which is common in other areas of deep learning and computer vision. With LiDAR, modifying the brightness or contrast would actually change the elevation and/or reflectance values of the data. In some applications, especially those based heavily on reflectance values, such as detecting types of vegetation, this might be useful. With high-resolution terrain, though, you could end up altering the training data so that it no longer represents real-world features.

Wrapping Up

I think I will end this one here as it got longer than I expected and I’m tired of typing 😉  Next time I’ll cover issues with image processing libraries and 32-bit LiDAR data.

Applying Deep Learning and Computer Vision to LiDAR Part 1: File Sizes

Introduction

I recently had an interesting project where a client wanted to see if certain geographic features could be found by applying deep learning to LiDAR converted to GeoTIFF format.  I had already been working on some code that lets me use any of the TensorFlow built-in models as R-CNNs, so this seemed like the perfect opportunity to try it out.  The effort wasn’t without issues, and I thought I would detail them here in case anyone else is interested.  File sizes, a lack of training data, and a video card with an add-on fan that sounded like a jet engine all turned out to be interesting problems for the project.

I decided to split this up into multiple posts.  Here in Part 1, I will be covering the implications of doing deep learning and computer vision on LiDAR where file sizes can range in the hundreds of gigabytes for imagery.

What is LiDAR?

Example LiDAR point cloud courtesy of the United States Geological Survey

LiDAR is a technology that uses laser beams to measure distances and movements in an environment. The word LiDAR comes from Light Detection And Ranging, and it works by sending out short pulses of light and measuring the time it takes for them to bounce back from objects or surfaces. You may have even seen it used on your favorite television show, where people will fly a drone to perform a scan of a particular area.  LiDAR can create high-resolution maps of various terrains, such as forests, oceans, or cities. LiDAR is widely used in applications such as surveying, archaeology, geology, forestry, atmospheric physics, and autonomous driving. 

Archaeologists have made a lot of recent discoveries using LiDAR.  In Central and South America, lost temples and other structures from ancient civilizations such as the Aztecs and the Inca have been found in heavily forested areas.  Drone-based LiDAR can be used to find outlines of hard-to-see foundations where old buildings used to stand.

LiDAR scans are typically stored in a point-cloud format, usually LAS or LAZ or other proprietary and unmentionable formats.  These point clouds can be processed in different ways.  It is common to process them to extract the ground level, the tree-top level, or building outlines.  This is convenient because the points can be processed for different uses, but not so convenient for visualization.

LiDAR converted to a GeoTIFF DEM

These data are commonly converted into GeoTIFF, a raster file format, so that they can be used in a GIS.  In this form, they are commonly used as high-resolution digital elevation model (DEM) files.  These files can then be used for analysis tasks such as terrain modeling, hydrological modeling, and others.

File Sizes

Conversion to GeoTIFF might result in smaller file sizes and files that are easier to process in a GIS, but they can still be very large.  For this project, the LiDAR file was one hundred and three gigabytes. It was stored as a 32-bit grayscale raster so that the elevation of each point on the ground could be stored at high precision.  This is still an extremely large file, and it cannot be fully loaded into memory for deep learning unless a very high-end computer is used (spoiler: I do not have a terabyte of RAM on my home computer).

Using CUDA on a GPU became interesting.  I have a used 24-gigabyte Tesla P40 that I got cheap off eBay.  Deep learning models can require amounts of memory that quickly overwhelm a GPU.  Things like data augmentation, where training images are slightly manipulated on the CPU to provide more samples and help with generalization, take up main memory.  The additional size of the 32-bit dataset and training samples meant more memory was consumed than usual.

Deep learning models tend to require training data to be processed in batches.  These batches are small subsets of the input data that are processed during one iteration of training.  It is also more efficient for algorithms such as stochastic gradient descent to work on batches of data instead of the entire dataset during each iteration.  The sheer size of the training samples meant that each batch took up a large amount of memory.
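
As a rough sketch of what batching looks like in practice (the array shapes and batch size are placeholders; with 32-bit tiles, lowering the batch size is the first knob to turn when the GPU runs out of memory):

import numpy as np
import tensorflow as tf

# Stand-in arrays -- substitute your actual 32-bit training tiles and labels.
X_train = np.zeros((1024, 128, 128, 1), dtype=np.float32)
y_train = np.zeros((1024,), dtype=np.int32)

batch_size = 16  # smaller batches mean less GPU memory per training step
dataset = (
    tf.data.Dataset.from_tensor_slices((X_train, y_train))
    .shuffle(buffer_size=1024)          # randomize the sample order
    .batch(batch_size)                  # only batch_size tiles go to the GPU at once
    .prefetch(tf.data.AUTOTUNE)         # overlap data loading with training
)

for tiles, tile_labels in dataset.take(1):
    print(tiles.shape)   # (16, 128, 128, 1)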

Finally, it was impossible to run detection on the entire LiDAR image at one time.  The image had to be broken into chunks that could be loaded into memory and processed in a reasonable amount of time.  I made the fairly arbitrary choice of cutting the image into an 8×8 grid, resulting in sixty-four images.  I wanted to be able to break up the processing so I could run it in pieces and combine the results at the end.  At the time, I had not yet water-cooled my Tesla, so the cooling fan I had attached to it sounded like a jet engine while running.  Breaking the image into chunks meant that I could process things during the day and stop at night when I wanted to sleep.  Believe me, working on other projects during the day while listening to that fan probably made me twitch a bit more than normal.
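
For anyone curious what the chunking can look like, here is a rough sketch using GDAL's Python bindings (the input file name is hypothetical, and the plain gdal_translate command-line tool with -srcwin works just as well):

from osgeo import gdal

src_path = "lidar_dem.tif"          # hypothetical name for the large GeoTIFF
ds = gdal.Open(src_path)
xsize, ysize = ds.RasterXSize, ds.RasterYSize

grid = 8                            # 8x8 grid -> 64 tiles
tile_w, tile_h = xsize // grid, ysize // grid

for row in range(grid):
    for col in range(grid):
        xoff, yoff = col * tile_w, row * tile_h
        # Let the last row/column absorb any leftover pixels.
        w = xsize - xoff if col == grid - 1 else tile_w
        h = ysize - yoff if row == grid - 1 else tile_h
        # srcWin crops by pixel window; the georeferencing carries over.
        gdal.Translate(f"tile_{row}_{col}.tif", ds, srcWin=[xoff, yoff, w, h])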

Conclusion

So that’s it for Part 1. I hope I’ve touched on some of the issues I came across while trying to process LiDAR with deep learning and computer vision algorithms. In Part 2 I’ll discuss gathering training data (or the lack of available training data).

Revisiting Historic Topographic Maps Part 1

My first professional job during and after college was working at the US Geological Survey as a software engineer and researcher. The job required me to learn about GIS and cartography, as I did everything from writing production systems to researching distributed processing. It gave me an appreciation of cartography and of geospatial data. I especially liked topographic maps, as they showed features such as caves and other interesting items on the landscape.

Recently, I had a reason to go back and recreate my mosaics of some historic USGS topographic maps. I had originally put them into a PostGIS raster database, but over time realized that tools like QGIS and PostGIS raster can be extremely touchy when used together. Even after multiple iterations of trying out various overview levels and constraints, I still had issues with QGIS crashing or performing very slowly. I thought I would share my workflow for taking these maps, mosaicking them, and finally optimizing them for loading into a GIS application such as QGIS.  Note that I use Linux and leave installing the prerequisite software as an exercise for the reader.

As a refresher, the USGS has been scanning in old topographic maps and has made them freely available in GeoPDF format here. These maps are available at various scales and go back to the late 1800s. Looking at them shows the progression from the early days of USGS map making to the more modern maps that served as the basis of the USGS DRG program. As some of these maps are over one hundred years old, the quality of the maps in the GeoPDF files can vary widely. Some can be hard to make out due to the yellowing of the paper, while others have tears and pieces missing.

Historically, the topographic maps were printed using multiple techniques, from offset lithographic printing to Mylar separates. People used to etch these separates over light tables back in the map factory days. Each separate would represent certain parts of the map, such as the black features, the green features, and so on. When I was at the USGS, many of my coworkers still had the old tool kits they used before moving to digital. You can find a PDF here that talks about the separates and how they were printed. This method of printing will actually be important later in this series when I describe why some maps look a certain way.

Process

There are a few different ways to start out downloading USGS historic maps. My preferred method is to start at the USGS Historic Topomaps site.

USGS Historic Maps Search

It is not quite as fancy a web interface as the others, but it makes it easier to load the search results into Pandas later to filter and download. In my case, I was working on the state of Virginia, so I selected Virginia, a scale of 250,000, and Historical for the Map Type option. I purposely left Map Name empty and will demonstrate why later.

Topo Map Search

Once you click submit, you will see your list of results, presented in a grid view with metadata about each map that fits the search criteria. In this example there are eighty-nine results for 250K-scale historic maps. The reason I selected this version of the search is that you can download the search results in CSV format by clicking in the upper-right corner of the grid.

Topo Map Search Results

After clicking Download to Excel (csv) File, your browser will download a file called topomaps.csv. You can open it and see that there is quite a bit of metadata about each map.

Topo Map CSV Results

If you scroll to the right, you will find the column we are interested in called Download GeoPDF. This column contains the download URL for each file in the search results.

Highlighted CSV Column

For the next step, I rely on Pandas. If you have not heard of it, Pandas is an awesome Python data-analysis library that, among a long list of features, lets you load and manipulate a CSV easily. I usually work in ipython, using the commands shown below.

bmaddox@sdf1:/mnt/filestore/temp/blog$ ipython3
Python 3.6.6 (default, Sep 12 2018, 18:26:19) 
Type "copyright", "credits" or "license" for more information.

IPython 5.5.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import pandas as pd

In [2]: csv = pd.read_csv("topomaps.csv")

In [3]: csv
Out[3]: 
   Series     Version  Cell ID     ...      Scan ID GDA Item ID  Create Date
0    HTMC  Historical    69087     ...       255916     5389860   08/31/2011
1    HTMC  Historical    69087     ...       257785     5389864   08/31/2011
2    HTMC  Historical    69087     ...       257786     5389866   08/31/2011
3    HTMC  Historical    69087     ...       707671     5389876   08/31/2011
4    HTMC  Historical    69087     ...       257791     5389874   08/31/2011
5    HTMC  Historical    69087     ...       257790     5389872   08/31/2011
6    HTMC  Historical    69087     ...       257789     5389870   08/31/2011
7    HTMC  Historical    69087     ...       257787     5389868   08/31/2011
..    ...         ...      ...     ...          ...         ...          ...
81   HTMC  Historical    74983     ...       189262     5304224   08/08/2011
82   HTMC  Historical    74983     ...       189260     5304222   08/08/2011
83   HTMC  Historical    74983     ...       707552     5638435   04/23/2012
84   HTMC  Historical    74983     ...       707551     5638433   04/23/2012
85   HTMC  Historical    68682     ...       254032     5416182   09/06/2011
86   HTMC  Historical    68682     ...       254033     5416184   09/06/2011
87   HTMC  Historical    68682     ...       701712     5416186   09/06/2011
88   HTMC  Historical    68682     ...       701713     5416188   09/06/2011

[89 rows x 56 columns]

In [4]:

As you can see from the above, Pandas loads the CSV in memory along with the column names from the CSV header.

In [6]: csv.columns
Out[6]: 
Index(['Series', 'Version', 'Cell ID', 'Map Name', 'Primary State', 'Scale',
       'Date On Map', 'Imprint Year', 'Woodland Tint', 'Visual Version Number',
       'Photo Inspection Year', 'Photo Revision Year', 'Aerial Photo Year',
       'Edit Year', 'Field Check Year', 'Survey Year', 'Datum', 'Projection',
       'Advance', 'Preliminary', 'Provisional', 'Interim', 'Planimetric',
       'Special Printing', 'Special Map', 'Shaded Relief', 'Orthophoto',
       'Pub USGS', 'Pub Army Corps Eng', 'Pub Army Map', 'Pub Forest Serv',
       'Pub Military Other', 'Pub Reclamation', 'Pub War Dept',
       'Pub Bur Land Mgmt', 'Pub Natl Park Serv', 'Pub Indian Affairs',
       'Pub EPA', 'Pub Tenn Valley Auth', 'Pub US Commerce', 'Keywords',
       'Map Language', 'Scanner Resolution', 'Cell Name', 'Primary State Name',
       'N Lat', 'W Long', 'S Lat', 'E Long', 'Link to HTMC Metadata',
       'Download GeoPDF', 'View FGDC Metadata XML', 'View Thumbnail Image',
       'Scan ID', 'GDA Item ID', 'Create Date'],
      dtype='object')

The column we are interested in is named Download GeoPDF as it contains the URLs to download the files.

In [7]: csv["Download GeoPDF"]
Out[7]: 
0     https://prd-tnm.s3.amazonaws.com/StagedProduct...
1     https://prd-tnm.s3.amazonaws.com/StagedProduct...
2     https://prd-tnm.s3.amazonaws.com/StagedProduct...
3     https://prd-tnm.s3.amazonaws.com/StagedProduct...
4     https://prd-tnm.s3.amazonaws.com/StagedProduct...
5     https://prd-tnm.s3.amazonaws.com/StagedProduct...
6     https://prd-tnm.s3.amazonaws.com/StagedProduct...
7     https://prd-tnm.s3.amazonaws.com/StagedProduct...
                            ...                        
78    https://prd-tnm.s3.amazonaws.com/StagedProduct...
79    https://prd-tnm.s3.amazonaws.com/StagedProduct...
80    https://prd-tnm.s3.amazonaws.com/StagedProduct...
81    https://prd-tnm.s3.amazonaws.com/StagedProduct...
82    https://prd-tnm.s3.amazonaws.com/StagedProduct...
83    https://prd-tnm.s3.amazonaws.com/StagedProduct...
84    https://prd-tnm.s3.amazonaws.com/StagedProduct...
85    https://prd-tnm.s3.amazonaws.com/StagedProduct...
86    https://prd-tnm.s3.amazonaws.com/StagedProduct...
87    https://prd-tnm.s3.amazonaws.com/StagedProduct...
88    https://prd-tnm.s3.amazonaws.com/StagedProduct...
Name: Download GeoPDF, Length: 89, dtype: object

The reason I use Pandas for this step is that it gives me a simple and easy way to extract the URL column to a text file.

In [9]: csv["Download GeoPDF"].to_csv('urls.txt', header=None, index=None)

This gives me a simple text file that has all of the URLs in it.

https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/DC/250000/DC_Washington_255916_1989_250000_geo.pdf
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/DC/250000/DC_Washington_257785_1961_250000_geo.pdf
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/DC/250000/DC_Washington_257786_1961_250000_geo.pdf
…
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/WV/250000/WV_Bluefield_254032_1961_250000_geo.pdf
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/WV/250000/WV_Bluefield_254033_1957_250000_geo.pdf
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/WV/250000/WV_Bluefield_701712_1957_250000_geo.pdf
https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/WV/250000/WV_Bluefield_701713_1955_250000_geo.pdf
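
Incidentally, if you do want to narrow the results before exporting, Pandas makes that easy as well. A small hypothetical example using columns from the CSV above (the quadrangle name and year cutoff are arbitrary, and this assumes Date On Map holds the publication year as a number):

# Keep a single quadrangle by name, or only maps from 1950 onward.
roanoke = csv[csv["Map Name"] == "Roanoke"]
recent = csv[csv["Date On Map"] >= 1950]

# Export just the download URLs for whichever subset you want.
recent["Download GeoPDF"].to_csv("urls_filtered.txt", header=None, index=None)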

Finally, as there are usually multiple GeoPDF files that cover the same area, I download all of them so that I can go through and pick the best ones for my purposes. I try to find maps that are from around the same date, are easily readable, are not missing sections, and so on. To do this, I run the wget command with the text file I created as input, like so.

bmaddox@sdf1:/mnt/filestore/temp/blog$ wget -i urls.txt 
--2018-09-23 13:00:41--  https://prd-tnm.s3.amazonaws.com/StagedProducts/Maps/HistoricalTopo/PDF/DC/250000/DC_Washington_255916_1989_250000_geo.pdf
Resolving prd-tnm.s3.amazonaws.com (prd-tnm.s3.amazonaws.com)... 52.218.194.10
Connecting to prd-tnm.s3.amazonaws.com (prd-tnm.s3.amazonaws.com)|52.218.194.10|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32062085 (31M) [application/pdf]
Saving to: ‘DC_Washington_255916_1989_250000_geo.pdf’
…
…

Eventually wget will download all the files to the same directory as the text file. In the next installment, I will continue my workflow as I produce mosaic state maps using the historic topographic GeoPDFs.

Geonames Part 2

Since I needed it as part of my job, I finally got around to finishing up the Geonames scripts in my github repository misc_gis_scripts.  In the geonames subdirectory is a bash script called dogeonames.sh.  Create a PostGIS database, edit the bash file, and then run it; it will download and populate your Geonames database for you.

Note that I’m not using the alternatenamesv2 file that they’re distributing now.  I checked with a hex editor, and they’re not actually including all fields on each line; Postgres will not import a file unless every column is present.  I’ll probably add a Python script to fix it at some point, but not now 🙂
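
For what it’s worth, such a fixer would not need to be complicated. Here is a rough sketch of the idea, assuming the file is tab-separated and padding short rows out to the expected column count (check the Geonames readme for the actual number of fields before relying on this):

# Pad short rows in alternateNamesV2.txt so Postgres' \copy will accept them.
EXPECTED_FIELDS = 10   # assumption -- verify against the Geonames documentation

with open("alternateNamesV2.txt", encoding="utf-8") as src, \
     open("alternateNamesV2_fixed.txt", "w", encoding="utf-8") as dst:
    for line in src:
        fields = line.rstrip("\n").split("\t")
        fields += [""] * (EXPECTED_FIELDS - len(fields))
        dst.write("\t".join(fields) + "\n")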

Another Fixed GNIS Dataset

When I went to import the latest GNIS dataset into my local PostGIS database, I found that it contains the same issues I’ve been reporting for the past few years.   You can find my fixed version of the dataset here.

As a disclaimer, while I used to work there, I no longer have any association with the US Geological Survey or the Board of Geographic Names.

For those interested, here is the list of problems I found and fixed:

ID 45605: Duplicate entry for Parker Canyon, AZ. The coordinates are in Sonora, Mexico.
ID 45606: Duplicate entry for San Antonio Canyon, AZ. The coordinates are in Sonora, Mexico.
ID 45608: Duplicate entry for Silver Creek, AZ. The coordinates are in Sonora, Mexico.
ID 45610: Duplicate entry for Sycamore Canyon, AZ. The coordinates are in Sonora, Mexico.
ID 567773: Duplicate entry for Hovey Hill, ME. The coordinates are in New Brunswick, Canada.
ID 581558: Duplicate entry for Saint John River, ME. The coordinates are in New Brunswick, Canada.
ID 768593: Duplicate entry for Bear Gulch, MT.  The coordinates are in Alberta, Canada.
ID 774267: Duplicate entry for Miners Coulee, MT.  The coordinates are in Alberta, Canada.
ID 774784: Duplicate entry for North Fork Milk River, MT.  The coordinates are in Alberta, Canada.
ID 775339: Duplicate entry for Police Creek, MT.  The coordinates are in Alberta, Canada.
ID 776125: Duplicate entry for Saint Mary River, MT.  The coordinates are in Alberta, Canada.
ID 778142: Duplicate entry for Waterton River, MT.  The coordinates are in Alberta, Canada.
ID 778545: Duplicate entry for Willow Creek, MT.  The coordinates are in Alberta, Canada.
ID 798995: Duplicate entry for Lee Creek, MT.  The coordinates are in Alberta, Canada.
ID 790166: Duplicate entry for Screw Creek, MT.  The coordinates are in British Columbia, Canada.
ID 793276: Duplicate entry for Wigwam River, MT.  The coordinates are in British Columbia, Canada.
ID 1504446: Duplicate entry for Depot Creek, WA.  The coordinates are in British Columbia, Canada.
ID 1515954: Duplicate entry for Arnold Slough, WA.  The coordinates are in British Columbia, Canada.
ID 1515973: Duplicate entry for Ashnola River, WA.  The coordinates are in British Columbia, Canada.
ID 1516047: Duplicate entry for Baker Creek, WA.  The coordinates are in British Columbia, Canada.
ID 1517465: Duplicate entry for Castle Creek, WA.  The coordinates are in British Columbia, Canada.
ID 1517496: Duplicate entry for Cathedral Fork, WA.  The coordinates are in British Columbia, Canada.
ID 1517707: Duplicate entry for Chilliwack River, WA.  The coordinates are in British Columbia, Canada.
ID 1517762: Duplicate entry for Chuchuwanteen Creek, WA.  The coordinates are in British Columbia, Canada.
ID 1519414: Duplicate entry for Ewart Creek, WA. The coordinates are in British Columbia, Canada.
ID 1520446: Duplicate entry for Haig Creek, WA. The coordinates are in British Columbia, Canada.
ID 1520654: Duplicate entry for Heather Creek, WA. The coordinates are in British Columbia, Canada.
ID 1521214: Duplicate entry for International Creek, WA. The coordinates are in British Columbia, Canada.
ID 1523541: Duplicate entry for Myers Creek, WA. The coordinates are in British Columbia, Canada.
ID 1523731: Duplicate entry for North Creek, WA. The coordinates are in British Columbia, Canada.
ID 1524131: Duplicate entry for Pack Creek, WA. The coordinates are in British Columbia, Canada.
ID 1524235: Duplicate entry for Pass Creek, WA. The coordinates are in British Columbia, Canada.
ID 1524303: Duplicate entry for Peeve Creek, WA. The coordinates are in British Columbia, Canada.
ID 1525297: Duplicate entry for Russian Creek, WA. The coordinates are in British Columbia, Canada.
ID 1525320: Duplicate entry for Saar Creek, WA. The coordinates are in British Columbia, Canada.
ID 1527272: Duplicate entry for Togo Creek, WA. The coordinates are in British Columbia, Canada.
ID 1529904: Duplicate entry for McCoy Creek, WA. The coordinates are in British Columbia, Canada.
ID 1529905: Duplicate entry for Liumchen Creek, WA. The coordinates are in British Columbia, Canada.
ID 942345: Duplicate entry for Allen Brook, NY. The coordinates are in Quebec, Canada.
ID 949668: Duplicate entry for English River, NY. The coordinates are in Quebec, Canada.
ID 959094: Duplicate entry for Oak Creek, NY. The coordinates are in Quebec, Canada.
ID 967898: Duplicate entry for Trout River, NY. The coordinates are in Quebec, Canada.
ID 975764: Duplicate entry for Richelieu River, VT. The coordinates are in Quebec, Canada.
ID 1458184: Duplicate entry for Leavit Brook, VT. The coordinates are in Quebec, Canada.
ID 1458967: Duplicate entry for Pike River, VT. The coordinates are in Quebec, Canada.
ID 1028583: Duplicate entry for Cypress Creek, ND. The coordinates are in Manitoba, Canada.
ID 1035871: Duplicate entry for Mowbray Creek, ND. The coordinates are in Manitoba, Canada.
ID 1035887: Duplicate entry for Gimby Creek, ND. The coordinates are in Manitoba, Canada.
ID 1035890: Duplicate entry for Red River of the North, ND. The coordinates are in Manitoba, Canada.
ID 1035895: Duplicate entry for Wakopa Creek, ND. The coordinates are in Manitoba, Canada.
ID 1930555: Duplicate entry for Red River of the North, ND. The coordinates are in Manitoba, Canada.
ID 1035882: Duplicate entry for East Branch Short Creek, ND. The coordinates are in Saskatchewan, Canada.
ID 1782010: Duplicate entry for Manitoulin Basin, MI. The coordinates are in Ontario, Canada.

Using Free Geospatial Tools and Data Part 12: OpenStreetMap

For this installment, we will look at importing data from OpenStreetMap.org.  As I mentioned in an earlier post, OpenStreetMap is a crowd-sourced GIS dataset with the goal of producing a global dataset that anyone can use.  There are two ways to download this data: you can either use BitTorrent to download the entire planet from http://osm-torrent.torres.voyager.hr/ or download extracts from http://download.geofabrik.de/.  If you do not need the entire planet, I highly recommend using Geofabrik.  It has fast downloads, and they have finally added MD5 checksums so you can verify the integrity of your download.

Go to http://download.geofabrik.de/ and click on North America.  We will be using the .pbf format, so click the link near the top of the page named north-america-latest.osm.pbf.  It is about six gigabytes in size, and the MD5 sum is listed at the end of the paragraph.  Once the download finishes, you can use the md5sum command in a Linux shell or download one of the many MD5 clients for Windows.  The output will look similar to the example below (it likely will not match exactly, as the MD5 value changes whenever the data is modified).

bmaddox@girls:~/Downloads/geodata$ md5sum north-america-latest.osm.pbf 
d2daa9c7d3ef4dead4a2b5f790523e6d north-america-latest.osm.pbf
bmaddox@girls:~/Downloads/geodata$

Next, go back to the main Geofabrik site and download the Central America file, which covers Mexico and the Central American countries.  As above, once the download finishes, check it with md5sum.  If the values do not match, redownload and rerun md5sum until they do.

There are several programs you can use to import OpenStreetMap data into PostGIS.  They mainly differ in what schema they use and how they manipulate the data before it goes in.  For the purposes of this post, we will use the imposm program found at http://imposm.org/docs/imposm/latest/.  If you are on Ubuntu, it should be a simple apt-get install imposm away.  For Windows or other distributions, you can download it directly from the imposm website.  The tutorial on how to import data using imposm can be found here: http://imposm.org/docs/imposm/latest/tutorial.html.

Using imposm is a multi-stage process.  The first stage is to have it read the data and combine the files into several intermediary files.  First create a PostGIS database by running:

createdb -T gistemplate OSM

Now have imposm take the data and convert it into its intermediary files.  To do this, run a similar command to this:

bmaddox@girls:/data/data/geo$ imposm --read --concurrency 2 --proj EPSG:4326 ~/Downloads/geodata/*.pbf
[16:29:15] ## reading /home/bmaddox/Downloads/geodata/central-america-latest.osm.pbf
[16:29:15] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[16:31:27] coords: 21524k nodes: 92k ways: 2464k relations: 5k
[16:31:28] ## reading /home/bmaddox/Downloads/geodata/north-america-latest.osm.pbf
[16:31:28] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[17:40:22] coords: 678992k nodes: 1347k ways: 44469k relations: 229k
[17:40:23] reading took 1 h 11m 7 s
[17:40:23] imposm took 1 h 11m 7 s
bmaddox@girls:/data/data/geo$

Here, I changed to a different drive and ran the imposm command to read from the drive where I downloaded the .pbf files.  I did this because reading is a disk-intensive process, and splitting it between drives helps speed things up a bit.  I also differed from the tutorial because my install of QGIS could not render OpenStreetMap data in its native EPSG:900913 projection alongside the EPSG:4326 coordinate system my Tiger data was in.  Unless you have an extremely high-end workstation, this will take a while.  Once the process is done, you will have the following files in the output directory:

bmaddox@girls:~/Downloads/geodata/foo$ dir
imposm_coords.cache imposm_nodes.cache imposm_relations.cache imposm_ways.cache

The next step is to take the intermediary files and write them into PostGIS.  (Note that in the read step above, a wildcard let imposm process all of the downloaded .pbf files at once.)

bmaddox@girls:~/Downloads/geodata/foo$ imposm --write --database OSM --host localhost --user bmaddox --port 5432 --proj EPSG:4326
password for bmaddox at localhost:
[18:20:21] ## dropping/creating tables
[18:20:22] ## writing data
[2014-06-15 18:52:46,074] imposm.multipolygon - WARNING - building relation 1834172 with 8971 ways (10854.8ms) and 8843 rings (2293.0ms) took 426854.5ms
[2014-06-15 19:00:47,635] imposm.multipolygon - WARNING - building relation 2566179 with 4026 ways (4717.3ms) and 3828 rings (1115.6ms) took 89522.6ms
[19:15:20] relations: 244k/244k
[19:15:41] relations: total time 55m 18s for 244095 (73/s)
[00:35:28] ways: 46907k/46907k
[00:35:30] ways: total time 5 h 19m 49s for 46907462 (2444/s)
[00:40:21] nodes: 1437k/1437k
[00:40:22] nodes: total time 4 m 51s for 1437951 (4933/s)
[00:40:22] ## creating generalized tables
[01:44:47] generalizing tables took 1 h 4 m 24s
[01:44:47] ## creating union views
[01:44:48] creating views took 0 s
[01:44:48] ## creating geometry indexes
[02:15:02] creating indexes took 30m 14s
[02:15:02] writing took 7 h 54m 41s
[02:15:02] imposm took 7 h 54m 42s
bmaddox@girls:~/Downloads/geodata/foo$

As you can see from the above output, this took almost eight hours on my home server (a quad-core AMD with eight gigabytes of RAM).  This command loads all of the data from the intermediate files into PostGIS.  However, we are not done yet.  Looking at the output, all it did was load the data and create indexes.  It did not cluster the data or perform any other optimizations.  To do that, run the following imposm command:

bmaddox@girls:~/Downloads/geodata/foo$ imposm --optimize -d OSM --user bmaddox
password for bmaddox at localhost:
[17:18:12] ## optimizing tables
Clustering table osm_new_transport_areas
Clustering table osm_new_mainroads
Clustering table osm_new_buildings
Clustering table osm_new_mainroads_gen1
Clustering table osm_new_mainroads_gen0
Clustering table osm_new_amenities
Clustering table osm_new_waterareas_gen1
Clustering table osm_new_waterareas_gen0
Clustering table osm_new_motorways_gen0
Clustering table osm_new_aeroways
Clustering table osm_new_motorways
Clustering table osm_new_transport_points
Clustering table osm_new_railways_gen0
Clustering table osm_new_railways_gen1
Clustering table osm_new_landusages
Clustering table osm_new_waterways
Clustering table osm_new_railways
Clustering table osm_new_motorways_gen1
Clustering table osm_new_waterareas
Clustering table osm_new_places
Clustering table osm_new_admin
Clustering table osm_new_minorroads
Clustering table osm_new_landusages_gen1
Clustering table osm_new_landusages_gen0
Vacuum analyze
[19:24:38] optimizing took 2 h 6 m 25s
[19:24:38] imposm took 2 h 6 m 26s
bmaddox@girls:~/Downloads/geodata/foo$

On my system this took a couple of hours; it clustered all of the tables and then ran a vacuum analyze to update the database statistics.

The final step is to have imposm rename the tables to what they will be in “production mode”.  Run the following:

bmaddox@girls:~/Downloads/geodata/foo$ imposm -d OSM --user bmaddox --deploy-production-tables
password for bmaddox at localhost:
[11:00:06] imposm took 1 s
bmaddox@girls:~/Downloads/geodata/foo$

Your data should now be optimized and ready for use.  To test it, refer to an earlier post in this series where I discussed using QGIS and load some of the OSM data into it.

Your OSM database will have the following tables in it:

 List of relations
 Schema | Name | Type | Owner 
--------+----------------------+-------+---------
 public | osm_admin | table | bmaddox
 public | osm_aeroways | table | bmaddox
 public | osm_amenities | table | bmaddox
 public | osm_buildings | table | bmaddox
 public | osm_landusages | table | bmaddox
 public | osm_landusages_gen0 | table | bmaddox
 public | osm_landusages_gen1 | table | bmaddox
 public | osm_mainroads | table | bmaddox
 public | osm_mainroads_gen0 | table | bmaddox
 public | osm_mainroads_gen1 | table | bmaddox
 public | osm_minorroads | table | bmaddox
 public | osm_motorways | table | bmaddox
 public | osm_motorways_gen0 | table | bmaddox
 public | osm_motorways_gen1 | table | bmaddox
 public | osm_places | table | bmaddox
 public | osm_railways | table | bmaddox
 public | osm_railways_gen0 | table | bmaddox
 public | osm_railways_gen1 | table | bmaddox
 public | osm_transport_areas | table | bmaddox
 public | osm_transport_points | table | bmaddox
 public | osm_waterareas | table | bmaddox
 public | osm_waterareas_gen0 | table | bmaddox
 public | osm_waterareas_gen1 | table | bmaddox
 public | osm_waterways | table | bmaddox
 public | spatial_ref_sys | table | bmaddox
(25 rows)

The _gen0 and _gen1 tables are generalized and not as highly detailed as the other tables.  They are good for viewing data over large geographic areas (think national scale).  At areas that large, rendering the full-resolution data would take a long time, so the _gen0 and _gen1 tables provide simplified versions of the data for use at those scales.  You can use QGIS’s scale-dependent rendering to show these tables and then switch to the high-resolution tables as you zoom in.

Go forth and play with the additional free geospatial data you now have in your database 🙂


Using Free Geospatial Tools and Data Part 11: NGA Geonames

Updated 23 March 2018: Changed for new size necessary for the cc2 column

It’s been a while since I’ve made a post, so I thought I’d keep going with the data series.  This time around I’ll be talking about how to make your own local copy of the NGA Geonames database.  This database is similar to GNIS, but it covers the whole globe and also has information on locations such as airfields, pipelines, and so on.

First, download the following files from the Geonames website:

  • admin1CodesASCII.txt
  • admin2Codes.txt
  • allCountries.txt
  • alternateNamesV2.txt
  • countryInfo.txt
  • featureCodes_en.txt
  • hierarchy.txt
  • iso-languagecodes.txt
  • timeZones.txt
  • userTags.txt

Some of them are zipped, so you’ll need to unzip them into the same directory as the others for ease of use.  Next, create your geonames database by running:

bmaddox@girls:~/Downloads/geodata$ createdb -T gistemplate Geonames

Next, we will create the table for the main points file, which is called allCountries.txt.  Run the following command from the same directory where you have all of the Geonames files:

bmaddox@girls:~/Downloads/geodata$ psql -d Geonames 
psql (9.3.4)
Type "help" for help.
Geonames=#

This will put you into the PostgreSQL command line.  Now create the table to hold the data in the allCountries.txt file:

Geonames=# create table geoname (
geonameid int,
name varchar(200),
asciiname varchar(200),
alternatenames text,
latitude float,
longitude float,
fclass char(1),
fcode varchar(10),
country varchar(2),
cc2 varchar(170),
admin1 varchar(20),
admin2 varchar(80),
admin3 varchar(20),
admin4 varchar(20),
population bigint,
elevation int,
dem int,
timezone varchar(40),
moddate date
);
CREATE TABLE
Geonames=#

Now we will use a built-in PostgreSQL command to load the data into the database.  There are two forms of it: the long form specifies the column names in order on the command line, while the short form takes just the file name.  We will be using the short form here:

Geonames=# \copy geoname from allCountries.txt null as '';
Geonames=#

This loads the data, but it is not yet usable by a GIS.  We need to add a geometry column to the table and then use the latitude and longitude columns to populate it with points.

Geonames=# SELECT AddGeometryColumn( 'geoname', 'the_geom', 4326, 'POINT', 2);
 addgeometrycolumn 
------------------------------------------------------
 public.geoname.the_geom SRID:4326 TYPE:POINT DIMS:2 
(1 row)
Geonames=#

This command creates the geometry column and specifies an SRID of 4326 (EPSG:4326, WGS84).  Now we need to insert the latitude and longitude of each point into this column:

Geonames=# update geoname SET the_geom = ST_PointFromText('POINT(' || longitude || ' ' || latitude || ')', 4326);
UPDATE 8943136
Geonames=#

This will take a while as PostGIS must read each point, convert it into the proper format, and then add it into the geometry column.  Now we need to add a geospatial index on this column to make the queries faster.  Again, it may take a while to run.

Geonames=# create index geoname_the_geom_gist_idx on geoname using gist (the_geom);
CREATE INDEX
Geonames=#

Once this is done, we should optimize this table as I mentioned in a previous post.  We need to analyze the database and then cluster it on the points.

Geonames=# vacuum analyze geoname;
VACUUM
Geonames=# cluster geoname using geoname_the_geom_gist_idx;
CLUSTER
Geonames=# analyze geoname;
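
At this point the table is queryable from any PostGIS-aware client. As a quick sanity check, here is a small sketch using psycopg2 (the connection settings and bounding box are just examples):

import psycopg2

conn = psycopg2.connect(dbname="Geonames", user="bmaddox", host="localhost")

# ST_MakeEnvelope(xmin, ymin, xmax, ymax, srid) builds the query rectangle,
# and the && operator lets PostGIS use the GiST index we just created.
query = """
    SELECT name, fclass, fcode
    FROM geoname
    WHERE the_geom && ST_MakeEnvelope(-78.0, 38.5, -77.0, 39.5, 4326)
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for name, fclass, fcode in cur.fetchall():
        print(name, fclass, fcode)

conn.close()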

There are several auxiliary tables we should now add to the Geonames database.  These define the values used in the various columns and can be used in JOIN statements in a GIS.  I’m going to leave out the vacuum analyze steps, but you should perform them on each table below.  The first is the alternatename table, which holds data from the alternateNames.txt file.  This file contains a list of other names some of the points are known by and is connected to the geoname table by the geonameid column:

Geonames=# create table alternatename (
alternatenameId int,
geonameid int,
isoLanguage varchar(7),
alternateName varchar(400),
isPreferredName boolean,
isShortName boolean,
isColloquial boolean,
isHistoric boolean
);
CREATE TABLE
Geonames=# \copy alternatename from alternateNames.txt null as '';
Geonames=#

Next we move on to the iso-languagecodes.txt file.  This file contains the ISO 639 standard codes and names for the languages used in the database.

Geonames=# create table "isolanguage" (
 iso_639_3 char(3),
 iso_639_2 char(10),
 iso_639_1 char(3),
 language_name varchar(100)
);
CREATE TABLE
Geonames=# \copy isolanguage from iso-languagecodes.txt null '' delimiter E'\t' csv header
Geonames=#

Next we will create and load the countryInfo.txt file, which contains information about each country such as iso codes, phone number formats, and so on.  First, we need to remove the comment lines from the start of the file to make things easier.  You can either do this with a text editor and delete every line that starts with the # character, or you can run the following command from bash:

bmaddox@girls:~/Downloads/geodata$ egrep -v "^[[:blank:]]*#" countryInfo.txt > countryInfo2.txt

With this done, we can proceed with the import as normal:

Geonames=# create table "countryinfo" ( 
 iso_alpha2 char(2),
 iso_alpha3 char(3),
 iso_numeric integer,
 fips_code varchar(3),
 name varchar(200),
 capital varchar(200),
 areainsqkm double precision,
 population integer,
 continent varchar(2),
 tld varchar(10),
 currencycode varchar(3),
 currencyname varchar(20),
 phone varchar(20),
 postalcode varchar(100),
 postalcoderegex varchar(200),
 languages varchar(200),
 geonameId int,
 neighbors varchar(50),
 equivfipscode varchar(3)
);
CREATE TABLE
Geonames=# \copy countryinfo from countryInfo2.txt null as '';
Geonames=#

Next we do the timeZones.txt file:

Geonames=# create table "timezones" (
countrycode char(2),
TimeZoneId varchar(30),
gmtoffset double precision,
dstoffset double precision,
rawoffset double precision
);
CREATE TABLE
Geonames=# \copy timezones from timeZones.txt null '' delimiter E'\t' csv header
Geonames=#

Next we do the admin1CodesASCII.txt file, which matches ASCII names of administrative divisions to their codes:

Geonames=# CREATE TABLE "admin1codesascii" ( 
code CHAR(10), 
name TEXT, 
nameAscii TEXT, 
geonameid int 
); 
CREATE TABLE
Geonames=# \copy admin1codesascii from admin1CodesASCII.txt null as '';
Geonames=#

Now we do the admin2Codes.txt file that maps the admin2code values to their textual entries.

Geonames=# CREATE TABLE "admin2codes" (
 code varchar(30),
 name_local text,
 name text,
 geonameid int
);
CREATE TABLE
Geonames=# \copy admin2codes from admin2Codes.txt null as '';
Geonames=#

Next is featureCodes_en.txt, which maps feature codes to their descriptions:

Geonames=# CREATE TABLE "featurecodes" ( 
code CHAR(7), 
name VARCHAR(200), 
description TEXT 
); 
CREATE TABLE
Geonames=# \copy featurecodes from featureCodes_en.txt null as '';
Geonames=#

Next is the userTags.txt file that contains user-contributed tagging to the points.

Geonames=# create table "usertags" (
geonameid int,
tag varchar(40)
);
CREATE TABLE
Geonames=# \copy usertags from userTags.txt null as '';
Geonames=#

Finally we will handle the hierarchy.txt file, which contains parent-child relationships modeled from the admin1-4 codes.

Geonames=# create table "hierarchy" (
parentId int,
childId int,
type varchar(40)
);
CREATE TABLE
Geonames=# \copy hierarchy from hierarchy.txt null as '';
Geonames=#

You now should have your own complete copy of the Geonames database.  They do publish updates regularly, so you can either recreate the tables or apply their change files.  You may also wish to index the feature class and code columns (fclass and fcode) of the geoname table so you can create custom views that only display things like airports, towers, and so on, as sketched below.
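
As an example of that last suggestion, here is a hedged sketch (again via psycopg2; 'AIRP' is the Geonames feature code for airports, but check the featurecodes table for the categories you care about):

import psycopg2

conn = psycopg2.connect(dbname="Geonames", user="bmaddox", host="localhost")

with conn, conn.cursor() as cur:
    # Index the feature code column so filtered views stay fast.
    cur.execute("CREATE INDEX geoname_fcode_idx ON geoname (fcode);")
    # A view containing only the airports.
    cur.execute("""
        CREATE VIEW airports AS
        SELECT geonameid, name, the_geom
        FROM geoname
        WHERE fcode = 'AIRP';
    """)

conn.close()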
