Mandy was in a car accident yesterday morning on her way home from work. She's OK, just a bump on her head. The car, however, didn't fare so well, but hopefully the insurance company will have it fixed and as good as new soon.
Hey all, I am finally in the process of updating my merged state- and national-level TIGER files to the 2014 data that the Census Bureau has put out. You can find them at my GIS Data Page. Note that Roads are not uploaded yet, but I have already updated the links on the download page, so you will get 404 errors until I get them uploaded. I cannot promise it will be tonight since I have to sleep sometime 😉 If you find any 404s on the others, let me know in case I missed a link.
As usual, these are my own value added files that I am publishing in case some people find them useful. If you use these and your business fails, your wife leaves you, your dog dies, and you write a country music song about it, not my fault.
I’ll admit it, I really like old maps. I especially like old topographic maps. I think it started when I used to work for the US Geological Survey. To me, it’s interesting to see how things change over time. From my old urban growth prediction days, I like to see where and when populations change.
Since the USGS put out their Historical Topographic Map Collection (HTMC), I’ve been playing with the maps off and on for about a year now. I finally decided to start merging available maps of the same scale and vintage to study and possibly do feature extraction down the road. I’ll be placing them for download here in case anyone is interested as I process them.
I thought I’d share how I put them together in case anyone else is interested. The first step is to go to the USGS website and pick the files you want to use; they are available in GeoPDF format. The first thing to understand is that you may not find a map covering your area of interest at the scale and vintage you want. Not everything managed to survive to the current day. For example, when I made a merged 125K map of Virginia, most of southern VA was missing at that scale.
Once I download the GeoPDFs I want to work on, I use a modified version of the geopdf2gtiff.pl script from the Xastir project. The link to my modifications can be found here. I use LZW compression for my GeoTIFFs as it’s lossless and preserves the quality of the GeoPDFs. It is a Perl script and requires that you have GDAL and the GDAL API installed. It calculates the neat-line of the GeoPDF and then clips to it while converting the file to a GeoTIFF. Running it is as simple as passing the GeoPDF on the command line.
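For example (the quad name here is made up; substitute whichever GeoPDF you actually downloaded):

geopdf2gtiff.pl VA_Richmond_125K.pdf  # placeholder file name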
Once you have all of your GeoPDF files downloaded and converted, the next step is to merge them. The fastest way I’ve found involves using gdalbuildvrt and gdal_translate, also from GDAL. The first step is to create a virtual dataset of all of your files by running something like:
gdalbuildvrt -resolution highest -a_srs EPSG:4326 merged.vrt parts/*.tif
The options I chose here pick the highest pixel resolution (-resolution) found among the input files. In this case the resolutions should all be the same, but this way I don’t have to go through and verify that. Next I assign WGS84 as the declared projection of the output file (-a_srs). Then come the file name of the virtual dataset and the input files.
Now that the virtual dataset is done, it’s time to actually merge all of the files together. The virtual dataset contains the calculated bounding box that will contain all of the input files. Now we use gdal_translate to actually create the merged GeoTIFF file:
gdal_translate -of GTiff -co COMPRESS=LZW -co PREDICTOR=2 merged.vrt ~/merged.tif
Here again I use LZW compression to losslessly compress the output data. Note that gdal_translate will automatically add an Alpha channel as Band 4 in the image to denote areas that had no input data. That’s why we do NOT add the -addalpha flag to gdalbuildvrt. For performance tips, I’d suggest keeping the source data and output file on separate drives unless you’re running something like a solid state drive. To give you an idea of the output file sizes, Virginia merged (which did have a lot of areas missing) was around 500 megabytes.
Next you’ll need a Shapefile to use as a cut file to clip the data. Since I have the Census Tiger 2013 data in a local PostGIS database (see previous posts to this blog), I used QGIS to select just the VA state outline and then saved it as a Shapefile.
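If you would rather skip the GUI step, ogr2ogr can pull the same outline straight out of PostGIS. This is just a sketch under assumed names: the database (census), table (state), and column (stusps) are placeholders you will need to match to your own TIGER import:

ogr2ogr -f "ESRI Shapefile" va_outline.shp PG:"dbname=census" -sql "SELECT * FROM state WHERE stusps = 'VA'"  # names here are assumptions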
Finally, we will use gdalwarp to clip the merged GeoTIFF against the state outline to produce the clipped GeoTIFF that is just the state itself. This operation can take a bit of time depending on how powerful a machine you’re running it on. The command you will use is similar to this:
gdalwarp --config GDAL_CACHEMAX 1024 -wm 1024 -cutline va_outline.shp -crop_to_cutline -multi -t_srs EPSG:4326 -co COMPRESS=LZW -co PREDICTOR=2 -co BIGTIFF=YES -co TILED=YES ~/merged.tif clipped.tif
Some of the command line parameters I used are optional; I just tend to leave them in since I do a lot of copying and pasting 😉 First we tell GDAL to increase the size of its caches using the --config GDAL_CACHEMAX and -wm options. Next we specify the file to clip against with the -cutline and -crop_to_cutline options. The -multi option tells GDAL to process using multiple threads. I again specify the output projection and the LZW compression parameters. Here I also specify the BIGTIFF option just in case the output file goes over four gigabytes. Finally, I tell gdalwarp to tile the output TIFF, which makes it load faster in a GIS.
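If you want to double-check that the creation options took, gdalinfo (part of GDAL) will report them:

gdalinfo clipped.tif

Look for COMPRESSION=LZW under the Image Structure Metadata section and a block size such as 256x256 on each band line.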
The output will look something like the below figure. I’ll start posting files as I get time. Hope everyone is having a great holiday!
I’ve been keeping track of how many botnets are out there scanning WordPress blogs. I eventually resorted to blocking huge chunks of the Internet via .htaccess files. So far it’s been quite effective in limiting the number of botnet login attempts.
If anyone is interested, I’ve put the limit portion of my .htaccess file here. Feel free to use it and customize for your needs.
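For anyone who just wants the general shape of it, the entries are ordinary Apache access rules wrapped around wp-login.php. Here is a minimal sketch; the CIDR ranges below are documentation placeholders, not the real list from my file:

<Files wp-login.php>
# Placeholder ranges; substitute the botnet blocks you actually want to ban
Order Allow,Deny
Allow from all
Deny from 192.0.2.0/24
Deny from 198.51.100.0/24
</Files>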
Lately I’ve been getting tons and tons of login attempts from what appear to be botnets. Since I’m getting tired of banning the IPs individually, I’m temporarily banning entire countries and ISPs from hitting my blog. If you’re in that group, sorry guys. Take it up with your ISP.
Here are some stats I’ve been gathering.
IP Addresses Grouped by ISPs
220.127.116.11 | astral telecom sa
18.104.22.168 | china unicom zhejiang province network
22.214.171.124 | chinanet guangdong province network
126.96.36.199 | citycom networks pvt ltd
188.8.131.52 | comsenz technology ltd
184.108.40.206 | desenvolve solucoes de internet ltda
220.127.116.11 | desokey mohamed hassan centerarabs
18.104.22.168 | erfahren co. ltd.
22.214.171.124 | forest eternal communication tech. co.ltd
126.96.36.199 | fpt telecom company
188.8.131.52 | go daddy netherlands b.v.
184.108.40.206 | hetzner online ag
220.127.116.11 | hetzner online ag
18.104.22.168 | ho chi minh city post and telecom company
22.214.171.124 | host papa inc.
126.96.36.199 | iomart hosting limited
188.8.131.52 | ip pool for ctrls
184.108.40.206 | ipx server gmbh
220.127.116.11 | iweb dedicated cl
18.104.22.168 | leaseweb germany gmbh
22.214.171.124 | limestone networks inc
126.96.36.199 | mainstream consulting group inc
188.8.131.52 | media temple inc
184.108.40.206 | media temple inc
220.127.116.11 | media temple inc.
18.104.22.168 | media temple inc.
22.214.171.124 | nap de las americas-madrid s.a.
126.96.36.199 | new dream network llc
188.8.131.52 | online data services jsc
184.108.40.206 | ovh hosting inc
220.127.116.11 | pt. cyberindo aditama
18.104.22.168 | pt. des teknologi informasi
22.214.171.124 | radore veri merkezi hizmetleri a.s.
126.96.36.199 | rcs & rds business
188.8.131.52 | salay telekomunikasyon ticaret limited sirketi
184.108.40.206 | sc eurosistem srl
220.127.116.11 | sc webfactor srl
18.104.22.168 | skyware sp. z o.o.
22.214.171.124 | soc. comercial wirenet chile ltda.
126.96.36.199 | softcom america inc.
188.8.131.52 | softcom america inc.
184.108.40.206 | softlayer technologies inc
220.127.116.11 | teknik data internet teknolojileri san.tic.ltd. sti
18.104.22.168 | tolvu- og rafeindapjonusta sudurlands ehf
22.214.171.124 | turk telekomunikasyon anonim sirketi
126.96.36.199 | vietnam data communication company
188.8.131.52 | vietnam data communication company
184.108.40.206 | webfusion internet solutions
220.127.116.11 | webfusion internet solutions
18.104.22.168 | webfusion internet solutions
As you can see, I get a bunch from GoDaddy and the French ISP OVH. I’ve also banned GoDaddy, OVH, and Media Temple IP ranges. I’ll be adding others once I find all of their allocated net ranges.
For reference, here’s a copy of my current list along with attempts:
Guess I should be flattered that I’m getting all of this “attention” 🙂
For this installment, we will look at importing data from OpenStreetMap.org. As I mentioned in an earlier post, OpenStreetMap is a crowd-sourced GIS project with the goal of producing a global dataset that anyone can use. There are two ways to download this data: you can either use BitTorrent and download the entire planet from http://osm-torrent.torres.voyager.hr/ or download extracts from http://download.geofabrik.de/. If you do not need the entire planet, I would highly recommend using Geofabrik. It has fast downloads, and they have finally added MD5 checksums so you can verify the integrity of your download.
Go to http://download.geofabrik.de/ and click on North America. We will be using the .pbf format file, so click the link near the top of the page named north-america-latest.osm.pbf. It is about six gigabytes in size, and the MD5 sum is listed at the end of the paragraph. Once the download is done in your browser, you can use the md5sum command under a Linux shell or download one of the many MD5sum clients for Windows. The result will look similar to the example output below (yours likely will not match exactly, as the MD5 value changes whenever the data is updated):
bmaddox@girls:~/Downloads/geodata$ md5sum north-america-latest.osm.pbf
d2daa9c7d3ef4dead4a2b5f790523e6d  north-america-latest.osm.pbf
bmaddox@girls:~/Downloads/geodata$
Next go back to the main Geofabrik site and click on and download the Central America file. This will give you Mexico and the other Central American countries. As above, once the download is done in your browser, check it with md5sum. If the values do not match, redownload and rerun md5sum until they do.
There are several programs you can use to import OpenStreetMap data into PostGIS. They mainly differ on what schema they use and how they manipulate the data before it goes in. For purposes of this post, we will be using the imposm program found at http://imposm.org/docs/imposm/latest/. If you are on Ubuntu, it should be a simple apt-get install imposm away. For Windows or other distributions, you can download it directly from the imposm website. The tutorial on how to import data using imposm can be found here: http://imposm.org/docs/imposm/latest/tutorial.html.
Using imposm is a multi-stage process. The first stage is to have it read the data and combine the files into several intermediary files. First create a PostGIS database by running:
createdb -T gistemplate OSM
Now have imposm take the data and convert it into its intermediary files. To do this, run a similar command to this:
bmaddox@girls:/data/data/geo$ imposm --read --concurrency 2 --proj EPSG:4326 ~/Downloads/geodata/*.pbf
[16:29:15] ## reading /home/bmaddox/Downloads/geodata/central-america-latest.osm.pbf
[16:29:15] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[16:31:27] coords: 21524k nodes: 92k ways: 2464k relations: 5k
[16:31:28] ## reading /home/bmaddox/Downloads/geodata/north-america-latest.osm.pbf
[16:31:28] coords: 500489k nodes: 10009k ways: 71498k relations: 500k (estimated)
[17:40:22] coords: 678992k nodes: 1347k ways: 44469k relations: 229k
[17:40:23] reading took 1 h 11m 7 s
[17:40:23] imposm took 1 h 11m 7 s
bmaddox@girls:/data/data/geo$
Here, I changed to a different drive and ran the imposm command to read from the drive where I downloaded the .pbf files. I did this since reading is a disk-intensive process, and splitting it between drives helps speed things up a bit. Also, I differed from the tutorial because my install of QGIS could not render OpenStreetMap data in its native EPSG:900913 projection alongside the EPSG:4326 data from my Tiger import, so I convert everything to EPSG:4326 here. Unless you have an extremely high-end workstation, this will take a while. Once the process is done, you will have the following files in the output directory:
bmaddox@girls:~/Downloads/geodata/foo$ dir
imposm_coords.cache  imposm_nodes.cache  imposm_relations.cache  imposm_ways.cache
The next step is to take the intermediary files and write them into PostGIS. The write stage works from the cache files created above, so run it from the directory that contains them.
bmaddox@girls:~/Downloads/geodata/foo$ imposm --write --database OSM --host localhost --user bmaddox --port 5432 --proj EPSG:4326
password for bmaddox at localhost:
[18:20:21] ## dropping/creating tables
[18:20:22] ## writing data
[2014-06-15 18:52:46,074] imposm.multipolygon - WARNING - building relation 1834172 with 8971 ways (10854.8ms) and 8843 rings (2293.0ms) took 426854.5ms
[2014-06-15 19:00:47,635] imposm.multipolygon - WARNING - building relation 2566179 with 4026 ways (4717.3ms) and 3828 rings (1115.6ms) took 89522.6ms
[19:15:20] relations: 244k/244k
[19:15:41] relations: total time 55m 18s for 244095 (73/s)
[00:35:28] ways: 46907k/46907k
[00:35:30] ways: total time 5 h 19m 49s for 46907462 (2444/s)
[00:40:21] nodes: 1437k/1437k
[00:40:22] nodes: total time 4 m 51s for 1437951 (4933/s)
[00:40:22] ## creating generalized tables
[01:44:47] generalizing tables took 1 h 4 m 24s
[01:44:47] ## creating union views
[01:44:48] creating views took 0 s
[01:44:48] ## creating geometry indexes
[02:15:02] creating indexes took 30m 14s
[02:15:02] writing took 7 h 54m 41s
[02:15:02] imposm took 7 h 54m 42s
bmaddox@girls:~/Downloads/geodata/foo$
As you can see from the above output, this took almost eight hours on my home server (quad core AMD with eight gig of RAM). This command loads all of the data from the intermediate files into PostGIS. However, we are not done yet. Looking at the output, all it did was load the data and create indices. It did not cluster the data or perform any other optimizations. To do this, run the following imposm command:
bmaddox@girls:~/Downloads/geodata/foo$ imposm --optimize -d OSM --user bmaddox
password for bmaddox at localhost:
[17:18:12] ## optimizing tables
Clustering table osm_new_transport_areas
Clustering table osm_new_mainroads
Clustering table osm_new_buildings
Clustering table osm_new_mainroads_gen1
Clustering table osm_new_mainroads_gen0
Clustering table osm_new_amenities
Clustering table osm_new_waterareas_gen1
Clustering table osm_new_waterareas_gen0
Clustering table osm_new_motorways_gen0
Clustering table osm_new_aeroways
Clustering table osm_new_motorways
Clustering table osm_new_transport_points
Clustering table osm_new_railways_gen0
Clustering table osm_new_railways_gen1
Clustering table osm_new_landusages
Clustering table osm_new_waterways
Clustering table osm_new_railways
Clustering table osm_new_motorways_gen1
Clustering table osm_new_waterareas
Clustering table osm_new_places
Clustering table osm_new_admin
Clustering table osm_new_minorroads
Clustering table osm_new_landusages_gen1
Clustering table osm_new_landusages_gen0
Vacuum analyze
[19:24:38] optimizing took 2 h 6 m 25s
[19:24:38] imposm took 2 h 6 m 26s
bmaddox@girls:~/Downloads/geodata/foo$
On my system it took a couple of hours and clustered all of the tables and then did a vacuum analyze to update the database statistics.
The final step is to have imposm rename the tables to what they will be in “production mode”. Run the following:
bmaddox@girls:~/Downloads/geodata/foo$ imposm -d OSM --user bmaddox --deploy-production-tables
password for bmaddox at localhost:
[11:00:06] imposm took 1 s
bmaddox@girls:~/Downloads/geodata/foo$
Your data should now be optimized and ready for use. To test it, refer to an earlier post in this series where I discussed using QGIS and load some of the OSM data into it.
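You can also sanity-check the import from psql. A quick sketch, assuming imposm's default schema (which stores the feature kind in a type column):

OSM=# select type, count(*) from osm_amenities group by type order by count(*) desc limit 5;  -- "type" comes from imposm's default schema

This should list the five most common amenity types you imported.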
Your OSM database will have the following tables in it:
                List of relations
 Schema |         Name         | Type  |  Owner
--------+----------------------+-------+---------
 public | osm_admin            | table | bmaddox
 public | osm_aeroways         | table | bmaddox
 public | osm_amenities        | table | bmaddox
 public | osm_buildings        | table | bmaddox
 public | osm_landusages       | table | bmaddox
 public | osm_landusages_gen0  | table | bmaddox
 public | osm_landusages_gen1  | table | bmaddox
 public | osm_mainroads        | table | bmaddox
 public | osm_mainroads_gen0   | table | bmaddox
 public | osm_mainroads_gen1   | table | bmaddox
 public | osm_minorroads       | table | bmaddox
 public | osm_motorways        | table | bmaddox
 public | osm_motorways_gen0   | table | bmaddox
 public | osm_motorways_gen1   | table | bmaddox
 public | osm_places           | table | bmaddox
 public | osm_railways         | table | bmaddox
 public | osm_railways_gen0    | table | bmaddox
 public | osm_railways_gen1    | table | bmaddox
 public | osm_transport_areas  | table | bmaddox
 public | osm_transport_points | table | bmaddox
 public | osm_waterareas       | table | bmaddox
 public | osm_waterareas_gen0  | table | bmaddox
 public | osm_waterareas_gen1  | table | bmaddox
 public | osm_waterways        | table | bmaddox
 public | spatial_ref_sys      | table | bmaddox
(25 rows)
The _gen0 and _gen1 tables are generalized and not as highly detailed as the other tables. They are good for viewing data over large geographic areas (think nation scale). With areas that large, it would take a lot of time to render the high resolution data. Thus the _gen0 and _gen1 tables are simplified versions of the data for use at these resolutions. You can use QGIS’s scale-dependent rendering to specify these tables and then go to the high-resolution tables upon zooming in.
Go forth and play with the additional free geospatial data you now have in your database 🙂
Towards the end of my tenure at the US Geological Survey, I was the project manager and principal investigator of Restoration of Data from Lossy Compression. The goal of the project was to find ways to restore fine detail that was lost during lossy compression processes such as JPEG. I had submitted an Open-File Report through the review process, but left the USGS in 2006 before the paper had completed review. After I left, it basically fell through the cracks and was never officially published.
I had forgotten about it until recently when I was updating my resume. So, without further ado, I have put the paper here. I took out the USGS logo and what not since it was never officially published by them. So for a flashback into what I was doing in 2006, have fun reading it 🙂
Updated 23 March 2018: Changed for new size necessary for the cc2 column
It’s been a while since I’ve made a post, so I thought I’d keep going with the data series. This time around I’ll be talking about how to make your own local copy of the NGA Geonames database. This database is similar to GNIS, but covers the whole globe and also has information on locations such as airfields, pipelines, and so on.
First, download the following files from the Geonames website: allCountries, alternateNames, iso-languagecodes.txt, countryInfo.txt, timeZones.txt, admin1CodesASCII.txt, admin2Codes.txt, featureCodes_en.txt, userTags.txt, and hierarchy.txt.
Some of them are zipped, so you’ll need to unzip them into the same directory as the others for ease of use. Next, create your geonames database by running:
bmaddox@girls:~/Downloads/geodata$ createdb -T gistemplate Geonames
Next, we will create the table for the main points file, which is called allCountries.txt. Run the following command from the same directory where you have all of the Geonames files:
bmaddox@girls:~/Downloads/geodata$ psql -d Geonames
psql (9.3.4)
Type "help" for help.
Geonames=#
This will put you into the PostgreSQL command line. Now create the table to hold the data in the allCountries.txt file:
Geonames=# create table geoname (
    geonameid int,
    name varchar(200),
    asciiname varchar(200),
    alternatenames text,
    latitude float,
    longitude float,
    fclass char(1),
    fcode varchar(10),
    country varchar(2),
    cc2 varchar(170),
    admin1 varchar(20),
    admin2 varchar(80),
    admin3 varchar(20),
    admin4 varchar(20),
    population bigint,
    elevation int,
    dem int,
    timezone varchar(40),
    moddate date
);
CREATE TABLE
Geonames=#
Now we will use a built-in PostgreSQL command to load the data into the DB. There are two forms of it: the long form specifies the column names in order on the command line, while the short form takes just the file name. We will be using the short form here:
Geonames=# \copy geoname from allCountries.txt null as '';
Geonames=#
This loads the data, but it is not yet usable by a GIS. We need to add a geometry column and then use the latitude and longitude columns to populate it with points.
Geonames=# SELECT AddGeometryColumn('geoname', 'the_geom', 4326, 'POINT', 2);
                  addgeometrycolumn
------------------------------------------------------
 public.geoname.the_geom SRID:4326 TYPE:POINT DIMS:2
(1 row)

Geonames=#
This command creates the geometry column and specifies EPSG code 4326 (WGS84). Now we need to insert the latitudes and longitudes of the points into this column:
Geonames=# update geoname SET the_geom = ST_PointFromText('POINT(' || longitude || ' ' || latitude || ')', 4326);
UPDATE 8943136
Geonames=#
This will take a while as PostGIS must read each point, convert it into the proper format, and then add it into the geometry column. Now we need to add a geospatial index on this column to make the queries faster. Again, it may take a while to run.
Geonames=# create index geoname_the_geom_gist_idx on geoname using gist (the_geom);
CREATE INDEX
Geonames=#
Once this is done, we should optimize this table as I mentioned in a previous post. We need to analyze the database and then cluster it on the points.
Geonames=# vacuum analyze geoname;
VACUUM
Geonames=# cluster geoname using geoname_the_geom_gist_idx;
CLUSTER
Geonames=# analyze geoname;
There are several auxiliary tables we should now add to the Geonames database. These define the values used in the various columns and can be used in a JOIN statement in a GIS. I’m going to leave out the vacuum analyze steps, but you should perform them on each table below. The first will be the alternatename table, which holds data from the alternateNames.txt file. This file contains a list of other names some of the points are known by and is connected to the geoname table by the geonameid column:
Geonames=# create table alternatename (
    alternatenameId int,
    geonameid int,
    isoLanguage varchar(7),
    alternateName varchar(400),
    isPreferredName boolean,
    isShortName boolean,
    isColloquial boolean,
    isHistoric boolean
);
CREATE TABLE
Geonames=# \copy alternatename from alternateNames.txt null as '';
Geonames=#
Next we move on to the iso-languagecodes.txt file. This file contains ISO 639 standard names for all of the languages in the database.
Geonames=# create table "isolanguage" (
    iso_639_3 char(3),
    iso_639_2 char(10),
    iso_639_1 char(3),
    language_name varchar(100)
);
CREATE TABLE
Geonames=# \copy isolanguage from iso-languagecodes.txt null '' delimiter E'\t' csv header
Geonames=#
Next we will create and load the countryInfo.txt file, which contains information about each country, such as ISO codes, phone number formats, and so on. First, we need to remove the comment lines from the start of the file to make things easier. You can either do this with a text editor and delete every line that starts with the # character, or you can run the following command from bash:
bmaddox@girls:~/Downloads/geodata$ egrep -v "^[[:blank:]]*#" countryInfo.txt > countryInfo2.txt
With this done, we can proceed with the import as normal:
Geonames=# create table "countryinfo" (
    iso_alpha2 char(2),
    iso_alpha3 char(3),
    iso_numeric integer,
    fips_code varchar(3),
    name varchar(200),
    capital varchar(200),
    areainsqkm double precision,
    population integer,
    continent varchar(2),
    tld varchar(10),
    currencycode varchar(3),
    currencyname varchar(20),
    phone varchar(20),
    postalcode varchar(100),
    postalcoderegex varchar(200),
    languages varchar(200),
    geonameId int,
    neighbors varchar(50),
    equivfipscode varchar(3)
);
CREATE TABLE
Geonames=# \copy countryinfo from countryInfo2.txt null as '';
Geonames=#
Next we do the timeZones.txt file:
Geonames=# create table "timezones" (
    countrycode char(2),
    TimeZoneId varchar(30),
    gmtoffset double precision,
    dstoffset double precision,
    rawoffset double precision
);
CREATE TABLE
Geonames=# \copy timezones from timeZones.txt null '' delimiter E'\t' csv header
Geonames=#
Next we do the admin1CodesASCII.txt table, which matches ASCII names of administrative divisions to their codes:
Geonames=# CREATE TABLE "admin1codesascii" (
    code CHAR(10),
    name TEXT,
    nameAscii TEXT,
    geonameid int
);
CREATE TABLE
Geonames=# \copy admin1codesascii from admin1CodesASCII.txt null as '';
Geonames=#
Now we do the admin2Codes.txt file that maps the admin2code values to their textual entries.
Geonames=# CREATE TABLE "admin2codes" (
    code varchar(30),
    name_local text,
    name text,
    geonameid int
);
CREATE TABLE
Geonames=# \copy admin2codes from admin2Codes.txt null as '';
Geonames=#
Next is featureCodes_en.txt, which maps feature codes to their descriptions:
Geonames=# CREATE TABLE "featurecodes" (
    code CHAR(7),
    name VARCHAR(200),
    description TEXT
);
CREATE TABLE
Geonames=# \copy featurecodes from featureCodes_en.txt null as '';
Geonames=#
Next is the userTags.txt file, which contains user-contributed tags for the points.
Geonames=# create table "usertags" (
    geonameid int,
    tag varchar(40)
);
CREATE TABLE
Geonames=# \copy usertags from userTags.txt null as '';
Geonames=#
Finally we will handle the hierarchy.txt file, which contains parent-child relationships modeled from the admin1-4 codes.
Geonames=# create table "hierarchy" (
    parentId int,
    childId int,
    type varchar(40)
);
CREATE TABLE
Geonames=# \copy hierarchy from hierarchy.txt null as '';
Geonames=#
You now should have your own complete copy of the Geonames database. They do publish updates regularly, so you can either recreate the tables or apply their change files. You may also wish to index the fclass or fcode columns of the geoname table so you can create custom views that only display things like airports, towers, and so on.
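For example, here is a sketch of that kind of index plus a convenience view (AIRP is the Geonames feature code for airports):

Geonames=# create index geoname_fcode_idx on geoname (fcode);
Geonames=# create view airports as select * from geoname where fcode = 'AIRP';  -- AIRP is the airport feature code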
The Geographic Names Information System (GNIS) contains information about physical and cultural geographic features in the United States and associated areas, both current and historical (not including roads and highways). The database holds the Federally recognized name of each feature and defines the location of the feature by state, county, USGS topographic map, and geographic coordinates.
You can download the 79 megabyte GNIS zip file from here. You will want to select the NationalFile as it is not broken up into individual states. Importing GNIS into PostGIS is slightly more complicated as it does not come as a Shapefile, but instead as a 293 megabyte text file once it is unzipped from the above. Download the file, unzip it, and open a command window where the unzipped file is. Note that the last time I did this on Windows, using the command line client was an exercise in pain due to how Windows handles code pages and character types. On Windows it might be easier to do this inside something like pgadmin.
To import, first create a database inside PostgreSQL using something like the following:
createdb -T gistemplate USGS
Once done, you will want to run
psql -d USGS
to start the PostgreSQL database client. Now you will want to create the table to hold the data. To do this, copy and paste this statement into the psql client window:
CREATE TABLE gnis (
    feature_id integer NOT NULL,
    feature_name character varying,
    feature_class character varying,
    state_alpha character(2),
    state_numeric character(2),
    county_name character varying,
    county_numeric character(3),
    primary_lat_dms character varying,
    prim_long_dms character varying,
    prim_lat_dec real,
    prim_long_dec real,
    source_lat_dms character varying,
    source_long_dms character varying,
    source_lat_dec real,
    source_long_dec real,
    elev_in_m integer,
    elev_in_ft integer,
    map_name character varying,
    date_created date,
    date_edited date
);
Note that I COULD have spent the time figuring out the maximum size of each column, instead of just making them varchars, to save space. But, again, I’m lazy 🙂
Now to import, you will run the following command. The .txt file is over two million rows long, so it could take a while to import depending on the speed of your system.
USGS=# \copy gnis from NationalFile_20140204.txt DELIMITER '|' CSV HEADER
If you get a file not found error, run \copy with the full path to the NationalFile. Depending on when you do this, the file name may be different based on when it was last updated.
We are not done yet. There is no actual geospatial geometry column in the database. We will need to create one from the existing columns. To do this, first we must create a geometry column to hold the geospatial points.
USGS=# SELECT AddGeometryColumn('public', 'gnis', 'geom', 4326, 'POINT', 2);
This command tells PostgreSQL to add a geometry column named geom to the gnis table in the public schema using WGS84. Now we need to actually populate this column. We need to take the latitude and longitude columns in the table and convert them into the binary representation that PostGIS uses internally.
USGS=# update public.gnis SET geom = ST_PointFromText('POINT(' || prim_long_dec || ' ' || prim_lat_dec || ')', 4326);
Now we need to add a geospatial index on the geom column. You need an index to use the data in apps such as QGIS as it makes area look-ups much faster.
USGS=# create index gnis_geom_gist_idx on gnis using gist(geom);
Now that we have an index, we need to create our database statistics and cluster it on the geom column. As I mentioned in a previous post, you will run these commands in order (waiting for each one to complete before running the next):
USGS=# vacuum analyze gnis;
USGS=# cluster gnis using gnis_geom_gist_idx;
USGS=# analyze gnis;
And now we are done. You have your own local copy of GNIS that you can use in visual GIS tools or from the command line. There are some fun things you can do with the data; the below figure shows where I used QGIS to load all points in GNIS that have my last name in them (my modesty astounds even me 😉).
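If you want to try the same kind of search yourself, a query sketch like this works from psql (the name pattern is just an example; substitute your own):

USGS=# select feature_name, state_alpha, feature_class from gnis where feature_name ilike '%maddox%';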