I came across an old backup set the other day and found a copy of the CVS -> Subversion repository I kept that included a lot of code I wrote, inherited, or maintained. The code is at least ten years old now, so it's likely not of use to anyone; I mainly put it up to preserve the source for historical reasons. If anyone is interested, you can find it at https://github.com/briangmaddox.
QuickTip: SuiteCRM 7.6 and DreamHost
My wife (who is a realtor now 🙂) wanted a CRM, so I thought I'd set SuiteCRM up on our domain so she didn't have to pay for a commercial one. We host through DreamHost (which I would highly recommend as a hosting company, BTW), and everything I had read said it should, in theory, work just fine.
It didn’t.
I banged on it a little bit and finally got it working. In case anyone is interested, here are the steps I took, copied and pasted out of an email I sent back to DreamHost's technical support in case anyone else has problems (I'm lazy and don't feel like retyping it :). I think the root cause is that SuiteCRM creates config.php as part of the installation instead of shipping with one you can edit beforehand to set the default file and directory permissions.
- Unzipped it under my top-level domain and then renamed it so the URL would be XXX/suitecrm.
- Temporarily renamed my .htaccess so that it wouldn’t interfere with it.
- Did a chmod -R 775 suitecrm from the top-level domain directory.
- Made the PHP mods to my .php/5.5/phprc like your SugarCRM wiki mentioned and made some alterations just in case:
post_max_size = 50M
upload_max_size = 50M
max_input_time = 999
memory_limit = 140M
upload_max_filesize = 50M
suhosin.executor.include.whitelist = upload
max_execution_time = 500
- Started the installation. After entering the database information and whatnot, I clicked next and let it run. It hung, but it did at least create some of the subdirectories it needed, albeit with the “wrong” permissions, since config.php is not created until the install actually starts.
- Did a killall php55.cgi to stop the installers.
- Did another chmod -R 775 on the suitecrm directory from my top-level directory.
- Reran the install and this time it worked like a charm.
- Put my .htaccess back and then edited the default permissions in config.php like the DreamHost SugarCRM talk page mentions.
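If it helps, the shell side of the steps above boils down to something like the sketch below. The example.com path is just a placeholder for your own domain directory, and php55.cgi assumes DreamHost's PHP 5.5 FastCGI setup.
cd ~/example.com            # placeholder for your domain directory
chmod -R 775 suitecrm       # after unzipping/renaming SuiteCRM here
# start the web installer; when it hangs:
killall php55.cgi           # stop the stuck PHP 5.5 FastCGI processes
chmod -R 775 suitecrm       # fix the directories the installer created
# then re-run the web installer and finish normally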
Census 2015 State/National Data Remix Done
Yes, just in time for 2016… I've finished uploading the 2015 Census TIGER data, converted from the county files into state and national datasets, and have updated the links. I've also stopped posting the single-file entries, as there is not really a need for them. Hop over to https://brian.digitalmaddox.com/blog/?page_id=202 if you're interested.
2015 Census Data is Coming
Yes, it's close to the end of the year… but I'm currently putting together the Census 2015 TIGER data and will upload it here under the GIS Data page. Stay tuned.
No, Using Interfaces (or Abstractions) Alone Does NOT Mean You’re “Object Oriented”
Since I've been dealing with a lot of Java and now C# code over the past few years, I've noticed one thing: Java and C# programmers love interface classes. In fact, it seems that most Java and C# programmers think they cannot have a concrete class that does not inherit from some interface. I was curious about this pattern in a lot of the C# code I've had to deal with, so I asked why. The answer I got was “that way we are using abstractions and encapsulating things.”
Wrong. Just, wrong.
“Why not, smart guy?” you might ask. First off, let's look at some definitions. An interface defines a set of functionality that implementing classes must provide; it can only contain method and constant declarations, not definitions. An abstraction reduces and factors out details so that the developer can focus on only a few concepts at a time. It is similar to an interface, but instead of containing only declarations, it can contain partial definitions while forcing derived classes to implement certain functionality.
With these definitions, we see that an interface is just a language construct. It really just specifies a required syntax; in some languages, interfaces are not even classes. So what went wrong? Historically, it appears people got the wrong idea that an interface separates contracts from implementations. Separating the two is a good thing in object-oriented programming because it encapsulates functionality, but an interface does not do this, IT CAN'T. Remember, an interface simply specifies what functions must be present and what their return types are. It does not enforce how the computations are done. Consider the following interface pseudo code that defines an imaginary List with a count that tracks how many elements are in the list:
interface MyList<T> {
    public void AddItem(T item);
    public int GetNumItems();
}
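To make that concrete, here is a hypothetical implementation (just a sketch; the class name and backing store are made up) that satisfies the interface perfectly well while never maintaining a count:
// A perfectly legal implementation of MyList that never tracks a count.
// Nothing in the interface stops this.
class ForgetfulList<T> implements MyList<T> {
    private final java.util.ArrayList<T> items = new java.util.ArrayList<>();

    public void AddItem(T item) {
        items.add(item);   // the item is stored...
    }

    public int GetNumItems() {
        return 0;          // ...but the "count" is whatever I feel like returning
    }
}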
So, where does the interface enforce a contract that each added item will increment an internal counter? How does it FORCE me as a programmer to increment one? It doesn't; it can't. Since an interface is purely an empty shell, I as an implementer am free to do as I like as long as I follow the interface definition. If I don't want to keep an accurate count, I don't have to, as the sketch above shows. This does not really fulfill the object-oriented dependency inversion principle (DIP), which states (as quoted by Wikipedia):
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.
In common speak, this basically means that we can focus on high-level design and issues by ignoring the low-level details. We use abstractions to encapsulate functionality so that we are guaranteed that the low-level details are taken care of for us. Consider the following pseudo code abstract List class:
abstract class MyList<T> {
    private int internalcounter = 0;

    public void AddItem(T item) {
        AbstractedAdd(item);
        this.internalcounter++;
    }

    public int GetNumItems() {
        return this.internalcounter;
    }

    protected abstract void AbstractedAdd(T item);
}
With the abstract class, we actually have a contract that fulfills the DIP. Because an abstraction can contain a partial definition, we have a defined AddItem() function that calls an abstract internal function and also increments the internal counter. It is a loose guarantee, but we are guaranteed that the internal counter is incremented every time AddItem() is called. We no longer have to worry about the item counter; the abstraction takes care of it for us.
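For illustration, a hypothetical concrete class built on that abstraction might look like the sketch below (the names are made up). It only has to supply the storage; the counter bookkeeping comes along for free from the base class.
// A concrete list that only worries about storage.
// AddItem() and the counter live in the abstract base class.
class ArrayBackedList<T> extends MyList<T> {
    private final java.util.ArrayList<T> items = new java.util.ArrayList<>();

    protected void AbstractedAdd(T item) {
        items.add(item);
    }
}
Any caller that goes through AddItem() hits the base class first, so GetNumItems() stays correct no matter what the subclass does inside AbstractedAdd().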
What appears to have happened over the years is that student programmers heard about things like the DIP and warped it into meaning that every class must have an interface (when they really mean an abstraction), even when the class is designed to be used only once. I think this can be attributed to teachers not doing a good job of differentiating interfaces from abstractions and not really teaching what encapsulation means. Thinking like this also led to the second problem.
Secondly, a lot of people missed the message that the “all software should be designed to be reusable” philosophy was discredited after the 1990s, when it turned out that it needlessly complicates code. Designing code this way produces a huge Frankenstein's monster that is hopelessly complex, prone to errors, and ignores the reality that being a jack of all trades means you're a master of none. This led to the somewhat tongue-in-cheek Reused Abstraction Principle (RAP), which says that having only one implementation of a given interface is a code smell. We refactor code to pull out duplicate functionality because it helps keep the code base small. It also improves reliability, because a single implementation of potentially duplicated code means we don't have several duplicate implementations that may differ in how they are done.
However, this does not mean that code HAS to have duplicated functionality “just because.” If your problem domain only has one instance of a use case, it really is OK to have a single concrete class that implements it. Focus on a good design that encapsulates the functionality of your problem domain instead of worrying that every piece of functionality must be reusable. Later on, if your problem domain expands and you end up with duplicate functionality, refactor it and then introduce an interface or abstract class. Needless use of interfaces and abstractions just doubles the number of classes in your code base, and in most languages abstractions carry a performance penalty due to issues like virtual table lookups. Simply using interfaces and abstractions does not make you a cool-kid rock-star disciple of the Gang of Four.
The Day After
Updating the Merged TIGER Files to the 2014 Dataset
Hey all, I am finally in the process of updating my merged state- and national-level TIGER files to the 2014 data. You can find them at my GIS Data Page. Note that the Roads files are not uploaded yet, but I have already updated the links on the download page, so you will get 404 errors until I get them up. I cannot promise it will be tonight since I have to sleep sometime 😉 If you find any 404s on the others, let me know in case I missed a link.
As usual, these are my own value-added files that I am publishing in case some people find them useful. If you use these and your business fails, your wife leaves you, your dog dies, and you write a country music song about it, it's not my fault.
Clipped Virginia Historic Map (125K Scale, 1888 to 1902 vintages)
Here's the first map I've done based on the GeoPDFs from the USGS Historical Topographic Map Collection. I found all of the available maps for Virginia at the 125K scale, with vintages from 1888 to 1902. It's a GeoTIFF that is 484M in size, compressed using lossless LZW compression.
More Fun with Old Maps
I'll admit it, I really like old maps, especially old topographic maps. I think it started when I used to work for the US Geological Survey. To me, it's interesting to see how things change over time, and from my old urban growth prediction days, I like to see where and when populations change.
Since the USGS put out their Historical Topographic Map Collection (HTMC), I've been playing with the maps off and on for about a year now. I finally decided to start merging available maps of the same scale and vintage to study and possibly do feature extraction down the road. I'll be placing them here for download as I process them, in case anyone is interested.
I thought I'd share how I put them together in case anyone else is interested. The first step is to go to the USGS website and pick the files you want to use; they are available in GeoPDF format. The first thing to understand is that you may not find a map covering your area of interest at the scale and vintage you want. Not everything managed to survive to the current day. For example, I made a merged 125K map of Virginia, and most of southern VA is missing at that resolution.
Once I download the GeoPDFs I want to work on, I use a modified version of the geopdf2gtiff.pl script from the Xastir project. The link to my modifications can be found here. I use LZW compression for my GeoTIFFs as it's lossless and keeps the quality of the GeoPDFs. It is a Perl script and requires that you have GDAL and the GDAL API installed. It calculates the neat-line of the GeoPDF and then clips to it while converting to a GeoTIFF. Running it is as simple as:
geopdf2gtiff.pl inputfile.pdf
Once you have all of your GeoPDF files downloaded and converted, the next step is to merge them. The fastest way I've found involves using gdalbuildvrt and gdal_translate, also from GDAL. The first step is to create a virtual dataset of all of your files by running something like:
gdalbuildvrt -resolution highest -a_srs EPSG:4326 merged.vrt parts/*.tif
The options here tell gdalbuildvrt to pick the highest pixel resolution (-resolution) among the input files; in this case the resolutions should all be the same, but this way I don't have to go through and verify that. I also declare the output projection as WGS84 (-a_srs). Then come the file name of the virtual dataset and the input files.
Now that the virtual dataset is done, it’s time to actually merge all of the files together. The virtual dataset contains the calculated bounding box that will contain all of the input files. Now we use gdal_translate to actually create the merged GeoTIFF file:
gdal_translate -of GTiff -co COMPRESS=LZW -co PREDICTOR=2 merged.vrt ~/merged.tif
Here again I use LZW compression to losslessly compress the output data. Note that gdal_translate will automatically add an alpha channel as Band 4 in the image to mark areas that had no input data; that's why we do NOT add the -addalpha flag to gdalbuildvrt. For performance, I'd suggest keeping the source data and output file on separate drives unless you're running something like a solid-state drive. To give you an idea of the output file sizes, Virginia merged (which did have a lot of areas missing) was around 500 megabytes.
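As a quick sanity check (not strictly part of the workflow), you can run gdalinfo on the merged file and confirm that it reports the extra band with an alpha color interpretation:
gdalinfo merged.tif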
Next you'll need a Shapefile to use as a cutline to clip the data. Since I have the Census TIGER 2013 data in a local PostGIS database (see previous posts on this blog), I used QGIS to select just the VA state outline and saved it as a Shapefile.
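If you'd rather skip QGIS, something like the ogr2ogr command below should pull the same outline straight out of PostGIS. The database and table names here are just examples, so adjust them to however you loaded your TIGER data (51 is Virginia's state FIPS code):
ogr2ogr -f "ESRI Shapefile" va_outline.shp PG:"dbname=census" -sql "SELECT * FROM tl_2013_us_state WHERE statefp = '51'"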
Finally, we will use gdalwarp to clip the merged GeoTIFF against the state outline to produce the clipped GeoTIFF that is just the state itself. This operation can take a bit of time depending on how powerful a machine you’re running it on. The command you will use is similar to this:
gdalwarp --config GDAL_CACHEMAX 1024 -wm 1024 -cutline va_outline.shp -crop_to_cutline -multi -t_srs EPSG:4326 -co COMPRESS=LZW -co PREDICTOR=2 -co BIGTIFF=YES -co TILED=YES ~/merged.tif clipped.tif
Some of the command line parameters I used are optional; I just tend to leave them in since I do a lot of copying and pasting 😉 First we tell GDAL to increase the size of its caches with the --config GDAL_CACHEMAX and -wm options. Next we specify the file to clip against with the -cutline and -crop_to_cutline options. The -multi option tells GDAL to process using multiple threads. I again specify the output projection and the LZW compression parameters, and I add the BIGTIFF option just in case the output file goes over four gigabytes. Finally, I tell gdalwarp to tile the output TIFF (-co TILED=YES) so it will load faster in a GIS.
The output will look something like the below figure. I’ll start posting files as I get time. Hope everyone is having a great holiday!
Keeping up with the Botnets Round 2
I've been keeping up with tracking how many botnets are out there scanning WordPress blogs, and I've eventually resorted to blocking huge chunks of the Internet via .htaccess files. So far it's been quite effective in limiting the number of botnet login attempts.
If anyone is interested, I've put the limit portion of my .htaccess file here. Feel free to use it and customize it for your needs.