Hey, Why Does My Linux Laptop Keep Waking Up?

A couple of weeks ago I got myself a nice cheap laptop that was on sale for Black Friday. However, today I noticed that every time I closed the lid and put the laptop aside, it would wake back up; Slack was making notification noises with the lid closed.

First thing I did was check out /proc/acpi/wakeup by running:

grep enabled /proc/acpi/wakeup

My aha moment came when I saw that XHC was enabled to wake the system up from suspend:

XHC S3 *enabled pci:0000:00:14.0

The above line in the output says that USB 3.0 (XHCI) devices are allowed to wake up the laptop. Since I have a USB wireless mouse plugged in, of course it would roll around after I’d closed the lid, waking the laptop back up.
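
If you want to confirm the culprit before making anything permanent, you can toggle that entry by echoing its name back into the same file. This only lasts until the next reboot, and XHC here is just the name from my output; use whatever yours is called:

echo XHC | sudo tee /proc/acpi/wakeup
grep XHC /proc/acpi/wakeup   # should now show *disabled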

An easy fix for this is to make a file in /etc/udev/rules.d like the following:

xhc.rules

ACTION=="add", KERNEL=="0000:00:14.0", SUBSYSTEM=="pci", RUN+="/bin/sh -c 'echo XHC > /proc/acpi/wakeup'"

For your own system, replace 0000:00:14.0 with the address shown after pci: in the grep output above.
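
If you just want that address by itself, a quick one-liner will pull it out (assuming your entry is also named XHC):

grep '^XHC' /proc/acpi/wakeup | sed 's/.*pci://'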

Now I can close the lid and not have to worry about turning the mouse off first.

QuickTip: SuiteCRM 7.6 and DreamHost

My wife (who is a realtor now 🙂) wanted a CRM, so I thought I’d set SuiteCRM up on our domain so she didn’t have to pay for a commercial one.  We go through DreamHost (who I would highly recommend as a hosting company, BTW) and everything I had read said that, in theory, it should work just fine.

It didn’t.

I banged on it a little bit and finally got it working.  In case anyone else runs into the same problems, here are the steps I took, copied and pasted out of an email I sent back to DreamHost’s technical support (I’m lazy and don’t feel like retyping it :).  I think the root cause is that SuiteCRM creates config.php as part of the installation instead of shipping with one, so there is nowhere to set the default file and directory permissions before the installer runs.

  1. Unzipped it under my top-level domain and then renamed it so the URL would be XXX/suitecrm.
  2. Temporarily renamed my .htaccess so that it wouldn’t interfere with it.
  3. Did a chmod -R 775 suitecrm from the top-level domain directory.
  4. Made the PHP mods to my .php/5.5/phprc like your SugarCRM wiki mentioned and made some alterations just in case (a one-shot way to apply these is sketched after this list):
    post_max_size = 50M
    upload_max_size = 50M
    max_input_time = 999
    memory_limit = 140M
    upload_max_filesize = 50M
    suhosin.executor.include.whitelist = upload
    max_execution_time = 500
  5. Started the installation.  After entering the DB information and whatnot, clicked next and let it run.  It hung, but at least it created some of the subdirectories it needed, just with the “wrong” permissions, since config.php does not exist until the install actually starts.
  6. Did a killall php55.cgi to stop the hung installer processes.
  7. Did another chmod -R 775 on the suitecrm directory from my top-level directory.
  8. Reran the install and this time it worked like a charm.
  9. Put my .htaccess back and then edited the default permissions in config.php like the DreamHost SugarCRM talk page mentions.
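
For what it’s worth, step 4 can be done in one shot from a DreamHost shell session with a heredoc.  The path below is the one for my PHP 5.5 setup, so adjust the version directory to whatever your domain is running:

cat >> ~/.php/5.5/phprc <<'EOF'
post_max_size = 50M
upload_max_size = 50M
max_input_time = 999
memory_limit = 140M
upload_max_filesize = 50M
max_execution_time = 500
suhosin.executor.include.whitelist = upload
EOF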

Fun with my My Book World Edition II

After the house fire in 2010, we picked up a My Book World Edition II to house all of the files that we did not want to lose (such as photos, videos of the girls, and so on).  Before the fire, I had a software RAID 5 going in one of the servers I had at the house.  Naturally, that server was the one the fire department picked to throw out the back window during the fire, and it sat outside in the water from the hoses and the rain.  The good thing was that I was doing forensics at the FBI then, so I was able to recover the data off the drives (all Western Digital) and transfer it to the My Book.

A week or so ago, I had to power off the My Book to take care of something.  After I turned it back on, I noticed that the LED on the front had the ominous “something is wrong with me” flash going on.  I logged in and it didn’t see one of the drives at all; it claimed the drive was missing.  I ssh’d into the box and, sure enough, the drive was not being detected by the kernel when it booted.

I took the drive out and put it in a USB enclosure to see what was wrong.  The SMART status said it was OK until I tried to run an extended SMART test, which errored out with a bad sector.  Out of curiosity I ran badblocks against it to see how many sectors were bad, in case I could coax it back to life.  A day and a half later (it was a 2TB drive), badblocks finished, but I noticed it wouldn’t list all of the bad sectors it had found on the drive.  I poked around some more and eventually found that badblocks had recorded 30 GIGABYTES of bad sectors.  Not K, not meg, but gig.  OK, the drive was dead.
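
If you want to run the same checks yourself, something like this will get you there (the device name is just a placeholder for whatever the enclosure shows up as):

sudo smartctl -t long /dev/sdX                 # kick off the extended self-test
sudo smartctl -a /dev/sdX                      # check the result once it finishes
sudo badblocks -sv -o badblocks.txt /dev/sdX   # read-only scan, one line per bad 1 KiB block
wc -l badblocks.txt                            # rough count of the damage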

I had another 2TB drive in the server that I was using for temp storage when I would process things; it also had my virtual machines and PostGIS database on it.  I figured I could use that as the new drive B in the RAID, so I backed up everything from the remaining drive in the My Book and from the drive in the server to another USB drive, cleared the server drive’s partition table, and put it into the My Book.

The expected behavior when you do this is that when you turn the My Book back on, it will see the new drive and rebuild the RAID onto it, since I had the My Book running RAID 1.  No such luck in my case.  At this point it would just sit there and not finish booting (the My Book Worlds are embedded Linux devices, for the uninitiated).  I finally found an article online with a recovery script that would wipe a drive, download the WD firmware, and re-image the drive.  I took both drives out of the WD, re-imaged the working drive A, and put it back in.  It finally booted successfully and thought itself a brand new WD My Book (just in degraded mode, as there was no drive B).  I turned it off, put the drive from the server in as drive B, then turned it back on.  Voila, the system came back, saw the fresh drive B with no partition table, and rebuilt the RAID.  Now I’m just waiting for rsync to finish copying the backup from the USB drive back to the RAID.
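
If you ever end up doing the same dance, the rebuild is easy to keep an eye on from an ssh session, since the My Book is just Linux md software RAID underneath:

cat /proc/mdstat   # shows the arrays and a progress bar while the resync runs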

The moral of this story: storage sucks and I want the holocubes IBM promised us 15 years ago!

Kdenlive and glibc double free crashes after rendering

I thought I’d post this information in case it helps someone else.  I was having issues with Kdenlive where I would get a glibc double free crash at the end of the rendering process.  It was consistent and, as you can imagine, very annoying.  Looking online, I found reports of similar problems with the OpenCV Frei0r plugins (which I didn’t have installed), with previous versions of melt (which Kdenlive uses for rendering), and so on.

It turns out my problem was clips with spaces in their file names.  On my Fedora 19 system (x86-64, Kdenlive 0.9.6.2, mlt 0.8.8.5, ffmpeg 1.2.2.1) this appears to have been the root of the problem, since removing the spaces makes things render consistently without errors.  Give it a try if you’re having problems out there.
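
If you have a lot of clips, a quick bash loop in the clip directory will strip the spaces for you (this just swaps spaces for underscores; you’ll have to point your project at the renamed files afterwards):

for f in *\ *; do mv -- "$f" "${f// /_}"; done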

Building OpenCV 2.3.1 on Ubuntu 11.04

Getting OpenCV 2.3.1 to compile on Ubuntu can be interesting.  The first issue is tracking down all of the dependencies you need to get the different parts of it to work properly.  There is more information on the needed dependencies at the OpenCV wiki here and here.  I also found this page on a blog, which helps with getting a lot of the dependencies in place.

For me specifically, I had a couple of problems on 11.04.  Make sure you have the following packages installed to enable gstreamer and unicap support:

libgstreamer-plugins-base0.10-dev
libgstreamer0.10-dev
libunicap2
libunicap2-dev
libucil2-dev
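
On a stock 11.04 install, something like this should pull them all in (package names are the 11.04 ones; apt will complain if any have since been renamed):

sudo apt-get install libgstreamer-plugins-base0.10-dev libgstreamer0.10-dev \
    libunicap2 libunicap2-dev libucil2-dev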

The second major problem is that OpenCV 2.3.1 doesn’t fully track the latest releases of ffmpeg.  This patch helps, but bear in mind that it’s written against OpenCV 2.3.0 and there have been changes between versions, so you’ll have to apply parts of it by hand to take care of the differences.

Once this is done, you should be ready to build.  On my system I ran cmake as:

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D WITH_TBB=ON -D WITH_XINE=ON -D WITH_UNICAP=ON -D BUILD_EXAMPLES=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_DOCUMENTATION=ON -D BUILD_SHARED_LIBS=ON ..

If you have all of the extra media repositories on Ubuntu enabled, I’d highly recommend NOT disabling shared libraries when building OpenCV.  You’ll avoid some linking errors caused by having multiple versions of some of the multimedia libraries installed.
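
Once cmake finishes without complaining, the rest is the usual drill:

make -j"$(nproc)"   # or plain make if you’d rather not use every core
sudo make install
sudo ldconfig       # refresh the linker cache so the new libraries in /usr/local/lib are found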

After that, when you compile your own programs, make sure you add something like -I/usr/local/include and -L/usr/local/lib to your makefile so you pull in the version you just built instead of the system default, and you should be good to go.
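
As a made-up example, compiling a small test program against the freshly installed copy looks something like this (my_test.cpp is hypothetical; link against whichever opencv_* modules you actually use):

g++ my_test.cpp -o my_test \
    -I/usr/local/include -L/usr/local/lib \
    -lopencv_core -lopencv_imgproc -lopencv_highgui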