Archive for February, 2011

In-depth testing with Mocha

by Oliver on Friday, February 25th, 2011.

I’ve recently been satisfying my need for more programming time and reading up on the aspects which really complete a good application: sensible organisation, good design and reusability of library functions, error handling and of course testing. I’ve been using Cucumber for a while now in conjunction with Puppet, which covers the BDD side, but I hadn’t really delved into TDD until now that I’m actively writing Ruby.

The code I’m working on had a basic set of Test::Unit tests, which I’ve extended to cover my additions to the codebase, trying to push the testing as far as possible without turning it into a rabbit hole. Since this application interacts with Puppet and files on disk, testing some aspects is just plain impossible without going to extraordinary measures to hack around the interaction with external libraries, files and so on. That’s where Mocha steps in.

OK, so mocking is just another area of technology that I’m years behind in, but I’m not sure I had a use for it before. I’ll be the first to admit that the syntax confused the hell out of me the first time I saw it used in the Puppet test suite, but I think I “get it” now, and it is certainly delivering the functionality I need.

Consider a situation where we want to test some aspect of an application that relies on Puppet. I’ll use IRB here to demonstrate:

irb(main):001:0> require 'rubygems'
=> true
irb(main):002:0> require 'mocha'
=> true
irb(main):003:0> require 'puppet'
=> true
irb(main):004:0> Puppet.parse_config
=> #<EventLoop::Timer:0x2ae5349457e8 ....
irb(main):005:0> Puppet[:vardir]
=> "/home/ohookins/.puppet/var"
irb(main):006:0> Puppet.expects(:[]).with(:vardir).returns('/tmp/foo')
=> #<Mocha::Expectation:0x2ae534937c60 ....
irb(main):007:0> Puppet[:vardir]
=> "/tmp/foo"

How cool is that? Yes, of course we could have fed Puppet a custom configuration file or set the configuration parameter ourselves, but if our code actually calls Puppet.parse_config then our efforts are lost, since our test setup runs before that call. Mocha magically sets the stage: we rig up an appropriately faked environment so that we test only the core functionality of that part of the code.
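Here’s how that pattern might look in an actual Test::Unit case. This is only a minimal sketch – the VardirLister class is invented for illustration, not something from the real codebase:

require 'rubygems'
require 'test/unit'
require 'mocha'
require 'puppet'

# Hypothetical class under test: something that builds paths
# underneath Puppet's vardir.
class VardirLister
  def cache_dir
    File.join(Puppet[:vardir], 'cache')
  end
end

class VardirListerTest < Test::Unit::TestCase
  def test_cache_dir_uses_configured_vardir
    # Rig up the faked environment; no config file parsing needed.
    Puppet.expects(:[]).with(:vardir).returns('/tmp/foo')
    assert_equal '/tmp/foo/cache', VardirLister.new.cache_dir
  end
end

As a bonus, the expectation also verifies the interaction: if cache_dir never asked Puppet for :vardir, the test would fail.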

I’m exploring some slightly more complex use cases now with Mocha, but it is another valuable tool to add to my programmer’s toolkit.


An odd obsession with hardware utilisation

by Oliver on Saturday, February 19th, 2011.

I’m sure I’m not alone in my personification of hardware, but as an extension of that I like to know that my hardware is doing something. The thought of it sitting there idle just bugs me. So when I install a fresh new Jenkins server and it is sitting there waiting for jobs to be fired off, it saddens me just a little that it isn’t utilised more.

The flip-side situation is when the machine is overutilised, or even just adequately utilised. Just as it is frustrating in one way to have a job that completes before you even have time to make a coffee, it is frustrating in another to have to wait an hour for several jobs to complete. In reality, the sweet spot in the middle where you get the best of both worlds (latency and throughput) is so hard to achieve that you end up having to choose one or the other.

I guess this is what cloud computing is for, which is yet another area I feel like I’m two years behind in. My understanding is that there are no real guarantees of throughput or latency: you have the promise of “infinite” scalability, but no real idea of how things will perform on any given VM. The performance you get at any point in time is therefore completely non-deterministic (obviously I am speaking about the public cloud here). What does this matter to an OCD sysadmin who likes his hardware to be well utilised? Probably nothing.

Going to the cloud does make sense, but who in this industry wouldn’t miss the endless rows of blinkenlights in the datacenter on the occasional visit? I doubt there would be a single person among us (and if you answered “yes”, what is wrong with you? ;))


I’m done with shell scripting

by Oliver on Saturday, February 12th, 2011.

I think I will call this week the last in which I use shell script as my primary go-to language. Yes, by trade I am a systems administrator, but I do have a Bachelor of Computer Science degree and there is a dormant programmer inside me desperately trying to get out. Shell script has become the familiar crutch I go back to whenever faced with a problem, and that is becoming frustrating to me.

Don’t get me wrong – there is a wealth of things that shell script (and I’m primarily referring to BASH (or Bourne Again SHell) here rather than C SHell, Korn SHell or the legendary Z SHell) can do, even more so with the old-school UNIX tools like grep, sed, awk, cut, tr and pals. In fact, if you have the displeasure of being interviewed by me, a good deal of familiarity with these tools will be expected of you. They have their place, that is what I am trying to say, but the reflex of reaching for these tools needs to be quietened somewhat.

The straw that broke my camel’s back in this instance was this sorry piece of scripting by yours truly: a simple uploader for Flickr written in shell script. It’s not an exemplary piece of “code”, which I think demonstrates how little I cared about it by the end. The idea briefly entertained me, I managed to write it up in a fairly short amount of time, and it did successfully upload around 4GB of images. The problem was that while the initial idea was simple enough, the script took on a life of its own (especially once the intricacies of Flickr’s authentication API were fully realised) and became much more complex than initially envisaged.

Despite this, I had started out with the goal of making a reasonably “pure” shell uploader, and stuck to my guns. What I should have done was call it quits when I started parsing the REST interface’s XML output with grep – that was a major warning sign. Now I have a fairly inflexible program that barely handles errors at all and only just gets the job done. I had a feature request from some poor soul who decided to use it, and I was actually depressed at the prospect of implementing it – that’s not how a programmer should react to someone wanting to extend the use of his/her work!
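For contrast, here is roughly what those greps could have been in a few lines of Ruby using REXML from the standard library. The response below is an approximation of Flickr’s REST format for illustration, not the actual output my script handled:

require 'rexml/document'

# A hypothetical Flickr REST response; the shell script scraped
# values like the photo ID out of raw XML with grep.
xml = <<-XML
<rsp stat="ok">
  <photoid>5123456789</photoid>
</rsp>
XML

doc = REXML::Document.new(xml)
if doc.root.attributes['stat'] == 'ok'
  puts "Uploaded photo #{doc.elements['rsp/photoid'].text}"
else
  # Proper error handling becomes trivial once the document is parsed.
  err = doc.elements['rsp/err']
  abort "Upload failed: #{err.attributes['msg']} (code #{err.attributes['code']})"
end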

From a technical standpoint, shell is a terrible excuse for a “language”: a poor typing system, excruciating handling of anything that should be an “object” when all you generally have to work with are string manipulation tools, and a “library” basically limited to whatever commands are available on the system. I know I have probably barely plumbed the depths of what BASH is capable of, but when the basics are so hard to use for such frequently needed programming patterns, I don’t really see the point.

So from next week, I’ve decided to reach for Python or Ruby when I have to code something up that is more than a few lines’ worth, or of reasonable complexity. Not that I don’t already use Python and Ruby when the occasion calls for it, but I think that those occasions are too few and far between. Shell scripting is an old-school sysadmin crutch and it is time to fully embrace the DevOps mentality and get into serious programming mode.


Work environment of a geek

by Oliver on Tuesday, February 8th, 2011.

For a long time I’ve shunned stereotypes such as the hacker sitting in a dark basement listening to repetitive music with headphones on while they code into the early hours of the morning. My early hours of the morning are usually spent rocking the baby back to sleep and then trying to get back to sleep myself, and I don’t live in a basement. Nor do I particularly enjoy listening to techno, trance or any derivative genres or sub-genres of music. I do own headphones, but that is my sole claim to fame.

Last week I had the luxury of being locked away (by choice) from colleagues and developers for a whole week of uninterrupted problem-solving, and I must admit I enjoyed it. I can’t remember the last time I was in a position to work on things solidly for hours at a time without having my concentration broken. It just doesn’t happen anymore.

We watched The Social Network recently, and I was amused by the constant use of “he’s wired in” – but there is obviously more than just a grain of truth in the stereotype. I had to return to my regular station this week (not a total loss, thanks to my dual 24″ monitors) but the noise in the office was quite distracting. I put my headphones on and fired up Mark Morgan’s Vault Archives, a nice collection of post-apocalyptic atmospheric tracks remixed from the originals in the Fallout series of games. I’ve never played the games themselves (although I’m tempted to, now) but it seemed to actually help me concentrate.

I’ve managed to spend all of today “wired in” and so the playlist necessarily needed to be extended somewhat. I managed to find and listen to:

  • Mark Morgan – Planescape: Torment
  • System Shock 1 & 2 soundtracks
  • Half Life 2 soundtrack
  • Dark Side of Phobos (Doom 1 remix)

I also had some suggestions from my brother for other soundtracks I could explore:

  • Resident Evil 4
  • Mirror’s Edge
  • Machinarium

Any other suggestions?


Dependencies stored in grey matter

by Oliver on Wednesday, February 2nd, 2011.

I have a Zotac ZBox which I use as my HTPC, and generally it works pretty well. One thing that is slightly troublesome is the HDMI audio, which seems to rely on having the ALSA backports modules package installed in order to work. The key thing is remembering this, since the package does not automatically get updated when you upgrade the kernel package.

Most packaging systems rely on you having only one instance of a package around at a time, upgrading it between versions as time passes (a notable exception is the Gem packaging system). Kernel packages are another exception to the rule: not only can you have several present on your system at once, this is usually desirable so that you can test stability, roll back and so on. For this reason the kernel version creeps into the package name, so each version is effectively treated as a unique package (and since the file paths differ, it is unique on disk as well). The DEB package format handles upgrades by way of a meta-package which pulls in the latest version of the kernel package. RPM uses some other magic I can’t recall right now.
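To illustrate, the picture on an Ubuntu box of this vintage looks something like the following – the version numbers are invented for the example, but the naming scheme is the standard one:

# Several versioned kernel packages installed side by side,
# with the meta-package tracking whichever is newest:
dpkg -l 'linux-image*'
# ii  linux-image-2.6.32-27-generic  ...
# ii  linux-image-2.6.32-28-generic  ...
# ii  linux-image-generic            ...  <- meta-package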

In the case of the linux-backports-modules-alsa package, the same idea applies. However, while the kernel meta-package pulls in the newer kernel package when there is an update available, nothing does the same for this package, since not everybody wants it installed automatically. I do want it installed automatically, but am not in a position to change the packages, which puts me in a slightly irritating position. Ideally there would be some hook I could use to pull in the latest version of this package whenever a new kernel package is installed (and in fact there is, in /etc/kernel/postinst.d/), but anything triggered synchronously with the kernel upgrade will fail, since the dpkg transaction is not yet complete and starting a new one will be blocked.

The trigger could instead schedule an at job to install the newer ALSA package a few minutes later, but I don’t like the asynchronous nature of this method or its likelihood of failure (what if I reboot immediately afterwards to activate the new kernel?), although I can’t see an obvious alternative. Does anybody have any suggestions?

The workaround, to avoid having to remember to install the latest version by hand, is to make use of the kernel package maintainer hook directory, /etc/kernel/postinst.d. Scripts in this directory are called after installation of a new kernel, with the first parameter being the new kernel version in uname -r style format. A sketch of such a hook follows below.
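Combining the hook with the at job, something along these lines could work. The script name is arbitrary, the five-minute delay is a guess, and the versioned package name built from $1 is an assumption about how Ubuntu names the backports packages:

#!/bin/sh
# Hypothetical hook: /etc/kernel/postinst.d/zz-lbm-alsa
# $1 is the new kernel version in `uname -r` format, e.g. 2.6.32-28-generic.
version="$1"
[ -n "$version" ] || exit 0

# We can't run apt-get synchronously here: the kernel's own dpkg
# transaction still holds the lock, so queue the install instead.
echo "apt-get -y install linux-backports-modules-alsa-$version" \
    | at now + 5 minutes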
