
Exceptional circumstances

by Oliver on Thursday, November 24th, 2011.

I’m still building up to my article on how to properly mock calls to Rake’s sh to facilitate testing of your Rakefile tasks but I haven’t quite worked up the strength to trace the calls through the libraries and into Eigenclass wonderland. For the moment, I’ve got just enough fodder to write a bit about determinism in testing.

Since Friday, both a coworker and I have been hit by the same testing failure. Not the exact same one, mind you, as we are working on two different pieces of software, but the same failure in that the order of our tests matters. Anyone experienced in TDD will tell you that your tests must not depend on the order in which they run, so that they don’t build upon previous assumptions and do test each unit in isolation. Easier said than done, unfortunately.
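As a contrived sketch of how order-dependence creeps in (names invented for illustration), consider two checks that share a global: one run order passes both, the other doesn't.

```ruby
# Two "tests" sharing mutable global state -- the classic recipe for
# order-dependent results.
$seen = []

def starts_clean
  $seen.empty?            # only true if no other test has run first
end

def registers_once
  $seen << :service       # leaks state into every later test
  $seen.size == 1
end

# Run order A: both pass.
$seen = []
order_a = [starts_clean, registers_once]   # => [true, true]

# Run order B: the leaked state makes starts_clean fail.
$seen = []
order_b = [registers_once, starts_clean]   # => [true, false]
```

With only two checks there are just two orderings to try; with n tests there are n! of them, which is why exhaustively testing orderings doesn't scale.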

It would be nice if we could test the Cartesian product of all of our units in all possible orders to rule out any unplanned dependency, but that is simply not feasible for anything but the smallest programs. In any case, our small programs managed to cause a small bit of hell just fine, thank you! Enough babbling and onto the code:


require 'rspec/core/rake_task'

desc 'Run tests'
RSpec::Core::RakeTask.new('test') do |t|
  t.pattern = 'spec/*_spec.rb'
end

This is a fairly standard task that sets up RSpec tests using RSpec’s Rake helpers. Not much to see here – except for the fact that my two test files run in opposite orders on different machines. I managed to rule out different Ruby versions (identical, installed in both places by RVM) and gem versions (the same for Rake, RSpec and all relevant dependencies); even the locale on both machines was identical, which ruled out sorting-order differences in the filenames.

RSpec uses RSpec::Core::RakeTask#pattern to assemble a FileList from the pattern you have set. FileList (defined by Rake) basically uses Dir.glob to do its dirty work:


# Add matching glob patterns.
def add_matching(pattern)
  Dir[pattern].each do |fn|
    self << fn unless exclude?(fn)
  end
end

Dir[] just aliases Dir.glob, and within the source of that you can find the following:


dp = readdir(dirp->dir);

readdir(3) simply returns the directory entries in whatever order they are linked together on disk, which is not related to inode numbering either; as best as I can tell it is from the outer leaf inwards, since the most recently created file is listed first.
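Since Dir.glob makes no ordering promise across machines (in Rubies of this era it passed along raw readdir order; Ruby 3.0 and later sort by default), the simplest way to make the run order deterministic is to sort the list yourself. A quick sketch of the idea:

```ruby
require 'tmpdir'
require 'fileutils'

# Create spec files in deliberately non-alphabetical creation order,
# then sort the glob result so every machine sees the same sequence.
basenames = Dir.mktmpdir do |dir|
  %w[b_spec.rb a_spec.rb c_spec.rb].each do |f|
    FileUtils.touch(File.join(dir, f))
  end

  files = Dir[File.join(dir, '*_spec.rb')].sort  # sort for determinism
  files.map { |p| File.basename(p) }
end

puts basenames.inspect  # => ["a_spec.rb", "b_spec.rb", "c_spec.rb"]
```

In the Rake task above, the same effect could presumably be had by assembling and sorting the file list yourself before handing it to RSpec – though, as the rest of this post shows, a deterministic order only hides this particular bug rather than fixing it.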

Now I have some idea of why the testing order can be different, but I'm no closer to the cause of the problem: my tests succeed when run in one order but fail in the opposite order. The errors look something like this:


  1) Nagios#new raises an exception when the command file is missing
     Failure/Error: expect { Nagios.new(@cmd_file, @status_file) }.to raise_error(NagiosFileError, /not found/)
       expected NagiosFileError with message matching /not found/, got #
     # ./spec/nagios_spec.rb:26

Hmm, a bare Exception object with no information about where it came from. I had a few suspicions:

  • I have no idea how to code Ruby.
  • I changed the code in some subtle way and broke the tests legitimately.
  • Some mock object's lifetime is longer than expected and it is sticking around between tests.

This last idea seemed the most plausible, since I had put some debugging code into the constructor of the object and found it was not being called at all. Mocha usefully has an unstub method which allows you to remove stubs you previously set up on an object or class, returning it to its previous state, but this seemed to be a no-go:


     Failure/Error: Nagios.unstub(:new)
       The method `new` was not stubbed or was already unstubbed
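To make sense of what was (and wasn't) being restored, it helps to remember what a class-method stub conceptually is: the mocking framework swaps the method out on the class's singleton class and puts the original back on teardown. A hand-rolled sketch of that idea (hypothetical Nagios class; Mocha's real implementation is considerably more elaborate):

```ruby
# A stand-in for the real class; its new would normally touch the filesystem.
class Nagios
  def self.new(*)
    raise 'would read the command file for real'
  end
end

# "Stub": remember the original method, then redefine it on the singleton class.
original = Nagios.method(:new)
Nagios.define_singleton_method(:new) { |*| 'surprise !' }
stubbed = Nagios.new                           # => "surprise !"

# "Unstub": restore the original definition.
Nagios.define_singleton_method(:new, original)
begin
  Nagios.new
rescue RuntimeError => e
  restored = e.message                         # original behaviour is back
end
```

The important part is the restore step: if the teardown that performs it never runs, the stub simply outlives the test that created it.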

I installed the very useful ruby-debug, invoked it just before the failing tests started and did some poking around, but wasn't able to find much (although that's probably more down to my lack of skill than to the debugger lacking functionality). How to proceed? Unfortunately the easiest way forward seemed to be the brute-force method: comment out tests in the preceding file until things start working, then narrow the field. Fortunately the problem uncovered itself immediately:


  # Change the existing mock to be something we can pick up in output
  before(:each) do
    Nagios.stubs(:new).returns('surprise !')
  end

Then in the test output:


 NoMethodError:
       undefined method `cmd_file' for "surprise !":String
     # ./spec/nagios_spec.rb:55

Hmm, that's no good: the stubbing is surviving between test files. TL;DR - it's good to read the documentation! RSpec supports several mocking frameworks - FlexMock, Mocha, RR, a bring-your-own option and, of course, RSpec's own. I already knew this, but since I've been working mostly with Test::Unit until recently it didn't register in my brain whatsoever and I just reached for Mocha out of habit.

It turns out that if you use a different mocking framework but don't tell RSpec about it, weird stuff happens - that is to say, the kind of behaviour you see above. The fix is to add a small configuration block to your test code, something like this:


RSpec.configure do |config|
  config.mock_framework = :mocha
end

Once I had confirmed this was the actual problem and my tests were passing, I decided to rewrite all my mocking code to use RSpec's own mocks anyway - fewer gems, and easier to maintain.

So the main takeaways from this experience are:

  • Read the documentation!
  • Ruby is powerful and frequently easy to use, but it can still be hard to interrogate when problems occur that don't seem to make sense.
  • Having source code on hand to see what your software is really doing is an awesome thing. Admittedly, the metaprogramming behind testing and mocking frameworks can be some mind-bending stuff, but the code is there when you want to dive in.
  • Testing can be frustrating, but awesome.
  • Ordering is not always the problem. Try to get your tests order-independent, but just like code coverage it can be a large time sink and the law of diminishing returns applies.


In-depth testing with Mocha

by Oliver on Friday, February 25th, 2011.

I’ve recently been satisfying my need for more programming time and reading up on some of the aspects which really complete a good application: adequate organisation, good design, reusability of library functions, error handling and, of course, testing. I’ve been using Cucumber in conjunction with Puppet for a while now, which covers the BDD aspect, but I hadn’t really delved into TDD until now, since I’m now actively writing Ruby.

The code I’m working on had a basic set of Test::Unit tests, which I’ve added to for my additions to the codebase and I’ve been trying to extend some of the testing as much as possible while not turning it into a rabbit hole. Since this application interacts with Puppet and files on disk, testing some aspects is just plain impossible without going to extraordinary measures to hack around the interaction with external libraries, files etc. That’s where Mocha steps in.

OK, so mocking is just another area of technology that I’m years behind in, but I’m not sure I had use for it before. I’ll be the first to admit that the syntax confused the hell out of me the first time I saw its usage in the Puppet test suite, but I think I “get it” now, and it is certainly delivering the functionality I need.

Consider a situation where we want to test some aspect of an application that relies on Puppet. I’ll use IRB here to demonstrate:

irb(main):001:0> require 'rubygems'
=> true
irb(main):002:0> require 'mocha'
=> true
irb(main):003:0> require 'puppet'
=> true
irb(main):004:0> Puppet.parse_config
=> #<EventLoop::Timer:0x2ae5349457e8 ....
irb(main):005:0> Puppet[:vardir]
=> "/home/ohookins/.puppet/var"
irb(main):006:0> Puppet.expects(:[]).with(:vardir).returns('/tmp/foo')
=> #<Mocha::Expectation:0x2ae534937c60 ....
irb(main):007:0> Puppet[:vardir]
=> "/tmp/foo"

How cool is that? Yes, of course, we could have fed Puppet a custom configuration file or set the configuration parameter ourselves, but if our code actually calls Puppet.parse_config then our efforts are lost, since our test setup code runs before that point. Mocha magically sets the stage for testing, letting us rig up an appropriately faked environment so that we test only the core functionality of that part of the code.

I’m exploring some slightly more complex use cases now with Mocha, but it is another valuable tool to add to my programmer’s toolkit.
