Just What Are You Testing? Write Tests Intently

15 08 2009

Unit tests need to be robust and reliable. If your tests frequently raise false positives, or worse, fail to report actual errors, then they are not providing the level of comfort they should.

Effective test code, like production code, expresses its intent. Test code that is too general can be brittle, or worse, outright fail.

Here’s a simplified example, based on some code I’ve been working on in TriSano.

class Loinc < ActiveRecord::Base
  validates_presence_of :loinc_code
end

What we have here is a simple ActiveRecord class, named Loinc. Its only validation ensures that a loinc code is present. If we were to write a spec for this validation, it might look like this:

  it "should not be valid if loinc code is blank" do
    Loinc.create(:loinc_code => nil).should_not be_valid
  end

This test is actually very expressive. In fact, the code reads almost the same as the description. “A new loinc (with no value for loinc code) should not be valid.”

What we’ll start to see, however, is that this code is not actually expressing the proper intent of the test. Let’s make a change:

class Loinc < ActiveRecord::Base
  validates_presence_of :loinc_code
  validates_presence_of :scale_id
end

Now we’ve added a second validation, on the scale_id field. Our spec still passes, so everything’s hunky-dory, yes? Well, no.

The intent of our test code is to verify that a blank loinc code makes the instance invalid. The code actually tests that a blank loinc code *or* a blank scale id makes the instance invalid.

Pragmatically, this means that we haven’t properly isolated this test.
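To see the conflation concretely, here's a minimal, framework-free sketch (plain Ruby, not ActiveRecord; the `FakeLoinc` class and its hand-rolled `valid?` are illustrative stand-ins) showing why a bare `valid?` check can pass for the wrong reason once a second presence validation exists:

```ruby
# Hypothetical stand-in for an ActiveRecord model with two
# presence validations. Not real ActiveRecord code.
class FakeLoinc
  attr_reader :errors

  def initialize(loinc_code, scale_id)
    @loinc_code = loinc_code
    @scale_id = scale_id
    @errors = {}
  end

  # valid? is false if ANY field is blank -- it cannot tell you
  # WHICH validation fired.
  def valid?
    @errors[:loinc_code] = "can't be blank" if @loinc_code.nil?
    @errors[:scale_id]   = "can't be blank" if @scale_id.nil?
    @errors.empty?
  end
end

missing_code  = FakeLoinc.new(nil, 1)
missing_scale = FakeLoinc.new("1234-5", nil)

# Both records are invalid, so a spec asserting only `should_not be_valid`
# keeps passing even if the loinc_code validation disappears:
missing_code.valid?   # => false
missing_scale.valid?  # => false

# Checking the error on the specific field isolates the test:
missing_code.errors[:loinc_code]   # => "can't be blank"
missing_scale.errors[:loinc_code]  # => nil
```

The field-specific error check is what distinguishes "this record is invalid" from "this record is invalid *because of this field*".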

Nothing is broken yet, but, when merging in the commit that includes the scale id change, let’s assume the developer accidentally merges away the loinc code validation (hey, it happens). So we have:

class Loinc < ActiveRecord::Base
  validates_presence_of :scale_id
end

When we run our test, it still passes! That wasn’t what we intended at all.

To fix the test, consider what behavior we are expecting. Since this is a Rails app, we are expecting that, if a user tries to create or update a Loinc instance with a blank loinc code, they will receive an error message. That should be the intent of our test.

In code, our spec might look like this:

  it "should produce an error if loinc code is blank" do
    Loinc.create.errors.on(:loinc_code).should == "can't be blank"
  end

Our test is still expressive (well, maybe a little less expressive), but now it is expressing our program's actual intent, and the validation tests for the loinc code field are isolated from other fields' validations. If we run our test now, we receive the failure we'd expect.


  • TriSano on GitHub
  • Practices of an Agile Developer: Practice 25 – Program Intently and Expressively

Ruby TestCase: Resumable Assertions

14 02 2008

It’s hard to believe that, for all of Smalltalk’s elegance, it isn’t more widely accepted. I just read the SUnit chapter in Squeak By Example. The SUnit TestCase class has a resumable assert method (TestCase#assert:description:resumable:). The use case for this is testing objects in a collection. Here’s a code example:

(1 to: 30) do: [ :each |
    self assert: each even description: each printString, ' is odd' resumable: true]

This code produces a test failure and outputs each object in the collection that fails the test (every odd number, in this example).
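The same idea can be sketched in plain Ruby without any test framework: instead of raising on the first failure, collect every failing element and report them all at the end. The `collect_failures` helper below is illustrative, not part of Test::Unit or SUnit:

```ruby
# Hypothetical helper: run the block against each element and collect a
# failure message for every element that fails, rather than stopping at
# the first one.
def collect_failures(collection)
  failures = []
  collection.each do |each|
    failures << "#{each} is odd" unless yield(each)
  end
  failures
end

failures = collect_failures(1..30) { |n| n.even? }
failures.size   # => 15
failures.first  # => "1 is odd"
```

This is the essence of a resumable assertion: the test keeps going, and the report names every offender instead of just the first.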

I decided that I wanted something like that in Ruby. Well, I want it in Java too, but coding in Java’s no fun 🙂

I was thinking of something like this:

1.upto 30 do | each |
  assert_and_resume each % 2 == 0, "#{each} is odd"
end

After poking through the TestCase source, I decided the easiest approach would be to write a custom assert, wrapping the assert_block method. I just needed to be able to catch the failed assertion and report the failure without stopping the test method.

So I wrangled a little code and came up with a solution that works like this:

1.upto 30 do | each |
  assert_and_resume( "#{ each } is odd" ) { each % 2 == 0 }
end

Pretty close to what I was thinking.

And here’s the implementation:

module Test
  module Unit

    # Resumable assertions are in a different module than the other
    # assertions because they have a dependency on
    # TestCase#add_failure.  The standard assertions can be included
    # anywhere, so including ResumableAssertions in the Assertions
    # module might break existing code.
    module ResumableAssertions

      require 'test/unit/assertions'

      def assert_and_resume( description, &block )
        assert_block description, &block
      rescue AssertionFailedError => e
        add_failure e.message, e.backtrace
      end
    end

    # Now we hook our assertion up to the test case. I could have just
    # re-opened TestCase, but I wanted to keep the Assertion logic
    # separate from the TestCase logic, as was the original author's
    # design.
    require 'test/unit/testcase'
    TestCase.send( :include, ResumableAssertions )
  end
end


Not much to it. The only other change that I might like to make is to modify the output so that it's clear that one method failed many times. On a lark, I ran this through ci_reporter and then used Antwrap to generate a junit report. Here's a snapshot of what that report looks like:

[Screenshot: Resumable Assert Test Report]