BusterJS vs Chai/Mocha

BusterJS, a node.js-oriented test framework, is notable for the number of ways it offers to run tests, including one where it fires up an HTTP server that your browser connects to, as well as a way, seemingly, to attach several browsers to a single running test harness. Having opted against exploring the whole node.js / browser module dichotomy up until now, I skipped all of that and went straight to the instructions for just running the thing in a browser. (I should note in passing that the test-runner Mocha, covered earlier, also has a lot of non-browser-focused running modes which I didn’t cover, among which the most intriguing is the Nyan reporter).

Anyways, the browser stuff was quite easy to set up, and I returned my focus to writing tests in the BusterJS idiom.  As it turned out, there wasn’t all that much different about it.  BusterJS comes with a BDD-oriented test mode which functions almost exactly like Mocha’s does, and it also sports a BDD-flavored assertion style which is very similar, if a bit less tricksy in its syntax.

To illustrate, here’s a snippet of the Mocha/Chai tests:

describe('WeightedList', function() {

  describe('Constructor', function() {

    it('should succeed with no arguments', function() {
      expect(new WeightedList()).to.be.ok;
    });

    it('should throw errors on bad inputs', function() {
      var badConstructor = function() {
        return new WeightedList( {'wrong': 'field names'} );
      };
      expect(badConstructor).to.Throw(Error);
    });
  });
});

And here is their BusterJS equivalent:

var spec = describe("Weighted Lists", function() {

  describe('Constructor', function() {

    it('should succeed with no arguments', function() {
      expect(new WeightedList()).toBeDefined();
    });

    it('should throw errors on bad inputs', function() {
      var badConstructor = function() {
        return new WeightedList( {'wrong': 'field names'} );
      };
      expect(badConstructor).toThrow();
    });

  });
});

As you can see, the broad strokes are virtually identical. The main difference I’ve noticed is that Chai uses a little more trickery to get its assertions into a quasi-DSL style, with statements like result.should.have.length(1); and new WeightedList().should.be.ok; as opposed to BusterJS’s more conventional expect(result.length).toEqual(1); and expect(new WeightedList()).toBeDefined();.  While I appreciate the vigor with which the Chai folks were able to bend Javascript’s syntax to their will, I actually find the more conventional BusterJS syntax to be a little easier to read, since it looks basically like I expect Javascript to look.

Overall, these assertion frameworks both seem adequate for testing Javascript code. I’m really not thrilled about the way one tests code for exceptions in either framework, where it’s necessary to pass in a function which is expected to throw an error, but I can’t really think of a better way to do it in Javascript. I’m going to continue porting the tests over to BusterJS, but I don’t know that I’ll have all that much more that’s interesting to say about it. If you’re interested in the nitty-gritty, the details can be found by contrasting the Chai and BusterJS assertion documentation.

I should probably also note in passing that the particular uses I’m putting the testing frameworks through are a little unusual. Most people writing tests won’t be testing a pure-javascript library with no user-interface whatsoever, and a lot of the appeal of the various frameworks seems to be the integration they offer with various other pieces of the Javascript ecosystem.

With that in mind, my next task is to finally figure out what the whole Javascript module system is all about, and likewise to figure out how I can package the library I’ve got into something usable in a node.js context. Meanwhile, based on a preliminary reading of the AMD vs CommonJS pedagogy in the blogosphere, my idea of just including the library code on a web page via a <script type="text/javascript" src="js-weighted-list.js"></script> tag is laughably naive, and I’ll need to add a compilation step in there, because what is Javascript if not an Ada-style B&D language which benefits from the intense scrutiny of static program analysis?
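
One middle path I keep running across is the UMD-style wrapper, which keeps the naive script-tag include working while also exposing the library as a CommonJS or AMD module. Very roughly, and purely as a sketch rather than js-weighted-list’s actual packaging, it looks like this:

// UMD-ish wrapper sketch: expose WeightedList as an AMD module, a CommonJS /
// node.js module, or a plain browser global, whichever environment applies.
// The names and layout here are illustrative, not the library's real code.
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    define([], factory);                 // AMD loader (e.g. RequireJS)
  } else if (typeof module === 'object' && module.exports) {
    module.exports = factory();          // CommonJS / node.js
  } else {
    root.WeightedList = factory();       // browser global, via a <script> tag
  }
}(this, function () {
  function WeightedList(initial) {
    // ... the actual library code would live here ...
  }
  return WeightedList;
}));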

In any event, adding a compilation step will probably give me a ready excuse to jump over to CoffeeScript.

Forging on with javascript testing frameworks

I’ve got what I consider to be a decent set of unit tests in Chai, though I’m certainly a good deal away from a complete set. (As an aside, the inherently non-deterministic nature of js-weighted-list presents something of a testing challenge in itself, totally apart from the particular framework in use; I’ll have more to say about this in future.)  My impressions of it aren’t radically different from how I felt about it last time – I’m still not crazy about the syntax, but it has offered me one or two pleasant surprises when it found scope bugs I had missed.

There’s another aspect of the framework I’ve enjoyed as well. As some background, I’m mostly investigating BDD frameworks, but I have to confess that I don’t really see how BDD is all that different from unit-test-based testing, apart from the terminology in use; frankly, the difference between a test suite and a spec doesn’t seem all that large to me, and I’m dubious about the vague claims of readability to non-programmers that seem to haunt the rhetoric of some BDD sites I’ve seen. But anyways, according to the classic BDD (or TDD) approach, I’m going about things backwards: I’m writing a suite of tests to cover code which already exists instead of starting with the tests and then writing code to satisfy them.

Certainly I’m not the first programmer in the history of the world to approach testing this way, but I did enjoy adding in some undefined specifications for a few missing features of the library and seeing them show up as pending specs in the runner’s listing:

    it('should increase the length of the list');
    it('should validate its weights');

[Screenshot: Mocha test runner showing the pending specs]

Seeing the tasks come up as little blue dots in the test-runner output, then change to red and finally to green as I wrote first the test code and then the library code to satisfy it, was a pretty satisfying process, and I can definitely see the appeal. This is a whole aspect of BDD I’ll probably miss out on in this comparison, since I’m not planning on reimplementing the library itself over and over again. In recognition of this, though, I’m hoping to think up some features and hold them in abeyance so I can try more specification-first development.
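
To give a concrete sense of that red-to-green step, here’s roughly how one of the pending specs above might get filled in once the feature exists. The push() call is my guess at what the library’s API will end up looking like, not a quote from it:

// Hypothetical filled-in version of the pending spec; push() and its
// argument shape are assumptions about the eventual API.
it('should increase the length of the list', function() {
  var wl = new WeightedList();
  wl.push(['bananas', 5]);     // add one weighted key
  wl.should.have.length(1);
});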

At this point I have a decent feel for Chai, and I will move on to the next framework; I’m thinking I’ll try out BusterJS, since from a cursory examination Jasmine looks to be very similar to Chai. Another thing bubbling up towards the top of my priority list is trying out the coveraje library, which measures test coverage for Javascript unit tests. However, since it’s a node.js-only project, that will require me to figure out how to get the same test suite and module system running on both the command line and in the browser, and I’d just as soon keep my focus on the browser for the moment.

First few stabs at Chai

Continuing on my investigation of what’s what in the state of Javascript testing frameworks, I downloaded Chai and ported several of my existing Qunit tests over to it. Chai is an assertion library that pairs naturally with Mocha, a test-harness framework that lets you organize your tests while leaving the meaty assertion bits to outside libraries. The decoupling of these two aspects of a testing framework seems like a pretty neat approach, and I may investigate some of the other assertion libraries that work with Mocha in the future, assuming I’m not completely burnt out on testing frameworks by then. Meanwhile, Chai itself has a plugin infrastructure, with hooks available that expand its vocabulary or integrate it with other libraries such as jQuery and Backbone.
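
As a toy example of what those hooks look like: chai.use() takes a function that receives the chai export and a utils object, and a plugin can use the Assertion helpers to add new vocabulary. The weighted assertion below is made up purely for illustration:

// Toy Chai plugin: adds a `weighted` property assertion for WeightedList
// instances. The assertion itself is invented for illustration only.
chai.use(function (chai, utils) {
  chai.Assertion.addProperty('weighted', function () {
    this.assert(
      this._obj instanceof WeightedList,
      'expected #{this} to be a WeightedList',
      'expected #{this} not to be a WeightedList'
    );
  });
});

// Which then allows specs to say things like:
//   expect(new WeightedList()).to.be.weighted;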

I haven’t got too far yet, but I think I can safely say that I’m not in love with the syntax. Chai exposes two main verbs, which do about the same thing. expect(foo).to.equal(5); is the more obvious syntax; optionally Chai can instead attach a should property to the global Object prototype, which allows assertions like this: foo.should.equal(5);. Combined with Mocha’s BDD test style, you wind up with code like this:

describe('WeightedList', function() {

  describe('Constructor', function() {
    it('should succeed with no arguments', function() {
      expect(new WeightedList()).to.be.ok;
    });
  });

  describe('Empty List', function() {
    var wl = new WeightedList();

    it('should have length zero', function() {
      wl.should.have.length(0);
    });

    it('should give an empty list on shuffle', function() {
      wl.shuffle().should.deep.equal([]);
    });
  });
});

I suppose it’s readable enough; maybe I just still haven’t gotten over Javascript’s relentless use of function(){} blocks. I will say that seeing some similar tests (but for Jasmine, a competing framework) written in CoffeeScript is certainly easier on the eyes.

[Screenshot: the error report]

In any event, despite my trepidation at the syntax, the Mocha/Chai setup has already helped me in one way I hadn’t expected: it found a bug in my code. Running the test that calls shuffle on an empty list, the test runner produced the error “Error: global leaks detected: heap, result.” (that’s Mocha’s global-leak detection at work). Sure enough, looking at my code, I’d left off the var declarations from two variables in one of the inner methods there. Adding in the var statements caused the test to pass again. Thanks, Mocha and Chai! As I experiment with other frameworks I might try taking the declarations out again to see whether any of them catch the same thing.
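
For illustration, here’s the general shape of that kind of bug; this isn’t the actual js-weighted-list code, just a sketch of how a missing var turns locals into accidental globals that the leak check can flag:

// Sketch of the kind of bug the global-leak check catches. Without `var`, the
// assignments below create properties on the global object as a side effect.
function shuffleSketch(items) {
  heap = items.slice();     // oops: no `var`, so `heap` leaks globally
  result = [];              // likewise `result`
  while (heap.length > 0) {
    result.push(heap.pop());
  }
  return result;
}

// The fix is simply to declare the locals:
function shuffleFixed(items) {
  var heap = items.slice();
  var result = [];
  while (heap.length > 0) {
    result.push(heap.pop());
  }
  return result;
}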


Javascript unit testing

(I’m still here. During my time away from the blog I got a job. Hopefully things are about to be calmer and I’ll be updating more frequently.)

I spent a while burnishing up the weighted-list Javascript library I wrote last spring, in the process catching up with the latest goings-on in Javascript. I have a real love-hate relationship with Javascript the language and its software ecosystem, but that’s probably a topic for another day; for now I guess I’ll just snarkily mention hasOwnProperty() and leave it at that. In any event, the ascendancy of Node.js seems to have sparked a mini-renaissance in software packages written in Javascript. Things are moving pretty rapidly and there aren’t necessarily clear favorites in a lot of areas, so you wind up with a great variety of alternatives for various types of useful libraries.

The resulting competition is probably good for the overall quality of javascript open-source software, but it can be a little difficult to determine which packages are good to use right now. All of which is a lead-up to say that I’ve been spending some time looking into Javascript unit-testing frameworks.

js-weighted-list currently has a light suite of unit tests written in Qunit. Qunit seems like an adequate framework overall, and being able to run my tests in a browser is pretty convenient, but it has a few aspects I’m not crazy about: the documentation is a little thin on the ground, and the actual syntax of the tests grates on me a bit, especially for tests where I expect the code under test to throw an exception. I’m also not sure that Qunit is the new hotness any longer, which is of course of paramount importance in Javascript these days.
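
For reference, here’s roughly the kind of Qunit exception test I mean, written against the old global-style API with the raises() assertion (newer Qunit versions rename it to throws()); consider it a sketch rather than a copy of my actual test:

// Sketch of a Qunit-style exception test. raises() takes the function that is
// expected to throw, an expected error type, and an assertion message.
test('constructor rejects bad field names', function () {
  raises(
    function () { return new WeightedList({'wrong': 'field names'}); },
    Error,
    'bad field names should throw an Error'
  );
});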

With that in mind, I’m going to try, as an experiment, to reproduce the minimal suite of unit tests I’ve got in a few other frameworks and see which ones I prefer. The candidates I’ve got in mind so far are Mocha/Chai, BusterJS, Jasmine, and Pavlov. I’m not completely sure I’ll have the attention span necessary to do a full-on shootout between them, but hopefully I can at least try a few out and see. The weighted-list library is currently small enough that it’s pretty easy and fast to test, and as a bonus, I will hopefully shake some bugs out of it along the way.

While I’m on the subject, another idea I’ve got is to fix up a page that will run selections thousands of times and graph the results, so that I can have some idea as to whether the library works correctly or not. But that’s another topic for a different day.
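
Even so, the basic shape of that page’s logic would be something along the lines of the sketch below: call a selection function a pile of times, tally how often each key comes up, and then graph the counts against the configured weights. The selectOne callback is a stand-in for whatever the library’s single-selection method turns out to be.

// Sketch of the sampling experiment: run a selection function `trials` times
// and count how often each key comes up, so the observed distribution can be
// compared against the configured weights.
function tallySelections(selectOne, trials) {
  var counts = {};
  for (var i = 0; i < trials; i++) {
    var key = selectOne();
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

// Hypothetical usage, with selectOneKeyFrom() standing in for however the
// library hands back a single selected key:
//   var counts = tallySelections(function () { return selectOneKeyFrom(wl); }, 10000);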