Well, that took a while

It’s been a minute since my last update, during which time I’ve learnt a great deal about node.js as well as the whole AMD / CommonJS mess mentioned in my last post. Without drowning in detail, here are a few things I’ve been thinking about.

I’ve made the switch to CoffeeScript for a good deal of my Javascript code, mostly because I was worried about wearing out the keys on my keyboard that spell out function(){}. I rather enjoy CoffeeScript’s cleaned-up syntax, at least for now, and I like how it hews to Javascript’s essential character while smoothing out its rougher bits. I’ve still got plenty of reservations about it, and it sometimes errs a bit too far on the loosey-goosey side of the syntax fence for my taste, but anything that will save me from the ongoing insults to programming-language aesthetics coming out of the Javascript world has my sincere gratitude. (The latest of these I’ve had to endure is the comma-first style, which is just one of the awkward habits that have emerged because the most-used data transport format of our day, the JSON spec, apparently needed to fit into three quarters of a page of EBNF. Why complicate it by allowing an optional trailing comma, or making the quotes around key names optional? But that’s a rant for another day.)
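
For anyone who hasn’t run into it, comma-first style puts the commas at the head of each line, so a forgotten comma is easy to spot and there’s never a trailing one left behind to trip up a strict parser. It looks something like this:

var tile = { name: "desert"
           , width: 72
           , height: 72
           };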

Meanwhile I also went down a fairly long rabbit hole in learning about electronics – this started with getting a Raspberry Pi and then got a little more low-level with some Arduino stuff, and by now I’ve got a bunch of nascent electronics projects in the offing, mostly concerning various robots. The Pi in particular has a robust Python infrastructure, and I have a pretty good imaging-related project in mind, but in the course of writing the software for it I decided that this would be a good opportunity to experiment with node.js development, and so I’ve since ported the stuff I’d written from Python to CoffeeScript and node.js.

More is surely to come about all that, and as usual I ought to have some code available on github before too long. In the course of all this I’ve also learned a great deal more about the server-side javascript ecosystem, and I think I should actually be ready to get the weighted-probability list library into much better shape.

Initial Jasmine impressions

Ok, I must confess that midway into my half-baked project of comparing Javascript testing frameworks, I’m getting a little bored with the whole exercise. Largely this is because the three major BDD-oriented frameworks I’ve tried so far have roughly the same syntax and semantics. Really the only major difference I’ve noticed is that Mocha warns me about global namespace pollution, and the other frameworks do not.

As a point of reference, I was able to take the nascent set of tests I’d come up with for BusterJS and port them over virtually unchanged to Jasmine, with a few simple syntax changes relating to the setup / teardown methods. I’ll probably finish out the tests in one framework or the other and then move on to more interesting things, like figuring out the whole module mess that seems to be the current state of the art.
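
To give a sense of how small the changes were: in Jasmine, the per-spec setup hooks look like the following (BusterJS’s spec mode calls them before and after, if memory serves, and that rename was most of the port):

describe('Empty List', function() {
  var wl;

  beforeEach(function() {   // was before() in the BusterJS version, as I recall
    wl = new WeightedList();
  });

  it('should have length zero', function() {
    expect(wl.length).toEqual(0);
  });
});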

BusterJS vs Chai/Mocha

BusterJS, a node.js-oriented test framework, is notable for having a lot of ways to run tests, including one where it fires up an HTTP server that your browser connects to, as well as a way, seemingly, to attach several browsers to a single running test harness. Having opted against exploring the whole node.js / browser module dichotomy up until now, I skipped all of that stuff and went straight to the instructions for just running the thing in a browser. (I should note in passing that the test-runner Mocha, covered earlier, also has a lot of non-browser running modes which I skipped over, the most intriguing of which is the Nyan Runner.)

Anyways, the browser stuff was quite easy to set up, and I returned my focus to writing tests in the BusterJS idiom.  As it turned out, there wasn’t all that much different about it.  BusterJS comes with a BDD-oriented test mode which functions almost exactly like Mocha’s does, and it also sports a BDD-flavored assertion style which is very similar, if a bit less tricksy in its syntax.

To illustrate, here’s a snippet of the Mocha/Chai tests:

describe('WeightedList', function() {

  describe('Constructor', function() {

    it('should succeed with no arguments', function() {
      expect(new WeightedList()).to.be.ok;
    });

    it('should throw errors on bad inputs', function() {
      var badConstructor = function() {
        return new WeightedList( {'wrong': 'field names'} );
      };
      expect(badConstructor).to.Throw(Error);
    });
  });
});

And here is their BusterJS equivalent:

var spec = describe("Weighted Lists", function() {

  describe('Constructor', function() {

    it('should succeed with no arguments', function() {
      expect(new WeightedList()).toBeDefined();
    });

    it('should throw errors on bad inputs', function() {
      var badConstructor = function() {
        return new WeightedList( {'wrong': 'field names'} );
      };
      expect(badConstructor).toThrow();
    });

  });
});

As you can see, the broad strokes are virtually identical. The main difference I’ve noticed is that Chai uses a little more trickery to get its assertions into a quasi-DSL style, with statements like result.should.have.length(1); and new WeightedList().should.be.ok; as opposed to BusterJS’s more conventional expect(result.length).toEqual(1); and expect(new WeightedList()).toBeDefined();.  While I appreciate the vigor with which the Chai folks were able to bend Javascript’s syntax to their will, I actually find the more conventional BusterJS syntax to be a little easier to read, since it looks basically like I expect Javascript to look.

Overall, these assertion frameworks both seem adequate for testing Javascript code. I’m really not thrilled about the way one tests code for exceptions in either library, where it’s necessary to pass in a function which is expected to throw an error, but I can’t really think of a better way to do it in Javascript. I’m going to continue porting the tests over to BusterJS, but I don’t know that I’ll have all that much more that’s interesting to say about it. If you’re interested in the nitty-gritty, the details can be found by contrasting the Chai and BusterJS assertion documentation.

I should probably also note in passing that the particular uses I’m putting the testing frameworks through are a little unusual. Most people writing tests won’t be testing a pure-javascript library with no user-interface whatsoever, and a lot of the appeal of the various frameworks seems to be the integration they offer with various other pieces of the Javascript ecosystem.

With that in mind, my next task is to finally figure out what the whole Javascript module system is all about, and likewise how I can package the library I’ve got into something usable in a node.js context. Meanwhile, based on a preliminary reading of the AMD vs CommonJS pedagogy in the blogosphere, my idea of just including the library code on a web page via a <script type="text/javascript" src="js-weighted-list.js"></script> tag is laughably naive, and I’ll need to add a compilation step in there, because what is Javascript if not an Ada-style B&D language which benefits from the intense scrutiny of static program analysis?
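
For the record, the usual compromise seems to be a blob of boilerplate that checks for whichever module system happens to be present and falls back to a plain browser global. A minimal sketch of that pattern (not something the library actually does yet) looks like this:

(function(root, factory) {
  if (typeof module === 'object' && module.exports) {
    module.exports = factory();        // CommonJS / node.js
  } else if (typeof define === 'function' && define.amd) {
    define(factory);                   // AMD loaders like RequireJS
  } else {
    root.WeightedList = factory();     // plain old browser global via a script tag
  }
})(this, function() {
  // the existing library code would go here, unchanged, returning the constructor
  function WeightedList(data) { /* ... */ }
  return WeightedList;
});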

In any event, adding a compilation step will probably give me a ready excuse to jump over to CoffeeScript.

Forging on with javascript testing frameworks

I’ve got what I consider to be a decent set of unit tests in Chai, though I’m certainly a good deal away from a complete set. (As an aside, the inherently non-deterministic nature of js-weighted-list presents something of a testing challenge in itself, totally apart from the particular framework in use; I’ll have more to say about this in future.)  My impressions of it aren’t radically different from how I felt about it last time – I’m still not crazy about the syntax, but it has offered me one or two pleasant surprises when it found scope bugs I had missed.

There’s another aspect of the framework I’ve enjoyed as well. As some background, I’m mostly investigating BDD frameworks, but I have to confess that I don’t really see how BDD is all that different from unit-test-based testing, apart from the terminology in use; frankly the difference between a test suite and a spec doesn’t seem all that large to me, and I’m dubious about the vague claims of readability to non-programmers that seem to haunt the rhetoric of some BDD sites I’ve seen. But anyways, according to the classic BDD (or TDD) approach, I’m going about things backwards: I’m writing a suite of tests to cover code which already exists instead of starting with the tests and then writing code to satisfy them.

Certainly I’m not the first programmer in the history of the world to approach testing this way, but I did enjoy adding in some undefined specifications for a few missing features for the library and seeing them show up as elements in the list:

    it('should increase the length of the list');
    it('should validate its weights');

Mocha testrunner with pending specs

Seeing the pending specs come up as little blue dots in the test-runner output, then turn red and finally green as I wrote first the test code and then the implementing code in the library, was a pretty satisfying process, and I can definitely see its appeal. This is a whole aspect of BDD I’ll probably miss out on in this comparison, since I’m not planning on reimplementing the library itself over and over again. In recognition of this, though, I’m hoping to think up some features and hold them in abeyance so I can try more specification-first type development.

At this point I have a decent feel for Chai, and I will move on to the next framework; I’m thinking I’ll try out BusterJS, since from a cursory examination of Jasmine it looks to be very similar to Chai. Another thing which is bubbling up towards the top of my priority list is trying out the coveraje library, which measures test coverage for javascript unit tests. However, since coveraje is a node.js-only project, it will require me to figure out how to get the same test suite and module system running both on the command line and in the browser, and I’d just as soon keep my focus on the browser for the moment.

First few stabs at Chai

Continuing my investigation of what’s what in the state of Javascript testing frameworks, I downloaded Chai and ported several of my existing Qunit tests over to it. Chai is an assertion library that pairs naturally with Mocha, a test harness framework that organizes your tests while leaving the meaty assertion bits of testing to outside libraries. The decoupling of these two aspects of a testing framework seems like a pretty neat approach, and I may investigate some of the other assertion libraries that work with Mocha besides Chai in the future, assuming I’m not completely burnt out on testing frameworks by then. Meanwhile, Chai itself has a plugin infrastructure, with hooks available that expand its vocabulary or integrate it with other libraries such as jQuery and Backbone.

I haven’t got too far yet, but I think I can safely say that I’m not in love with the syntax. Chai exposes two main assertion styles, which do about the same thing. expect(foo).to.equal(5); is the more obvious one; optionally Chai can instead attach a should property to the global Object prototype, which allows assertions like this: foo.should.equal(5);. Overall, combined with Mocha’s BDD test style, you wind up with code like this:

describe('WeightedList', function() {

  describe('Constructor', function() {
    it('should succeed with no arguments', function() {
      expect(new WeightedList()).to.be.ok;
    });
  });

  describe('Empty List', function() {
    var wl = new WeightedList();

    it('should have length zero', function() {
      wl.should.have.length(0);
    });

    it('should give an empty list on shuffle', function() {
      wl.shuffle().should.deep.equal([]);
    });
  });
});

I suppose it’s readable enough; maybe I just still haven’t gotten over Javascript’s relentless use of function(){} blocks. I will say that seeing some similar tests (but for Jasmine, a competing framework) written in CoffeeScript is certainly easier on the eyes.

Error report

In any event, despite my trepidation at the syntax, Chai has already helped me in one way I hadn’t expected: it found a bug in my code. Running the test that calls shuffle on an empty list, the test runner produced the error “Error: global leaks detected: heap, result.” Sure enough, looking at my code, I’d left off the var declarations from two variables in one of the inner methods there. Adding in the var statements caused the test to pass again. Thanks Chai! As I experiment with other frameworks I might try taking the declarations out again to see whether anyone else will catch the same thing.
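
Reconstructing the gist of the bug, the inner method had a couple of lines along these lines:

heap = [];     // oops: no "var", so heap quietly leaks into the global namespace
result = [];   // same here; these are the two names the test runner complained about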

Javascript unit testing

(I’m still here. During my time away from the blog I got a job. Hopefully things are about to be calmer and I’ll be updating more frequently.)

I spent a while burnishing up the weighted-list javascript library I wrote last spring, in the process catching up with the latest goings-on in Javascript. I have a real love-hate relationship with Javascript the language and its software ecosystem, but that’s probably a topic for another day; for now I guess I’ll just snarkily mention hasOwnProperty() and leave it at that.  In any event, the ascendancy of Node.js seems to have sparked a mini-renaissance in software packages written in javascript. Things are moving pretty rapidly and there aren’t necessarily clear favorites in a lot of areas, so you wind up with a great variety of alternatives for various types of useful libraries.

The resulting competition is probably good for the overall quality of javascript open-source software, but it can be a little difficult to determine which packages are good to use right now. All of which is a lead-up to say that I’ve been spending some time looking into Javascript unit-testing frameworks.

js-weighted-list currently has a light suite of unit tests written in Qunit. Qunit seems like an adequate framework overall, and being able to run my tests in a browser is pretty convenient, but it has a few aspects I’m not crazy about: the documentation is a little thin on the ground, and the actual syntax of the tests grates on me, especially for tests where I’m expecting the code under test to throw an exception. I’m also not sure that Qunit is the new hotness any longer, which is of course of paramount importance in Javascript these days.
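
For reference, an exception test in Qunit comes out roughly like this (using the raises assertion, which I believe newer versions have renamed to throws); the obligatory function-wrapping is the part that grates:

test('constructor rejects bad input', function() {
  raises(function() {
    new WeightedList({'wrong': 'field names'});
  }, Error, 'should throw on objects with unknown field names');
});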

With that in mind, I’m going to try, as an experiment, to reproduce the minimal suite of unit tests I’ve got in a few other frameworks and see which ones I prefer. The candidates I’ve got in mind so far are Mocha/Chai, BusterJS, Jasmine, and Pavlov. I’m not completely sure I’ll have the attention-span necessary to do a full-on shootout between them, but hopefully I can at least try a few out and see. The weighted-list library is currently small enough that it’s pretty easy and fast to test, and as a bonus, I will hopefully shake some bugs out of it along the way.

While I’m on the subject, another idea I’ve got is to fix up a page that will run selections thousands of times and graph the results, so that I can have some idea as to whether the library works correctly or not. But that’s another topic for a different day.
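
The gist would be something along these lines, with pick() standing in for whatever the selection method ends up being called (the constructor arguments here are made up as well):

var wl = new WeightedList([['rare', 1], ['common', 10]]);   // hypothetical constructor format
var counts = { rare: 0, common: 0 };
for (var i = 0; i < 10000; i++) {
  counts[wl.pick()] += 1;   // pick() is a placeholder name, not the real API
}
// with weights 1:10, counts.common / counts.rare should hover around 10
console.log(counts);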

Generating weighted-probability lists in Javascript

js-weighted-list test page screenshot

I’ve made a good deal of progress on my Google App Engine application, about which more later.  During the course of it, I started to keenly feel the need for a weighted random list.  That is, I have a list and want to select a random sample from it, but with an unequal (weighted) probability distribution; I want some of the items to be selected more frequently than others.  I also need to modify the weights depending on what else is going on in the program (e.g., in response to user input or web service results).  Also, this is all happening on the front-end, so it needs to work in javascript.

Poking around for a while I couldn’t find anything obviously relevant to this, but I did find a pretty good breakdown of the problem and the algorithm to follow in this Stack Overflow answer.  This implementation makes use of a heap in order to enable a binary search on the probability space.  While digging through to understand it, I wound up reimplementing the thing in Javascript, and then made a bunch of improvements to the API for it.  It’s pretty basic and doesn’t have any particular dependencies, so I’m releasing it as its own library, js-weighted-list.
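
To give a flavor of the core idea (a simplified illustration, not the library’s actual heap-based code): keep a running total of the weights, draw a random number in [0, total), and binary-search for the item whose slice of the number line it lands in.

function weightedPick(items) {            // items: [{key: 'a', weight: 3}, ...]
  var cumulative = [], total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i].weight;
    cumulative.push(total);               // running totals, e.g. [3, 5, 9]
  }
  var target = Math.random() * total;     // somewhere in [0, total)
  var lo = 0, hi = cumulative.length - 1;
  while (lo < hi) {                       // find the first running total greater than target
    var mid = (lo + hi) >> 1;
    if (cumulative[mid] <= target) {
      lo = mid + 1;
    } else {
      hi = mid;
    }
  }
  return items[lo].key;
}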

It is available on github under the MIT license.  There are still several things I need to improve with it, in particular it’s severely under-tested, but it’s decently documented and immediately usable for what I need it to do.

Ars hexica

After spending a while uselessly flailing around trying to get the Gimp to do what I wanted it to do, I wised up and checked out some directions for tile creation on the Wesnoth site, which pointed me in the direction of the open-source SVG editor Inkscape. Inkscape, among other nice features, has a polygon tool which is able to make regular hexagons and the like. It will also export selected objects as PNG files, so with minimal effort I’ve been able to replace the tiles I swiped from Wesnoth with new, boring ones of my own devising. Among other improvements, I’ve got lines around the hexes now, and the hexagons themselves are regular. Encouragingly, I didn’t need to change the javascript at all to deal with the new sizes, since it works by inspecting the img sizes from their attributes directly and basing its math on that.

Along those same lines, I’ve been pretty pleased with how easy it is to separate presentation logic from presentation markup in jQuery.  Apart from switching image sizes, I can see that it would be fairly easy to set up the function which generates random tiles, randomTile(), to dynamically query the DOM tree, find all of the possible tiles, and then pick one at random (currently it uses a static list of possible tile names).  Together with attaching arbitrary custom data to each possible tile, this seems like a promising approach to some sort of game, with a classic game engine / game content separation.  However, Barbarian Prince is not that game, being based on a static game map, and part of the point of this exercise has been to learn Scala, so it’s back to the server side after this.
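
A rough sketch of what that might look like, assuming the hidden #templates markup from the original hexmap post (this isn’t the current implementation):

function randomTile() {
  // pick one of the tile template divs at random and read its name from
  // the class attribute, e.g. "tiles desert" -> "desert"
  var templates = $('#templates .tiles');
  var chosen = templates.eq(Math.floor(Math.random() * templates.length));
  return $.trim(chosen.attr('class').replace('tiles', ''));
}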

After finding Inkscape, I spent a while messing around with different sizes for the hexes in the hexmap.  For a while I was considering using either SVG or HTML5 canvas elements or some such to represent the map, with the idea that it would allow an easier way to zoom in and out of the map.  I feel like both of those areas represent rather large and interesting rabbit holes, though, and I’ve only just emerged from the jQuery rabbit hole I was in.  Moreover, having distinct tiles seems like a good match for the game as it was written.

The next steps are to head back out to terra incognita and add this stuff to the project:

  • A more beefed-up definition of the hexmap format (as a JSON, uh, schema).
  • A persistent store on the Scala side which is capable of storing and retrieving this definition (most likely via mongodb, since I’m familiar with it and it has a well-regarded Scala interface).
  • Simple REST service definitions on the Scala side, and, oh yeah, a web server framework to serve them up, most likely with Scalatra.
  • An Ajax call from index.html to get the persisted hexmap from the Scala server and display it (a rough sketch of what this might look like follows below).
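
To make that last item a bit more concrete, here’s roughly what I have in mind; the endpoint URL, the mapId variable, and the shape of the JSON are all placeholders, since none of this exists yet:

$.getJSON('/maps/' + mapId, function(mapData) {
  // mapData.tiles would be the same 2-dimensional array of tile names
  // used on the client side today, e.g. [["desert", "ocean"], ["ocean", "desert"]]
  $.each(mapData.tiles, function(y, row) {
    $.each(row, function(x, tile) {
      placeTile(tile, x, y);   // placeTile() as described in the hexmap display post
    });
  });
});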

I will note in passing that it has occurred to me that Barbarian Prince as she is spoke could probably be easily implemented purely through jQuery and HTML5 client-side storage (which I guess seems to mean indexedDB as of February 2012), keeping all game logic and map creation on the client side.  This would certainly reduce network latency for players, as well as the need for highly-skilled developers on the server side.  This is an approach that somebody who isn’t trying to get experience in programming Scala should probably pursue.
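
For a map this small, even plain localStorage (the lazier cousin of indexedDB) would do the trick; something like:

// persist the whole map client-side, no server involved
localStorage.setItem('hexmap', JSON.stringify(map));
var restored = JSON.parse(localStorage.getItem('hexmap'));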

Javascript hexmap display

hexagon map

After a good bit of head-scratching I’ve managed to make some progress on what I assumed was going to be one of the more difficult aspects of the hex-map project: actually displaying the hexes in a browser.  This also turned out to be a good opportunity to finally dig into jQuery a little more deeply.  I’ve come up with a rough idea of how to proceed from here, too; I’ll have the page make Ajax calls out to Scala to retrieve map details in JSON, and it should be pretty easy to dynamically construct the hex grids from there.

Github, probably wisely, doesn’t display checked-in HTML directly in the browser, but I’ll attach a screenshot here; interested parties can check out the code on github under src/main/webapp.  Currently the page generates a new random map on every page refresh, with each map being a mixture of roughly 2/3 desert tiles and 1/3 ocean tiles.  The data structure is a simple 2-dimensional array:

 var map = [["desert", "ocean", "desert"],
            ["desert", "ocean", "desert"],
            ["desert", "desert", "ocean"]]

Obviously for a full game implementation the elements would be complex JSON structures, not strings, but this suffices for now.  To tile the hexes together, I’m simply laying them on top of one another via CSS absolute positioning; each image is a rectangle (actually, a square) with the diagonal edges cut out of the sides.  I was a little worried this would make it cumbersome to attach jQuery event handlers to the right places, but I currently have a handler that displays the (x,y) coordinates of a hex when a user mouses over one and it seems very usable, if not perfect around the edges of hexes.

The code I’m using to position the tiles looks roughly like this (note: some cleanup is still pending in the actual code; what follows is the gist but see the code itself for details).  Here tile is a string from the previous array, and x and y are row / column numbers.

function placeTile(tile, x, y) {
  var tile$ = $("." + tile, "#templates").children().clone();

  tile$.hexMapPosition(x, y).appendTo($('#hexmap'));
}

This code clones images based on the tile name from the map data structure. The images themselves are stored in a hidden div:

<div id="templates" class="hidden">
  <div class="tiles desert">
    <img src="images/tiles/desert.png" width="72" height="72"/>
  </div>
  <div class="tiles ocean">
    <img src="images/tiles/ocean.png" width="72" height="72"/>
  </div>
</div>

It then positions them via a jQuery extension, hexMapPosition(), which examines the image’s height and width and then calculates an absolute position for the top left pixel.

  (function($) {
    $.fn.hexMapPosition = function(row, column) {
      var tile_width = this.attr("width");
      var tile_height = this.attr("height");

      // Haven't done the math to check these but they work
      var y_offset = tile_height / 2;
      var x_offset = tile_width / 4;

      var xpos = row * (tile_width - x_offset);
      var ypos = column * tile_height;

      if (row % 2 == 0) {
        ypos -= y_offset;    // Every other row, offset down
      }

      return this.css({"position": "absolute", "top": ypos, "left": xpos});
    }
  })(jQuery);

This isn’t perfect, but it’s a good place to start. Some problems I’ve got with it:

  • The hexagon pngs are ones I’ve borrowed from the Battle for Wesnoth project, and are a bit irregular (literally – the top edges are shorter than those on the sides). This does seem to make the math easy, though. I should note here that the PNGs are published under the GPLv2, though to be honest, I have no clue how the GPLv2 could possibly apply to images.
  • I’m a little worried that overlaying further PNGs on top of these ones (for map highlighting, roads, player markers, etc) will interfere with the mouseover events, but I’ll need to experiment.
  • I had a lot of trouble trying to get little borders to appear between hexes by monkeying with the positioning values above.

Edit: having recently discovered jsfiddle, I uploaded a fiddle of this there. I’ve also managed to clean up the code so that it is currently in much better shape on jsfiddle than it is on github, but the github version will be changing and it will be nice to keep a clean version of the basic idea around.