How to manage dependencies when unit testing JavaScript.

November 3rd, 2012

In my recent post with the presentation about unit testing and Jasmine, there was one slide missing. That slide should have been titled “Dealing with dependencies”, and it should have read:

Dependencies: Mock EVERYTHING.

The goal of writing unit tests is to test the smallest part of your code. These tests should be fast and stable.

But modules have dependencies; it’s a fact of life. Anything without dependencies or dependants isn’t very useful at all.

When you start testing your dependencies inside your unit tests, the tests become slower and less stable; they turn into integration tests. Integration tests are useful, but they are more expensive to run and should run later in your pipeline.

So what’s the solution? As I said before:

Dependencies: Mock EVERYTHING.
Let’s create a contrived example of a module with a dependency:

var ModuleA = function(dependencyB) {

  var calculateInvoice = function(amount, hours) {
    return dependencyB.addTAX(amount * hours);
  };

  return {
    calculateInvoice: calculateInvoice
  };
};

And a Jasmine unit test to test it:

#include dependencyB; //pseudo code

describe("Module A", function() {

  describe("the calculateInvoice method", function() {

    it("should calculate the correct amount for the invoice", function() {

      var hours = 10,
          rate = 20,
          testModule = new ModuleA(dependencyB),
          newInvoiceAmount = testModule.calculateInvoice(hours, rate);

      expect(newInvoiceAmount).toEqual(235);
    });
  });
});

In the test above, where I’ve included the real dependency, what we’ve actually created is an integration test. It tests that the two modules work together.

Now let’s imagine the “addTAX” method in “dependency B” changes. It’s been a hard year, so the government has decided to raise VAT in order to pay for bailing out some more ailing banks.

What will happen to our test?

As soon as another developer goes in and changes the tax amount from 0.175 to 0.25, our test will start mysteriously failing.

Why? Is the code broken? No. Is the test broken? Yes.

The expectation that “newInvoiceAmount” should equal 235 is now incorrect; it should now equal 250.
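For clarity, the arithmetic behind those two numbers, using the hours and rate from the spec and the tax rates mentioned below, works out like this:

```javascript
// hours and rate come from the spec; 1.175 and 1.25 are the old and
// new tax multipliers discussed in the post.
var amount = 10 * 20;            // hours * rate = 200
var oldTotal = amount * 1.175;   // 235, the old expectation
var newTotal = amount * 1.25;    // 250, what addTAX now returns
```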

So what do we do?

In practice, this means that when creating our Jasmine unit test for “moduleA” we need to mock out “dependency B”, which we would do with code that looks something like this:

var mockedDependencyB = {};
mockedDependencyB.addTAX = jasmine.createSpy('addTAX').andReturn(235);

Add that to the rest of our test and we get:

 it("should calculate the correct amount for the invoice", function() {

      var hours = 10,
          rate = 20,
          mockedDependencyB = {},
          testModule,
          newInvoiceAmount;

      mockedDependencyB.addTAX = jasmine.createSpy('addTAX').andReturn(235);
      testModule = new ModuleA(mockedDependencyB);
      newInvoiceAmount = testModule.calculateInvoice(hours, rate);

      expect(newInvoiceAmount).toEqual(235);
  });

Now when we run our test, we are safe from refactorings going on in our dependencies: we know when the code is really broken, and we can have greater confidence that our tests are showing us real errors. It doesn’t matter if our tax rate changes again, because we are not testing that any more; we are testing that the logic in “moduleA” does what we think it should.

Like everything, there are some exceptions to the rule, instances where you may want the real dependency, but in most cases you can remember the simple rule:

Dependencies: Mock EVERYTHING.

Unit testing with Jasmine

October 18th, 2012

I recently gave a talk on unit testing with Jasmine. Below are the slides.

Update: I have uploaded the files I used for my live demo to GitHub: Jasmine-rhino. They include a sample project and a runner script which will easily integrate with Jenkins.

Writing your own Geohash algorithm, part 1

October 26th, 2011

Sometimes you find yourself with a random-seeming computer science problem that you just want to roll up your sleeves and get your hands dirty with. I had one of those moments about four months ago.

Now, four months may seem like a really long time, but as the classic lean saying goes, results are not the point. It may have taken four months, but in those four months I learned some valuable lessons about computer programming. I thought it would be worth sharing them.

The problem.

The first part of the process is to frame the problem. On the site we have some functionality that means whenever you drag the map, the URL bar updates with the new latitude/longitude.

Which is fine.


Unless you live in China.

In China you are not allowed to display latitude/longitude. Instead you must hide it from the sight of your viewers, in case they see it and realise the world is not the distorted place their government has told them it is. Go figure. So my problem was how to save the location, via the URL bar, without using the lat/long.

Googling this topic leads you to three likely-looking solutions.


The Geohash

The Geohash appears to be a relatively new (2008) invention by Gustavo Niemeyer; you can find the implementation at

I spent some time replicating this technique in JavaScript. Essentially, it involves systematically storing deltas (or I guess diffs, for people who know source control). The starting premise is that a latitude, for example, exists between -90 and 90 degrees. The first delta identifies whether it is in the first or second half of that range. So if I took the latitude 52.51607, I would programmatically ask: is that between -90 and 0, or between 0 and 90? Clearly it would be the latter, so my first delta would be 1; if it had been -52.51607, my delta would be 0. Doing this recursively, I can create a pattern of 1s and 0s that slowly drills down to my desired latitude. Then, once I’ve done that, I can take the binary string, convert it to integers between 0 and 31, and represent it as a string using base32.
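As a sketch of that bisection idea (a hypothetical helper of my own, not Niemeyer’s implementation), the delta-recording loop might look like:

```javascript
// Repeatedly halve the range, recording 1 if the value sits in the upper
// half and 0 if it sits in the lower half. For latitude the initial range
// is [-90, 90]; for longitude it would be [-180, 180].
function encodeRange(value, min, max, bits) {
    var result = "";
    for (var i = 0; i < bits; i++) {
        var mid = (min + max) / 2;
        if (value >= mid) { result += "1"; min = mid; }
        else              { result += "0"; max = mid; }
    }
    return result;
}

encodeRange(52.51607, -90, 90, 3);  // "110", first delta is 1, as above
encodeRange(-52.51607, -90, 90, 3); // "001"
```

Interleaving the longitude and latitude bit strings and chunking them into 5-bit groups gives the integers that get mapped to base32 characters.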

It means that for the coordinate pair 57.64911,10.40744 you end up with a geohash of u4pruydqqvj, which is a saving of six characters (including the points and the comma delimiter). Not bad. But what are the other options?

Patent No. 7,302,343

Working for Nokia, I have the luxury of being able to tap into a vast treasure trove of both Microsoft and Nokia patents without the fear of being sued. Now, I’m not saying I think software patents are a good thing, but it was a relief when I found that this patent, described as a compact text encoding of longitude/latitude, was owned by Microsoft rather than Apple. It seemed to be exactly what I was looking for.

This technique takes the long/lats and converts them into non-negative integers. So, for example, 47.64932 converts to 18,583,657. To be honest, don’t ask me why this works; I followed the formulas in the paper and, boom, got the right number, but the theory behind converting to a non-negative number is lost on me. Then, with this integer, you can generate a base-n string (in this case they again used base32).
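The final base-n step, at least, is mechanical. A sketch of turning a non-negative integer into a base-32 string (this uses the standard geohash alphabet; the patent may well use a different one):

```javascript
// Peel off base-32 digits by repeatedly taking the remainder mod 32.
// This alphabet is the geohash one (a, i, l and o are omitted).
var BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

function toBase32(n) {
    var s = "";
    do {
        s = BASE32.charAt(n % 32) + s;
        n = Math.floor(n / 32);
    } while (n > 0);
    return s;
}
```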

So with the example coordinate pair 47.6493,-122.12926 you end up with a hash of ry7cx4tp95, a saving of nine whole characters! Pretty damn impressive, but there was one final method I wanted to research, mainly because it was named after a place in my homeland.

The Maidenhead Locator System


Developed in 1980 in Maidenhead, England, this system was designed by VHF managers and is used by amateur radio enthusiasts. The Maidenhead Locator System was created with the use case of being able to transmit a location to within 12km accuracy as a simple six-character string over Morse code.

It essentially works by converting the world map into a grid based on degrees and assigning each cell a letter or a number. Each character in the string is precisely positioned, and different positions have different boundaries. An example of a Maidenhead Locator System hash is:


Stolen from Wikipedia, below is a run-down of what each of the characters does.

  • Character pairs encode longitude first, and then latitude.
  • The first pair (a field) encodes with base 18 and the letters “A” to “R”.
  • The second pair (square) encodes with base 10 and the digits “0” to “9”.
  • The third pair (subsquare) encodes with base 24 and the letters “A” to “X”.
  • The fourth pair (extended square) encodes with base 10 and the digits “0” to “9”.

Interesting: a system invented in the ’80s that achieves a small, human-readable hash of a lat/long in only eight characters, but alas with one large drawback, which is that it’s not as accurate as I need.
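The pair-by-pair scheme in that run-down translates into code quite directly. A sketch (my own hypothetical encoder, down to subsquare level only):

```javascript
// Shift lat/long into non-negative ranges, then peel off one pair of
// characters per precision level: field (base 18), square (base 10),
// subsquare (base 24). Longitude comes first in each pair.
function maidenhead(lat, lon) {
    lon += 180; // 0..360
    lat += 90;  // 0..180
    var A = "A".charCodeAt(0);
    return String.fromCharCode(A + Math.floor(lon / 20)) +        // field
           String.fromCharCode(A + Math.floor(lat / 10)) +
           Math.floor((lon % 20) / 2) +                           // square
           Math.floor(lat % 10) +
           String.fromCharCode(A + Math.floor((lon % 2) * 12)) +  // subsquare
           String.fromCharCode(A + Math.floor((lat % 1) * 24));
}

maidenhead(52.5, 13.4); // "JO62QM", central Berlin
```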

Having invested a large amount of time replicating each of these techniques in JavaScript, I realised that what I needed was a hybrid system, using the Maidenhead Locator concept as a basis. In part 2, I will explore the JavaScript I need to write a geohash that matches the precision of the Geohash and Microsoft patent solutions with the character count of the Maidenhead system.

Look ma, no hands! Simulating dragging in the browser.

May 12th, 2011

Following on from yesterday’s post: now I know how to measure the framerate of my application, I need to simulate the dragging behaviour of the user.

One can do this in something like Selenium or WebDriver, but I’d like to do mine in JavaScript, which actually turns out to be not so difficult. All we need to do is create a drag path, generate some HTML events and fire them on the correct object. For this example I will use the jQuery draggable page.

First things first, we need to get some key points. What does the drag path look like? Where do I want my draggable item to go?

For the HTML mouse events, as a minimum for dragging, I need to know four variables: screenX, screenY, clientX and clientY. Let’s write a quick jQuery function (for this article I am being a lazy JavaScripter) that allows me to click on the screen, output the mousemove data to the console, and then end.

$(document).mousedown(function() {
    $(document).mousemove(function(event) {
        console.log([event.screenX, event.screenY, event.clientX, event.clientY]);
    }).mouseup(function() {
        $(document).unbind("mousemove mouseup");
    });
});


The output I get from that gives me a set of arrays of points that my drag animation must pass through:


So first let’s declare that in another array:

var dragPoints = [
    [353, 298, 143, 158], [353, 303, 143, 163], [354, 315, 144, 175],
    [356, 333, 146, 193], [359, 350, 149, 210], [362, 365, 152, 225]
]; // [screenX, screenY, clientX, clientY]; example values, yours will differ

OK, so what next? Well, now we need a function to fire the HTML events.

var dispatchHTMLMouseEvent = function(mouseEventType, coords, target) {

    var evt = document.createEvent("MouseEvents");
    evt.initMouseEvent(mouseEventType, true, true, window, 0,
        coords[0], coords[1], coords[2], coords[3], false, false, false, false, 0, null);
    target.dispatchEvent(evt);
};

This function takes three arguments: the event type (which for us is mousedown, mousemove or mouseup), the coordinates to move to, and the element to target when firing the event.

So finally we need the timer function to periodically fire the events.

var sendMouseDrag = function(element, dragPoints) {
    dispatchHTMLMouseEvent("mousemove", dragPoints[i], element);
    if(i < dragPoints.length - 1) {
      i++;
      setTimeout(function() {
          sendMouseDrag(element, dragPoints);
      }, 10);
    } else {
        dispatchHTMLMouseEvent("mouseup", dragPoints[i], element);
    }
};

And then the code that calls it all:

var i = 1;
var element = document.getElementById("draggable");
dispatchHTMLMouseEvent("mousedown", dragPoints[0], element);
sendMouseDrag(element, dragPoints);

And that’s it! Now if you plug that code into the console of Chrome or Firefox on that page, you will see the draggable box magically fly itself around. The full executable source code is below.

var dragPoints = [
    [353, 298, 143, 158], [353, 303, 143, 163], [354, 315, 144, 175],
    [356, 333, 146, 193], [359, 350, 149, 210], [362, 365, 152, 225]
]; // [screenX, screenY, clientX, clientY]; example values, yours will differ

var sendMouseDrag = function(element, dragPoints) {
    dispatchHTMLMouseEvent("mousemove", dragPoints[i], element);
    if(i < dragPoints.length - 1) {
      i++;
      setTimeout(function() {
          sendMouseDrag(element, dragPoints);
      }, 10);
    } else {
        dispatchHTMLMouseEvent("mouseup", dragPoints[i], element);
    }
};

var dispatchHTMLMouseEvent = function(mouseEventType, coords, target) {

    var evt = document.createEvent("MouseEvents");
    evt.initMouseEvent(mouseEventType, true, true, window, 0,
        coords[0], coords[1], coords[2], coords[3], false, false, false, false, 0, null);
    target.dispatchEvent(evt);
};

var i = 1;
var element = document.getElementById("draggable");
dispatchHTMLMouseEvent("mousedown", dragPoints[0], element);
sendMouseDrag(element, dragPoints);

Measuring framerate with JavaScript

May 11th, 2011

Yesterday I started on what I thought was a fairly trivial task: measuring the framerate of my application.

Although I had never done it before, I knew the theory. At least, that’s what I thought. Create a timer. Get the time. Compare it to the last time. Average the result. And hey presto, you’ve got yourself a framerate timer. So I went off, found a few other implementations and hacked together some code.

I also played around with Mr.doob’s stats.js project on GitHub, which has a very pleasant interface indeed and can be installed as a bookmarklet, but sadly has no way to retrieve the data.

As I meandered around the internet absorbing information, I stumbled upon an article from Mozilla and, lo and behold, my world was rocked. It turns out that timers in JavaScript are independent of the render loop. This means they could run several times while the screen is being repainted, or indeed the screen could be repainted several times during a particularly slow JavaScript execution. Balls. I needed a new solution.

Kindly, however, the article offered some hope: the window.mozPaintCount property, which records the number of times the screen has been painted, starting when the document begins loading.

So, I hacked myself together a small amount of new code:

var lastFrame = window.mozPaintCount;
setInterval(function() {
    console.log(window.mozPaintCount - lastFrame);
    lastFrame = window.mozPaintCount;
}, 1000);

Obviously this only works in Firefox, but it seems to provide the correct output. However, it came back with some confusing results.

Try plugging it in using the Firebug console, for example. I get back the following results:

257, 12, 11, 13, 12, 12, 11, 11, 12, 13, 11, 13, 12, 13, 11...

Ignore the first number, which is just the delay before the code is executed. You’ll notice a pattern: even when nothing is happening, the screen is repainted on average 12 times a second. It would appear that the browser throttles down the framerate when nothing is happening and ramps it up when the DOM or JavaScript is chucking changes into the viewport.

So I thought I would test this by creating another little snippet. This one creates a div and updates it every 60 milliseconds to see if the browser keeps pace.

var tmpNode = document.createElement("div");
tmpNode.style.cssText = "position: absolute; top: 0; left: 0; width: 10px; height: 10px; background-color: blue; z-index: 998345";
document.body.appendChild(tmpNode);
setInterval(function() {
    var newStyle = (tmpNode.style.display === "block") ? "none" : "block";
    tmpNode.style.display = newStyle;
}, 60);

So I loaded that and the framerate code into the browser on Google’s home page, and started seeing the following results:

378, 25, 29, 25, 24, 27, 26, 21, 25, 26, 26, 26

Hmmm, again curious. So it doesn’t exactly honour my 60ms updates, but it does bump the refresh rate up to around 25 frames a second.

Confused yet? I am.

indexOf in if statements is dangerous

April 23rd, 2011

I was recently cleaning up some legacy code, looking for an intermittent bug and I encountered the following code:

    var data = "someData";
    if(data.indexOf("e")) {
        //do something here
    }
Now, in 99.9% of cases the string was the same so the “e” was present and the code entered into the condition and did the right thing.

However, in the remaining 0.1% the code looked like this:

    var data = "enterSomeData";
    if(data.indexOf("e")) {
        //do something here
    }

This scenario would fail, but at first glance it’s not obvious why, until you remember what indexOf returns and what counts as a falsy value in JavaScript. Consider:

if ("foo".indexOf("z")) // returns -1 which evals to true
if ("foo".indexOf("f")) // returns 0 which evals to false
if ("foo".indexOf("o")) // returns 1 which evals to true

The bug was primarily hiding because there were never any cases where the “e” was not present at all, so arguably you could remove the whole if clause as unnecessary. However, the person who wrote the code must have assumed that indexOf returns a falsy value if the “e” is not present and a truthy value if it is: a not-so-unimaginable leap and, indeed, the same mistake I made when I first looked at the code.

So in my book, that’s what Douglas Crockford would label a bad part of the language: a part with the potential to trip you up if you’re not paying attention. Ideally, something like JSLint would check that you are comparing the return value of indexOf with either >= 0 or !== -1.
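To make the intent explicit (a trivial helper of my own, not from the original code):

```javascript
// Comparing against -1 removes the falsy-zero trap entirely:
// a match at position 0 is still a match.
function contains(str, ch) {
    return str.indexOf(ch) !== -1;
}

contains("enterSomeData", "e"); // true, even though indexOf returns 0
contains("someData", "z");      // false
```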

BDD + CSS = DDCSS, introducing a new DSL for the web.

March 8th, 2011

Usually when I have an idea I try to code it first; then, often, I get halfway through, get sidetracked by something else, the idea never gets finished, and I never get to write about it. This time I thought I would experiment and write the idea up before writing any code. Then, if I never get to build it and it becomes famous, at least I can say, “Well, I thought of that ages ago”.

I’m a big fan of Dan North, especially some of the concepts he pioneered, like BDD. It really changed the way I thought about code; literate programming may have been around for some time, but this was my first introduction to it.

I’ve also been around presentation code and CSS for a long time. In fact, it was one of the first declarative languages I mastered. When I was earning a living through CSS it was more of an artform: a careful balance between unrealistic requirements from print-turned-digital designers and browser quirks, where performance and simplicity were key to creating what you could call clean code.

Now the challenges with CSS are different. You don’t have to worry so much about cross-browser quirks, as the older out-of-date browsers have less influence, and you can use more CSS3 properties that take away a lot of the old pain (rounded corners, for example) and enable more of the print designs to be realised. Performance can be automatically optimised with compressors, but you still have the problem of the language of CSS itself.

There have been some attempts to make CSS more like a classic programming language, following the tenets of inheritance and mixins. To be honest, I think this misses the point.

The most common use of CSS is to enable a coder to convert the static designs of a designer into a working web site. But the static designs are only part of the story. In agile we say that a story card is a placeholder for a conversation. When you start every story, it is your job to go to the designer and have a conversation. That conversation transfers knowledge about the design. For example, you might go and ask for the specific hex colour of a module, the number of pixels of padding it has, or the radius of its rounded corners.

Then you convert that data into CSS. So CSS is already a DSL, just not a very good one: it satisfies neither the programming domain nor the design domain. So, OK, we could go the way of LESS and try to make CSS more like a programming language, but why not go the other way, like we do in BDD? Why not use natural language from the design domain to help us create more readable code that generates CSS, code that designer and programmer alike can use to achieve their goals?

Introducing DDCSS, or Design driven cascading style sheets.

Let’s take the following element of design from a famous website:

Now in terms of CSS, it would look something like this:

.moduleA {

    border: 1px solid red;
    margin: 0 10px 5px 10px;
    padding-top: 10px;
    width: 200px;
}

Now in DDCSS it would look more like this:

    a generic Module A has
    a 1px solid red border,
    a 0 margin-top,
    a 5px margin-bottom,
    a 10px margin-left and margin-right and padding-top,
    a 200px width.

So, not much difference so far. Why bother? Well, how often is CSS this simple? More likely, you write this CSS and then it doesn’t work in a certain scenario, so you go back to the designer and find out about a variation. Then your code begins to look something like this:

.moduleA {

    border: 1px solid red;
    margin: 0 10px 5px 10px;
    padding-top: 10px;
    width: 200px;
}

.situationB .moduleA {

    width: 150px;
}

And this is where DDCSS starts to look better and much less terse than the raw CSS.

    a generic Module A has
    a 1px solid red border,
    a 0 margin-top,
    a 5px margin-bottom,
    a 10px margin-left and margin-right and padding-top,
    a 200px width.

   Given Module A is inside Situation B
   Then Module A has
   a 150px width.

Now, using a DSL like this, we could start to easily share code between our designer and programmer. What’s more, it creates an easily accessible way for the two to have a discussion: if your DDCSS is starting to look really messy, or there is a lot of repetition with only tiny changes, then maybe it’s time to take it back to the designer and say we have a consistency problem. It’s also something you could take to a third party, like a product owner, to look through and discuss what could be changed. Finally, it allows small changes to be edited directly by the designer. It becomes both a code-generating style guide and a reference to write tests against.
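To make the idea concrete, here is a very rough sketch of how the simplest DDCSS lines might translate into CSS. Everything here is hypothetical, the grammar included, since the language doesn’t exist yet:

```javascript
// Turn a line like "a 1px solid red border," into "border: 1px solid red;".
// The last word before the trailing punctuation is treated as the property
// name; everything between the leading "a" and it becomes the value.
function translateLine(line) {
    var m = line.trim().match(/^a\s+(.+?)\s+([\w-]+)[,.]?$/);
    return m ? m[2] + ": " + m[1] + ";" : null;
}

translateLine("a 1px solid red border,"); // "border: 1px solid red;"
translateLine("a 200px width.");          // "width: 200px;"
```

A real implementation would obviously need a proper grammar for the compound lines ("a 10px margin-left and margin-right...") and for the Given/Then scenarios, but this is the general shape of the translation.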

So, finally, now I’ve written about it - anyone interested in building it?

When using canvas, make sure you set the width and height explicitly

August 27th, 2010

According to this explanation on Stack Overflow, it appears that if you set the width and height of your canvas element in CSS

canvas {

    height: 20px;
    width: 20px;
}

whenever you render vectors on the canvas they will appear stretched, as the CSS only sets the container size, not the canvas’s internal drawing-surface size. So you have to do something like this:

    var canvas = document.getElementById("canvas");
    canvas.setAttribute("width", "20");
    canvas.setAttribute("height", "20");

How. Strange.

Where to put your CSS hacks - conditioning your conditionals.

March 3rd, 2010

I’ve had/heard/seen this argument many times on blog after blog, so I thought it would be useful to highlight the upsides and downsides of each approach.

Conditional Comments

Conditional commenting is the practice of putting code in special comments in your HTML document that get executed only in specified IE browsers. It usually looks something like this:

    <!--[if IE 6]>
    <link rel="stylesheet" type="text/css" href="ie6.css" />
    <![endif]-->

  • Pros

  • Keeps hacks separate, making the main style sheet look clean.
  • Allows for automated validation of the main style sheet.
  • Enables clean, easy use of completely browser-specific enhancement code like filters and expressions.
  • Is backwards compatible.

  • Cons

  • Encourages people to write browser-specific CSS instead of writing better CSS (broken windows theory).
  • Decoupling of styles can result in more bugs when people forget to update the conditional stylesheet; bugs can also be harder to track down.
  • Is an extra HTTP request.

Inline CSS Hacks

Inline CSS hacks are where you write *hacked* property-value pairs in your CSS, using combinations of ASCII characters to take advantage of bugs in different CSS parsers, looking something like this:

    .moduleA {
        width: 200px;   /* all browsers */
        *width: 198px;  /* IE7 and below (star-property hack) */
        _width: 196px;  /* IE6 and below (underscore hack) */
    }

  • Pros

  • Keeps hacks together with the real code for easier tracing/debugging.
  • Less likely duplication of code.

  • Cons

  • Encourages people to use hacks instead of writing better CSS (broken windows theory).
  • Stops automated validation of the CSS, as the hacks live in the core code.
  • Hacks can be unreliable and have adverse effects on browsers other than the infamously un-robust IE family.
  • Is not backwards compatible: if a hack gets fixed, the rendering will break in later browsers.

Ultimately it’s a preference thing, and you can spin these pros and cons either way to support your chosen method of development; but once a method has been chosen, all developers need to stick with it. The important thing is that everyone remains vigilant: both approaches should be used with extreme caution and care, as a last resort, in the CSS.

I see two useful things that could be created as a follow-up to encourage the desired behaviour:

  • Setting up some code that analyses the amount of CSS versus the amount of hacks or conditionals, with a theoretical limit, say 5%, that you are not allowed to exceed for a successful build.
  • Having a rule to write a detailed, reasoned three-line discourse in comments, describing why and in what circumstances each hacked rule is required.

Personally I opt for the conditional method, but that’s because I have a bizarre obsession with automated validation of CSS. See CSSOrder.

Cleaning up production code with JSLint. Once and for all.

November 24th, 2009

Sit back. Close your eyes. Imagine the scene.

Your client is sat round a big three-inch-thick mahogany table in a tastefully decorated 1930s art deco hotel conference room. They lean back in their reclining leather chairs whilst sipping chilled Harrods mineral water served in crystal wine glasses. The leather creaks. The sun pierces the cloud, glinting through the gaps in the blinds and filling the room with a coruscating light, a dazzle you are praying to match in the events that follow. Formalities are exchanged and the weather is discussed. A thin veil of cultural formality gives you a brief respite from the reason you all know you are there.

You tap the mouse connected to the MacBook. The projector flickers into life. The first page of your masterpiece is lit up, each pixel a glorious testament to your craftsmanship and the pain, sweat and blood you’ve slaved over for the past iteration. Each click through the journey provides the perfect accompaniment to your commentary, going deeper and deeper into your world, building a crescendo that signifies your masterful control of your medium. Click follows click. Oooh follows ahhh. And then…



Has this or something along similar lines ever happened to you before?

Your heart sinks. Your face burns as the blood rushes like a stampede of elephants into your cheeks. You blush. You stutter a laugh and make a self-deprecating witticism about the realities of live demos. You click OK. You proceed with the demo. Inside you are crying tears of shame and remorse. “Why didn’t I catch that before?” you think.

Which, by the way, is wrong.

If you had more time to think about it, the question you should actually have asked is “why did that happen at all?”. To which the answer is: it shouldn’t.

Debug code should never make it into the production environment. It’s bad practice, plain and simple. So the question becomes: “how do we stop it?”.

On my latest project I’ve been working with CI, or Continuous Integration. I won’t go into the details of how CI works; you can find that out yourself.

As part of this, I’ve been using three very important tools: Maven, JSLint4Java and JSLint itself.

Maven is a build tool that allows me to add the marvellous JSLint4Java ant task to the build. This means that when Maven is triggered by the CI system, JSLint gets run on all the JavaScript and CSS files in my project. If they fail the validation, the build fails. Simple as that.


Now, this gives us a lot of power to help enforce code consistency, valid code and good code.

But up until recently, JSLint did not have the functionality to check for the JavaScript globals alert, console, debug or opera. As of the 19th of November 2009, after I suggested a console check on the JSLint working group forum, the ability to check for the globals console, alert, opera, prompt and debug has been added to JSLint by programming/JavaScript legend Douglas Crockford himself.


The new functionality comes under the ASSUME keyword, which you switch off using the following code:

/*devel : false, debug : false */

will fail the file if any of those statements are present in the code.

This brings us one step closer to the holy grail of front-end code RELIABILITY and eradicates the scenario I wrote about at the beginning of the post. But it still doesn’t completely protect you from looking like an idiot in front of the client, so take care!