Posts Tagged ‘automated testing’

A short story about the value of automated testing.

Saturday, July 9th, 2011

For the last few weeks, my team has been working on some pretty fundamental changes for Nokia maps on the web.

But that’s the short term. For the last six months my team has been working hard on getting a suite of automated tests up and running, and I’m proud to say we now have close to 1000 Jasmine unit tests and 200 Cucumber acceptance scenarios running on every commit.
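For context, a Jasmine unit test is a small spec that describes one behaviour and asserts on it. The sketch below is a hypothetical example (the real suite and its code are not shown here); a tiny `describe`/`it`/`expect` shim stands in for Jasmine itself so the file runs standalone under Node.

```javascript
// Minimal stand-in for Jasmine's describe/it/expect so this runs alone.
const results = [];
function describe(name, fn) { fn(); }
function it(name, fn) {
  try { fn(); results.push([name, true]); }
  catch (e) { results.push([name, false]); }
}
function expect(actual) {
  return {
    toEqual(expected) {
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        throw new Error('Expected ' + JSON.stringify(expected) +
                        ' but got ' + JSON.stringify(actual));
      }
    }
  };
}

// A hypothetical unit under test: formatting coordinates for display.
function formatCoords(lat, lon) {
  return lat.toFixed(4) + ', ' + lon.toFixed(4);
}

describe('formatCoords', function () {
  it('rounds to four decimal places', function () {
    expect(formatCoords(52.520008, 13.404954)).toEqual('52.5200, 13.4050');
  });
});

results.forEach(([name, passed]) =>
  console.log((passed ? 'PASS ' : 'FAIL ') + name));
```

With hundreds of specs like this running on every commit, a change that breaks behaviour elsewhere in the codebase fails loudly within minutes, which is exactly what happened in the story below.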

So I was delighted when I heard that, this week, one of the new changes had broken 52 of our acceptance tests.

Why should I be delighted?

Doesn’t that mean the team now has to go away and fix 52 acceptance tests? Well, yes, and that was certainly the way the team saw it, until I said:

“Now imagine we didn’t have those tests. What would have happened?”

And then we all started to realise the power of automated testing.

So if we didn’t have automated testing, what would have actually happened?

Well, someone would have gone into the code. Found the piece of code that needed changing. And changed it. Probably tested it in the one place they knew about. Committed it. And moved on. Completely oblivious to the fact that they’d broken the application in 51 other different places.

Then the build would have gone to QA, the QAs would have found the 52 different breakages, raised 52 new bugs, and sent them back to the developers.

Then the developers would have gone into the code, tried to fix those 52 bugs, and probably caused another 50 regressions, which would then go to the QA team, be raised as bugs, and go back to the developers for fixing. Eventually this hugely inefficient cycle would have continued until the build reached an extremely brittle stability, a bit like building a house of cards, and it would have gone live.

However, with the tests, those bugs were caught. Immediately. And fixed. Immediately. Those 52 broken acceptance tests, saved us potentially months and months of rework.

So the moral of the story is: when you reduce the feedback cycle of a bug with the safety net of automated tests, you can develop with confidence. The tests warn you of what you’ve broken and give you the opportunity to fix it then and there (though having the discipline to do so is another story). In Lean, this is referred to as ‘building integrity and quality into the product from the beginning of the cycle’, rather than trying to add it in at the end with a torturous QA cycle.

Where to put your CSS hacks - conditioning your conditionals.

Wednesday, March 3rd, 2010

I’ve had/heard/seen this argument many times on blog after blog, so I thought it would be useful to highlight the upsides and downsides of each approach.

Conditional Comments

Conditional commenting is the practice of putting code in special comments in your HTML document that get processed only by specified IE browsers. It usually looks something like this:
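The post’s original example isn’t shown here, but a typical pattern (hypothetical file names) loads the main style sheet everywhere and an IE-specific one only in the targeted browsers:

```html
<!-- Loaded by all browsers -->
<link rel="stylesheet" href="main.css">

<!-- Only IE 7 and below parse the contents of this conditional comment;
     every other browser treats it as an ordinary HTML comment -->
<!--[if lte IE 7]>
  <link rel="stylesheet" href="ie-fixes.css">
<![endif]-->
```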

  • Pros

  • Keeps hacks separate, keeping the main style sheet clean
  • Allows automated validation of the main style sheet
  • Enables clean, easy use of completely browser-specific code like filters and expressions
  • Is backwards compatible
  • Cons

  • Encourages people to write browser-specific CSS instead of writing better CSS (broken windows theory)
  • Decoupling of styles can result in more bugs when people forget to update the conditional style sheet, and those bugs can be harder to track down
  • Is an extra HTTP request

Inline CSS Hacks

Inline CSS hacks are where you write *hacked* property–value pairs in your CSS, using combinations of ASCII characters to take advantage of bugs in different CSS parsers, looking something like this:
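The post’s original example isn’t shown here, but two well-known hacks of this kind (illustrative values) are the star and underscore prefixes, which exploit parser bugs in older IE:

```css
.box {
  margin-left: 10px;  /* standards-compliant browsers */
  *margin-left: 12px; /* star hack: only IE7 and below parse this */
  _margin-left: 14px; /* underscore hack: only IE6 and below parse this */
}
```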

  • Pros

  • Keeps hacks together with the real code for easier tracing/debugging
  • Less duplication of code
  • Cons

  • Encourages people to use hacks instead of writing better CSS (broken windows theory)
  • Stops automated validation of CSS, as the hacks sit in the core code
  • Hacks can be unreliable and have adverse effects on browsers other than the infamously un-robust IE family
  • Is not backwards compatible: if the parser bug a hack exploits gets fixed, rendering will break in later browsers

Ultimately it’s a preference thing, and you can spin these pros and cons either way to support your chosen method of development, but once a method has been chosen all developers need to stick with it. The important thing is that everyone remains vigilant: both techniques should be used with extreme caution and care, as a last resort in the CSS.

I see two useful follow-ups that could help enforce the desired behaviour:

  • Setting up a build check that measures the amount of hacks or conditionals relative to the total CSS, with a threshold (say 5%) that you are not allowed to exceed for a successful build.
  • Requiring a short, reasoned comment (a few lines) alongside each hacked rule, describing why and in what circumstances it is required.
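The first idea could be sketched roughly as below. This is a hypothetical implementation, not an existing tool: it counts declarations matching a few known hack patterns (star and underscore prefixes, the `\9` suffix) against all declarations, and flags the build when the ratio exceeds the limit.

```javascript
// Hypothetical build check: fail if hacked declarations exceed a ratio.
const HACK_PATTERNS = [
  /\*[a-z-]+\s*:/i,        // star hack, e.g. *margin-left: 10px
  /_[a-z-]+\s*:/i,         // underscore hack, e.g. _height: 100px
  /[a-z-]+\s*:[^;]*\\9/i   // \9 suffix hack, e.g. color: red\9
];
const MAX_HACK_RATIO = 0.05; // the "theoretical limit", here 5%

function hackRatio(css) {
  // Crude declaration matcher: "property: value" runs inside rule bodies.
  const decls = css.match(/[^{};]+:[^{};]+(?=;|\})/g) || [];
  const hacks = decls.filter(d => HACK_PATTERNS.some(p => p.test(d)));
  return decls.length ? hacks.length / decls.length : 0;
}

// Example: one hacked declaration out of three total.
const sample = '.nav { color: #333; *zoom: 1; } .nav a { float: left; }';
const ratio = hackRatio(sample);
console.log('hack ratio: ' + ratio.toFixed(2));
console.log(ratio > MAX_HACK_RATIO ? 'BUILD FAILED' : 'BUILD OK');
```

A real version would parse the CSS properly rather than regex-matching it, and would run over every style sheet in the project as part of the commit build, in the same spirit as the automated tests described in the previous post.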

Personally I opt for the conditional method, but that’s because I have a bizarre obsession with automated validation of CSS. See CSSOrder.