Archive for May, 2009

Is frontend & backend done the wrong way round?

Thursday, May 14th, 2009

I’ve been thinking about this post for a few weeks now. What stopped me from writing it initially was that I couldn’t decide which analogy to use: I had two possible contenders, the car industry or the housing industry. The car industry is generally the better comparison, as it is so well regarded as the pinnacle of efficiency, but the housing industry seems to make more sense in this context.

So I want to start by thinking about how we build a room in a house. Let’s say we have the shell of a house and want to create a bedroom to sleep in.

Firstly we’re going to need walls, which will be made by our carpenters, who create a frame and screw it into place. They then add drywall, prefabricated plasterboard that acts as a base layer for the wall. Then the plasterer comes along and applies a layer of render to the wall; it dries, and then we can put wallpaper on it.

Now in web development, what currently seems to be the practice is for front-end developers to be given a head start on templates, a lead-in time of one iteration, say two weeks. Then on the next iteration the back-end team pick up the stories, develop the back-end functionality and integrate the front-end templates.

To me this is the house process in reverse: the plasterer has two weeks to create a render, the wall then gets built, and the carpenters have to try to apply the render to the walls themselves. This has two outcomes: the carpenters get frustrated because they don’t specialise in plastering and can’t see why it isn’t right, and the wall looks shit, so the plasterers have to be called in again to redo it. This appears very wasteful and pointless.

We should be moving towards a process where the back end builds the bare bones of the site two weeks before the front end gets to it to apply the final render. This would enable the front-end coders to apply the superficial layer and plaster over any cracks that may have appeared, leaving the back end free to focus on providing a solid base layer.

The future of testing.

Sunday, May 10th, 2009

Wow. I just watched an inspired talk by James Whittaker of Microsoft. On first inspection it does appear as if he’s just consumed 176 cups of coffee, but if you wind up to his speed and aren’t offended by his informal, ‘off-the-wall’ style, he’s actually got some pretty interesting points.

The insights from the gaming industry are very interesting. The ‘heads-up display’ is a cracking idea; I’ve always wished there was a way of looking inside the JavaScript engine to watch my code getting executed, rather than having to rely on breakpoints and step-overs.
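You can fake a crude version of that visibility yourself. A minimal sketch (my own illustration, nothing from the talk): wrap a function so every call and return value gets logged, giving a running trace of execution without setting a single breakpoint.

```typescript
// Minimal sketch: wrap a function so every call and return value gets logged,
// giving a crude execution trace without stepping through a debugger.
function traced<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
  return (...args: A): R => {
    console.log(`${name}(${args.join(", ")}) called`);
    const result = fn(...args);
    console.log(`${name} returned ${result}`);
    return result;
  };
}

// Usage: the traced version behaves identically but narrates what it's doing.
const add = traced("add", (a: number, b: number) => a + b);
add(2, 3); // logs "add(2, 3) called" then "add returned 5"
```

It’s a poor man’s heads-up display, but it shows the kind of running commentary I’d like the tooling to give me for free.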

I would also like to see further adoption of heat maps to analyse and visualise code, and a way of generating live, reactive decision trees would be absolutely f*cking awesome.

The concepts of cloud testing and packaging up VMs to enable quick reproduction of tests are also quite important, and again I’m sure they will be adopted and modified by the web community.

One thing he missed, and something I am quite keen to see adopted, is the recording of testers’ actions. A fair number of bugs that can’t be reproduced come down to the tester not realising what events took place to construct that scenario. With recorded screens, the developers and testers can watch the full set of events unfold and diagnose the problem far quicker, as there may be things happening that don’t manifest visually but that the developer knows are occurring in the background.

At the beginning of the talk he shows a Microsoft envisioning video that he then claims could never work due to bugs. I disagree: with people like him around, and business leaders realising the full benefits of proper testing, we can make that happen, and sooner than one might think.

Seeking the perfect build process.

Monday, May 4th, 2009

I am currently working on a project with a multi-stage continuous integration build process.

Every time you commit, a new build gets triggered and flows down the pipeline of testing stages. Problem is, it doesn’t work very well.

One of the primary problems is that the tests being run are incredibly brittle and unstable, which causes a lot of broken builds. Which in turn prevents people checking in. Which in turn creates a backlog of check-ins. Which results in people forgetting what the code they need to check in does. Which results in more broken builds. Which results in a myriad of other problems, including people losing code, so massive wastage. Very un-lean.

So the process is broken. The solution is that a broken build shouldn’t stop other developers from working.

How do we implement this?

Well, one suggestion is an automated revert process: every time a build fails, it automatically reverts to the last good build. This works in principle: a developer commits, it breaks the build, the build reverts, the next developer checks in, the broken commit gets pushed to the back of the queue, and there’s an incentive to check in working code.
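To make that concrete, here’s a rough sketch of the auto-revert idea. This is my own illustration, not our actual setup: the npm commands are placeholders standing in for the real multi-stage pipeline, and it assumes a plain git checkout.

```typescript
// Rough sketch of the auto-revert idea. Assumes a git checkout; the npm
// commands are placeholders for the real multi-stage pipeline.
import { execSync } from "child_process";

function run(cmd: string): void {
  execSync(cmd, { stdio: "inherit" });
}

function buildAndMaybeRevert(): void {
  const head = execSync("git rev-parse HEAD").toString().trim();
  try {
    run("npm run build && npm test"); // stand-in for the full pipeline
    console.log(`Build passed for ${head}`);
  } catch {
    // Build broke: put trunk back to the last good state so other
    // developers aren't blocked from checking in.
    run(`git revert --no-edit ${head}`);
    run("git push origin HEAD");
    console.log(`Build failed: reverted ${head}`);
  }
}

buildAndMaybeRevert();
```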

However, in reality the build is not an instantaneous process. It takes around 15 minutes to get through all the various stages. Arse.

This means the above scenario actually plays out as: a developer checks in broken code, the build runs, meanwhile another developer checks in working code that requires some bit of the broken code, the build fails, the build reverts, and the next check-in also fails because it requires the reverted code. Everyone stabs each other in the face.

So, how do we solve this?

Well, let’s create an analogy: look at the build as a Lego brick wall, where each commit is a new brick in the wall. Someone clicks in a brick that isn’t structurally sound. What are the other bricklayers doing?

Well, some of them are working on adjacent walls, and their brickwork is unaffected by removing the broken brick, so their bricks should remain unchanged as long as they don’t sit upon the broken brick or any of the bricks that were installed at the same time as it.

Bricks that were built on top of the broken brick, or on any of the bricks installed at the same time as it, well, they’re fucked. Because it’s Lego, you can’t just slide out the broken brick and replace it; you have to dismantle the wall and take out any bricks above it in order to replace it, which results in a large number of angry brickies f’ing and blindin’ and going off to read The Sun and get a bacon sandwich.

So one way to fix this is to change the build process. Instead of letting people build on top of other bricks before they’re checked for stability, you take their bricks and hold them until the bricks they rely on have been checked. If the parent bricks are safe, the dependent bricks are put on and checked themselves. If further bricks are put on, they join the dependent queue and are checked in turn.

Voila, you have a smooth parallel build process.
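For the curious, here’s a minimal sketch of that queueing idea. It’s my own illustration, not our actual setup: `runPipeline` stands in for the real 15-minute build, there’s a single queue, and no real CI server in sight. The principle is just that a commit only ever gets applied on top of the last commit that has already passed.

```typescript
// Minimal sketch of the "hold the dependent bricks" build queue. Everything
// here is illustrative: `runPipeline` stands in for the real multi-stage build.
type Commit = { id: string; apply: (base: string) => string };

async function gatedBuild(
  queue: Commit[],
  runPipeline: (tree: string) => Promise<boolean>,
  lastGood: string
): Promise<string> {
  for (const commit of queue) {
    const candidate = commit.apply(lastGood); // rebase onto the verified base
    if (await runPipeline(candidate)) {
      lastGood = candidate; // brick is sound: it joins the wall
    } else {
      // Broken brick: reject it alone; later commits still build on lastGood,
      // so nobody has to dismantle the wall.
      console.log(`rejected ${commit.id}, wall stays at ${lastGood}`);
    }
  }
  return lastGood;
}
```

A real system would build speculative chains in parallel rather than strictly one at a time, but the idea is the same: a brick only joins the wall once the bricks beneath it are known to be sound.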

Augmented reality: just another cheap trick?

Sunday, May 3rd, 2009

Augmented reality has had a big buzz around it recently, with videos like this coming to the fore:

But I think this misses the point. Although it’s very technically clever, the point of augmented reality is being able to affect the reality of the user; this is just a video of someone’s affected reality, which has no more impact than a normal video. He might as well have done the work in post-production with After Effects and it would have looked a lot better. A much better example of the future of AR can be seen here:

 

Having said that though, there are a lot of promising things coming out that are different and are allowing everyday users to feel the power of AR.

AR’s modern roots lie in the software we saw emerge years back that allowed you to use your webcam to play poor-quality interactive games mapped onto your surroundings. It has recently come to prominence with some easy-to-use Flash libraries and interesting alternative applications. So I thought I’d delve into the reality of what this means for the general populace.

Firstly, there are some really great implications for digital art that have already surfaced. For example, the Tagged in Motion demo by the artist DAIM http://www.youtube.com/watch?v=d4WZpYFRhg4&feature=related is really interesting and begins to give an idea of the scope of AR and what the future might hold. The idea of using the real world as a kind of canvas to add virtual elements to is certainly not new; as well as being a bit of eye candy from sci-fi films, it seems quite close to an idea promoted on TED a few months ago, where product information is projected onto items you are looking at. One can almost envisage a Twitter-style world where you subscribe to people’s augmentation feeds and, as you wander round the globe, you run across virtual graffiti and artifacts that your peers have scattered around.

The second obvious application is games and real-world simulations that allow people to interact both with each other and with a mixed real and virtual world, as shown in this HP advert:

 http://www.youtube.com/watch?v=BUOHfVXkUaI&feature=player_embedded

Although I’m slightly skeptical: what happens if the game decides to take your child into a notoriously dangerous neighbourhood littered with hookers, gangs and crack dens? Then the game’s stakes become slightly higher…

Finally, there are the straight-up commercial applications. I’m sure the notorious media ad merchants, who delight in nothing less than forcing their ads into every available orifice conceivable, are drooling at the ability to insert their adverts into yet another virtual layer of reality, creating a Minority Report-style world where a virtual Jamie Oliver follows you around the supermarket dispensing nuggets of cheeky cockney cooking wisdom.

So what holds us back from this wave of new content delivery? Practically and technologically speaking, the idea of wearing an enormous computer and a giant headset while wandering around seems a little far-fetched, but with screens implanted in contact lenses http://www.physorg.com/news119797260.html and the processing power of mobile phones advancing at an unstoppable rate, maybe it’s not that far off.

On a social level, I wonder how people will take to integrating with a new virtual layer of reality. Do we already have enough to deal with in the one reality we can see, or is there scope for an ever-expanding number of layers where augmentation provides an experience of the world we could never have achieved before?