Posts Tagged ‘CSS’

BDD + CSS = DDCSS, introducing a new dsl for the web.

Tuesday, March 8th, 2011

Usually when I have an idea, I try to code it first. Then, often, I get halfway through, get sidetracked by something else, the idea never gets finished and so I never get to write about it. This time I thought I would experiment and write the idea up first, before writing any code. Then, if I never get to build it and it becomes famous, at least I can say, “Well, I thought of that ages ago”.

I’m a big fan of Dan North, especially some of the concepts he pioneered, like BDD. It really changed the way I thought about code; literate programming may have been around for some time, but this was my first introduction to it.

I’ve also been around presentation code and CSS for a long time. In fact, it was one of the first declarative languages I mastered. When I was earning a living through CSS, it was more of an art form: a careful balance between the unrealistic requirements of print-turned-digital designers and browser quirks, where performance and simplicity were key to creating what you could call clean code.

Now the challenges with CSS are different. You don’t have to worry so much about cross-browser quirks, as the older, out-of-date browsers have less influence. You can use more CSS3 properties that take away a lot of the old pain (rounded corners, for example) and enable more of the print designs to be realised. Performance can be automatically optimized with compressors. But you still have the problem of the language of CSS itself.

There have been some attempts to make CSS more like a classic programming language, following the tenets of inheritance and mixins. To be honest, I think this misses the point.

The most common use of CSS is to enable a coder to convert the static designs of a designer into a working web site. But the static designs are only part of the story. In agile we say that a story card is a placeholder for a conversation. When you start a story, it is your job to go to the designer and have that conversation. The conversation transfers knowledge about the design. For example, you might go and ask for the specific hex colour of a module, the number of pixels of padding it has, or the radius of the rounded corners.

Then you convert that data into CSS. So CSS is already a DSL, just not a very good one. It does not satisfy the programming domain and it does not satisfy the design domain. So, ok, we could go the way of LESS and try to make CSS more like a programming language, but why not go the other way, like we do in BDD? Why not use natural language from the design domain to help us create more readable code that generates CSS, code that the designer and programmer alike can use to achieve their goals.

Introducing DDCSS, or Design Driven Cascading Style Sheets.

Let’s take the following element of design from a famous website:

Now in terms of CSS, it would look something like this:

.moduleA {
    border: 1px solid red;
    margin: 0 10px 5px 10px;
    padding-top: 10px;
    width: 200px;
}


Now in DDCSS it would look more like this:

    a generic Module A has
    a 1px solid red border,
    a 0 margin-top,
    a 10px margin-left and margin-right and padding-top,
    a 5px margin-bottom,
    a 200px width.

So, not much difference so far, so why bother? Well, how often is CSS this simple? More likely, you write this CSS and then it doesn’t work in a certain scenario, so you go back to the designer and you find out about a variation. Then your code begins to look something like this:

.moduleA {
    border: 1px solid red;
    margin: 0 10px 5px 10px;
    padding-top: 10px;
    width: 200px;
}

.situationB .moduleA {
    width: 150px;
}


And this is where DDCSS starts to look better and much clearer than the raw CSS.

    a generic Module A has
    a 1px solid red border,
    a 0 margin-top,
    a 10px margin-left and margin-right and padding-top,
    a 5px margin-bottom,
    a 200px width.

    Given Module A is inside Situation B
    Then Module A has
    a 150px width.

Now, using a DSL more like this, we could start to easily share code between our designer and programmer. What’s more, it creates an easily accessible way for the two to have a discussion: if your DDCSS is starting to look really messy, or there is a lot of repetition with only tiny changes, then maybe it’s time to take it back to the designer and say we have a consistency problem. Not only that, but it’s something you could take to a third party like a product owner to look through and discuss what could be changed. Finally, it allows small changes to be edited directly by the designer. It becomes both a code-generating style guide and a reference to write tests against.

So, finally, now I’ve written about it - anyone interested in building it?
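To make the idea a little more concrete, here is a minimal, speculative sketch of what a DDCSS-to-CSS translator might look like for the simple statements above. The grammar, and the “Module A” → “.moduleA” naming rule, are my own assumptions, not a specification:

```javascript
// A speculative sketch, not a real tool: translate simple DDCSS
// statements into CSS. The grammar and the "Module A" -> ".moduleA"
// naming rule are assumptions.
function ddcssToCss(source) {
    var selector = null;
    var decls = [];
    source.split("\n").map(function (line) {
        return line.trim();
    }).filter(Boolean).forEach(function (line) {
        var head = line.match(/^a generic (.+) has$/i);
        if (head) {
            // "Module A" -> ".moduleA"
            var words = head[1].split(/\s+/);
            selector = "." + words[0].charAt(0).toLowerCase() +
                       words[0].slice(1) + words.slice(1).join("");
            return;
        }
        // "a 10px margin-left and margin-right" -> two declarations
        var stmt = line.replace(/[,.]$/, "")
                       .match(/^a (.+?) ((?:[\w-]+)(?: and [\w-]+)*)$/);
        if (!stmt) return;
        stmt[2].split(" and ").forEach(function (property) {
            decls.push("    " + property + ": " + stmt[1] + ";");
        });
    });
    return selector + " {\n" + decls.join("\n") + "\n}";
}
```

Running this over the first example would give back the .moduleA rule; the Given/Then scenario form would need a second pass that emits descendant selectors like .situationB .moduleA.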

When using canvas, make sure you set the width and height explicitly

Friday, August 27th, 2010

According to this explanation on Stack Overflow, it appears that if you set the width and height of your canvas element in CSS

canvas {
    height: 20px;
    width: 20px;
}


whenever you render vectors on the canvas they will appear stretched, as the CSS only sets the container size, not the canvas’s drawing-surface size. So you have to do something like this:

    var canvas = document.getElementById("canvas");
    canvas.setAttribute("width", "20");
    canvas.setAttribute("height", "20");

How. Strange.
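For what it’s worth, a tiny helper makes the intent explicit (the function name is my own; the width/height properties map to the attributes and set the real drawing-surface resolution, while style would only scale the rendered box):

```javascript
// Set the canvas's intrinsic drawing-surface size via its width/height
// properties (equivalent to the attributes), not via CSS.
function sizeCanvas(canvas, width, height) {
    canvas.width = width;    // drawing-surface pixels, not CSS pixels
    canvas.height = height;
    return canvas;
}
```

e.g. sizeCanvas(document.getElementById("canvas"), 20, 20);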

Where to put your CSS hacks - conditioning your conditionals.

Wednesday, March 3rd, 2010

I’ve had/heard/seen this argument many times, on blog after blog. So I thought it would be useful to write a post highlighting the upsides and downsides of each approach.

Conditional Comments

Conditional commenting is the practice of putting code in special comments in your HTML document that only the specified IE browsers act on. It usually looks something like this:

    <!--[if lte IE 7]>
        <link rel="stylesheet" href="ie-fixes.css" />
    <![endif]-->

Pros:

  • Keeps hacks separate, making the main style sheet look clean
  • Allows for automated validation of the main style sheet
  • Enables clean, easy use of completely browser-specific enhancement code like filters and expressions
  • Is backwards compatible

Cons:

  • Encourages people to write browser-specific CSS instead of writing better CSS (broken windows theory)
  • Decoupling of styles can result in more bugs when people forget to update the conditional stylesheet; bugs can also be harder to track down
  • Is an extra HTTP request

Inline CSS Hacks

Inline CSS hacks are where you write *hacked* property declarations in your CSS, using combinations of characters that take advantage of parsing bugs in different browsers. They look something like this:

    .moduleA {
        width: 200px;   /* all browsers */
        *width: 198px;  /* IE7 and below (star hack) */
        _width: 196px;  /* IE6 and below (underscore hack) */
    }

Pros:

  • Keeps hacks together with the real code for easier tracing/debugging
  • Less likely duplication of code

Cons:

  • Encourages people to use hacks instead of writing better CSS (broken windows theory)
  • Stops automated validation of CSS, as hacks are in the core code
  • Hacks can be unreliable and have adverse effects on browsers other than the infamously un-robust IE family
  • Is not backwards compatible; if the bug a hack relies on gets fixed, rendering will break in later browsers

Ultimately it’s a preference thing, and you can spin these pros and cons either way to support your chosen method of development, but once a method has been chosen all developers need to stick with it. I think the important thing is that everyone remains vigilant that both are used with extreme caution and care, as an absolute last resort in the CSS.

I see two useful things that could be created as a follow-up to encourage the desired behaviour:

  • Setting up some code that analyses the amount of CSS versus the amount of hacks or conditionals, then having a theoretical limit, say 5%, that you are not allowed to exceed for a successful build.
  • Having a rule that every hacked rule gets a detailed, reasoned three-line comment describing why, and in what circumstances, it is required.
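The first of those ideas might be sketched like this. The regexes and the 5% threshold are rough assumptions of mine; a real version would use a proper CSS parser and also count conditional-comment rules:

```javascript
// Count star-/underscore-hacked declarations against all declarations
// and fail the build if the ratio goes over a threshold.
function hackRatio(css) {
    var decls = css.match(/[^{};]+:[^{};]+(?=[;}])/g) || [];
    var hacks = decls.filter(function (decl) {
        return /^\s*[*_]/.test(decl);  // *width / _width style hacks
    });
    return decls.length ? hacks.length / decls.length : 0;
}

function buildPasses(css, limit) {
    // default budget: 5% of declarations may be hacks
    return hackRatio(css) <= (limit === undefined ? 0.05 : limit);
}
```

buildPasses(cssSource) would then gate the build step.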

Personally I opt for the conditional method, but that’s because I have a bizarre obsession with automated validation of CSS. See CSSOrder.

CSSUnit : experimenting with unit testing presentation code.

Thursday, October 1st, 2009

Not all developers are created equal.

In a perfect world, everyone would be super diligent and proficient at creating CSS, but in reality this is not the case. Less experienced developers can make mistakes, create inconsistent code or fail to reuse existing code. Even when more than one experienced developer works on a project, you can still end up with inconsistency and repeated code just by dint of different working styles.

I thought I’d start experimenting with automated testing of front-end presentation code, focussing on regression testing. This topic is not really discussed a lot, as the standard response is that there is no replacement for an eyeball test. But humans are by nature unreliable beasts, and I’d like to change that and make front-end development more of an accepted science.

My hope is that by trying out some techniques and bringing them into the forum I can at least start a discussion that results in an advancement of this field. Allowing us to escape some of the common traps we see at the moment and mitigate some of the risks associated with our profession in the same way back end coding has with unit testing.

Existing approaches

This is by no means the first time anyone has looked at this problem. There are other software solutions available, like HP’s WinRunner, but in my opinion they are generally unsuccessful or not fit for purpose. The existing solutions rely on algorithmic, pixel-based comparisons of screens. The process involves a screen being designed and built; a “good version” of the screen being captured and stored; and then, every time the application goes through the build process, the screen being re-captured and compared against the master. Any deviations are noted and the build fails.

Now this works for static screens. But the problem is that most of the time our applications do not have static screens; the content changes dynamically, and therefore, every time it does so, the build process would fail, invalidating the automated nature of the process. Not only that, but these screens render differently in each browser, so you end up taking 4-8 captures, which multiplies the potential for incorrect failures. In essence these tests are too brittle, and so not actually very useful, as they break too easily.

Taking a step back

Given that doing very low-level atomic checking seems to be unhelpful, let’s analyse the actual process a human developer uses to validate a page by eye.

When I look at a page against a design, I don’t compare pixel by pixel. I compare at a higher level. First I look at the design. From the design I create a number of mental rules: in my head I list all the different font variations, the different colors used and the rough layout. If the design has too many variations in these things, then it is inconsistent, and hence bad design, in which case I end up going back to the designers and asking for a higher level of standardisation. Once we have this basic checklist of “design principles”, we can compare the font size, weight and face, compare colors, and compare widths, heights and alignments against the implementation. This enables us to take a design and an implementation and quickly gauge at a high level whether it is likely to be correct.

Taking this principle changed the way I started thinking about unit testing CSS: what if we could formalise this set of design principles and turn it into a programmatic set of rules that we could test each page against?

For this, my experiment led me to create cssUnit, a framework for checking style consistency.
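As a sketch of the kind of check involved (the rule format and function names here are my own assumptions, not cssUnit’s actual API), the idea is to compare an element’s computed styles against a whitelist of values derived from the style guide:

```javascript
// In a browser, `computed` would come from getComputedStyle(element);
// here it is just an object of property/value pairs.
var designRules = {
    "font-size": ["12px", "14px", "18px"],   // the only sizes in the style guide
    "color": ["#333333", "#ff0000"]          // the approved palette
};

function checkStyles(computed, rules) {
    var failures = [];
    Object.keys(rules).forEach(function (prop) {
        if (prop in computed && rules[prop].indexOf(computed[prop]) === -1) {
            failures.push(prop + ": " + computed[prop] + " is not in the style guide");
        }
    });
    return failures;  // an empty array means the element is consistent
}
```

A page passes when every checked element comes back with no failures.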

Where cssUnit might help.

Like all software processes, this one is not suitable for every situation. You will have to evaluate whether CSS unit testing is right for your project; the key factors I would take into account are project lifespan, number of developers and size of site.

The scenarios in which cssUnit testing will be helpful are:

  • a large corporate web presence, where there is an overall style guide but many different sites/microsites maintained by lots of developers
  • where you have a very fluid team - often short-term freelancers
  • where you are training up a young/unfamiliar team of front-end developers
  • situations where there is overlap between front-end and back-end work - but not necessarily the capacity to maintain quality
  • co-located or remote teams

As well as being a tool to maintain quality, unit testing is also a way of communicating and distributing knowledge in a direct fashion: few developers will bother to spend the time reading the style guide, but many will learn the rules through failing tests.

What cssUnit is not.

cssUnit is not a number of things - I thought I would just mention the ones I did not mean it to be

Not cross-browser tested - although in principle it should be easy to make cross-browser, it is not (not yet, anyway). If it proves to be useful, then perhaps there is a case for making it cross-browser.

Not beta - cssUnit is not by any means complete and, as expressed before, is more of an experiment. I have only coded for the scenarios I thought of; I’m sure there are better ways of doing this, but it is at least a starting point to explore how it might be done.

Not a service - lots of testing platforms have become services recently, but this is not currently my aim. There are many directions this could take; at the moment I’m not sure making it a service is the correct path.

Not easy - like all unit testing, cssUnit takes time to set up and implement in the beginning. The hope is that in the long term it saves you more time than it costs. It is also dependent on you having a strict style guide - without standardisation in your design, cssUnit becomes useless.

Animating alpha in PNGs in IE7 causes black artifacts

Tuesday, September 29th, 2009

Just a quick entry to point out something I found out recently. I’ve long had the problem in IE7 where applying opacity to an element with a PNG as a background or image, using jQuery (or any other library), results in the transparent pixels turning black. It seems the ever robust IE7 engine has a bug (oh my god, who would have guessed?), but with an accident and a bit of research I have come up with a workaround.

Back in the days before JavaScript libraries existed, a developer often had to write their own cross-browser transparency functions - somewhat of a rite of passage, I would imagine; at least it was for me.

Having written a few of these routines myself, I was pretty confident I knew how jQuery was achieving this effect cross-browser. In standards-compliant browsers (FF, Safari, Opera, Chrome etc.) one just has to use the opacity property (it wasn’t always this way; back in the pre-FF2 days one had to use the -moz-opacity property as well), but in IE you have to use one of the wondrous non-standard proprietary filters, a prospect about as attractive as participating in a Jade Goody munging contest.

The IE filters are notoriously SHIT and behave in a perverse fashion in response to anything you might try to do with them. But as mentioned previously, the combination of the opacity filter and a transparent PNG (8-bit or 32-bit) has the marvellous side effect of turning any transparent pixels black, until the filter is reset. Like so:


So I spent some time trawling the web for workable answers or solutions, but none presented themselves. But then it struck me that on the same site I was working on, I’d already achieved the effect and it hadn’t broken in IE7. How was this possible? I set about creating some test pages to figure out exactly what was needed for it to work in IE7, and discovered three vital steps that need to be taken.

  • Put the image or background image in a child of the element you are fading and fade the parent.
  • Position the element, it has to be relative or absolute, static doesn’t seem to work.
  • Put a background colour on the parent - this one really sucks but it seems the engine can’t hack it unless you do.
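Putting the three steps together, a sketch of the markup and the fade call might look like this (the class name and image file name are purely illustrative):

```html
<!-- step 2: positioned parent; step 3: explicit background colour -->
<div class="fade-wrapper" style="position: relative; background-color: #fff;">
    <img src="semi-transparent.png" alt="" />
</div>
<script>
    // step 1: fade the positioned parent, not the PNG itself
    $(".fade-wrapper").fadeTo(400, 0.5);
</script>
```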

Then finally you will achieve a result like the following:


So that’s it: embrace the hacks and join me in the Microsoft hell of unmaintainable, bloated, bastardised semi-opaque code. This development lark, it’s fun. Honest.

CSS Pre loaded - avoid FOUC with practical progressive enhancement using JS & CSS inheritance

Sunday, July 26th, 2009

Graceful degradation is an art form. The combination of XHTML, CSS and JavaScript has never been one of coding beauty and will never win you any semantic accolades or the adoration of your clients or peers. Yet it is vital. In this day and age, progressive enhancement is a balance point between accessibility, usability and code optimization; on a large site you cannot afford to overlook it. So the question arises: what best practice can we draw up to make our lives easier and our products better? This article aims to offer a practical look at a set of techniques used to help separate CSS and JS, to make apps more easily degradable, and to rid yourself of the dreaded FOUC (Flash Of Unstyled Content).

I am a firm believer in separating out as much styling information as possible from JS into CSS. From the very basic level of having all static file references, colors and dimensions in CSS class selectors and NOT in the JavaScript, to more advanced techniques of using CSS classes to control your animations (a feature native to MooTools and available in jQuery via one of the many plugins). Anything to do with CSS in the JS needs a damn good reason to be there; otherwise rip it out, stop it cluttering your logic code and put it in the CSS.

So, to illustrate, let’s create a basic example of progressive enhancement with a standard tooltip on an element, where the behavior involves a tooltip appearing on rollover.

<div id="links">
	<a href="#" class="tooltip_information">Link</a>
	<span class="tooltip">Something witty about this link</span>
</div>

Now, the basic initial approach may run along the lines of: wait for the DOM to load using a DOM-ready event, root through the DOM to find all instances of .tooltip, and apply some CSS styles to position them and add the rollover behavior. As follows (all JS written in jQuery syntax):

$(document).ready(function() {
	$("#links").css({position: "relative"});
	$(".tooltip").css({
		position: "absolute",
		top: "5px",
		left: "5px",
		display: "none"
	}).bind("mouseover", function() {
		$(this).css({display: "block"});
	});
});

Fine. It’s degradable, it works, but it’s messy and a bitch for anyone else to maintain and bug-fix. Let’s go through and see what we can clean up. First, let’s move our styling into the CSS.

#links {
	position: relative;
}

.tooltip_information {
	position: absolute;
	top: 5px;
	left: 0px;
	display: none;
}

.tooltip_information_visible {
	display: block;
}

This makes our JavaScript look like this:

$(document).ready(function() {
	$(".tooltip").addClass("tooltip_information").bind("mouseover", function() {
		$(this).addClass("tooltip_information_visible");
	});
});

Sweet. Now, I know this is a fairly simple example; you may want all kinds of fancy ‘fade’ or ‘movement’ effects going on. However, this does show you, on a basic level, how much cleaner your JavaScript becomes if you move your style rules out into the CSS. Also, if you want to add more style changes, it’s a much easier process with a considerably smaller risk of breaking your JS logic by missing a comma or semicolon. Consequently, maintenance could even be taken over by folks who are less JS savvy, leaving you more time to build the next JavaScript implementation of coverflow, or whatever it is UI coders do with their spare time…

Now, here’s where I change the paradigm and flip things on their head somewhat. The previous approach is fine for smaller web sites with limited interactivity, but once you have other things going on and a heavier, longer page, you start to run into speed and load-time problems doing all these element lookups and events. So what can we do to mitigate these? Well, firstly, rather than using JS to add all these class names, let’s leverage the power of CSS inheritance to help us out. Instead of adding a class to the elements, let’s add a generic one to the body; my favorite name is “JSEnabled”.

$(document).ready(function() {
	$("body").addClass("JSEnabled");
	$(".tooltip").bind("mouseover", function() {
		$(this).addClass("tooltip_information_visible");
	});
});

our CSS becomes :

.JSEnabled #links {
	position: relative;
}

.JSEnabled .tooltip {
	position: absolute;
	top: 5px;
	left: 0px;
	display: none;
}

.JSEnabled .tooltip_information_visible {
	display: block;
}

This allows you to deal with multiple JS CSS changes in a much simpler clean cut way and do it quickly in one hit using CSS inheritance, rather than multiple hits using JS to find all your elements. This does wonders for your IE6 load times and allows you to easily test what your site will look like without JavaScript enabled. It also further enhances the ability of others to easily extend the JS/CSS you have written without hacking around inside your precious code.

The final trick, or fairy dust, mentioned in the introduction addresses something that has long hampered progressively enhanced online apps. You are trying to build a progressively enhanced site, but you’ve got a bit of an overweight, lardy DOM, or more likely a poor web connection.
When this happens, the DOM-ready function just doesn’t cut it, and you often get a nasty FOUC where the non-JS page is rendered before the DOM-ready handler kicks in, leaving the user with an ugly little animated scene of the page reordering itself to hide all the fun interactive bits - somewhat like the curtain coming up too early on the stage and the audience seeing all the props being put into place.

So how can we detect whether the page has JS capability before the content loads? We can’t just hard code the JSEnabled class onto the body, as this would break the progressive enhancement for anybody with JavaScript turned off. For a short and concise solution - see the following snippet to be embedded in the head of your HTML, before anything else:

var elements = document.getElementsByTagName("html");
elements[0].className += " JSEnabled";

Effectively this adds the JSEnabled class to the html tag instead of the body, right at the beginning of the page load, allowing you to take advantage of your JSEnabled CSS right from the word ‘go’. It integrates particularly well with the new frameworks coming out at the moment that allow you to passively download modules of your JS code post-load, for secondary functionality, to speed up the initial load time. But most importantly, when the page first loads, everything appears where it’s supposed to!
