Monday, May 12, 2008

Life Tip #1 - Birthday Candles

When you are placing birthday candles on a cake, make sure that you put them the right way up. The base of a candle, once lit, burns faster and more brightly than the wick end of the candle.

This achieves a number of things:

  • The candles burn much faster - you need to belt out "Happy Birthday" just a bit quicker
  • The cake looks a lot more exciting covered in flame - always good unless it's ice cream cake
  • The cake gets covered in wax - never good

I wish I had photos from the birthday cake I witnessed on the weekend. Classic.

Additional:
I looked up Happy Birthday on Wikipedia and did not expect the copyright issues around the song to be so ridiculous. Still, the lyrics come out of copyright at the end of this year. Woo!

Coding Tip #1 - Object Identifiers

When designing an interface that allows callers to request an object from some data storage, ensure that your interface accepts unsigned integers rather than signed ones. If you use a signed integer then you need a guard clause to reject a negative parameter. However, if your identifier type accurately models the data it represents, then the guard clause is implicitly implemented by the parameter itself, and you can drop the explicit guard clause from your code. Cleaner, faster, better code. Everyone is a winner.


Apologies for the image of code. Google Blogger doesn't really allow anything except poorly formatted text.
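In lieu of that image, here's a minimal C++ sketch of the idea. The `CustomerStore` class and its method names are my own invention for illustration, not code from the original post:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical data store. Because the id parameter is unsigned,
// "reject negative identifiers" is enforced by the type system itself:
// no explicit guard clause required.
class CustomerStore {
public:
    // A signed version would need: if (id < 0) { /* reject */ }
    std::optional<std::string> Find(std::uint32_t id) const {
        auto it = customers_.find(id);
        if (it == customers_.end()) return std::nullopt;
        return it->second;
    }

    void Add(std::uint32_t id, std::string name) {
        customers_[id] = std::move(name);
    }

private:
    std::unordered_map<std::uint32_t, std::string> customers_;
};
```

A caller simply cannot pass -1 to `Find` without the compiler complaining, which is the whole point of the tip.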


Note:
Obviously this does not hold true if your information model allows negative identifiers.

Testing Tip #1 - Boundary testing business objects

When testing business objects, for example an object that represents a Person or a Customer, you often have a minimum set of data requirements (e.g. a customer must have a last name) as well as a set of optional data attributes (first name, date of birth, gender, etc.).

To provide full test coverage you would mathematically need 2^(N−M) combinations, where N is the number of attributes and M is the number of mandatory attributes, since each optional attribute can be either present or absent. For example, a customer with eight attributes, one of them mandatory, already gives 2^7 = 128 combinations before you vary any values. I don't think I have ever had enough time to do that much incredibly boring testing, nor would I want to if I had.

In my experience the best pass through is to look at the two bounds. The minimum set is where you specify the object with the bare minimum of attributes, the mandatory attributes, and each of these attributes holds as little data as possible. In our customer example this would be a last name that is a single character in length.

The next test case includes every single attribute populated to its maximum extent. So if you have a twenty character first name you specify all twenty characters. I document this as my all set.

These two test cases have been enough for every system I've tested. Using them you can prove every attribute both fully supplied and either absent or supplied at its bare minimum.


I don't include out-of-bounds testing here. That is, I don't check for 21 character first names, nor do I test for zero length last names. That specific boundary testing I document as separate tests against the user interface. This achieves two things. Firstly, my test cases are granular and relate to a specific business requirement, which means that defects raised are very specific and generally easier to track down. Secondly, it cuts the amount of testing I have to do down to what is most likely to cause defects.

Additional:
If you have more complex data rules, for instance a business rule that states that attribute-x is only provided when attribute-z is set, then that specific combination is already covered by your all set (attribute-z is set) and your minimum set (attribute-z is not set). I would additionally include test cases to ensure that any user interface validation of these attributes occurs.
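The two passes above can be sketched in code. This is a hypothetical C++ `Customer` with invented limits (a 1-20 character mandatory last name, optional attributes up to 20 characters), not a real system:

```cpp
#include <optional>
#include <string>

// Hypothetical business object: last name is mandatory (1-20 characters),
// the rest are optional. All limits are invented for illustration.
struct Customer {
    std::string last_name;
    std::optional<std::string> first_name;  // up to 20 characters
    std::optional<std::string> gender;
};

bool IsValid(const Customer& c) {
    if (c.last_name.empty() || c.last_name.size() > 20) return false;
    if (c.first_name && c.first_name->size() > 20) return false;
    return true;
}

// Minimum set: mandatory attributes only, each with as little data as allowed.
Customer MinimumSet() {
    return Customer{"S", std::nullopt, std::nullopt};
}

// All set: every attribute populated to its maximum extent.
Customer AllSet() {
    return Customer{std::string(20, 'a'), std::string(20, 'b'), "female"};
}
```

Both objects should pass validation; anything outside those bounds (empty last name, 21 character first name) belongs in the separate user interface tests.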

Sunday, May 11, 2008

Automated deployment - Why it is a good idea

Grant raised the concept of automated deployment in a comment on a post I made the other day about controlling test environments. I've been giving it a bit of thought of late and I've come up with a quick test you can do to see if you should automate your deployment process. As you go through the test you will also see various ways automated deployment can improve your development practices.

Once Upon a Time...
Let me start with a tale of woe.


For the past two weeks I've been working with a developer who has been lumped with some less than satisfactory code. Also lacking is a suitable development environment for him to work within. In an effort to get something suitable into test I have been working with him to solve the various problems. This past week has seen about 15-25 deployments into test (don't get me started on unit testing). When he is not around, another person from his team will do the work. Every one of them at some point failed to deploy correctly.

Why? Firstly, their application is made of a number of disparate components that are all deployed individually. They don't have version numbers to identify the latest build. They are deploying to a cluster. They are rushing to get late code into test. The testers don't have control of the test environment (yet).


Consider this
If your deployment is as simple as a point and click install and that is it, your failure points are: installing the right version and actually doing it (don't laugh). Two failure points; ignore the triviality for now. If you have a configuration file to manually adjust, there is another point. Add them up, one for each step you have to do in order. If you have deployment instructions, add one more, as those have to be followed; if you don't have deployment documentation, add 1 million points. If you have to rebuild a fresh machine each time, add 1 for each step. If you are smart and have a prepared image, add a single point if you need to deploy it each time, zero points if you don't.

I think you are starting to get what gives you points and what reduces your score. Stay with me as we go back to our example:

I don't know the full details of the application, but I know there are about five components, each of which needs to be deployed. So 5 x 3 (version, doing it, following instructions). Three of the installed components need to be turned on each time, so that is 3 more points. 18 is our score so far.

How many machines do you have to deploy to? One host? Score one for the number of targets. Clustered array of four machines? Score 4. Pretty simple scoring system. Write this down as your targets score.

For our example we have two hosts load balanced. So we score 2.

How frequently are you going to deploy your code? Once per iteration? How many iterations in the project? 10? Score one point each time you deploy per iteration, times iterations. Record this as frequency.

How many test environments are there? Dev, Dev-Int, Test, Non-functional, Release Candidate Integration, Pre-Production, Production? That's seven from what is, in my opinion, a pretty regular configuration. 7 points. Once again, in my example, just one: test. Add it up for your environment score.


Failure Points
Ok, so our formula for calculating the failure points of a deployment:

Failure Points = deployment-steps X frequency X environment X targets

Example = 18 x 15 x 1 x 2 = 540 points of failure

Not bad. You may argue that once you have tested your deployment you shouldn't have any failures after that. It is tested after all. That is a good point, but remember we are not even talking about deployment testing here, just vanilla dropping an application onto a box.
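To make the arithmetic concrete, the whole scoring system boils down to a single multiplication (a sketch of my scoring idea, not an industry-standard formula):

```cpp
// Failure points = deployment-steps x frequency x environments x targets.
// Every manual factor multiplies the number of chances to get it wrong.
unsigned FailurePoints(unsigned steps, unsigned frequency,
                       unsigned environments, unsigned targets) {
    return steps * frequency * environments * targets;
}
```

For the example above, 18 steps, 15 deployments, 1 environment and 2 targets, `FailurePoints(18, 15, 1, 2)` comes out at 540.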

We (this is a team project after all, shared wins/shared losses) had 540 chances in a one week period to stuff up an aspect of the deployment process. Aside from the code failures, we had probably 10 deployment failures, including not installing the code onto both machines in the cluster. Those particular defects are about as much fun to detect as a race condition.

Automated Deployment
How much you automate will directly impact the chances of deployment failure. Our two constants for the act of deployment were: actually doing it and installing the correct version.

Performing the work is now done by the auto-deployer. You still need to click the go button for certain environments. Automatic deployment implies that the latest valid build is used, so that problem is solved.

Individual deployment steps should be wrapped up into your installer. I mean every step: installing software, opening ports on routers, configuration files. If you do some research you will find somebody has already automated it for you, or there is an API for it. If by chance that isn't done, do it yourself and then share the love.

Next up is the deployment to each machine in the cluster. Once again this should be handled by your autodeployer. So that one is fixed; score a zero.

After that was the total number of deployments. That shouldn't change as long as your autodeployer is operational and you click the go button as required. You should be down to a score of 5 (once for each environment from test onwards).

With our example we should go from 540 failure points to 5, one for each deployment that has occurred over the past week, triggered by the test team as required. There are no other manual steps.

Bonus Feature
If the latest build is unusable for testing, allow the testers to flag it as such (Build Quality) and have the autodeployer ignore that build for future deployments.


Conclusion
You may realise by now that I have been a little bit over the top with my example. Furthermore, you don't deploy to every environment every iteration. You and I know this, but it won't change your score that much. You may also think of more places in which the scoring system should change. Post them as a comment and I'll put together a little spreadsheet you can use.

I am not going to tell you how to automate your deployment process. I've got an idea on one way to do it and I'll post about it when I've done it. In the meantime here are a couple of other ideas to get you started (thanks to Grant for these):

  • Use PsExec
  • Use PuTTY if you are not on a Windows box
  • Via TFS Build here and here

Before I go, some more juicy content: your autodeployer should not be used until you have tested it through all environments, including deployment into production.

Wednesday, May 7, 2008

Dalmore - 12yo

For the past year and a half to two years I have been a part of the Australian Single Malt Whisky Club. Each month they send me a bottle of single malt whisky that I would otherwise be unable to purchase from the local alcohol merchants. It's a passion that started after a fantastic time I had exploring Scotland's whisky in 2004.

Today I was delivered a fancy bottle of Dalmore 12yo. I'll quote the site as they don't have a permalink.

Dalmore is literally "the big meadowland". The distillery is situated north of the traditional highlands, drawing its water from the Alness River, near the city of Inverness.

Colour: Rich, deep, golden mahogany.

Nose: Intense and firm. Well structured with silky smooth malty tones - a hint of Oloroso sherry lingers in the background. It shows great finesse, extolling fragrances of orange, marmalade and spiced notes.

Taste: Good attack on the mouth, more elegance than muscle. The aged Oloroso butts smooth its rich, fleshy body with great harmony. Almost a concentrated citric mouth-feel captivates and tantalises the middle part of your tongue. An aftertaste of great abundance rewards the palate. A Highland malt of great distinction.

Not sure if all of that actually applies to the whisky. My nose and palate are not as honed as whoever wrote that. I find whisky descriptions a lot like real estate advertisements: you never really know until you give it a good look yourself, and you learn to be wary of "renovator's delights". This is especially relevant to whisky.

So I have my dram and shall give it a go. This is live whisky-blogging. The interwub is a powerful beast often abused. :)

Colour: all correct, it has that fantastic colour to it. Nose: not sure. It's a little blocked and my ability to detect faint scents has never been tip-top. Sorry to disappoint you all, but from what I can tell there is no marmalade.

The taste is delightful, strong yet smooth and the faint citrus on the tip of your tongue is present. I rather like this.

Onto the shelf it goes for an occasion that warrants it, like a guest. So if you ever come around for dinner, ask for a dram of the Dalmore 12yo, to (probably) misquote Iain Banks in his book Raw Spirit:

The perfect size for a dram is one that pleases the guest and the host.

Monday, May 5, 2008

NIN - This one's on him.

I've been a fan of Nine Inch Nails since the mid 90's, somewhere around the time Broken came out, but it was Pretty Hate Machine that was the album I heard and fell in love with.

Over the past six months Trent has become more and more aware of, and involved with, his significant and loyal fan base, as well as fully embracing the concept of Creative Commons. It's been pretty special to watch him go from four full, very good albums (Pretty Hate Machine, The Downward Spiral, The Fragile, With Teeth) over a period of sixteen odd years to three albums (Year Zero, Ghosts I-IV and The Slip) over the next two.

This is pretty awesome from a fan perspective. We get lots of new content and each album is increasingly free or low cost. Radiohead were the first to trial this process with their album In Rainbows, after which Trent marketed the Saul Williams album The Inevitable Rise and Liberation of Niggy Tardust as free, or $5 if you were so inclined. I didn't pay the $5 as Saul Williams isn't my style of music. I still downloaded it and had a listen. I liked a couple of songs a lot, but the rest were, as I was expecting, not what I'm interested in.

Now you may think, $5 is pretty cheap, you should fork over the cash. You still have the album, don't you? My argument is that $5 is cheap, but it's not a micro-payment. It still has a value, and while Saul Williams has not got any money from me this time, the next time I will be interested again, because I know there is a chance I may like his album. He has gained something that usually costs a lot of money: exposure.

The exposure aspect is important. Trent and Nine Inch Nails have exposure, so when Ghosts I-IV came out I first ordered the free version because I was a little short of cash. Some while after, I purchased the version that was right for me. Turns out it's the glossy double CD pack for $75, $70 more than the cheapest version. Why? Firstly, I like buying CDs. Sure it's wasteful, but you get more than the music; you get something tangible to hold and a series of images that are somehow more real than a PDF document of the same thing. There is effort in a NIN package that correlates to the effort that goes into each track. Secondly, because I roughly know where the money is going: to the artist, for the work he has put into the album.

Trent made a lot of money off Ghosts and it is a decent album, not his best, not his worst either. A worthy item in the NIN catalogue.

So what is the point of this post about free music? Nine Inch Nails released an album today called "The Slip". As he says in his news post: this one is on him. You can't dislike an artist who gives as much as he gets.

Whether or not the flurry of productivity coming from Trent will have an impact on the quality of his work is a question that can only be answered in hindsight. As a fan I'm completely biased and won't be ashamed of that.

For those who like Nine Inch Nails. Enjoy!


note: I call him Trent because I got tired of writing Trent Reznor very quickly.

Sunday, May 4, 2008

Lol

I saw this ad on Facebook. I wonder if the extra income I could be earning is related to editing advertising material for grammatical errors.