Friday, May 30, 2008

Hark, where art thou, keys?

I was talking to my mothra (mum) last night and she had recently found an email I had sent back in August 2005. If I remember correctly, I sent this to my boss because I was late and couldn't find my car keys.

I’ve been looking for half an hour, to the point I’ve looked into the shower.
I’ve searched high and low, left and right. In a bank of snow, with and without light.
I’ve looked far and wide, deep and shallow. Under my feet and, under my pillow.
So if you see my keys then let me know. A reward you will see, to work I will go.
To ease my thoughts and waste my time. I had a coffee and wrote this rhyme.

I think it took a few days to find my keys. They had fallen down the back of one of our La-Z-Boy chairs and had gotten caught in the internal framework. This meant that they couldn't be found by putting your hand down the side of the chair, nor could they be seen by lifting the chair off the ground. Fun!

Have a good weekend!

Thursday, May 29, 2008

Attachments and Requirements in QualityCenter with C#

Previously I covered creating new requirements and how to handle requirement post failure. Today I'll cover attachments, which are fairly trivial but have a gotcha that might not be immediately obvious.

To add an attachment to a requirement, the requirement must already have been posted; otherwise you will get an exception. The following code shows how to add the attachment. It is very simple.

code - adding attachments to requirements
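In lieu of the image, a rough sketch of the C# is below. The names come from my memory of the OTA type library (AttachmentFactory, Attachment and the TDATT_FILE attachment type), so treat them as approximate rather than gospel; req is a requirement that has already been posted.

private static void AttachFile (TDAPIOLELib.Req req, string path)
{
    // The requirement exposes an attachment factory, much like the ReqFactory used to create it.
    TDAPIOLELib.AttachmentFactory factory = (TDAPIOLELib.AttachmentFactory) req.Attachments ;

    // Same DBNull trick as when creating the requirement itself.
    TDAPIOLELib.Attachment attachment = (TDAPIOLELib.Attachment) factory.AddItem (System.DBNull.Value) ;

    attachment.FileName = path ;  // e.g. "D:\\Path\\to\\my\\image.jpg" - see Debugging below
    attachment.Type = (int) TDAPIOLELib.TDAPI_ATTACH_TYPE.TDATT_FILE ;
    attachment.Post () ;
}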

Debugging
When you set the path of the image it will be adjusted from something similar to: "D:\\Path\\to\\my\\image.jpg"
to:
"C:\\DOCUME~1\\user\\LOCALS~1\\Temp\\TD_80\\504857af\\Attach\\ REQ1722\\D:\\path\\to\\my\\image.jpg"

Whether this is an artifact of the Quality Center API or the .NET development environment is beyond my knowledge of either.

The gotcha?
You have to add your attachments after creating any child objects. Otherwise every single child requirement (and their children) will get a flag added to it saying that a parent requirement has changed and that you should review the potential impacts of such a change. Sometimes you want this, sometimes you don't. If you do it accidentally it can take a long time to remove all those flags.

Wednesday, May 28, 2008

Handling Post failure with QC Requirements in C#

Yesterday I covered creating new requirements and updating custom fields. Today I will discuss post failure and why handling it is non-trivial.

Quality Center has a rule that no two requirements may share the same name where they have the same parent. If you attempt to save such a requirement using the QC UI, it gives you an error and the user tries again. If you try using C#, you get a COM exception.

It also resets your object.

I couldn't believe it either. It means you can't just add an incremental counter to the requirement name and try posting again until it works. It means that if you are building your requirement up from a source document, you have to go and redo your work. This is a pain if you were using recursion.

There is a solution, and it is not complicated either. What I did was to define a new class called SafeQcRequirement, which owns a standard QC requirement object and each of the attributes that you care about. In a worst-case scenario you are duplicating all of the attributes. In my case it was only a handful.

As you build up your requirement, store all the data in the safe requirement. When it comes time to post, build the real QC requirement and post it. If that fails, rebuild it with a new name and try again. Keep going until you succeed.

The following screenshot is what my post routine resembles.

code in a bitmap, hideous... apologies... a solution is almost here.
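Since the bitmap is hard to read, here is roughly the shape of it in text. The field names and the duplicate-name message check are placeholders rather than the production code:

public void Post (TDAPIOLELib.ReqFactory reqFactory)
{
    Post (reqFactory, 0) ;
}

protected void Post (TDAPIOLELib.ReqFactory reqFactory, uint attempt)
{
    // Rebuild the real QC requirement from the data held in the safe requirement.
    TDAPIOLELib.Req req = (TDAPIOLELib.Req) reqFactory.AddItem (System.DBNull.Value) ;
    req.Name = (attempt == 0) ? m_name : m_name + " (" + attempt + ")" ;
    req["QC_USER_01"] = m_customValue ;  // ...and so on for the handful of attributes we care about

    try
    {
        req.Post () ;
    }
    catch (System.Exception ex)
    {
        // COM doesn't give us anything more specific, so inspect the message text.
        // The exact wording of the duplicate-name error here is illustrative only.
        if (ex.Message.Contains ("duplicate"))
        {
            Post (reqFactory, attempt + 1) ;
            return ;
        }
        throw ;
    }
}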

The second protected post method is a recursive loop that just tries until it succeeds. There are no guards against stack overflow or against more requirements than a uint can count. Based on my usage scenarios this is unlikely. If it does occur, shotgun not testing.

In actual fact, any duplicate requirement names are recorded and fed back to the User Centered Design people (who created the source document) and they update their source document so that the duplicates no longer exist.

Regarding the exceptions: you don't get a useful exception from COM, so just capture the base exception and use the message text to determine if it is the one we care about.

Our requirements are sourced from an Axure document, and in the next week I'll put up a post about parsing that document for requirements. Tomorrow I'll cover adding attachments to requirements, which is fairly trivial but still has a few gotchas.

Tuesday, May 27, 2008

Creating a Quality Center requirement using C#

Unfortunately Quality Center's Open Test Architecture (OTA) API doesn't have any documentation for C#. It's all in Visual Basic, which can make it a little bit of a trial to work out what is required. In the case of creating requirements, the documentation states:
Passing NULL as the ItemData argument creates a virtual object, one that does not appear in the project database. After creating the item, use the relevant object properties to fill the object, then use the Post method to save the object in the database.
Sounds simple enough until you find that null doesn't work, and neither does a null object of the type, a null object, zero, nothing, or about 15 other things we tried that could be considered null. In the end we needed to pass in the singleton System.DBNull.Value object.

TDAPIOLELib.Req req = (TDAPIOLELib.Req) m_reqFactory.AddItem (System.DBNull.Value) ;
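For completeness, m_reqFactory above is the requirement factory hanging off the OTA connection, roughly like the following (the server URL, domain, project and credentials are placeholders; m_reqFactory is a member field of type TDAPIOLELib.ReqFactory):

TDAPIOLELib.TDConnection connection = new TDAPIOLELib.TDConnection () ;
connection.InitConnectionEx ("http://qcserver:8080/qcbin") ;
connection.ConnectProjectEx ("MY_DOMAIN", "MY_PROJECT", "username", "password") ;
m_reqFactory = (TDAPIOLELib.ReqFactory) connection.ReqFactory ;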

Once you have created a requirement, you can start setting the value of each attribute. C# properties are provided for the standard attributes, while custom attributes must be set using the indexer.

req["QC_USER_01"] = "my value" ;

Where QC_USER_01 is the name of the attribute previously defined in QC. Once you have set your values, call Post to write to the QC server or call Undo to reset any changes you have made to the object.

req.Post () ;
req.Undo () ;


If your Post call fails, then you need to handle it accordingly. It is non-trivial and I'll talk about that tomorrow.

note: I would like to thank all the brave developers who walked past my desk whilst this was being worked out and helpfully shouted different ideas for what null could be.

Monday, May 26, 2008

Leveraging MSVC's output window

If you use scripts to identify parts of your code that may not meet standards, you can simplify the job of the developer by making use of Visual Studio's Output Window. Any output from a script that appears on a separate line in the output window with the following format is clickable.

$filepath($line) : $message

When you click on it, you are taken to the specified file and line, and the message is displayed in the status bar. This is how it works when you let Visual Studio perform builds for you.

So now that you know how to format user messages in the output window, you need to be able to run your script so that its output is sent to the output window.

This is pretty easy to do... in Visual Studio, go to Tools > External Tools and set up your tool a bit like mine. My example is a perl script that looks for //TODO comments in code and writes the information to the standard output.


The "Use Output Window" is the kicker here. Otherwise it'll run in a separate console window and be less useful to you. When you write your scripts, you should dump the output to the standard output. That is what gets piped into the output window.

With this in mind you can start writing scripts to give useful feedback to developers, like:
  • not consting parameters
  • identifying classes / methods without unit tests
  • poor coding standards / formatting
  • implementing FxCop-like functionality (for non-C# languages)
Basically anything you can identify in a script. I use it for enforcing coding standards and for allowing other developers to mark areas for me that could be performance tuned.

It is also useful for doing ASCII to Unicode conversions. Conversions like that are non-trivial, and carte-blanche find-and-replace methods generally don't work. You can write a script to identify potential areas for change and then tackle them one at a time.

Happy Scripting!


note: I don't know how to get it to show up in the Error List window (not that I've looked recently) but if anyone knows, send me a link. That would be super.

Sunday, May 25, 2008

The potential defect

For those that want to know, the defect was a memory problem. The application (Nokia Nseries PC Suite) would acquire several hundred kilobytes of private bytes each time you changed from tab to tab. If you keep changing tabs, the memory usage continues to climb. This application is designed to start with the computer and remain operational the entire time (it's for phone synchronisation). If you don't turn your computer off at night, after a few days of usage its memory usage can get quite high.

Personally I don't really use the application; as a matter of fact it bugged me, because it starts when Windows starts and I don't like applications doing that without asking first. I was looking for a way to turn that "feature" off and just happened to have Process Explorer open on my secondary monitor, set to sort by private bytes. After much clicking about, I noticed it working its way up the list... after that, like all good testers, I was deliberately trying to work out what was causing the memory usage to rise.

Reporting Defects in Proprietary Software

I, like just about everyone else on the planet, find bugs in proprietary software. Sometimes I like to report them to the vendor. One thing that really annoys me is how hard it is to report a bug. Often these sites have no Contact Us about Gaping Holes In Our Software, or even a Contact Us regarding Potential Software Problems.

This is even more the case for non-software companies that produce software, often to accompany their hardware. I shall use Nokia as my example. I would still say that they are a non-software company (they make phones), but they are heading in the direction of being at least partially a software company (they are trying to buy Trolltech) and they invest in other software companies like Symbian.

Nowhere on their site was a place to report a defect in any of their software products. In the end I sent an email with the defect report to the customer service people responsible for my phone and asked them to forward it on. Not an ideal solution and normally if it gets to this point I don't bother. Today I must be feeling extra kind.

At this point I would state my hard-line position and say something like: seriously, if you produce software in any form, provide a mechanism by which users can report bugs in your software. An email address is usually enough for me. A potentially better solution is to make use of Windows Error Reporting. I've never used it, but I can't see how it couldn't be useful. Every little bit of information regarding your application's reliability in the field is useful. Note that WER is only useful for crashing or hanging apps.

The problem is that non-software companies that produce software traditionally don't have organisational practices around defect reporting, defect management and the eventual software evolution that comes from them. The smaller or less mature the company is, the greater the chance of it not having such practices.

There is not much you can do about that. Supporting software is an expensive process. Even if they wanted to, it may not be possible for them to set up the requisite infrastructure. Some may argue that they should have thought of this before getting into the game. That doesn't change the fact that the software is already written, and if the vendor has gone bankrupt then you have zero chance.

What about the concept of an open defect registry, where users can report bugs against the software that they use? It would mean the defect is documented somewhere, but it doesn't solve the following problems:
  1. doesn't mean the developer will fix it
  2. someone has to manage the defects being reported, to handle duplicates and not-a-defect reports
  3. the service would effectively be doing the work of software companies for free
  4. still doesn't help if the vendor is out of business
The first issue also applies to defects reported through a formal process with the vendor; you can't help that. The second two points could be solved by charging vendors a nominal flat fee to access the defect information regarding their software. This does two things: it pays for someone to manage the defects, and it allows the operational costs to be spread out over many organisations, reducing the cost of defect management across the board.

The final problem is vendors going out of business. What we need on top of that is an open-source graveyard, where applications that are still being used but are no longer supported have their code released to the public. They are kept at a common location, allowing open-source developers to revive them to fix bugs or extend them as demand requires.

Eventual advantages of this process: users could rate the importance of a defect, giving vendors better visibility into the top 20% of defects; an API could allow automated defect reporting by the software itself; and we would get better software all around.

note: This wasn't intended to be a post about a shared service of defect reporting. Train of thought blogging, ftw!

Saturday, May 24, 2008

It was so easy I thought I was kidding myself

I came home today to find my mantis server had stopped working. Something was wrong with some aspect of the HDD. I'm not entirely sure what yet, but I've put the drive in another machine and copied all the valuable data off it.

I was impressed by just how easy it was to restore some aspects of the server. The physical machine I moved it to already had Apache on it, so it was a cut and paste of the VirtualHost and Listen lines to configure it, and a bounce to get it going.

The next step was to reconfigure the router so that the server could be reached from the interwub. This was a 5 second job. Quick test, no database! Of course!

So I jumped onto the machine and quickly realised I couldn't access the existing MySQL install. Argh, what was I going to do? Try and reconfigure the current install of MySQL to point at the dodgy server's data? I thought I would have a look in the MySQL folder to see how the data was stored. There was a folder called data, and in that a folder with the same name as the database.

I wonder what would happen if I just cut and paste that

So I tempted fate (I have backups) and logged into MySQL, ran show databases, and there it was. Ha! How awesome is that? OK, now for a real test: access it via the mantis web-app. Works perfectly. Once again, awesome! Awesome like a million hot dogs.

Now, allow me to digress. I love it when applications don't bind themselves to the system that they are running on or force users to jump through some farcical ceremony just to perform a simple task. For instance, my brother has a Toshiba Gigabeat for his mp3 player. I remember him trying to get his songs onto it. You had to install their application and then work this ghastly interface just to transfer some songs. I think it was more than a 24-hour period before he had everything working.

My Creative MuVo is literally a flash drive that I could drag and drop files onto. The music player app was smart enough to work out folder structures and to not trust the file system state as it last remembered it. My Nokia N95 8GB thankfully supports both methods.

Last time I checked, most users know how to drag and drop a file, and while it might not be immediately obvious that your application can support such advanced functionality, it is nothing that good documentation can't fix. If you have to spend lots of money writing an application, try making it one that wraps a GUI around the cut and paste process. You will save money if nothing else.

Anyway, I have just finished copying some music onto my phone. Time to go to sleep with some soft tunes ghosting through my headphones.

Wednesday, May 21, 2008

Obfuscating Inflation

The past few days I have been pondering whether or not the change in net weight of a product is factored in when calculating the consumer price index (CPI). This relates back to a rant I had earlier in the year about the subtle 1g decrease in each packet of chips within a box of chips. Across 15 packets, that was a 15g decrease, or a 5% saving made per box.

So I asked a friend who works at the Australian Bureau of Statistics (ABS) whether or not they factored this in. She didn't know, but she sent me a link to the CPI document. It's quite an enthralling read.

The important bit is this paragraph:

Some changes are relatively easy to deal with while others prove more difficult if not intractable. A marginal change in say the weight of the can of tomato soup from 440gms to 400gms can be handled relatively easily by computing the quality adjusted price by reference to the price per gram. If the list or observed price is unchanged, the quality adjusted price will record an increase of 440/400 or 10%. Quality changes due to either a change in brand or the ingredients pose more difficult measurement problems for which we generally have no ready solution and are forced to treat the change as if it were a change in sample. Some item categories are particularly prone to a high rate of turnover in the specific brands or varieties available, and we are constantly adjusting our samples, again ensuring sample changes are introduced in such a way that the index reflects only pure price change and not differences in the cost of the old and new samples - note that this can be considered as a guiding principle in calculating the CPI.

So, yes they do factor it in, and consider many more things than I had considered. Jolly good!


Some enthralling reads:
Note: replace enthralling with gruelling.

Monday, May 19, 2008

Winamp and FLV

Even though Winamp comes with a plug-in to support FLV (Flash Video) it doesn't work. AVI playback didn't seem to work out of the box either.

There is a fairly simple solution that involves installing a couple of little apps and turning off Winamp's default plug-in.

The link is here (on a winamp forum)

Friday, May 16, 2008

Late Copy Instantiation - Addendum

Regarding my post the other day about delaying an object copy until required: a friend asked me to show my working. So here is the "proof". The scenarios are somewhat contrived, as I didn't spend much effort on them initially; I felt the tip was self-explanatory.

Overview
The basic setup for the test is a simple function that either takes a by-value object to force the copy or a const-reference as the most efficient non-pointer implementation. The object has a method called isValid which returns true if one of its attributes is greater than zero. This attribute is defined in the constructor and therefore remains constant over the life of the object.

To give the object something to do whilst in the function, there is a second method called update. This increments a different attribute by one. It also enforces the requirement that a non-const object exist at some point.

Objects
There are three types of objects. A simple object, a string object and a complex object. Each one has different members to make the copy constructor increasingly complex.

The Simple Object has eight 32-bit unsigned int attributes and that is all. The String Object has the same eight attributes as the Simple Object, and on top of this are five empty STL string objects. Finally, the Complex Object has the same attributes as the String Object as well as three dynamic arrays that are allocated to 100 bytes in length on construction and three STL vectors of differing types. The Complex Object has a custom copy constructor to copy the dynamic byte array.

Simple (click to enlarge)

String (click to enlarge)

Complex (click to enlarge)


Tests
There were two basic tests: always exit early and never exit early. There were also two builds: Debug mode and a Release build with "full optimisation" that didn't favour speed or size. Both were compiled using the Microsoft Visual Studio 2005 compiler.

These are the release-full compiler switches:
/Ox /GL /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /FD /EHsc /MD /Fo"Release-Full\\" /Fd"Release-Full\vc80.pdb" /W3 /nologo /c /Wp64 /Zi /TP /errorReport:prompt

Each test was placed inside a loop and executed 50,000,000 times so that the timing code would return cycles greater than zero. I used timing code that had a resolution of one second. Any finer granularity wouldn't have really made a difference.

test code (click to enlarge)


Hardware/Software Configuration
The computer details are:
  • Microsoft Windows XP Professional, Version 2002, SP2
  • AMD Athlon 64bit X2 Dual Core 4200+ (2.21GHz)
  • 2GB 800MHz DDR3 RAM

The machine had a fair few other applications open at the same time but none had focus aside from the executing test. Naturally the other apps are still waking and sleeping from time to time and while they may impact the overall run time they won't impact the outcome of the test.

Results
Here are the results. Numbers indicate clocks per second. The finest granularity I used was a second. The zero numbers are, more than likely, caused by the optimiser realising that the isValid method will always return true or false and, as such, removing that code from the equation. In the exit-early scenarios this collapses the function to nothing.

The simple object with always copy ends up being "zero time" I believe because the compiler knew in advance the number of iterations and the functionality in the update was trivial (integer increment). This means it could calculate an end result without running the loop at all.

Irrespective of the poor test setup it is blindingly obvious that the const-ref scenario wins as soon as the object being copied becomes non-trivial.

results (click to view same image in own window...)


Conclusion

There you have it: const-reference parameters with late copying are faster than pass-by-value parameters for anything but simple data objects. They are marginally slower when the late copy does need to occur, but that delay is the time it takes to make a four-byte copy, and it can be offset by using the const-reference object for as long as possible, enabling the compiler to make better optimisations. Const objects are easier to optimise than non-const ones.

If my memory of Java is true, then this applies to Java as well. I don't know if it applies to C#. I suspect that C# does some of this in the background.

Thursday, May 15, 2008

The value of hyphens

It is the contract renewal time of the year for me. I have a little tip for you. If you intend to extend your contract, make sure you use words like extend or renew. While wording up an email regarding my conditions of an extension I found I was about to resign... or was it re-sign?


Wednesday, May 14, 2008

Coding Tip #3 - Late Call by Value Instantiation (C++)

When you have a function that takes anything but a simple data type as a parameter and that parameter is being passed in by value, a copy of the object is created. This may be exactly what you wanted. It is standard behaviour. For example, you probably want a copy of your object when putting the object into a collection.

Figure 1. pass by value

If your method has guard clauses to ensure only valid objects are added to the collection, then there is a chance your code is being wasteful. The early exit that occurs when the guard clauses are true means that the original object copy was not required.

You can improve this by doing a const pass-by-reference. You then use the const reference to evaluate your guard clauses. If you exit early, all you have spent is 4 bytes on the stack (32-bit of course) and you save the time it takes to copy.

Figure 2. pass by const-reference with late copy

If your code then needs its own copy of the data, create it as a local variable using the const reference as parameter for the copy constructor.
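The original figures were C++ in bitmaps; purely as an illustration of the same idea, here it is in C#, with a readonly struct passed by an in parameter playing the role of the const reference. Payload and its fields are invented for the example.

using System.Collections.Generic ;

readonly struct Payload
{
    public readonly int Id ;
    public readonly double A, B, C, D ;   // extra fields so the copy is non-trivial
    public Payload (int id) { Id = id ; A = 0 ; B = 0 ; C = 0 ; D = 0 ; }
    public bool IsValid { get { return Id > 0 ; } }
}

static class PayloadStore
{
    // Pass by value: Payload is copied on every call, even when the guard clause exits early.
    public static bool AddByValue (Payload item, List<Payload> store)
    {
        if (!item.IsValid) return false ;   // the copy above was wasted
        store.Add (item) ;
        return true ;
    }

    // Pass by readonly reference: only a reference crosses the call,
    // and the copy happens late, inside Add, once we know we actually want the item.
    public static bool AddByReference (in Payload item, List<Payload> store)
    {
        if (!item.IsValid) return false ;   // early exit costs only the reference
        store.Add (item) ;                  // List<T>.Add copies the struct here
        return true ;
    }
}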

The benefits are directly related to the following:
  • how many chances for early exit exist
  • the size of the object being copied
  • whether a deep or shallow copy is occurring
  • how long it takes to copy
  • the frequency of early exit calls in relation to copy calls

Tuesday, May 13, 2008

Coding Tip #2 - First Run Tracing

I have a specific method for testing new code that can't be tested by traditional means. By traditional I mean unit tests with mock objects. By code that can't be easily tested I am usually referring to graphics code, where a successful implementation is a subjective one, or where you are directly allocating memory on a GPU and aren't privy to all the gooey details.

Now, I use the following method for all code I write, even when I do write unit tests. The first thing I do is step through the code, line by line, mentally desk-checking memory references and values, making sure it all works as I expected it to. I don't have the statistics to back it up, but my feeling is that it gives you a good indication of whether the logic you have implemented makes sense while it is running, not just while you are coding it (it just about always makes sense while you are coding it). It allows you to identify failures as they occur rather than waiting for the unit tests to complete. You also get the benefit of analysing live data rather than static lines of code.

When I get an error, I drop a break point in, stop debugging and fix it. After the rebuild I can let it run to the break point because I have confidence up to that line. I continue to go through my code until it works. Depending on the complexity of the code I may trace through some alternate paths. In any case I then go and write any additional unit tests I may have realised I needed whilst tracing through, and then let my code execute over all the unit tests.

This may seem a little bit OTT, but the next time you implement polymorphism with virtual inheritance you may thank me. On more than one occasion I have uttered the words: "wait a minute, how did my code get over here?".

Monday, May 12, 2008

Life Tip #1 - Birthday Candles

When you are placing birthday candles on a cake, make sure that you put them the right way up. The base of a candle, once lit, burns faster and more brightly than the wick-end of the candle.

This achieves a number of things:

  • The candles burn much faster - you need to belt out "Happy Birthday" just a bit quicker
  • The cake looks a lot more exciting covered in flame - always good unless it's ice cream cake
  • The cake gets covered in wax - never good

I wish I had photos from the birthday cake I witnessed on the weekend. Classic.

Additional:
I looked up Happy Birthday on Wikipedia and did not expect the copyright issues around the song to be so ridiculous. Still the words come out of copyright at the end of this year. Woo!

Coding Tip #1 - Object Identifiers

When designing an interface that allows callers to request an object from some data storage, ensure that your interface accepts unsigned integers rather than signed ones. If you use a signed integer then you need a guard clause to reject a negative parameter. However, if you ensure your identifier type accurately models the data it represents, then your guard clause is implicitly implemented by your parameter's type. This means you can drop the explicit guard clause from your code. Cleaner, faster, better code. Everyone is a winner.


Apologies for the image of code. Google Blogger doesn't really allow anything except poorly formatted text.


Note:
Obviously this does not hold true if your information model allows negative identifiers.
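To illustrate the tip itself, here is a small sketch, with the Widget class and its backing dictionary made up for the example:

using System ;
using System.Collections.Generic ;

class Widget
{
    public uint Id ;
    public string Name ;
}

class WidgetStore
{
    private readonly Dictionary<uint, Widget> m_widgets = new Dictionary<uint, Widget> () ;

    // Signed identifier: the guard clause has to be written explicitly.
    public Widget GetWidgetSigned (int id)
    {
        if (id < 0)
            throw new ArgumentOutOfRangeException ("id") ;
        return m_widgets[(uint) id] ;
    }

    // Unsigned identifier: the type cannot hold a negative value,
    // so the guard clause above simply disappears.
    public Widget GetWidget (uint id)
    {
        return m_widgets[id] ;
    }
}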

Testing Tip #1 - Boundary testing business objects

When testing business objects, for example an object that represents a Person or a Customer, you often have a minimum set of data requirements (e.g. a customer must have a last name) as well as a set of optional data attributes (first name, date of birth, gender, etc).

To provide full test coverage you would mathematically need to cover every combination of the optional attributes being present or absent, which is 2^(N-M) combinations where N is the number of attributes and M is the number of mandatory attributes. I don't think I have ever had enough time to do that much incredibly boring testing, nor would I if I had.

In my experience the best pass through is to look at the two bounds. The minimum set is where you specify the object with the bare minimum of attributes, the mandatory attributes, and each of these attributes has as little data as possible. In our customer example this would be a last name that is one single character in length.

The next test case would include every single attribute populated to its maximum extent. So if you have a twenty character first name you specify all twenty characters. I document this as my all set.

These two test cases have been enough for every system I've tested. Using them you can prove every attribute being supplied to its maximum, and every attribute being absent or supplied at its bare minimum.
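As a sketch of the two cases (the Customer class, its attributes and the twenty character limit are invented for the example):

using System ;

class Customer
{
    public string LastName ;        // mandatory
    public string FirstName ;       // optional
    public DateTime? DateOfBirth ;  // optional
    public string Gender ;          // optional

    public bool IsValid ()
    {
        return !string.IsNullOrEmpty (LastName) && LastName.Length <= 20 ;
    }
}

class BoundaryTests
{
    static void Main ()
    {
        // Minimum set: mandatory attributes only, each with as little data as allowed.
        Customer minimum = new Customer { LastName = "X" } ;

        // All set: every attribute populated to its maximum extent.
        Customer all = new Customer
        {
            LastName    = new string ('A', 20),
            FirstName   = new string ('B', 20),
            DateOfBirth = new DateTime (1980, 1, 1),
            Gender      = "Female"
        } ;

        Console.WriteLine ("minimum set valid: " + minimum.IsValid ()) ;
        Console.WriteLine ("all set valid: " + all.IsValid ()) ;
    }
}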


I don't include out-of-bounds testing here. That is, I don't check for 21-character first names, nor do I test for zero-length last names. That specific boundary testing I document as separate tests against the user interface. This achieves two things. Firstly, my test cases are granular and relate to a specific business requirement, which means that defects raised are very specific and generally easier to track down. Secondly, it cuts the amount of testing I have to do down to what is most likely to cause defects.

Additional:
If you have more complex data rules, for instance a business rule that states that attribute-x is only provided when attribute-z is set, then that specific combination is already covered by your all set (attribute-z is set) and your minimum set (attribute-z is not set). I would additionally include test cases to ensure that any user interface validation of these attributes occurs.

Sunday, May 11, 2008

Automated deployment - Why it is a good idea

Grant raised the concept of automatic deployment in a comment on a post I made the other day about controlling test environments. I've been giving it a bit of thought of late, and I've come up with a quick test you can do to see if you should automate your deployment process. As you go through the test you will also see various ways in which automated deployment can improve your development practices.

Once Upon a Time...
Let me start with a tale of woe.


The past two weeks I've been working with a developer who has been lumped with some less than satisfactory code. Also lacking is a suitable development environment for him to work within. In an effort to get something suitable into test I have been working with him to solve the various problems. This past week has seen about 15-25 deployments into test (don't get me started on unit testing). When he is not around, another person from his team will do the work. Every one of them at some point failed to deploy correctly.

Why? Firstly, their application is made of a number of disparate components that are all deployed individually. They don't have version numbers to identify the latest build. They are deploying to a cluster. They are rushing to get late code into test. The testers don't have control of the test environment (yet).


Consider this
If your deployment is as simple as a point and click install and that is it, your failure points are: installing the right version and actually doing it (don't laugh). Two failure points; ignore the triviality for now. If you have a configuration file to manually adjust, there is another point. Add them up, one for each step you have to do in order. If you have deployment instructions, add one more, as those have to be followed; if you don't have deployment documentation, add 1 million points. If you have to rebuild a fresh machine each time, add one for each step. If you are smart and have a prepared image, add a single point if you need to deploy it each time, and zero points if you don't.

I think you are starting to get what gives you points and what reduces your score. Stay with me as we go back to our example:

I don't know the full details of the application, but I know there are about five components, each of which needs to be deployed. So 5 x 3 (version, doing it, following instructions). Three of the installed components need to be turned on each time, so that is 3 more points. 18 is our score so far.

How many machines do you have to deploy to? One host? Score one. A clustered array of four machines? Score 4. Pretty simple scoring system: one for each target. Write this down as your targets score.

For our example we have two hosts load balanced, so we score 2.

How frequently are you going to deploy your code? Once per iteration? How many iterations in the project, 10? Score one point for each time you deploy per iteration, times the number of iterations. Record this as frequency.

How many test environments are there? Dev, Dev-Int, Test, Non-functional, Release Candidate Integration, Pre-Production, Production? Here are seven from what is, in my opinion, a pretty regular configuration: 7 points. Once again, in my example there is just one: test. Add it up for your environment score.


Failure Points
Ok, so our formula for calculating the failure points of a deployment:

Failure Points = deployment-steps X frequency X environment X targets

Example = 18 x 15 x 1 x 2 = 540 points of failure
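Or, spelled out with the numbers from the story above (the variable names are mine):

// failure points = deployment-steps x frequency x environments x targets
int deploymentSteps = 18 ;  // 5 components x 3 chances each, plus 3 "turn it on" steps
int frequency       = 15 ;  // deployments into test over the week
int environments    = 1 ;   // just the one test environment
int targets         = 2 ;   // two load-balanced hosts

int failurePoints = deploymentSteps * frequency * environments * targets ;
Console.WriteLine (failurePoints) ;  // 540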

Not bad. You may argue that once you have tested your deployment you shouldn't have any failures after that. It is tested after all. That is a good point, but remember we are not even talking about deployment testing here, just vanilla dropping of an application onto a box.

We (this is a team project after all, shared wins/shared losses) had 540 chances in a one-week period to stuff up an aspect of the deployment process. Aside from the code failures, we had probably 10 deployment failures, including not installing the code onto both machines in the cluster. Those particular defects are about as much fun to detect as a race condition.

Automated Deployment
How much you automate will directly impact the chances for deployment failure. Our two constants for the act of deployment were: actually doing it and installing the correct version.

Performing the work is now done by the auto-deployer. You still need to click the go button for certain environments. Automatic deployment implies that latest valid build is used so that problem is solved.

Individual deployment steps should be wrapped up into your installer. I mean every step: installing software, opening ports on routers, configuration files. If you do some research you will find somebody has already automated it for you, or there is an API for it. If by chance that isn't done, do it yourself and then share the love.

Next up is the deployment to each machine in the cluster. Once again this should be handled by your autodeployer. So that one is fixed, score a zero.

After that was the total number of deployments. That shouldn't change as long as your autodeployer is operational and you click the go button as required. You should be down to a score of 5 (one for each environment from test onwards).

With our example we should go from 540 failure points to 5, one for each deployment that has occurred over the past week, triggered by the test team as required. There are no other manual steps.

Bonus Feature
If the latest build is unusable for testing, allow the testers to flag it as such (Build Quality) and have the autodeployer ignore that build for future deployments.


Conclusion
You may realise by now that I have been a little bit over the top with my example. Furthermore, you don't deploy to every environment in every iteration. You and I know this, but it won't change your score that much. You may also think of more places in which the scoring system should change. Post them as a comment and I'll put together a little spreadsheet you can use.

I am not going to tell you how to automate your deployment process. I've got an idea for one way to do it and I'll post about it when I've done it. In the meantime here are a couple of other ideas to get you started (thanks to Grant for these):

  • Use PsExec
  • Use putty if you are not on a windows box
  • Via TFS Build here and here

Before I go, some more juicy content: your autodeployer should not be used until you have tested it through all environments, including deployment into production.

Wednesday, May 7, 2008

Dalmore - 12yo

For the past year and a half to two years I have been a part of the Australian Single Malt Whisky Club. Each month they send me a bottle of single malt whisky that I would otherwise be unable to purchase from the local alcohol merchants. It's a passion that started after a fantastic time I had exploring Scotland's whisky in 2004.

Today I was delivered a fancy bottle of Dalmore 12yo. I'll quote the site as they don't have a permalink.

Dalmore is literally "the big meadowland". The distillery is situated north of the traditional highlands, drawing its water from the Alness River, near the city of Inverness.

Colour: Rich, deep, golden mahogany.

Nose: Intense and firm. Well structured with silky smooth malty tones - a hint of Oloroso sherry lingers in the background. It shows great finesse, extolling fragrances of orange, marmalade and spiced notes.

Taste: Good attack on the mouth, more elegance than muscle. The aged Oloroso butts smooth its rich, fleshy body with great harmony. Almost a concentrated citric mouth-feel captivates and tantalises the middle part of your tongue. An aftertaste of great abundance rewards the palate. A Highland malt of great distinction.

Not sure if all of that actually applies to the whisky. My nose and palate are not as honed as those of whoever wrote that. I find whisky descriptions a lot like real estate advertisements: you never really know until you give it a good look yourself, and you should be wary of "renovator's delights". This is especially relevant to whisky.

So I have my dram and shall give it a go. This is live whisky-blogging. The interwub is a powerful beast often abused. :)

Colour: all correct, it has that fantastic colour to it. Nose: not sure. It's a little blocked and my ability to detect faint scents has never been tip-top. Sorry to disappoint you all, but from what I can tell there is no marmalade.

The taste is delightful, strong yet smooth and the faint citrus on the tip of your tongue is present. I rather like this.

Onto the shelf it goes for an occasion that warrants it, like a guest. So if you ever come around for dinner, ask for a dram of the Dalmore 12yo. To (probably) misquote Iain Banks in his book Raw Spirit:

The perfect size for a dram is one that pleases the guest and the host.

Monday, May 5, 2008

NIN - This one's on him.

I've been a fan of Nine Inch Nails since the mid 90's, somewhere around the time Broken came out, but it was Pretty Hate Machine that was the album I heard and fell in love with.

Over the past six months Trent has become more and more aware of, and involved with, his significant and loyal fan base, as well as fully embracing the concept of Creative Commons. It's been pretty special to watch him go from four full, very good albums (Pretty Hate Machine, Downward Spiral, Fragile, With Teeth) over a period of sixteen odd years to three albums (Year Zero, Ghosts I-IV and The Slip) over the next two.

This is pretty awesome from a fan perspective. We get lots of new content and each album is increasingly free or of low cost. Radiohead were the first to trial this process, for their album In Rainbows. After that Trent marketed the Saul Williams album The Inevitable Rise and Liberation of Niggy Tardust as free, or $5 if you were so inclined. I didn't pay the $5 as Saul Williams isn't my style of music. I still downloaded it and had a listen. I liked a couple of songs a lot, but the rest were, as I was expecting, not what I'm interested in.

Now you may think, $5 is pretty cheap, you should fork over the cash. You still have the album don't you? My argument is that $5 is cheap, but it's not a micro-payment. It still has a value, and while Saul Williams has not acquired any money off me this time, the next time I will be interested again because I know that there is a chance I may like his album. He has something that usually costs a lot of money: exposure.

The exposure aspect is important. Trent and Nine Inch Nails have exposure, so when Ghosts I-IV came out I first ordered the free version because I was a little short of cash. Some while after, I purchased the version that was right for me. Turns out it's the glossy double-CD pack for $75, $70 more than the cheapest version. Why? Firstly, I like buying CDs. Sure it's wasteful, but you get more than the music: you get something tangible to hold and a series of images that are somehow more real than a pdf document of the same thing. There is effort in a NIN package that correlates to the effort that goes into each track. Secondly, because I roughly know where the money is going: to the artist, for the work he has put into the album.

Trent made a lot of money off Ghosts and it is a decent album, not his best, not his worst either. A worthy item in the NIN catalogue.

What is the point of this post about free music? Nine Inch Nails released an album today called "The Slip". As he says in his news post: this one is on him. You can't dislike an artist who gives as much as he gets.

Whether or not the flurry of productivity coming from Trent will have an impact on the quality of his work is one that can only be answered in hindsight. As a fan I'm completely biased and won't be ashamed of that.

For those who like Nine Inch Nails. Enjoy!


note: I call him Trent because I got tired of writing Trent Reznor very quickly.

Sunday, May 4, 2008

Lol

I saw this ad on Facebook. I wonder if the extra income I could be earning is related to editing advertising material for grammatical errors.