Wednesday, June 4, 2008

Hello, Goodbye

This post marks the end of this blog on Blogger. After publishing I will be finishing the migration of all my content to distributedlife.com. Distributed Life is my own blog and the direction I want to take with my content. If I get the automagic confibulated correctly, your RSS feeds will be automatically routed.

All the posts will be left here (for people who have linked externally) but I will be redirecting all links over to distributedlife.com.

My time at Blogger has not been entirely bad. While I do complain, Blogger served a purpose that I found useful when I started. That is no longer the case, and as such I am moving to a platform that fits my requirements.

I hope to see you there.


Regards,
Ryan Boucher (aka Chad Aubergine Stone)

Blogger Defects

This week's defect report is one that has been building up for a while. So much so that I really should have reported these sooner and they may have been resolved by now. In any case I found a better solution for me, but you will have to wait for my next post.

These have not been directly reported to Blogger. Their defect reporting mechanism is for the user to post on Google Groups. This is hardly a formal support mechanism in my eyes. I prefer to raise my defects through formal channels. The owner will get back to me if need be. Having a public knowledge base is fine, but it should be updated and maintained after each report. This allows similar defects to be collated together. It also means that I am not searching through a forum to see if someone else has raised the defect. There are 17,000 posts under Blogger Support on Google Groups. End rant.


Defect: Future dates
The 'post options' section of a post by default sets the current date and time of when you created the post. The problem is that this information is used for future-dating (or in my case past-dating) posts. It can take me up to two weeks to write a post. Some of my longer, more involved posts do take that long, as I primarily write my posts in between compiles; little snippets here and there. When I finally do publish, the post is dated from when I started my draft. This is not common usage. Specific post dates should only be applied if the user specifies the date. Otherwise the date/time of the post should be when I click "publish post". This defect has only existed since they implemented the future-dating functionality.

To replicate:

  1. Create a Blogger post before midnight

  2. Wait until after midnight

  3. Publish post

  4. The post will be published with yesterday's date rather than today's.

Workaround:

Manually update the post date/time to a few minutes ago before you post.



Reliability Issue: Connection with blogger.com
Recently the number of times the autosave process fails because it cannot connect to Blogger has been increasing. This never used to occur, but it is currently happening approximately once every two days.

This forces me to cut and paste my post into a notepad document and then re-edit my post. After this I must re-create my links. Not a pleasurable process.

I am unsure how to replicate this, but it may be related to the number of times the autosave process is fired off. It may be worth throttling it with a timer or similar, or implementing a mechanism by which the connection is re-established.


Potential Defect: XML Support
Blogger posts do not like XML markup. This makes it difficult to write about XML. In my post here I struggled because of this issue. Cutting and pasting XML into my edit box at one point caused me to lose my entire post due to some on-save process.

If blogger.com is steadfast in its lack of support for XML content presentation, a solution is to sanitise message text on save to avoid such issues.


Usability: Poor script performance on low-end machines
The "Save Now" / "Autosave" function is too eager. It fires off on every keystroke. This impacts performance on low-end machines, where it can be almost impossible to write properly. I shudder to consider performance on a mobile device.

Part of the problem is the lack of flexibility in the autosave mechanism. Low-end users should be provided with a session-based option to disable autosave, or at least push it out to a configurable time limit.

Finally, the autosave option fires on non-visible characters. For instance, using the arrow keys to navigate around a post will cause the autosave feature to fire. This is not required and imposes a performance penalty on low-end machines.


Suggested usability enhancements
  • Images when added are always inserted at the top of the post. This is a pain, especially for someone who is writing a long post.

  • A mechanism should be provided (outside of HTML crafting) to allow users to specify wider posts. The current post width is akin to writing on the side of a milk carton. While there is some justification for fixing the post width to a line length that improves readability, ideally it should be left to the users.

Why I am making defects public

I like the concept of making "charity" defect reports. Rather than ranting and raving about the ineptitude of developer X because of problem Y, I will raise a defect, make a post and hope that it gets fixed. I call it a charity because I won't charge for this service; it's more about making software better. Not every second post will be a defect report either; more likely no more than one a week, depending on how many bugs are inhibiting my current work. I won't go looking for defects; I get paid to do that already and have much better things to do with my spare time.


My justification and reasoning for making them public and not just silently reporting the defect are as follows:


Firstly, one can never be sure that a defect will be fixed once it has been reported. The vendor may just ignore the defect, thinking it only affects a minority. Making it public allows other users to become aware of the defect and therefore add their weight to the "urgency factor".


Secondly, inexperienced users often suffer from a lack of self-confidence with computers. When something doesn't work they blame themselves. If they read about a defect that I or someone else raises they realise it wasn't their fault. This may strengthen their resolve towards continuing the use of the application.


Next, some bugs prevent a workflow from being achieved. Making the issues apparent may not provide the workaround, but may enable someone else to uncover one. This can be appended to the initial defect, which then becomes the common source of knowledge on the defect. This maximises the dissemination of knowledge to users and provides the problem and solution in a single place. This leads me to my next point.


This is all indexed by Google. When I have a problem with an application I search for a solution, then a workaround and finally, if I can't find either, a different application. By making a defect report public I can help others in their defect resolution quests.


Defect resolution
With the Nokia defect, someone posted a comment saying "thanks". I don't expect that. What I do expect, as I'll raise a defect directly with the vendor/developer, is that when the defect is resolved, they will notify me. If such an event occurs, I will edit and top-post my original message with the updated information, saying something like: "Defect has been fixed in version X.Y. Upgrade to solve this problem".

This now provides the ideal scenario. Anyone who hasn't upgraded yet, and searches for the problem, will find the defect and solution posted together.


note: I don't expect every defect I raise to be solved post-haste. Some may never be. But I've done what I can to help the situation outside of forking my own branch of the code and fixing it myself.

Tuesday, June 3, 2008

Fifty!


50 Posts! w00t!


As a cricket fan I think it is appropriate that I raise my keyboard in recognition of this milestone. For those that don't know anything about cricket: when a batting player reaches a significant milestone they raise their bat in recognition of it. Batters also raise their bat to recognise the support they get from the team and from the crowd.

I consider writing fifty posts an achievement. By no stretch would I call it an impossible task, nor am I the first to achieve this goal. But much like a cricket player, the first fifty you make on the big stage is memorable, and your first century even more so. The first milestones are always important because they confirm the belief you hold inside that you can achieve your goals. Anyone can write a blog post (and anyone does), much in the same way that anyone can hold a cricket bat (and anyone should). I also wouldn't say that my fifty posts are elegantly crafted prose, but when I first picked up a cricket bat I couldn't use it either. As a matter of fact it took me five years to reach double figures and another three seasons after that to score my first and only fifty. Each time I went out to bat I took what I learnt from my previous efforts and slowly became the player I am today.

Before you think that I am a total cricketing inebriate: I have always been a bowler, and you may think that a cricketing analogy involving five-wicket innings and ten-wicket matches would be more appropriate. But if I only did what I was already good at, I would never have scored that fifty and I would never have written these fifty posts.

With my analogy of cricket well and truly overdone, my raising of the keyboard is a sign of thanks to those that have supported my efforts (proofreading, editing, commenting or just reading) and I hope that this milestone will become the foundation of a much greater innings. I'm sure it will as long as I can keep learning from each post.

note: Yes, this is what I look like.

Monday, June 2, 2008

Advertising @ the French Open

I was watching some tennis on the weekend and noticed that advertisers are required to change their brand colours to suit the basic French Open colour scheme. At first I thought this was just a "European" angle on advertising, as I remembered being told a story about a town in Germany that managed to force McDonald's to change its logos and general colour combinations to suit the style of the town.

After doing a bit of searching on the tubes I couldn't find any pictures of that town, so you will have to believe me. What I did see is that the Australian Open didn't go to the same lengths as the French Open. Here is an image from the Australian Open and here is one from the French Open. While the difference isn't significant, the French Open setup was a lot easier on my eyes, with the advertising blending in and creating a much nicer visual experience.

If only we could start getting some websites to care about how much advertising affects their overall visual experience.

Friday, May 30, 2008

Hark, where art thou, keys?

I was talking to my mothra (mum) last night and she had recently found an email I had sent back in August 2005. If I remember correctly I sent this to my boss because I was late and I couldn't find my car keys.

I’ve been looking for half an hour, to the point I’ve looked into the shower.
I’ve searched high and low, left and right. In a bank of snow, with and without light.
I’ve looked far and wide, deep and shallow. Under my feet and, under my pillow.
So if you see my keys then let me know. A reward you will see, to work I will go.
To ease my thoughts and waste my time. I had a coffee and wrote this rhyme.

I think it took a few days to find my keys. They had fallen down the back of one of our La-Z-Boy chairs and had gotten caught in the internal framework. This meant that they couldn't be found by putting your hand down the side of the chair, nor could they be seen by lifting the chair off the ground. Fun!

Have a good weekend!

Thursday, May 29, 2008

Attachments and Requirements in QualityCenter with C#

Previously I covered creating new requirements and how to handle requirement post failure. Today I'll cover attachments, which are fairly trivial but have a gotcha that might not be immediately obvious.

To add an attachment to a requirement, the requirement must already have been posted; otherwise you will get an exception. The following code shows how to add the attachment. It is very simple.

code - adding attachments to requirements

Debugging
When you set the path of the image it will be adjusted from something similar to: "D:\\Path\\to\\my\\image.jpg"
to:
"C:\\DOCUME~1\\user\\LOCALS~1\\Temp\\TD_80\\504857af\\Attach\\ REQ1722\\D:\\path\\to\\my\\image.jpg"

Whether this is an artifact of the Quality Center API or the .NET development environment is beyond my knowledge of either.

The gotcha?
You have to add your attachments after creating any child objects. Otherwise every single child requirement (and their children) will get a flag added to them saying that a parent requirement has changed and that you should review the potential impacts of such a change. Sometimes you want this, sometimes you don't. If you do it accidentally it can take a long time to remove all those flags.

Wednesday, May 28, 2008

Handling Post failure with QC Requirements in C#

Yesterday I covered creating new requirements and updating custom fields. Today I will discuss post failure and why handling it is non-trivial.

Quality Center has a rule that no two requirements may share the same name where they have the same parent. If you attempt to save such a requirement using the QC UI, it gives you an error and you simply try again. If you try using C#, you get a COM Exception.

It also resets your object.

I couldn't believe it either. It means you can't just add an incremental counter to the requirement name and try posting again until it works. It means that if you are building your requirement up from a source document, you have to go and redo your work. This is a pain if you were using recursion.

There is a solution, and it is not complicated. What I did was define a new class called SafeQcRequirement, which owns a standard QC requirement object and each of the attributes that you care about. In the worst-case scenario you are duplicating all of the attributes; in my case it was only a handful.

As you build up your requirement, store all the data in the safe requirement. When it comes time to post, build the real QC requirement and post. If it fails, rebuild with a new name and try again. Keep going until you succeed.
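The rebuild-rename-retry pattern can be sketched outside of QC. Here is a minimal C++ sketch (the original is C#; FakeReqStore is a hypothetical stand-in for the QC requirement factory, which throws when a sibling with the same name already exists):

```cpp
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the QC requirement factory: Post() throws
// when a requirement with the same name already exists under the parent.
class FakeReqStore {
public:
    void Post(const std::string& name) {
        if (!names_.insert(name).second)
            throw std::runtime_error("duplicate requirement name");
    }
private:
    std::set<std::string> names_;
};

// Retry with an incremented suffix until the post succeeds.
std::string PostWithUniqueName(FakeReqStore& store, const std::string& baseName) {
    std::string candidate = baseName;
    for (unsigned int i = 1;; ++i) {
        try {
            store.Post(candidate);
            return candidate;  // success: this is the name that stuck
        } catch (const std::runtime_error&) {
            // Rebuild with a new name and try again.
            candidate = baseName + " (" + std::to_string(i) + ")";
        }
    }
}
```

Posting "Login" three times yields "Login", "Login (1)" and "Login (2)". In the real QC case the catch block would rebuild the entire requirement from the SafeQcRequirement, because the failed Post resets the COM object.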

The following screenshot is what my post routine resembles.

code in a bitmap, hideous... apologies... a solution is almost here.

The second protected post method is a recursive loop that just tries until it succeeds. There are no guards against stack overflows or more requirements than a uint can hold. Based on my usage scenarios this is unlikely. If it does occur, shotgun not testing.

In actual fact, any duplicate requirement names are recorded and fed back to the User Centered Design people (who created the source document), and they update their source document so that the duplicates no longer exist.

Regarding the exceptions: you don't get a useful exception type from COM, so just capture the base exception and use the message text to determine whether it is the one we care about.

Our requirements are sourced from an Axure document, and in the next week I'll put up a post about parsing that document for requirements. Tomorrow I'll cover adding attachments to requirements, which is fairly trivial but still has a few gotchas.

Tuesday, May 27, 2008

Creating a Quality Center requirement using C#

Unfortunately Quality Center's Open Test Architecture (OTA) API doesn't have any documentation for C#. It's all in Visual Basic, which can make it a bit of a trial to work out what is required. In the case of creating requirements, the documentation states:
Passing NULL as the ItemData argument creates a virtual object, one that does not appear in the project database. After creating the item, use the relevant object properties to fill the object, then use the Post method to save the object in the database.
Sounds simple enough, until you find that null doesn't work; neither does a null object of the type, a null object, zero, nothing, or about 15 other things we tried that could be considered null. In the end we needed to pass in the singleton System.DBNull.Value object.

TDAPIOLELib.Req req = (TDAPIOLELib.Req) m_reqFactory.AddItem (System.DBNull.Value) ;

Once you have created a requirement, you can start setting the value of each attribute. C# properties are provided for standard attributes, while custom attributes must be set using the indexer.

req["QC_USER_01"] = "my value" ;

Where QC_USER_01 is the name of the attribute previously defined in QC. Once you have set your values, call Post to write to the QC server, or call Undo to reset any changes you have made to the object.

req.Post () ;
req.Undo () ;


If your Post call fails, then you need to handle it accordingly. It is non-trivial and I'll talk about that tomorrow.

note: I would like to thank all the brave developers who walked past my desk whilst this was being worked out and helpfully shouted different ideas for what null could be.

Monday, May 26, 2008

Leveraging MSVC's output window

If you use scripts to identify parts of your code that may not meet standards, you can simplify the job of the developer by making use of Visual Studio's Output Window. Any output from a script that appears on a separate line in the output window with the following format is clickable.

$filepath($line) : $message

When you click on it, you are taken to the specified file and line, and the message is displayed in the status bar. This is how it works when you let Visual Studio perform builds for you.
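Producing that format is a one-liner in any language. As a sketch in C++ (my own FormatDiagnostic and ReportTodos names; the original tool was a Perl script), a checker only has to write one such line per finding:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Emit a diagnostic in the format Visual Studio's Output Window makes clickable:
//   $filepath($line) : $message
std::string FormatDiagnostic(const std::string& filepath, int line,
                             const std::string& message) {
    std::ostringstream out;
    out << filepath << "(" << line << ") : " << message;
    return out.str();
}

// Example: scan source text for //TODO comments and report each one
// to standard output, which is what gets piped into the output window.
void ReportTodos(const std::string& filepath, const std::string& source) {
    std::istringstream in(source);
    std::string lineText;
    for (int lineNo = 1; std::getline(in, lineText); ++lineNo) {
        if (lineText.find("//TODO") != std::string::npos)
            std::cout << FormatDiagnostic(filepath, lineNo, lineText) << "\n";
    }
}
```

Each line the checker prints becomes a clickable entry that jumps to the offending file and line.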

So now that you know how to format messages for the output window, you need to be able to run your script so that its output is sent there.

This is pretty easy to do... in Visual Studio, go to: Tools > External Tools and set up your tool a bit like mine. My example is a Perl script that looks for //TODO comments in code and writes the information to the standard output.


The "Use Output Window" option is the kicker here. Otherwise it'll run in a separate console window and be less useful to you. When you write your scripts, you should dump the output to the standard output; that is what gets piped into the output window.

With this in mind you can start writing scripts to give useful feedback to developers, such as:
  • parameters that could be const but aren't
  • identifying classes / methods without unit tests
  • poor coding standards / formatting
  • implementing FxCop like functionality (for non C# languages)
Basically anything you can identify in a script. I use it for enforcing coding standards and for allowing other developers to mark areas for me that could be performance tuned.

It is also useful for doing ASCII to Unicode conversions. Such conversions are non-trivial, and carte-blanche find-and-replace methods generally don't work. You can write a script to identify potential areas for change and then tackle them one at a time.

Happy Scripting!


note: I don't know how to get it to show up in the Error List window (not that I've looked recently) but if anyone knows, send me a link. That would be super.

Sunday, May 25, 2008

The potential defect

For those that want to know, the defect was a memory problem. The application (Nokia Nseries PC Suite), when you changed from tab to tab, would acquire several hundred kilobytes of private bytes. If you kept changing tabs, the memory usage continued to climb. This application is designed to start with the computer and remain operational the entire time (it's for phone synchronisation). If you didn't turn your computer off at night, after a few days of usage its memory usage could get quite high.

Personally I don't really use the application; as a matter of fact it bugged me, because it starts when Windows starts and I don't like applications doing that without asking first. I was looking for a way to turn that "feature" off and just happened to have Process Explorer open on my secondary monitor, set to sort by private bytes. After much clicking about, I noticed it work its way up the list... after that, like all good testers, I deliberately tried to work out what was causing the memory usage to rise.

Reporting Defects in Proprietary Software

I, like just about everyone else on the planet, find bugs in proprietary software. Sometimes I like to report them to the vendor. One thing that really annoys me is how hard it is to report a bug. Often these sites have no "Contact Us about Gaping Holes In Our Software", or the more frequent "Contact Us regarding Potential Software Problems".

This is even more the case for non-software companies that produce software, often to accompany their hardware. I shall use Nokia as my example. I would still say that they are a non-software company (they make phones), but they are heading in the direction of being at least partially a software company (they are trying to buy Trolltech) and they invest in other software companies like Symbian.

Nowhere on their site was there a place to report a defect in any of their software products. In the end I sent an email with the defect report to the customer service people responsible for my phone and asked them to forward it on. Not an ideal solution, and normally if it gets to this point I don't bother. Today I must have been feeling extra kind.

At this point I would state my hard-line position and say something like: seriously, if you produce software in any form, provide a mechanism by which users can report bugs in it. An email address is usually enough for me. A potentially better solution is to make use of Windows Error Reporting. I've never used it, but I can't see how it couldn't be useful; every little bit of information regarding your application's reliability in the field is useful. Note that WER is only useful for crashing or hanging apps.

The problem is that non-software companies that produce software traditionally don't have organisational practices around defect reporting, management and the eventual software evolution that comes from that. The smaller or less mature the company, the greater the chance of them not having such practices.

There is not much you can do about that. Supporting software is an expensive process. Even if they wanted to, it may not be possible for them to setup the requisite infrastructure. Some may argue that they should have thought of this before getting in the game. That doesn't change the fact that the software is already written, and if the vendor has gone bankrupt then you have zero chance.

What about the concept of an open defect registry, where users can report bugs against the software that they use? It would mean that the defect is documented somewhere, but it doesn't solve the following problems:
  1. doesn't mean the developer will fix it
  2. someone has to manage the defects being reported, to handle duplicates or not-a-defect reports
  3. the service would effectively be doing the work of software companies for free
  4. still doesn't help if the vendor is out of business
The first issue applies equally to defects reported by formal process with the vendor; you can't help that. The second two points could be solved by charging vendors a nominal flat fee to access the defect information regarding their software. This does two things: it pays for someone to manage defects, and it allows the operational costs to be spread out over many organisations, reducing the cost of defect management across the board.

The final problem is vendors going out of business. What we need on top of that is an open source graveyard, where applications that are still being used but are no longer supported have their code released to the public. They would be kept at a common location, allowing open-source developers to revive them to fix bugs or extend them per demand.

Eventual advantages of this process include allowing users to rate the importance of a defect (giving vendors better visibility into the top 20% of defects), an API for automated defect reporting by software, and better software all around.

note: This wasn't intended to be a post about a shared service of defect reporting. Train of thought blogging, ftw!

Saturday, May 24, 2008

It was so easy I thought I was kidding myself

I came home today to find my Mantis server had stopped working. Something was wrong with some aspect of the HDD. I'm not entirely sure what yet, but I've put the drive in another machine and copied all the valuable data off it.

I was impressed by just how easy it was to restore some aspects of the server. The physical machine I moved it to already had Apache on it, so it was a cut and paste of the VirtualHost and Listen lines to configure, and a bounce to get it going.

The next step was to reconfigure the router so that the server could be reached from the interwub. This was a 5-second job. Quick test: no database! Of course!

So I jumped on to the machine and quickly realised I couldn't access the existing MySQL install. Argh, what was I going to do? Try and reconfigure the current install of MySQL to point at the dodgy server's data? I thought I would have a look in the MySQL folder to see how the data was stored. There was a folder called data, and in that a folder with the same name as the database.

I wonder what would happen if I just cut and paste that

So I tempted fate (I have backups) and logged into MySQL, ran show databases, and there it was. Ha! How awesome is that? OK, now for a real test: access it via the Mantis web app. Works perfectly. Once again, awesome! Awesome like a million hot dogs.

Now, allow me to digress. I love it when applications don't bind themselves to the system that they are running on, or force users to jump through some farcical ceremony just to perform a simple task. For instance, my brother has a Toshiba Gigabeat for his mp3 player. I remember him trying to get his songs onto it. You had to install their application and then work this ghastly interface just to transfer some songs. I think it was more than a 24-hour period before he had everything working.

My Creative MuVo is literally a flash drive that I could drag and drop files onto. The music player app was smart enough to work out folder structures and not to trust the file system state as it last remembered it. My Nokia N95 8GB thankfully supports both methods.

Last time I checked, most users know how to drag and drop a file, and while it might not be immediately obvious that your application can support such advanced functionality, it is nothing that good documentation can't fix. If you have to spend lots of money writing an application, try making it one that wraps a GUI around the cut and paste process. You will save money if nothing else.

Anyway, I have just finished copying some music onto my phone. Time to go to sleep with some soft tunes ghosting through my headphones.

Wednesday, May 21, 2008

Obfuscating Inflation

For the past few days I have been pondering whether or not the change in net weight of a product is factored in when calculating the consumer price index (CPI). This relates back to a rant I had earlier in the year about the subtle 1g decrease in each packet of chips within a box of chips. Across 15 packets, that is a 15g decrease, or a 5% saving made per box.

So I asked a friend who works at the Australian Bureau of Statistics (ABS) whether or not they factor this in. She didn't know, but she sent me a link to the CPI document. It's quite an enthralling read.

The important bit is this paragraph:

Some changes are relatively easy to deal with while others prove more difficult if not intractable. A marginal change in say the weight of the can of tomato soup from 440gms to 400gms can be handled relatively easily by computing the quality adjusted price by reference to the price per gram. If the list or observed price is unchanged, the quality adjusted price will record an increase of 440/400 or 10%. Quality changes due to either a change in brand or the ingredients pose more difficult measurement problems for which we generally have no ready solution and are forced to treat the change as if it were a change in sample. Some item categories are particularly prone to a high rate of turnover in the specific brands or varieties available, and we are constantly adjusting our samples, again ensuring sample changes are introduced in such a way that the index reflects only pure price change and not differences in the cost of the old and new samples - note that this can be considered as a guiding principle in calculating the CPI.

So, yes, they do factor it in, and they consider many more things than I had. Jolly good!


Some enthralling reads:
Note: replace enthralling with gruelling.

Monday, May 19, 2008

Winamp and FLV

Even though Winamp comes with a plug-in to support FLV (Flash Video) it doesn't work. AVI playback didn't seem to work out of the box either.

There is a fairly simple solution that involves installing a couple of little apps and turning off Winamp's default plug-in.

The link is here (on a winamp forum)

Friday, May 16, 2008

Late Copy Instantiation - Addendum

Regarding my post the other day about delaying an object copy until required: a friend asked me to show my working. So here is the "proof". The scenarios are somewhat contrived, as I didn't spend much effort initially because I felt the tip was self-explanatory.

Overview
The basic setup for the test is a simple function that either takes a by-value object, to force the copy, or a const reference, as the most efficient non-pointer implementation. The object has a method called isValid which returns true if one of its attributes is greater than zero. This attribute is defined in the constructor and therefore remains constant over the life of the object.

To give the object something to do whilst in the function, there is a second method called update. This increments a different attribute by one. It also enforces the requirement that a non-const object exist at some point.
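A minimal sketch of that setup, with a cut-down stand-in object (ByValue and ByConstRef are names of my own invention; the real test objects are shown in the screenshots further down):

```cpp
// Minimal stand-in for the test object: isValid() checks an attribute set in
// the constructor; update() mutates a second attribute.
struct SimpleObject {
    unsigned int validity;  // fixed at construction
    unsigned int counter;   // mutated by update()
    explicit SimpleObject(unsigned int v) : validity(v), counter(0) {}
    bool isValid() const { return validity > 0; }
    void update() { ++counter; }
};

// By-value version: the copy happens whether or not we exit early.
unsigned int ByValue(SimpleObject obj) {
    if (!obj.isValid()) return 0;
    obj.update();
    return obj.counter;
}

// Const-reference version: copy only once we know we need a mutable object.
unsigned int ByConstRef(const SimpleObject& obj) {
    if (!obj.isValid()) return 0;  // exit early: no copy made
    SimpleObject local(obj);       // late copy, only on the slow path
    local.update();
    return local.counter;
}
```

ByConstRef only pays for a copy on the path where update must run; on the early-exit path no copy is ever made.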

Objects
There are three types of objects: a Simple Object, a String Object and a Complex Object. Each one has different members to make the copy constructor increasingly complex.

The Simple Object has eight 32-bit unsigned int attributes and that is all. The String Object has the same eight attributes as the Simple Object plus five empty STL string objects. Finally, the Complex Object has the same attributes as the String Object as well as three dynamic arrays, each allocated to 100 bytes in length on construction, and three STL vectors of differing types. The Complex Object has a custom copy constructor to copy the dynamic byte arrays.

Simple (click to enlarge)

String (click to enlarge)

Complex (click to enlarge)


Tests
There were two basic tests: always exit early and never exit early. There were also two builds: Debug mode and a Release build with "full optimisation" that didn't favour speed or size. Both were compiled using the Microsoft Visual Studio 2005 compiler.

These are the release-full compiler switches:
/Ox /GL /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /FD /EHsc /MD /Fo"Release-Full\\" /Fd"Release-Full\vc80.pdb" /W3 /nologo /c /Wp64 /Zi /TP /errorReport:prompt

Each test was placed inside a loop and executed 50,000,000 times so that the timing code would return values greater than zero. I used timing code that had a resolution of one second; any finer granularity wouldn't have really made a difference.

test code (click to enlarge)


Hardware/Software Configuration
The computer details are:
  • Microsoft Windows XP Professional, Version 2002, SP2
  • AMD Athlon 64bit X2 Dual Core 4200+ (2.21GHz)
  • 2GB 800MHz DDR3 RAM

The machine had a fair few other applications open at the same time, but none had focus aside from the executing test. Naturally the other apps were still waking and sleeping from time to time, and while they may impact the overall run time, they won't impact the outcome of the test.

Results
Here are the results. Numbers indicate elapsed seconds; the finest granularity I used was a second. The zero results are, more than likely, caused by the optimiser realising that the isValid method always returns the same value and removing that code from the equation entirely. In the exit-early scenarios this collapses the function to nothing.

The simple object with always copy ends up being "zero time" I believe because the compiler knew in advance the number of iterations and the functionality in the update was trivial (integer increment). This means it could calculate an end result without running the loop at all.
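For anyone repeating the experiment, a common trick (which I didn't use here) to stop the optimiser proving trivial work dead and deleting it is to route the result through a volatile sink:

```cpp
#include <cassert>

// Not part of the original tests: the volatile write cannot be proven
// dead, so the optimiser must keep the loop body that feeds it.
volatile unsigned int g_sink = 0;

void updateCounted() {
    static unsigned int counter = 0;
    g_sink = ++counter;  // this store must actually be performed
}
```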

Irrespective of the poor test setup it is blindingly obvious that the const-ref scenario wins as soon as the object being copied becomes non-trivial.

results (click to view same image in own window...)


Conclusion

There you have it: const reference parameters with late copying are faster than pass-by-value parameters for anything but simple data objects. They are marginally slower when the late copy does need to occur, but that delay is the time it takes to make a four-byte copy, which can be offset by using the const-reference object for as long as possible, enabling the compiler to make better optimisations. Const objects are easier to optimise than non-const.

If my memory of Java serves, this applies to Java as well. I don't know if it applies to C#; I suspect C# does some of this in the background.

Thursday, May 15, 2008

The value of hyphens

It is the contract renewal time of the year for me. I have a little tip for you. If you intend to extend your contract, make sure you use words like extend or renew. While wording up an email regarding my conditions of an extension I found I was about to resign... or was it re-sign?


Wednesday, May 14, 2008

Coding Tip #3 - Late Call by Value Instantiation (C++)

When you have a function that takes anything but a simple data type as a parameter and that parameter is being passed in by value, a copy of the object is created. This may be exactly what you wanted. It is standard behaviour. For example, you probably want a copy of your object when putting the object into a collection.

Figure 1. pass by value

If your method has guard clauses to ensure only valid objects are added to the collection then there is a chance your code is being wasteful. The early exit that occurs when a guard clause fires means the original object copy was not required.

You can improve this by using a const pass-by-reference. You then use the const reference to evaluate your guard clauses. If you exit early, all you have spent is 4 bytes on the stack (32-bit, of course) and you save the time it takes to copy.

Figure 2. pass by const-reference with late copy

If your code then needs its own copy of the data, create it as a local variable using the const reference as parameter for the copy constructor.
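Since Blogger forces me to post images of code, here is roughly what the two figures compare (Record and the store are placeholders):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Placeholder object and store to illustrate the two calling styles.
struct Record {
    std::string name;
    bool isValid() const { return !name.empty(); }
};

std::vector<Record> g_store;

// Figure 1 style: 'r' is copied on every call, even when the guard
// clause rejects it and the copy was never needed.
bool addByValue(Record r) {
    if (!r.isValid()) return false;
    g_store.push_back(r);
    return true;
}

// Figure 2 style: only a reference crosses the call; the copy happens
// late, inside push_back, after the guard clause has passed.
bool addByConstRef(const Record& r) {
    if (!r.isValid()) return false;
    g_store.push_back(r);
    return true;
}
```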

The benefits are directly related to the following:
  • how many chances for early exit exist
  • the size of the object being copied
  • whether a deep or shallow copy is occurring
  • how long it takes to copy
  • the frequency of early exit calls in relation to copy calls

Tuesday, May 13, 2008

Coding Tip #2 - First Run Tracing

I have a specific method for testing new code that can't be tested by traditional means. By traditional I mean unit tests with mock objects. By code that can't be easily tested I am usually referring to graphics code, where a successful implementation is a subjective one. Or where you are directly allocating memory on a GPU and aren't privy to all the gooey details.

Now, I use the following method for all code I write, even when I do write unit tests. The first thing I do is step through the code, line by line, mentally desk-checking memory references and values, making sure it all works as I expected. I don't have the statistics to back it up, but my feeling is that it gives you a good indication of whether the logic you have implemented makes sense while it is running, not just while you are coding it (it just about always makes sense while you are coding it). It allows you to identify failures as they occur rather than waiting for the unit tests to complete. You also get the benefit of analysing live data rather than static lines of code.

When I get an error, I drop a break point in, stop debugging and fix it. After the rebuild I can let it run to the break point because I have confidence up to that line. I continue to go through my code until it works. Depending on the complexity of the code I may trace through some alternate paths. In any case, I then go and write any additional unit tests I realised I needed whilst tracing through, and then let my code execute over all the unit tests.

This may seem a little bit OTT, but the next time you implement polymorphism with virtual inheritance you may thank me. On more than one occasion I have uttered the words: "wait a minute, how did my code get over here?".

Monday, May 12, 2008

Life Tip #1 - Birthday Candles

When you are placing birthday candles on a cake, make sure that you put them the right way up. The base of a candle, once lit, burns faster and more brightly than the wick end of the candle.

This achieves a number of things:

  • The candles burn much faster - you need to belt out "Happy Birthday" just a bit quicker
  • The cake looks a lot more exciting covered in flame - always good unless it's ice cream cake
  • The cake gets covered in wax - never good

I wish I had photos from the birthday cake I witnessed on the weekend. Classic.

Additional:
I looked up Happy Birthday on Wikipedia and did not expect the copyright issues around the song to be so ridiculous. Still the words come out of copyright at the end of this year. Woo!

Coding Tip #1 - Object Identifiers

When designing an interface that allows callers to request an object from some data storage, ensure that your interface accepts unsigned integers rather than signed ones. If you use a signed integer then you need a guard clause to reject negative parameters. However, if you ensure your identifier type accurately models the data it represents, then your guard clause is implicitly implemented by your parameter type. This means you can drop the explicit guard clause from your code. Cleaner, faster, better code. Everyone is a winner.
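A quick sketch of what I mean (ObjectStore and its contents are made up for illustration):

```cpp
#include <cassert>
#include <map>
#include <string>

// Made-up example: because find() takes an unsigned int, "reject negative
// identifiers" is enforced by the type system, not by a guard clause.
class ObjectStore {
public:
    void add(unsigned int id, const std::string& value) { objects_[id] = value; }

    // No guard clause needed: a negative id cannot be expressed here.
    const std::string* find(unsigned int id) const {
        std::map<unsigned int, std::string>::const_iterator it = objects_.find(id);
        return it == objects_.end() ? 0 : &it->second;
    }

private:
    std::map<unsigned int, std::string> objects_;
};
```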


Apologies for the image of code. Google Blogger doesn't really allow anything except poorly formatted text.


Note:
Obviously this does not hold true if your information model allows negative identifiers.

Testing Tip #1 - Boundary testing business objects

When testing business objects, for example an object that represents a Person or a Customer, you often have a minimum set of data requirements (i.e. a customer must have a last name) as well as a set of optional data attributes (first name, date of birth, gender, etc.).

To provide full test coverage you would mathematically need 2^(N-M) combinations, where N is the number of attributes and M is the number of mandatory attributes: one test for every subset of the optional attributes. I don't think I have ever had enough time for that much incredibly boring testing, nor would I do it if I had.

In my experience the best pass through is to look at the two bounds. The minimum set where you specify the object with the bare minimum of attributes, the mandatory attributes, and each of these attributes has as little data as possible. In our customer example this would be a last name that is one single character in length.

The next test case includes every single attribute populated to its maximum extent. So if you have a twenty-character first name, you specify all twenty characters. I document this as my all set.

These two test cases have been enough for every system I've tested. Using them you can prove every attribute being supplied and every attribute being absent, or supplied to their bare minimum allowed.
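Using the customer example, the two cases look something like this (the Customer type and its rules are invented for illustration):

```cpp
#include <cassert>
#include <string>

// Invented business object: lastName is mandatory, firstName is optional
// with a maximum of twenty characters.
struct Customer {
    std::string firstName;
    std::string lastName;

    bool isValid() const {
        return !lastName.empty() && firstName.size() <= 20;
    }
};
```

The minimum set supplies only a one-character last name; the all set fills every attribute to its limit.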


I don't include out-of-bounds testing here. That is, I don't check for 21-character first names, nor do I test zero-length last names. That specific boundary testing I document as separate tests against the user interface. This achieves two things. Firstly, my test cases are granular and relate to a specific business requirement, which means defects raised are very specific and generally easier to track down. Secondly, it cuts the amount of testing I have to do down to what is most likely to cause defects.

Additional:
If you have more complex data types, for instance a business rule stating that attribute-x is only provided when attribute-z is set, then that specific combination is already covered by your all set (attribute-z is set) and your minimum set (attribute-z is not set). I would additionally include test cases to ensure that any user interface validation of these attributes occurs.

Sunday, May 11, 2008

Automated deployment - Why it is a good idea

Grant raised the concept of automatic deployment in a comment on a post I made the other day about controlling test environments. I've been giving it a bit of thought of late and I've come up with a quick test you can do to see if you should automate your deployment process. As you go through the test you will also see various ways automated deployment can improve your development practices.

Once Upon a Time...
Let me start with a tale of woe.


The past two weeks I've been working with a developer who has been lumped with some less than satisfactory code. Also lacking is a suitable development environment for him to work within. In an effort to get something suitable into test I have been working with him to solve the various problems. This past week has seen about 15-25 deployments into test (don't get me started on unit testing). When he is not around, another person from his team will do the work. Every one of them at some point failed to deploy correctly.

Why? Firstly, their application is made up of a number of disparate components that are all deployed individually. They don't have version numbers to identify the latest build. They are deploying to a cluster. They are rushing to get late code into test. The testers don't have control of the test environment (yet).


Consider this
If your deployment is as simple as a point-and-click install and that is it, your failure points are: installing the right version and actually doing it (don't laugh). Two failure points; ignore the triviality for now. If you have a configuration file to manually adjust, there is another point. Add them up, one for each step you have to do in order. If you have deployment instructions, add one more, as those have to be followed; if you don't have deployment documentation, add 1 million points. If you have to rebuild a fresh machine each time, add 1 for each step. If you are smart and have a prepared image, add a single point if you need to deploy it each time, zero points if you don't.

I think you are starting to get what gives you points and what reduces your score. Stay with me as we go back to our example:

I don't know the full details of the application, but I know there are about five components, each of which needs to be deployed. So 5 x 3 (version, doing it, following instructions). Three of the installed components need to be turned on each time, so that is 3 more points. 18 is our score so far.

How many machines do you have to deploy to? One host? Score one for the number of targets. A clustered array of four machines? Score 4. Pretty simple scoring system. Write this down as your targets score.

For our example we have two hosts, load balanced, so we score 2.

How frequently are you going to deploy your code? Once per iteration? How many iterations in the project, 10? Score one point each time you deploy per iteration, times iterations. Record this as frequency.

How many test environments are there? Dev, Dev-Int, Test, Non-functional, Release Candidate Integration, Pre-Production, Production? That's seven from what is, in my opinion, a pretty regular configuration: 7 points. Once again, in my example there is just one: test. Add it up for your environment score.


Failure Points
Ok, so our formula for calculating the failure points of a deployment:

Failure Points = deployment-steps X frequency X environment X targets

Example = 18 x 2 x 15 x 1 = 540 points of failure
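As a sanity check, the formula in code (argument names are my own):

```cpp
#include <cassert>

// The scoring formula from above: steps x targets x frequency x environments.
long failurePoints(long deploymentSteps, long targets,
                   long frequency, long environments) {
    return deploymentSteps * targets * frequency * environments;
}
```

Plugging in the example: 18 steps, 2 hosts, 15 deployments, 1 environment gives 540.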

Not bad. You may argue that once you have tested your deployment you shouldn't have any failures after that; it is tested, after all. That is a good point, but remember we are not even talking about deployment testing here, just vanilla dropping of an application onto a box.

We (this is a team project after all; shared wins, shared losses) had 540 chances in a one-week period to stuff up an aspect of the deployment process. Aside from the code failures, we had probably 10 deployment failures, including not installing the code onto both machines in the cluster. Those particular defects are about as much fun to detect as a race condition.

Automated Deployment
How much you automate will directly impact the chances for deployment failure. Our two constants for the act of deployment were: actually doing it and installing the correct version.

Performing the work is now done by the auto-deployer. You still need to click the go button for certain environments. Automatic deployment implies that the latest valid build is used, so that problem is solved.

Individual deployment steps should be wrapped up into your installer. I mean every step: installing software, opening ports on routers, configuration files. If you do some research you will find somebody has already automated it for you, or there is an API for it. If by chance that isn't done, do it yourself and then share the love.

Next up is the deployment to each machine in the cluster. Once again, this should be handled by your autodeployer, so that one is fixed; score a zero.

After that was the total number of deployments. That shouldn't change, as long as your autodeployer is operational and you click the go button as required. You should be down to a score of 5 (once for each environment from test onwards).

With our example we should go from 540 failure points to 5, one for each deployment that has occurred over the past week, triggered by the test team as required. There are no other manual steps.

Bonus Feature
If the latest build is unusable for testing, allow the testers to flag it as such (Build Quality) and have the autodeployer ignore that build for future deployments.


Conclusion
You may have realised by now that I have been a little bit over the top with my example. Furthermore, you don't deploy to every environment every iteration. You and I know this, but it won't change your score that much. You may also think of more places where the scoring system should change. Post them as a comment and I'll put together a little spreadsheet you can use.

I am not going to tell you how to automate your deployment process. I've got an idea on one way to do it and I'll post about it when I've done it. In the meantime, here are a couple of other ideas to get you started (thanks to Grant for these):

  • Use PsExec
  • Use PuTTY if you are not on a Windows box
  • Via TFS Build here and here

Before I go, some more juicy content: your autodeployer should not be trusted until you have tested it through all environments, including deployment into production.

Wednesday, May 7, 2008

Dalmore - 12yo

For the past year and a half to two years I have been a part of the Australian Single Malt Whisky Club. Each month they send me a bottle of single malt whisky that I would otherwise be unable to purchase from the local alcohol merchants. It's a passion that started after a fantastic time I had exploring Scotland's whisky in 2004.

Today I was delivered a fancy bottle of Dalmore 12yo. I'll quote the site as they don't have a permalink.

Dalmore is literally "the big meadowland". The distillery is situated north of the traditional Highlands, drawing its water from the Alness River, near the city of Inverness.

Colour: Rich, deep, golden mahogany.

Nose: Intense and firm. Well structured with silky smooth malty tones - a hint of Oloroso sherry lingers in the background. It shows great finesse, extolling fragrances of orange, marmalade and spiced notes.

Taste: Good attack on the mouth, more elegance than muscle. The aged Oloroso butts smooth its rich, fleshy body with great harmony. Almost a concentrated citric mouth-feel captivates and tantalises the middle part of your tongue. An aftertaste of great abundance rewards the palate. A Highland malt of great distinction.

Not sure if all of that actually applies to the whisky. My nose and palate are not as honed as whoever wrote that. I find whisky descriptions a lot like real estate advertisements: you never really know until you give it a good look yourself, and you learn to be wary of "renovator's delights". This is especially relevant to whisky.

So I have my dram and shall give it a go. This is live whisky-blogging. The interwub is a powerful beast often abused. :)

Colour: all correct, it has a fantastic colour to it. Nose: not sure. It's a little blocked and my ability to detect faint scents has never been tip-top. Sorry to disappoint you all, but from what I can tell there is no marmalade.

The taste is delightful, strong yet smooth and the faint citrus on the tip of your tongue is present. I rather like this.

Onto the shelf it goes for an occasion that warrants it, like a guest. So if you ever come around for dinner, ask for a dram of the Dalmore 12yo. To (probably) misquote Iain Banks in his book Raw Spirit:

The perfect size for a dram is one that pleases the guest and the host.

Monday, May 5, 2008

NIN - This one's on him.

I've been a fan of Nine Inch Nails since the mid 90's, somewhere around the time Broken came out, but it was Pretty Hate Machine that was the album I heard and fell in love with.

Over the past six months Trent has become more and more aware of, and involved with, his significant and loyal fan base, as well as fully embracing the concept of Creative Commons. It's been pretty special to watch him go from four full, very good albums (Pretty Hate Machine, The Downward Spiral, The Fragile, With Teeth) over a period of sixteen-odd years to three albums (Year Zero, Ghosts I-IV and The Slip) over the next two.

This is pretty awesome from a fan perspective. We get lots of new content and each album is increasingly free or of low cost. Radiohead were the first to trial this process with their album In Rainbows. After that, Trent marketed the Saul Williams album The Inevitable Rise and Liberation of Niggy Tardust as free, or $5 if you were so inclined. I didn't pay the $5, as Saul Williams isn't my style of music. I still downloaded it and had a listen. I liked a couple of songs a lot, but the rest were, as I expected, not what I'm interested in.

Now you may think: $5, that is pretty cheap, you should fork over the cash. You still have the album, don't you? My argument is that $5 is cheap, but it's not a micro-payment. It still has a value, and while Saul Williams has not acquired any money from me this time, the next time I will be interested again, because I know there is a chance I may like his album. He has something that usually costs a lot of money: exposure.

The exposure aspect is important. Trent and Nine Inch Nails have exposure, so when Ghosts I-IV came out I first ordered the free version because I was a little short of cash. Some while after, I purchased the version that was right for me. Turns out it's the glossy double-CD pack for $75, $70 more than the cheapest version. Why? Firstly, I like buying CDs. Sure, it's wasteful, but you get more than the music: something tangible to hold and a series of images that are somehow more real than a PDF document of the same thing. There is effort in a NIN package that correlates to the effort that goes into each track. Secondly, because I roughly know where the money is going: to the artist, for the work he has put into the album.

Trent made a lot of money off Ghosts and it is a decent album; not his best, not his worst either. A worthy item in the NIN catalogue.

What is the point of this post about free music? Nine Inch Nails released an album today called "The Slip". As he says in his news post: this one is on him. You can't dislike an artist who gives as much as he gets.

Whether or not the flurry of productivity coming from Trent will have an impact on the quality of his work is one that can only be answered in hindsight. As a fan I'm completely biased and won't be ashamed of that.

For those who like Nine Inch Nails. Enjoy!


note: I call him Trent because I got tired of writing Trent Reznor very quickly.

Sunday, May 4, 2008

Lol

I saw this ad on Facebook. I wonder if the extra income I could be earning is related to editing advertising material for grammatical errors.

Tuesday, April 29, 2008

Controlling Testing Environments

Why Should You Care?
Testing environments are fundamental to successful testing. The test environment is where testing occurs and without a controlled, regulated, stable testing environment you are undermining your entire testing foundation. Scary stuff!

What do I mean by controlling a testing environment? I mean ensuring:
  • that you know that each environment has the correct code,
  • that the various integrating applications have compatible versions,
  • that the correct hardware and software configuration exists,
  • that the data is legitimate and in the right quantities,
  • that access to the environment is restricted, and
  • that security policies mimic production
All of above items combine to make a stable, controlled, test environment.

Without proper management of testing environments whenever a defect is identified you have to:
  1. identify the software build,
  2. determine how long that build has been there,
  3. determine if there is a later build available,
  4. ensure that the data is valid,
  5. review the hardware to ensure it matches production, and
  6. review the additional software components to ensure they match production.

Beyond environmental stability, there are particular test scenarios that you can now perform. You can engage in deployment testing: every release, the software package is released into production, but how often is this software deployment process tested?

Other benefits: when you receive a "bad" build you can uninstall it and reinstall the previous one until it gets fixed. Or you can get two competing builds from the development team and compare them for performance. I am doing this one next week.


So how do we go about doing this?
The first step is to identify how many test environments you have / need. In summary, I like to see at least the following:
  • Development - one per developer, usually the development box but ideally should be a VM or similar that matches production architecture/operating system/software configuration. Developers may call it a build box, but they do unit testing here, so it is a test environment.
  • Development integration - one per project/release. Here the development team works on integrating their individual components together.
  • Test - where the brunt of the tester's work is done. There should be a dedicated environment for each project.
The following environments can usually be shared between project teams depending on the number and types of projects being developed concurrently.
  • User acceptance testing - can be done in other environments if the resources are not available. Ideally should be a dedicated environment that looks like prod + all code between now and project release. This is an optional environment in my opinion as there are lots of good places to do UAT and it really depends on the maturity of your project and your organisation's available infrastructure.
  • Non-functional - performance, stress, load, robustness - should be identical infrastructure to production, the data requirements can exceed production quantities but must match it in authenticity.
More environments are possible. I didn't cover integration or release candidate environments (you may have duplicate environments or subsets for prod-1, prod and prod+1) and it really depends on the number of software products being developed concurrently. I won't be discussing the logistics of establishing test environments here nor how to acquire them cheaply.

To actually gain control, first talk to the development team about your requirements for a stable testing environment. Explain your reasons and get their support. The next step is not always necessary but can give you good peace of mind: remove developer access to the test environments. I am talking about everywhere: web servers, databases, terminal services, virtual machines. If it's a part of the testing environment, they should stay out.

It isn't because you don't trust them; after deployment you probably shouldn't be on those machines either. Sure, there are some testing scenarios where getting into the nitty gritty is required, but not always, and certainly not when testing from the user's perspective. The bottom line is that the fewer people who have access to these machines, the smaller the chance of accidental environmental compromise.


So what aspects do we control?
Primarily we need to control the entry and exit criteria for each environment. The first step is the development environment. Entry is entirely up to the developer and exit is achieved when the unit tests pass. As the next step is the development integration environment, the development lead should control code entry.

Entry into the test environment: regardless of the development methodology, the delivery to test should be scheduled. Development completes a build that delivers "N chunks" of functionality, the unit tests have passed, and they are good to go.

Developers should then prepare a deployment package (like they will for the eventual production release) and place it in a shared location that the deployment testers can access. It is now up to the deployment testers to deploy the code at the request of the project testing team (these are quite often the same team). Once a build has been deployed, some build verification tests are executed (preferably automated) and the testers can continue their work.

Moving from test into any environment afterwards (release candidate integration, pre-production, etc.) depends on the organisation, but usually requires the following: testing has been completed, defects resolved, user documentation produced and, most importantly, user sign-off acquired.

The final environments (pre-production, etc) are usually (should be) managed by a release manager who controls the entry and exit gates from each environment after test and on into production. I won't cover these here.


Evidence or it never happened!
Example A: About a month ago we had a problem where one of our test environments wasn't working as expected. It took the developer over a week to find the problem. Turns out another developer had promoted some code without letting anyone else know. The code didn't work and he left it there.

This could have been avoided if the developer didn't have access to deploy in our environment. Unfortunately he does, but it is something that we are working towards rectifying.

Example B: I once worked on a project that had five development teams. Two database groups and three code cutters. Had they been able to deploy when they wanted, our test environment would have been useless. None of the teams were ever ready at the same time and it would have meant we would have had code without appropriate database support. Components that were meant to integrate but did not match because the latest build of application x wasn't ready yet.

By waiting until all builds were ready and running through the deployment ourselves we ensured that our test environment was stable and had the same level of development progression all the way through.


Too much information, summarise before I stop caring!
  1. Controlling Test Environments = Good
  2. Focus on developing entry and exit criteria
  3. Build up to production-like environments - each successive environment should be closer and closer to production.
  4. Evolve towards the goal of environmental control rather than a big bang approach. Some transitions will take longer than others (i.e. getting the right hardware) so pick a level of control for each release, get everyone involved and implement it.
  5. Get team buy in (developers, testers) - education is the key
  6. Don't make the entry into the test environment documentation heavy.

It all looks too easy, how could this go wrong?
Get development buy-in. This is important: you don't want to alienate the development team. Not all developers or development teams are inconsiderate, nor do they have ulterior motives. Usually it's a simple lack of awareness, and discussing with them the direction you want to take with the testing environments will achieve two things. Firstly, they gain greater visibility into the testing arena, and secondly, they often realise that they can help improve quality by doing less. Who doesn't like doing that?


Don't make it complicated: The goal of this is to achieve a high quality test environment to facilitate high quality testing. Don't produce a set of forms and a series of hoops that you need to force various developers and teams to fill out whilst jumping through. They won't like it and they probably won't like you.

When I first tried locking down an environment, I asked the developers to fill out a handover to test document that listed the build, implemented task items, resolved defects and similar items. I had buy in and for the first few cycles it worked ok. It wasn't great though. All I was doing was accumulating bits of paper and wasting their time by making them fill it out.

All I do these days is discuss with the developers the reasons why the environment needs to be locked down and to let me know when a new build is ready. I'm usually involved in iteration planning meetings so I know what is coming anyway. All that waffle they had to fill out is automatically generated from defect management, task management and source control software.

My testing environments are generally stable; developers are happy to hand me deployment packages and consider deployment defects just as important as normal defects. After all, deployment is the first chance a piece of software has to fail in production. It is also the first place users will see your application.

It takes time to move towards a controlled environment and as you read in my examples, my employer is not there yet either, but we are getting closer.


One other note: you may not have the ability (whether technical or organisational) to perform deployment testing. See if you can organise to sit with the technical team that does deployments for you.

Wednesday, April 23, 2008

If the TV fits...

Every Wednesday I head over to my mum's place for dinner. Yesterday she gave me a TV. They had upgraded their two primary televisions to 32 inch LCD screens and didn't have room for the old ones.

It barely fits into my display unit and the image quality is much better than my old crappy tele. All I need to do now is find time to watch TV. I can only find enough time for 3-5 hours a week, which is usually spent watching my footy teams.

Blogger's Remorse

I really need to hold off posting at night. Even though I spend a few days to a few weeks putting together my posts, have them proof-read at least once and use a spellchecker, the tone of my posts is never the same after the fact.

I had to fix a few up from last night including renaming one post.

Blogging, like software development, is a process of continual learning and evolution for me. I'm not where I want to be regarding style, tone and spelling mistakes, but I am aware of where I am now and where I want to be. You can't get somewhere if you don't know where you are now.


I'll always leave my old posts up here because when I do achieve a writing style that I am happy with I can see how far I've come. I am starting to do this with defects as well. It would be nice to know how many defects I've caused in every application I've written since I first coded some BASIC on a Commodore 64. The type, cause, severity, etc., would also be useful to flesh out the reporting. This way I could tell if my coding practices are improving.

I know TFS is starting to include this type of reporting through its data mining capabilities, and testing tools like Quality Center also provide lots of reporting. If I can get an open-source setup working that I am happy with I'll post it here.

Without getting too much further off topic, let me wrap up. Blogging is an evolutionary process where you get better by working on your craft. Just like software development. This is why my post here about analysing how you work to improve processes, and therefore improve quality, is just as relevant to blogging as it is to software development. Within a few weeks I should know whether simply delaying a post until the next day, when I can read it again, will reduce the number of grammatical offences as well as improve the overall tone of each post.


p.s. Yes, I love statistics. I once had a spreadsheet that tracked the hours I slept, how I felt (subjectively of course), hours worked on various projects, productivity on each project and the number of cups of coffee/water/tea I consumed per day, in an effort to identify any correlations between them. Turns out sleep was the biggest factor in how I felt, which directly translated into productivity.
p.p.s. I'm not actually that remorseful over the posts; I just wish I had the ability to write with clarity on the day, not the day after. Also, RSS feeds don't reflect changes.

Bug: Facebook Advertising Application

Here is step two of registering for an advertisement on Facebook:



and here is step three after a few clicks (select all users with relationship status):


What is the difference, you may ask? In the second panel the user has selected each relationship status and, according to the "people" count at the top of the page, the site is losing an advertising reach of 1 million users.

EDIT: I changed this post as I didn't like the tone of it.

Usability: What is the ideal date control?

Mace is not the right answer, effective though it may be. As exciting as date controls are, they are pretty easy to get wrong. Date controls have the unfortunate problem of trying to represent a large period of time whilst providing a good level of granularity over that time.

Users want to be able to specify a date, and if the date control resembles something they are familiar with (i.e. a calendar) then even better. From what I’ve experienced, being able to supply a date as quickly as possible doesn’t come into the equation with most non-technical users. That being said, users do not want to waste time trying to operate an unwieldy date control. They want to supply a value in as few steps as they can conceive in their minds.

Developers and experienced computer operators (data entry personnel, power users) tend to want to do things as quickly as possible. This is at odds with user experience. This is also me. I can type over 120 words per minute. Most users can't. Remember that.

Application Platform

Where the application is running has a big impact on how the user is going to interact with the date control. Console applications obviously are going to use a keyboard mechanism. GUI Applications are a mixture of keyboard input and mouse control depending on how many fields need to be supplied. Web applications are mouse driven, and mobile/pda applications are pen driven (essentially mouse), arrow-navigation driven or touch pad.

The type of mouse also impacts the usability of a date control. Touch pads and nub mice provide a different user experience than the standard hand-held mouse.

Date Selection
The types of dates the user selects are also important. How far in the past will the user be selecting a date? If it is a website where someone must be 18 years or older, or a student records application that requires a date of birth, the date control is going to require the user to select a year at least 18 years back, perhaps as far back as 30 (kidding, I mean 40… no, 90).

Are dates in the past even allowed? Travel booking sites would rather you booked in the future. Does the user need to select a second date based on the first? Can the user supply a partial date? Is the user required to include a range of dates? Date range selection is non-trivial to implement whilst keeping it simple, intuitive and aesthetic.
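To make the first of those constraints concrete, here is a minimal sketch of validating a date of birth against an age window (for an 18+ site, say). The function names are my own and `today` is passed in rather than looked up, purely so the example is testable:

```python
from datetime import date

def age_on(dob: date, today: date) -> int:
    """Age in whole years as of `today`, accounting for whether the
    birthday has already passed this year."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def dob_allowed(dob: date, today: date, min_age: int = 18, max_age: int = 120) -> bool:
    """Check a supplied date of birth against an age window."""
    return min_age <= age_on(dob, today) <= max_age
```

Whatever control you choose, the same server-side check has to run anyway, so it is worth deciding the allowed range before picking the widget.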


The Options
There are many examples of date controls out there, so I’ll try to provide a real live example of each one.

Drop Down City:

To be honest I had a worse example available, but thankfully I can't seem to find it on Google (if I can find it again I will post it, as it was atrocious. Note to self: bookmark epic failures under "hindenburg"). It's one of the few times I have been glad not to find an example. A friend told me Immigration Australia had a drop down city control but I couldn't find it either. FYI, some aspects of their site are rubbish and I'll post about that as a separate topic in a day or two once I collect my thoughts.


Above, the WordPress one isn't great. Personally I don't like it. Too many controls for something that could be done visually. Furthermore, the month selector doesn't always respond to keyboard input, forcing me into mouse input. Some may claim that this is a browser or OS issue. News just in: the user does not care what you think, only what they experience.

Anyway, back to my original impressions: for starters, lots of drop downs look hideous. Secondly, to select a specific date the mouse user has to click through a variety of drop down controls, while the keyboard user must still supply some other values with the mouse. A date control should support either all keyboard or all mouse, and preferably both.

Single Box Date Control
Examples: (seriously, I had several examples planned. Perhaps people learn. I know I do, so it's entirely plausible. When I find a relevant example again I will post it.) Anyway, you have all seen it: the 12-character input control with '/' for dividers!

This option is the equal-fastest date control for keyboard users and not usable at all for mouse users. However, if you have a number of text fields before this date control, then having a keyboard-only field is, I feel, OK. The user will already have two hands on the keyboard at this point and it is only a jump away to the date control.

If the user has to take their hands off the mouse just to use the date control then this option is not a good choice.

This option is not aesthetically pleasing unless there are similar looking input controls nearby.
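The keyboard-only nature of the single box also means validation does all the heavy lifting. A minimal sketch of what parsing such an entry might look like, assuming a dd/mm/yyyy format and Python's standard library (the function name is my own):

```python
from datetime import datetime, date
from typing import Optional

def parse_single_box(text: str) -> Optional[date]:
    """Parse a dd/mm/yyyy single-box entry.

    Returns None for anything invalid, including impossible
    dates like 31/02, so the form can flag the field."""
    try:
        return datetime.strptime(text.strip(), "%d/%m/%Y").date()
    except ValueError:
        return None
```

Note that `strptime` rejects impossible dates for free, which is exactly the behaviour you want from a free-text date field.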

Triple Box Date Control

The other fastest date control. Depending on the look and feel of the user interface this option can be aesthetically pleasing. It can also be ugly [here]. This date control is functionally similar to the single box control, except that completing one box should skip the user to the next field. If your users can specify partial dates then this option beats a single box hands down. It provides the easiest way of allowing the user to select which aspect of the date they wish to supply (the month, for instance) whilst still displaying that the other two fields are empty.

One caveat with the three box date control is that you will need to make it clear which box is for the month and which is for the day. US date controls tend to be month first while day first is common in countries like the UK and Australia.
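The partial-date advantage is easier to see in code. A sketch (my own hypothetical helper, not from any particular toolkit) that combines the three boxes while letting any of them stay empty:

```python
from typing import Optional, Tuple

def parse_triple_box(day: str, month: str, year: str) -> Tuple[Optional[int], Optional[int], Optional[int]]:
    """Combine three input boxes into a (day, month, year) tuple.

    An empty box becomes None, so the caller can distinguish a
    partial date like ??/06/2008 from a fully specified one.
    Out-of-range values raise ValueError for the form to report."""
    def to_int(text: str, lo: int, hi: int) -> Optional[int]:
        if not text.strip():
            return None
        value = int(text)
        if not lo <= value <= hi:
            raise ValueError(f"{value} outside {lo}-{hi}")
        return value
    return (to_int(day, 1, 31), to_int(month, 1, 12), to_int(year, 1900, 2100))
```

A single box cannot represent "June 2008, day unknown" without inventing a syntax; three boxes get it for free.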


Calendar
The most visually appealing date control, the calendar gives the user a date selection control akin to the calendar on their desk or on the wall in their garage. This is a good thing. Your average non-computer-literate user can work a calendar with ease. Unless implemented carefully, however, a calendar is a nightmare for selecting dates more than a few months or a few years away.

This website (Dynarch) has a calendar control on the right hand side. It works fairly well, though it's an odd concept to have even years on the right and odd years on the left. Personally I don't like it. To find 2012 I have to go to 2013 and then back a year. Too much work; I would rather scroll than have to click on one side of the dialogue and then click on the other side to fine-tune my search.

Still, kudos to them, because the calendar control is trying. My only other criticism of Dynarch is the word "Today" in the middle of the control. It implies that the currently selected date is today, like it does on my physical desktop calendar. It doesn't mean that; it means "go back to today". This threw me off. With anything there is a usability vs education trade-off; personally, I think that text should show a shorthand notation of today's date. Quick links should be included below the control for other concepts like "next week" and "next month", depending on your content (always, of course).


To make a calendar usable for larger date ranges the following features need to be supported:
  • Click on the month to activate a dropdown control to select a month. The month should open on June/July so that there is minimal distance to move to the desired month.
  • Click on the year should allow the user to control the year more easily. I've seen a couple of different options implemented here:
    • Up Arrow / Down Arrow – this is a bad idea. Your date control should already contain double arrows for going forward or back one year
    • Enter a year, this is useful if the year is a long way away from the current year
    • A year slide. This looks like a drop down menu with about 5-10 years visible above and below the current year. At the top and bottom are two buttons for scrolling. These should be activated on mouse-over or on click-and-hold.
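The data behind both of those features is trivial, which is part of why it is frustrating when calendars skip them. A sketch using Python's standard `calendar` module (the helper names are my own):

```python
import calendar
from typing import List

def year_slide(current_year: int, window: int = 7) -> List[int]:
    """Years to show in a 'year slide': `window` years either side
    of the current year, so any nearby year is one click away."""
    return list(range(current_year - window, current_year + window + 1))

def month_grid(year: int, month: int) -> List[List[int]]:
    """Week rows for drawing a calendar month, Monday first.
    Zeroes mark cells that fall outside the month."""
    return calendar.monthcalendar(year, month)
```

Rendering is the hard part; the model is a list of ints, so there is little excuse for a calendar that makes distant years unreachable.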

Other usability requirements that are a must:
  • Let the user know that clicking on the month lets them select a different month. The same applies to the year. Users rely on your visual feedback to let them know what is wrong and what is right within the confines of their education. Step outside those bounds and you lose. I've done it, you've done it. Don't be ashamed; recognise the mistake and move towards the user.

What isn’t good for calendars? Scroll bars are a bad idea. It is far too difficult to find a specific year using a scroll bar. On one application I tested, each scroll click moved 50 years.

Here is an example of someone that is pretty darn close: http://www.basicdatepicker.com/
Their date control only misses auto-scrolling on the year and forward one year / back one year options.

Some Other Thoughts
Do you need a date control? Seriously. For the beta registration page for No Horizons we used an age attribute. We don't really care about the specifics of someone's age as long as we can gather their age. This is very valid for websites where someone must be 18 years or over. Seriously, making someone work a series of dropdowns and input controls to state they're 18 is ridiculous. A simple "I am N years of age", where N is a supplied value, is just as effective. The lesson to memorise: understand your demographics.

Much like piracy "cautionary messages", the only people you hurt are the people who do the right thing. Pirates leave such messages off their duplicated films, and underage people who enter your site are simply lying about their age anyway.

Conclusion
So what is the best date control? It really depends on the application environment and whether dates are optional, but generally you should try to provide both input boxes and a calendar. A set of input boxes suits keyboard users; this is especially relevant if you have a number of other input fields the user can supply. For visual applications, and especially web or mobile applications, you should also provide calendar functionality.

Bonus Content: Date Ranges

Asking the user to specify date ranges is a non-trivial task. Often I am expected to supply a start and an end date. This is fine if I know the specific range I am searching. If I don’t, then it’s trial and error until I do. A better solution is to provide that functionality for users who do know the date range, as well as functionality in terms users are familiar with. "It was last week" or "I am pretty sure it was in January" are concepts the user understands. Hell, as a developer/tester they're concepts I think in.

Being able to search one to four weeks ago, and then in terms of months or years, is better. Show me all emails sent between June and August last year. That’s closer to how users think, and it’s a lot easier than "show me all emails from 01/06/2007 till 31/08/2007". It’s a subtle difference but one users prefer.

How would you implement this? You can allow partial dates in your input boxes for keyboard users. Secondly, you can let people double click on the month header to default to the first of that month and then close the window. Furthermore, quick options, like “last week”, “this week”, “this month” are handy shortcuts that make life just a little bit easier.
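As a sketch of how those quick options might resolve into concrete dates (the shortcut names and function are my own invention; weeks are assumed to run Monday to Sunday, and `today` is a parameter so the example is testable):

```python
from datetime import date, timedelta
from typing import Tuple

def resolve_range(shortcut: str, today: date) -> Tuple[date, date]:
    """Turn a quick option like 'last week' into an inclusive
    (start, end) date range the search backend can use."""
    monday = today - timedelta(days=today.weekday())  # Monday of this week
    if shortcut == "this week":
        return monday, monday + timedelta(days=6)
    if shortcut == "last week":
        return monday - timedelta(days=7), monday - timedelta(days=1)
    if shortcut == "this month":
        first = today.replace(day=1)
        # Jump safely into next month, then snap back to its first day.
        next_first = (first.replace(day=28) + timedelta(days=4)).replace(day=1)
        return first, next_first - timedelta(days=1)
    raise ValueError(f"unknown shortcut: {shortcut}")
```

The user clicks "last week"; the backend still gets its start and end dates, and nobody had to operate a date control twice.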