Saturday, March 29, 2008

Defect in Microsoft STL Queue Documentation

I've just spent the last four or five hours trying to work out why my asynchronous tasking code doesn't work.

After a while it turned out that the last element in my queue was being mangled when I was freeing the memory associated with the current work item. This could be one of two things:

  • I'm not popping the element off the back of the queue, or
  • Somehow I've mixed my pointers up while adding them to the queue, and more than one queued task is looking at the same memory. The code that adds elements to the queue runs on a different thread from the one that handles them.

I looked at the IntelliSense for the pop command on the std::queue. "erase element at end" is what it says. So I thought it must be something to do with thread-safety. I spent a bit more time writing down memory addresses to verify my code was logically correct. Nope, nothing wrong with what I'm doing. Just on the off chance, I opened up the STL queue code, had a look at the pop method, and this little gem is what I found:

void push(const value_type& _Val)
{ // insert element at beginning
    c.push_back(_Val);
}

void pop()
{ // erase element at end
    c.pop_front();
}

A couple of lines of incorrect commenting in a row. Easy enough mistake to make, but it probably shouldn't have passed peer review. Documentation is an invaluable tool, but only if it is correct. You can take this as a friendly reminder to make sure you include documentation reviews in your peer review process.

For reference, this is STL version v4.05:0009, which ships with Microsoft Visual Studio 2005.

Tuesday, March 18, 2008

Cold Beer

If you come home from work and want a cold beer but, like me, have none in the fridge, this little tip will get you going in just a few minutes.

  1. Take one beer
  2. Take a glass and rinse it inside and out with cold water (refrigerated, filtered water if you have it)
  3. Open freezer, pop both in.
  4. If you have "quick-freeze" turn it on
  5. Wait 10 minutes (freezer times will vary, but 10 should be plenty)
  6. Open freezer get chilled glass out (in my case ice had well and truly formed)
  7. Take beer out (it won't change temperature much but it's better than nothing)
  8. Take glass out
  9. Close freezer
  10. Pour beer into glass
  11. Drink that tasty beverage and enjoy the few minutes left of St. Patrick's Day

Notes:
  • I think the glass being upside down will chill better than right-side up. It's a tough call between icing the rim over and having a greater surface area of the glass exposed to the chilled air.
  • The number of items in the freezer will change the chilling time. Wedge the bottle and glass between a few packets of peas and corn and you'll be ready in two shakes of a lamb's tail.
  • I used an empty freezer as I have no food... well nothing that requires freezing.
  • This really only applies to warm climates. If it's sub-zero outside, enjoy a dram of whisky.

Friday, March 14, 2008

Bloggerscript

Ok, I think the JavaScript in Blogger is a little inefficient. As I type into this dialog box my CPU usage cranks up to about 80-90%. According to procexp it's Firefox, which means Blogger. I'm not entirely sure what is going on on each keystroke, but this is ridiculous. I'm waiting for each letter to appear on the screen. I'm not waiting long, but it's making writing a post difficult.

Having a look at the source, there isn't much occurring: the "Save Now" button is enabled on keydown, but not much else. It could be Firefox's JavaScript engine; Firefox did update before I started the session, but I wouldn't have thought it was that.

I am not working on my primary machine though, which may be why I've never noticed this before. I am working on a machine that is just used for Continuous Integration and serving up SVN, so it's not powerful at all. I'll explain in my next post why I'm not using my primary machine.

This machine: 1.7GHz Intel i386 with 1GB of RAM (DDR1, probably)
Primary machine: Dual-core AMD 4200 64-bit (32-bit OS) with 2GB of 800MHz DDR2 RAM

Personally I think web-applications should be transparent, and I realise I'm not the only one who thinks so; I'm hardly covering new ground. Web-applications are hard to develop properly (super-easy to do poorly... I know, I've written a few): you have a bunch of major platforms you need to support, and you don't have the luxury of building a binary/distribution package to suit each one. I built www.icnh-games.com myself (well, not entirely; I had a separate company do the style-sheet and layout, non-programmer-art, ftw) and I wrote Perl scripts to dynamically build the content from in-game xml-documents. However, it took us about two months after it was completed to iron out all the cross-platform bugs (and I'm sure there are still some in there hiding... a tester's job is rarely allowed to be done).

However, right from the start we stipulated with the company we contracted [Voodoo] that we needed to support the primary browsers (IE6, IE7, Firefox, Safari, Opera and there was another... don't feel offended if I left you out), as well as handling users that had Flash (inline vids), didn't have Flash (static images), and may or may not have had Javascript (to support fancy menus, as well as automatically starting the Flash movies rather than waiting for the user). Not to mention the multiple Flash versions that are out there. Still, we got it all working in harmony: the more "features" you have enabled, the better your experience is. The important thing is not to deny an experience simply because one is too lazy to support a user's application or operating system choice. We are here to provide applications for users, and denying a user simply because of their operating system or browser choice seems a little bit too close to [racial|sexual|religious|*]-discrimination for my liking. Virtual discrimination is what it should be called. It makes it sound like a social taboo, and certainly nowhere near as cool as supporting Linux only 'cause everyone else is a n00b, or Apple or Microsoft. Also, it's not as funny as Apartheid.

I will post a blog (in a month or so, once I've got all the bugs ironed out and can take comparative screenshots) that illustrates the multiple paths through the ICNH Games Framework. We support Windows (XP, Vista), Apple (10.4, 10.5, and I'm doing some checks as we speak to find out if we can support 10.3 as well... I know it's old, but I have a 10.3 box right here) and Linux (Ubuntu, FreeBSD, Fedora, OpenSUSE, MEPIS and PC Linux OS; potentially more, but not tested). On top of that we provide distributions for i386, x86_64 (Intel or AMD), PowerPC 32 and 64 bit, as well as their hardware feature combinations of MMX, SSE, SSE2, SSE3.1, SSE4 and 3DNOW. At the GPU level we have paths through our rendering engine that range from no shaders, no multitexture and no VBOs, up to full shader support, high-dynamic-range rendering with countless texture slots (my GPU supports 128 texture units apparently... I've never tested it) and batching algorithms to squeeze as much performance out of the machine as I know how to. I won't say as much as possible, as I know I'm not great at assembly; I'm only good at designing for performance, not getting down into the assembly-level code and tweaking it further. One day a clever dev will come in and make it faster than it was before, and I'll be happier.

All this being said, the visual experience provided to low-end users (like my laptop with the awesome integrated graphics card) is not going to be mind-blowingly awesome, but for No Horizons, and potentially other games, users will still be able to play the game. I'll always play a game I like that looks bad over a game that looks awesome and is just a bit rubbish. That is pretty much our number one question at ICNH Games: how many people can we get to play this game? Equal with it is: is the game fun?

This was only possible because at the start we wrote down that we wanted to support all users, not just those running triple SLI. If you plan from the beginning to support multiple environment configurations, then you are going to get a lot closer to supporting them all than you are by tacking the support on at the end. Refactoring architectural solutions to support user XYZ is only going to make you resent them.

I'm not sure if Google's Blogger is designed to run on all browser configurations. I'm sure they have tested it, but have they tested it running on a crappy old PC box? Moore's law has the speed of computers doubling every 18 months, which just means that, relative to the cutting edge, the proportion of crappy computers out there is steadily increasing.



Before I go, I don't want to turn this into a gripe-blog but a few things have been bothering me of late and I never sleep much so that can't help. Is it that they've always bothered me and that starting my blog is just a way to vent existing frustrations or are all these recent occurrences of discontent purely anomalies in a generally happy and thoroughly enjoyable life?


This could be an interesting research topic: "Does starting a blog cause one to complain more frequently (or just more openly) than they did before?" I'm sure the anonymity of the interwub certainly helps here. Perhaps it's the fact that blogging can be very cynical (not all of it is, but I've sure read a few cynical blogs in my time), and therefore bitching on a blog is more acceptable than bitching in person. It certainly allows one to escape the social stigma associated with their gender. For instance, the act of complaining in a male-oriented "traditional" Australian society is frequently met with responses like "dry your eyes, Princess" and "harden the fuck up". This isn't all bad, of course (fyi, nothing is ever all-bad or all-good), as Australians tend to be good at putting up with the crap and getting on with the hard work.

I won't get into a discussion on Australian culture (which, for the most part, I love). For those who have already done the research, or are doing it: drop me a line. I find research topics fascinating.

Saturday, March 8, 2008

Crisps

I was just down the shop looking for some snacks to munch on while I work and I noticed that all the multiple packets of crisps (bags of 15 packets of salt & vinegar, etc) are all being switched over to cardboard boxes (rather than foil bags).

I wondered why:
  • Boxes are easier to stack and seem to consume less volume, saving money on shipping.
  • Boxes probably biodegrade or can be recycled more easily than foil bags; they could be heading for an "ethical" packet of crisps angle.
  • Switching to boxes makes the product look "new", so they can change the individual packet sizes without as many complaints.
Boxes probably cost more (so the savings from the first two would need to outweigh the additional costs). It is probably, nay, it is the third option. The 15 individual packets within the box are 19g serving sizes, whilst the one remaining foil bag in the shop had individual sizes of 20g.

1g per packet isn't much, but that's 15g per box, or 5% smaller. If they saved 1c per gram, it's 15c a box, and lots of money across the board when you consider how many packets of crisps get sold per annum.

As a Generation+Ner (whatever generation you are, plus N to get to where I am) I shouldn't fall for, or put up with, the tactics that worked on Generation+(N-1), so I'll vote with my feet (and I'll blog at anyone walking past... Generation Rant, ftw).


What I would like to see is a publicly available web-app to which people can upload images (ideally thumbnails) of product packages that have changed, with a timestamp of when each was bought, so anyone can tell which companies are gouging us the most (in their local area). It's not a complicated web-app, but I feel some human intervention (or self-moderation) would be required to ensure the content was not corrupted (potentially by the companies attempting to protect their image).


Before I go, I am aware that these companies are probably suffering from rising manufacturing costs and need to protect their shareholders' profit margins. That, in my opinion, isn't enough for me to put up with the underhanded manner in which they protect their profits. I'm a man of my principles.


Note: For some reason my family, and therefore I, use the term "crisps" rather than the now-common term "chips". Personally I prefer crisps, as it allows one to distinguish them from hot chips.

Friday, March 7, 2008

Project Development Iterations

One of the things I like about where I work is the freedom different development teams have in trying new things with respect to work practices. We are currently starting version 1.5 of a project and one of the developers wanted to try a different way of structuring our iterations to minimise the issues we had in version 1.0 and to improve the overall quality of the project being developed.

The main issue we wanted to fix was developers getting too far ahead of testers. We had a two-week iterative process where the developers would code for two weeks straight, then on day 10 they would provide a build to test, and then they would continue on as fast as they could.

This was a problem: they kept getting further and further ahead of the testers, because new code ranked higher in their eyes than old code did. As a result, some bug fixes took a long time to arrive, which hampered testing.

So one of the developers suggested a different iteration configuration. An iteration still lasts two weeks but there are five days for development of new code and five days for testing. The final day of development involves preparing the deployment for test. The final day of testing involves preparing the deployment for User Acceptance Testing.


Today was the first day of the first testing phase, and so far so good. Defects were raised (not that many, which impressed me for day-one code) and fixes are already being prepared. Beyond that, the developers have realised they made a mistake in one place and, as they have the time, are checking for similar problems in other areas. Proactive bug fixing: another plus.


Now, you may be wondering how complex the code can be with only five days of development, including unit testing. Well, for starters, the developers number double the testers, but this is also a small iteration task-wise. Not all will be, though, so we are breaking our iteration tasks up into two groups: single-iteration tasks and double-iteration tasks.

Every second iteration will see the deployment of more complex functionality that cannot be implemented in 5 days (these tasks effectively get 15 days). As a tester I am happy with this: we know these larger tasks are coming in advance and can plan for their arrival. We also still get five days of dedicated developer bug fixing after their delivery.

Both the development and testing teams are aware that the number of defects may still be greater than what the developers can fix in five days (especially if there are some doozies). To combat this (as should always have been done), defects take top priority in the iteration following their discovery, and as testers we're obliged to ensure that all new code gets at least a once-over to gauge the quality of the build.

Time will tell if this policy works, but at least we are working together to solve our past mistakes.