Tuesday, January 15, 2008

Testing SOA: Service dependencies and their impacts on testing

A few days ago, I asked a series of questions regarding testing services: whose responsibility it is, how it should be tested, and so on. Today I'll provide a solution, but to a different question: service dependencies and how they can impact testing.

Whilst talking about service dependencies, I will be ignoring the impact of service deployment platforms, service implementations, who does the testing, and any tools that could be used to assist in testing. I'll be looking at the services from a testing organisation's viewpoint.

From that testing viewpoint there are two types of services: simple services (including CRUD services), which have no dependencies on other services, and orchestration services, which do.

As we all know, the only acceptable way to test a service is in isolation. Reducing a service to the infrastructure dependencies that it will have in production allows a testing team to prove the service for use as part of a larger solution.

Prove: to validate the functionality of a service through the act of testing.

Now, for illustrative purposes I’ll define some example services:

  • Person-Service – a CRUD service for a Person object. Has a dependency on persistent storage.
  • Time-Service – a simple service that returns the time in a particular time zone. Has no external dependencies.
  • Searching-Service – an orchestration service that utilises the Person-Service for searching. Is expected to support other services in the future.


In our example the searching-service is dependent upon the person-service, so the person-service must be proved before testing of the searching-service can commence. Failing to do so can invalidate the testing undertaken on the searching-service and provide a false level of confidence.
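To make the example concrete, here is a rough sketch of the three services in Python (purely illustrative; the method names and the dictionary-backed storage are my own invention):

```python
from datetime import datetime


class PersonService:
    """CRUD service for a Person object; depends on persistent storage."""

    def __init__(self, storage):
        self.storage = storage                 # infrastructure dependency only

    def create(self, person):
        self.storage[person["id"]] = person

    def read(self, person_id):
        return self.storage.get(person_id)

    def list_all(self):
        return list(self.storage.values())


class TimeService:
    """Simple service returning the time in a given zone; no dependencies."""

    def current_time(self, tz):
        return datetime.now(tz)


class SearchingService:
    """Orchestration service: relies on the PersonService for its results."""

    def __init__(self, person_service):
        self.person_service = person_service   # service dependency

    def search_people(self, name):
        # Any defect in the person-service invalidates conclusions drawn here,
        # which is why it must be proved before this service is tested.
        return [p for p in self.person_service.list_all()
                if p.get("name") == name]
```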

Conditional Proof: the validation of a subset of service interfaces to facilitate concurrent development and testing.

If timeframes are restricted, or concurrent development and testing of services is required, then a conditional proof can be organised. In our example this is where the interfaces of the person-service that are required by the searching-service are proved, but the service as a whole has not been.

Conditional proofs allow testing of the searching-service to commence. The only issue arises if a different interface on the person-service fails testing. In that scenario the conditional proof is revoked until the person-service is fixed and retested, either re-establishing the original conditional proof or proving the service as a whole.

This may seem obvious to the experienced testers and developers out there: when one aspect of an application is proven (add new customer address), other dependent code (update customer address) can be tested. However, I believe that when testing services a more regimented approach to the organisation of testing is required, to ensure that all service interfaces are tested against proven dependencies.

To facilitate this I feel a proof-register is required. This can be used within a project team to manage their development and testing responsibilities, or it can be used across a development shop to map which services are stable and what their interfaces are.

A simple spreadsheet listing each service, its interfaces and the current state of testing for each interface is all that is required. In the example, testing of the searching-service has begun against a person-service interface that holds only a conditional proof.
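As a rough stand-in for that example spreadsheet (the rows and statuses below are invented, and a real register would also carry dates, owners and test-report links), the same information can be captured and queried in a few lines of Python:

```python
# Hypothetical proof-register: one row per service interface and its test status.
PROOF_REGISTER = [
    {"service": "person-service",    "interface": "create",        "status": "proved"},
    {"service": "person-service",    "interface": "read",          "status": "conditional-proof"},
    {"service": "person-service",    "interface": "update",        "status": "in-test"},
    {"service": "time-service",      "interface": "current-time",  "status": "proved"},
    {"service": "searching-service", "interface": "search-people", "status": "in-test"},
]

ACCEPTABLE = {"proved", "conditional-proof"}


def may_start_testing(required_interfaces):
    """A dependent service may begin testing once every interface it relies on
    is at least conditionally proved."""
    return all(
        any(row["service"] == svc and row["interface"] == iface
            and row["status"] in ACCEPTABLE
            for row in PROOF_REGISTER)
        for svc, iface in required_interfaces
    )


# The searching-service only needs person-service.read, which holds a
# conditional proof, so its testing is allowed to commence.
print(may_start_testing([("person-service", "read")]))    # True
```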



A database or similar construct may be more appropriate, as this spreadsheet can become unwieldy over time.

Now, for third-party services, things get a little more complicated. Your organisation, for example, may have purchased a simple off-the-shelf Customer Relationship Management (CRM) service from Company-XYZ, and it wants the searching-service expanded to include the CRM service in the list of services it searches.

Ideally, your company received a test report from Company-XYZ for the CRM service and you can see what testing has been performed on each interface.

Back in the real world, you have some tough choices: (a) assume the service is proven and hope it works as written on the brochure, or (b) spend precious resources retesting a product that might be sound. I prefer option (c): add the service to your proof-register and set the status of every interface to conditional-proof.

If no bugs are ever raised against the service, then your testing is sound. If a bug is raised, you can determine which services rely on the unproven interface. This will bubble up the service dependency chain, allowing you to identify which interfaces will need to be retested after the fix is applied to your CRM service.
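That "bubbling up" is just a walk of the dependency chain. A minimal sketch, assuming the register (or a companion map, invented here) records which services each service consumes; a real version would track this per interface:

```python
# Hypothetical map of consumer service -> the services it depends on.
DEPENDENCIES = {
    "searching-service": {"person-service", "crm-service"},
    "reporting-service": {"searching-service"},
}


def services_to_retest(failed_service):
    """Bubble a failure up the dependency chain: every service that directly or
    indirectly relies on the failed service must be retested after the fix."""
    affected = set()
    frontier = [failed_service]
    while frontier:
        current = frontier.pop()
        for consumer, needs in DEPENDENCIES.items():
            if current in needs and consumer not in affected:
                affected.add(consumer)
                frontier.append(consumer)
    return affected


# A bug in the unproven CRM interface means the searching-service, and anything
# built on top of it, goes back on the retest list once the fix arrives.
print(services_to_retest("crm-service"))   # {'searching-service', 'reporting-service'}
```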


In summary: when planning for the testing of services, utilise a process like the one described above to minimise the impacts of defects whilst maximising testing and development concurrency.
  • Services must be tested in isolation.
  • A service's dependencies must be proved before the service is tested.
  • Conditional proofs may allow for some degree of concurrent development and testing.
  • Use a proof-register to manage the status of proved services and the conditional proofs granted to interfaces.

Oh, and before I go: it's my firm belief that if you purchase an off-the-shelf service, a test report is a mandatory part of the package. Off-the-shelf applications are, generally speaking, self-contained, and as such any bugs in the application don't result in the end of the world. Off-the-shelf services that are going to be integrated into your enterprise architecture may just cause the end of your world.

Friday, January 11, 2008

Testing SOA - an introduction

I might as well start with a fairly large topic, and one that puts me in, I feel, an interesting position: the testing of Web services is inherently a technical task in the realm of generally non-technical testers.

There are a number of products out there that attempt to simplify the process of testing services for testing personnel: HP's Service Test, Parasoft's SOAtest, soapUI, etc. In reality, though, they are just UI wrappers that generate code facilitating communication with the service. Sure, they may make it easier than cutting your own code to do the same thing, and I'm sure some of them auto-generate boundary tests (I don't know if they do, but it's not that hard or unreasonable, so I'll assume it), but deep down there are a couple of issues nagging at me.


Should testers be testing services in the first place?

I've heard some good arguments on both sides and I've yet to make up my mind. This is really the basis for all my questions: can I justify who is going to test service code? My decision will impact the direction our organisation takes, so I take this question very seriously. Currently I see five paths:

  • Developers test all service code (unit, functional, performance, etc, etc)
  • Developers perform unit and functional testing and testers perform performance, scalability, availability, etc.
  • Testers produce a test-specification which they hand to the developer who will code the tests
  • Testers cut the code themselves (requires a technical-tester)
  • Testers use a service testing tool

How should the testing of services be performed?

Using an SOA test tool? Cutting code? I'll get onto this in more detail once I've given the various tools a solid workout, but they haven't convinced me yet. That isn't a complete write-off, but I feel that if a tool can't prove its worth immediately in the one task it does, it isn't effective. Then again, cutting code is prone to human error, and having to write repetitive code to test a service is arduous. Code generation, anyone?
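For what it's worth, "cutting code" against a SOAP service doesn't have to amount to much. A minimal sketch in Python rather than the .NET stack I'd actually be using (the endpoint, operation and namespace are entirely hypothetical, and a real test would parse and assert on the response XML):

```python
import urllib.request

ENDPOINT = "http://localhost:8080/TimeService"            # hypothetical service URL

# Hand-rolled SOAP envelope for a hypothetical GetTime operation.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTime xmlns="http://example.com/time">
      <TimeZone>UTC</TimeZone>
    </GetTime>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/time/GetTime",  # hypothetical action
    },
)

with urllib.request.urlopen(request) as response:
    body = response.read().decode("utf-8")

# The crudest possible check that the call round-tripped successfully.
assert "GetTimeResponse" in body
```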


How will the testing environment be structured?

Testing services from any perspective is hardly trivial. There are deployment and configuration issues, and these impact the viability of testing. I know that it is very easy to spawn a thread at the start of an MS-Test object that creates a service and notifies the main test thread when it is ready so testing can commence; the unit-test cases are executed and then everything is cleaned up. This setup would allow a developer to prove a service in the comfort of his own box. It also allows the test cases to be placed on the end of a continuous-integration run. However, that deployment of the service will not match how the service is deployed in production, so our testing is only good enough to prove functionality. This leads me to the next question:
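Something along those lines, sketched in Python/unittest rather than MS-Test (the fake person-service and its behaviour are made up purely for illustration):

```python
import threading
import unittest
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakePersonService(BaseHTTPRequestHandler):
    """Stand-in for the service under test; a real run would host the actual service."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"person-service: ok")

    def log_message(self, *args):       # keep the test output quiet
        pass


class PersonServiceTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Spawn the service on a background thread and signal when it is ready.
        cls.server = HTTPServer(("localhost", 0), FakePersonService)
        cls.ready = threading.Event()

        def run():
            cls.ready.set()             # notify the main test thread
            cls.server.serve_forever()

        cls.thread = threading.Thread(target=run, daemon=True)
        cls.thread.start()
        cls.ready.wait(timeout=5)       # testing can now commence

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()           # clean everything up afterwards
        cls.thread.join()

    def test_service_responds(self):
        port = self.server.server_address[1]
        with urllib.request.urlopen(f"http://localhost:{port}/") as resp:
            self.assertEqual(resp.status, 200)


if __name__ == "__main__":
    unittest.main()
```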


Who, and how are we going to test services for Performance, Scalability, Robustness, Load/Stress, Availability, Configuration and Deployment?

These testing scenarios are complex and often time consuming. Will this move into the realm of the developer, or will it remain the world of testers? I feel that testing most of the above scenarios requires prod-like configuration and deployment. This automatically moves them out of the development environment, where stability is mandated.


What defines a service from a documentation perspective?

Services are an IT solution to a business problem, and because of that a requirements specification is not going to detail the interfaces on a service; that belongs in a technical specification. Enter Agile, and my chances of seeing such specifications are reduced (apologies to all those who use and like Agile. I like it too, but in a world where specifications are thin on the ground, Agile doesn't appear to make it any easier).


How do you determine if a service is fit for production?

This may seem like a trivial question, but if we were to remove the testers from the equation (and replace them with developer-testers), how do we know that the testing performed by a developer is adequate and proves the service? Years of testing have given me trust issues, and organisations generally don't have code-promotion paths in place that bypass the test area.

I have more questions, but for the sake of brevity I'll save them for specific topics. Furthermore, I don't plan to answer these questions right away. I'm tackling them all at once at work, and when I come up with what I feel is the answer to a single question, I will put it up here to get some feedback and to share my decision.

Aside from all this I'll be writing and testing some services from a developer perspective. I'll trial the various SOA testing tools to gauge their effectiveness and I'll be having many discussions with my colleagues who are a mix of testers and developers.

Stay tuned

Thursday, January 10, 2008

1st!

Hello and welcome.

Before we begin I'll briefly explain a little bit about who I am. As the title suggests, by day I am a senior tester with an undisclosed organisation. By night I'm the lead developer at my company, ICNH Games (www.icnh-games.com). Two full-time jobs is hardly ideal, but job A funds job B, with the long-term goal of job B (or ICNH Games, as some call it) becoming self-sufficient.

So in reality I'm probably more developer than tester, but I am a senior member of a medium-sized testing team, so many of my day-to-day decisions impact non-technical testers. This places me in an interesting position, especially as organisations (including ours) move towards distributed Service Oriented Architectures. That is something I will be blogging about in the immediate future; however, I'll leave it for an actual post.

I won't promise to write frequently or to blog for the sake of blogging. More realistically, I'll blog when I have the time and when I have something worthy of note ...I could be lying about its worth :) ... or when I have unanswered questions.

Feel free to add a comment or engage me on a topic, but please try to keep the spam-advertising to a minimum.