Pair Programming and Pair Testing

Our programmers here at iMeta now wax lyrical about pair programming, and it’s easy to see why. The quality of code coming through to the test team is exceptional. There are very few fix-fail episodes, and the programmers seem over the moon with how well pair programming is faring. Sure, there were teething issues, and some programmers didn’t feel the groove when pairing, but these were soon overcome and they moved forward.

It got me thinking, though, about whether testers should be pairing when writing test cases. My conclusion is that they should. It brought back memories of when I used to have to send test cases off for peer review at a previous company, and of reviewing other people’s tests myself. It often became a chore but, more importantly, it missed the point. And here’s why:

  • The review was more a sanity check on formatting, spelling, ensuring every step had an expected result, that the test case length was OK, and so on.
  • It became such a chore that it often ended up being a skim read.
  • The reviewer often didn’t have the same product knowledge, which meant the test cases weren’t reviewed for how well they actually tested the application.

And so I sat down with a fellow tester on an after-work development project and, to start with, did some pair test case writing. It was incredible. The thought patterns and processes we entered were remarkable. As a pair we wrote simple, clean, to-the-point tests; pointless steps and ideas were left out. While one of us did the main writing, the other would build a mind map charting all of our ideas so that we didn’t miss any.

The tests were succinct and short, written as high-level guidance (i.e. no detailed steps). We kept every single test DRY (don’t repeat yourself), extracting all setup, preconditions and data out to separate Excel documents. It truly was a great experience, as each of us brought a different outlook to the table. But more than that, we bounced ideas off each other. In terms of time spent it might appear we were doubling the effort, but the quality of the output was incredible.
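
To make the DRY idea concrete in code terms, here’s a purely illustrative sketch (the data file and all the names in it are hypothetical, and our own cases were manual rather than automated). Using Python’s pytest, the test steps can live in exactly one place while every variation comes from a separate data file, much like we pulled ours out to Excel:

    import csv
    from pathlib import Path

    import pytest

    # Hypothetical data file: the code equivalent of the separate Excel
    # document holding setup, preconditions and data for every variation.
    # Assumed to sit next to this test module.
    DATA_FILE = Path(__file__).parent / "login_cases.csv"

    def attempt_login(username, password):
        # Placeholder for the real application hook (hypothetical, not a real API).
        return "ok" if username and password else "rejected"

    def load_cases():
        # One test case per row: username, password, expected outcome.
        with DATA_FILE.open(newline="") as f:
            return list(csv.DictReader(f))

    @pytest.mark.parametrize("case", load_cases())
    def test_login(case):
        # The steps live here exactly once (DRY); every variation,
        # precondition and expected result comes from the data file.
        result = attempt_login(case["username"], case["password"])
        assert result == case["expected"]

Adding or changing a scenario then means editing the data file, not the test.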


So how about actual pair testing?

The next step was to see if we could do some actual pair testing. And we could. This too brought some amazing side effects. We raised more important defects. We generated new and interesting exploratory ideas to run, all managed through a mind map. And we had to do so little admin on the test cases that we were both truly surprised at how good they were.

It felt like we’d covered more of the system in less time. But we also covered a lot more of the system than we had test cases for, because as we got to a step, one or both of us would highlight that the documentation hadn’t mentioned something, or that the test case didn’t consider some factor, and so on.


The whole process has left me thinking that more of us should consider pair testing. Maybe as a short trial, one day a week. Maybe as a permanent arrangement. Believe me, the tests, the testing and the camaraderie are all enviable positive effects of pairing up. Let’s not leave pairing to the programmers alone. Let’s take testing to a new level too.

Stubborn Cat

There’s a cat down the road from me who is so stubborn it’s untrue. He refuses to budge, literally. He saunters out to the middle of the road and sits there with his smug little smile, taunting drivers and cyclists. He refuses to move out of the way, and it’s not uncommon to have to mount the kerb to get past him.

I don’t know his name, but I do know he is stubborn. I generally like cats; my parents have several, and one thing I have noted is that they are all fairly stubborn. I think it’s in their nature.

It’s interesting how the testing community seems to think of itself as stubborn and argumentative. I too believe these to be traits of the majority of testers, and often with good reason. We sometimes need to be this way to get the job done: there are times when you need to be stubborn, to stand your ground and to hold on to your opinion in the face of pressure and resistance.

However, when we are so stubborn that we refuse to move, we could be endangering the project and ruining our reputations. If we refuse to budge and accept new ways of thinking, we may become sidetracked, irrelevant and a nuisance. Just like the cat.

I’ve recently been on the receiving end of testers who can’t, or won’t, accommodate new information and who genuinely believe it is their way or no way. Testers who can be quite nasty and cutting about other testers who don’t subscribe to certain ways of thinking. At times like this it feels like people are no longer being stubborn and argumentative to be constructive, but are edging ever closer to arrogant and, at times, woefully wrong. But we are here to serve the stakeholders, to offer a service that people get value from, not to be argumentative and stubborn. Not to cause a nuisance. Not to be seen as the awkward one.

It’s a fine line to tread between being focused on quality and being downright stubborn. Tread it right and your testing will flourish.

Test Reporting in an Agile Environment

I did a post over at my work blog a while ago about reporting in an agile environment:
http://blogs.imeta.co.uk/RLambert/archive/2009/01/16/test-reporting-in-an-agile-environment-ndash-low-tech-dashboards.aspx

I centered it on low-tech dashboards, which I still think are extremely valuable.

A low-tech dashboard is a great way of communicating the state of the software mid-sprint. At the end of the sprint the board is fairly meaningless unless you have incomplete stories, but mid-sprint it’s a great visual way of showing progress, i.e. “we’ve hit this feature in depth and it looks OK”.

It's another indicator of how we are progressing. Look at it as a quality indicator that complements velocity indicators like burndowns and burnups. It's a clear, visual representation of the "quality" of the software from the testers' point of view. It doesn't need weighty metrics to back it up, although those may help in some environments. It doesn't need to be absolutely accurate, just like the burndown report, and it doesn't need to be complicated.

It needs to be simple, easy to read and easy to understand. It's about communicating to all stakeholders (and other teams) where we are with the software's 'quality'.
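
For illustration only (this particular layout is my own sketch rather than a prescribed format), a mid-sprint board might be nothing more than a grid on a whiteboard, one row per product area:

    Area             Coverage   Quality   Comments
    Login            ///        :)        tested in depth, looks solid
    Payments         //         :(        two blocking defects raised
    Reporting        /          ?         barely touched so far
    Admin screens    -          -         not started this sprint

    Key: / light, // moderate, /// deep coverage; :) good, :( in trouble, ? too early to say

Anyone walking past can see at a glance where testing has gone deep and where the worries lie, without reading a single metric.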

And when we get to the end of the sprint with stories incomplete, the dashboard can be a good way of highlighting where quality is lacking.

A few years ago I created an equivalent: a 'mood board' of smileys that the testers would put up on paper to show visitors to the team area what mood we were in (happy, sad, nonplussed, ill, bored, tired, giggly, etc.). A visual representation of how we were progressing. It worked wonders, and the management loved it more than the metrics. And believe it or not, that was in a waterfall environment…