Agile Testing > Story: On the backlog

Over the next few posts I'll be exploring the concept of a story in an agile environment and what it means to us as testers. Over the past few days I've been hearing about how testers don't get involved with story writing sessions, how testers duplicate the acceptance criteria in their tests and how testers don't fully understand how a story can replace a spec.

I'll hopefully cover all of these, and some more, by breaking a story down into smaller lifecycle steps.

My experience is based around Scrum projects. The one thing to consider is that each agile implementation is different, with different teams, different customers and different demands on the team, so no one solution will suit them all. But hopefully, by sharing my experiences here, I can help you round out your view and decide on the best way of working on your own agile project.

A story is basically a description of how someone interacts with the software to get a desired response.

A story typically takes the form:

As an [Actor/Person] I would like to [Feature or capability] so that I can [Value of action]

For example: as a registered user I would like to log in so that I can access my account.

At the start of a project there is the tedious but incredibly important job of adding all of the customer's ideas, requirements and thoughts to the Backlog. The Backlog is the project's holding space for stories. Look at it as your requirements document.

Getting the stories onto the backlog is a process of sitting with the customer and adding their requirements to the system in the form of stories. At this point it is unlikely that the stories will have acceptance criteria (i.e. the details of what is involved in each story).

Some of the stories may, at this point, actually be Epics. An Epic is essentially a story that contains lots of other stories. An example would be "Log In", which would normally consist of several individual log-in stories, depending on how complex it is. When tackling an Epic it is essential to break it down into manageable stories.

A manageable story is essentially a story that can be completed (to the definition of done) within your sprint. A sprint is normally between 1 and 4 weeks – although there is no law on that one. If a story is not achievable within a sprint then it is an Epic and needs to be broken down further.

The definition of done can be complicated, but essentially it is a set of rules/regulations/guidelines that must be adhered to before the sprint, story or task is considered done. For example: the sprint is not done until all code is checked in, all tasks are done, all stories are done, the stories have all been tested, the demo stack is ready for the customer, the deployment scripts are done, the automation suite is started, no defects are outstanding, and so on. A series of gates that the software and process must pass through to be ready.

Right, back to the story. Once all stories are on the backlog the customer should then rank them in priority order, i.e. a top-to-bottom rank order of what's most important to them at that moment in time. This rank order will change as the customer sees the software, gets new information or responds to market/financial pressures – and this is the beauty of agile. The next piece of work is always the highest priority for the customer.

Once ranked, there are two schools of thought as to what to do next:

A: Have the team estimate each story in advance – time consuming, inaccurate (as acceptance criteria will not yet be defined) and tricky with no information on the emerging system.
B: Add acceptance criteria to the first few stories and then have the team estimate.

I prefer option B, as estimating the whole backlog often proves fruitless in my experience unless you are using it for forward planning. It's at this point that we, as testers, sit with the customer and the programmers and work through the stories, adding relevant acceptance criteria (more on that in the next post).

Once acceptance criteria are added we then have the whole team estimate. Some teams estimate in time, others in complexity points, others use a combination of the two. For me, the only one that really matters is complexity, though others would argue against this.

Estimating complexity is a process of sitting down with planning poker cards (a number on each card). The scrum master (the person running the sprint) reads out the story and each team member estimates its complexity, putting their card face down. Once everyone has estimated, the whole team turns the cards over and we find a happy medium, negotiating between ourselves.
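If it helps to see the mechanics written down, here's a toy sketch in Python – purely illustrative, not a tool we actually use. The cards are revealed together, and a wide spread means the outliers explain their thinking and the team votes again. The names, card values and the "wide spread" threshold are all made up.

```python
# Toy illustration of a planning poker reveal: everyone plays a card face down,
# the cards are turned over together, and a wide spread triggers a discussion
# rather than silently averaging the disagreement away.
def reveal(estimates):
    """estimates maps each team member to the card value they played face down."""
    lowest, highest = min(estimates.values()), max(estimates.values())
    if highest > 2 * lowest:  # an arbitrary "we disagree enough to talk" threshold
        return None  # no consensus yet: discuss, then re-estimate
    # Otherwise settle on the most commonly played card as the happy medium.
    values = list(estimates.values())
    return max(set(values), key=values.count)

print(reveal({"tester": 13, "programmer": 5, "scrum master": 8}))  # None -> discuss
print(reveal({"tester": 8, "programmer": 8, "scrum master": 5}))   # 8
```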

Estimation based on complexity can be tricky to get your head around. It's not about how long the work will take but about how complicated you think it is. An easier way of working it out is to take a story that is neither "really hard" nor "really easy", write it on a piece of card and stick it on a long wall. Then take each of the other stories, write them down too and stick them either side of the existing story: right-hand side for more complicated, left-hand side for less complicated. From this you can start to see that each story has a complexity level that we can assign a number to. I work on 8 being the average story and work either way from there.

Once we agree on an estimate it goes against the story and is then used to work out the team velocity. The velocity is essentially how many complexity points we can achieve per sprint. This is why sprints tend to be kept the same length: to maintain a consistent velocity. In the first few sprints, though, you will have no idea of the team's velocity as there is no historic data. Over time the team will slip into a rhythm or groove, which allows a much more accurate velocity to be calculated.
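If it helps, here's the arithmetic as a rough Python sketch with made-up numbers. Velocity is nothing more than the points the team actually completed per sprint, averaged once you have a little history, and it also gives you a crude forward-planning figure.

```python
# Made-up numbers throughout; the only "real" rule is that velocity is the
# complexity points the team actually completed in each sprint.
import math

completed_points = [21, 26, 24]          # points finished in the last few sprints
velocity = sum(completed_points) / len(completed_points)
print(f"Average velocity: {velocity:.1f} points per sprint")   # ~23.7

remaining_backlog = 120                  # estimated points still on the backlog
sprints_left = math.ceil(remaining_backlog / velocity)
print(f"Roughly {sprints_left} sprints of work remaining")     # 6
```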

We have not estimated all of the stories at this point, but before each sprint this process needs to take place. This is so that, at the sprint planning meeting, the team can pull into the sprint stories that have already been estimated and already have acceptance criteria. The customer also needs to check the ranking and the backlog, as there could be new stories and defects to consider. This is a continual process and is often referred to as grooming the backlog.

And so that's it really. In a nutshell (and a heavily Scrum-flavoured one at that) we have the basics of stories and backlogs and how they are used in a sprint. The one key point I have missed, though, is distilling the acceptance criteria into each story – something that not only makes estimation easier but also makes programming and testing smoother, cheaper and less dramatic. More on that in the next post.

Rob..

Update on the 'so… what tester are you' series

The 'so… what tester are you' series is on hold at the moment. I have the posts all mapped out and complete, but I'm working with Rosie @ The Software Testing Club to put some images with each post. The images are looking awesome and will really complement the posts, so I'm holding off posting any more until the images are ready.

But don't worry, the series is still going ahead.

Thanks for all the comments too – always nice to get feedback on blogs.

Thanks
Rob..

Pair Programming and Pair Testing

Our programmers here at iMeta now wax lyrical about pair programming, and it's easy to see why. The quality of the code coming through to the test team is exceptional. There are very few fix-fail episodes and the programmers seem over the moon with how well pair programming is faring. Sure, there were teething issues and some programmers didn't feel the groove when pairing, but these were soon overcome and they moved forward.

It got me thinking, though, about whether or not testers should be pairing when writing test cases. My conclusion is that they should. It brought back memories of when I used to have to send test cases off for peer review at a previous company. I too had to review other people's tests. It often became a chore but, more importantly, it missed the point. And here's why:

  • The review was more a sanity check on formatting, spelling, ensuring every step had an expected result, test case length and so on.
  • It became such a chore that it often ended up being a skim read.
  • The reviewer often didn't have the same product knowledge, which meant the test cases weren't reviewed on how well they actually tested the application.

And so I sat down with a fellow tester on an after-work development project and, to start with, did some pair test case writing. It was incredible. The thought patterns and processes we entered were remarkable. As a pair we wrote simple, clean, to-the-point tests; the pointless steps and ideas were left out. While one of us did the main writing, the other would build a mind map charting all of the ideas so that we didn't miss any.

The tests were succinct and short, written as high-level guidance (i.e. no detailed steps). We kept every single test DRY (don't repeat yourself), extracting all setup, preconditions and data out to separate Excel documents. It truly was a great experience, as each of us brought a different outlook to the table. But more than that, we bounced ideas off each other. In terms of time spent it might appear that we were doubling up, but the quality of the output was incredible.
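To show what I mean by DRY, high-level tests, here's a minimal sketch using pytest. The login() call, the CSV file name and its columns are invented purely for illustration (our data actually sat in Excel documents), but the shape is the point: one short test, with all of the variation living in the data.

```python
import csv
import os
import pytest

def login(username, password):
    # Stand-in for the real application under test; purely hypothetical.
    return bool(username) and len(password) >= 8

def load_cases(path="login_cases.csv"):
    # In practice our data sat in separate Excel documents; a CSV (with an inline
    # fallback) keeps this sketch self-contained. Columns: username, password,
    # expected ("pass" or "fail").
    if not os.path.exists(path):
        return [("alice", "correct-horse-battery", "pass"),
                ("alice", "short", "fail")]
    with open(path, newline="") as f:
        return [(row["username"], row["password"], row["expected"])
                for row in csv.DictReader(f)]

@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login(username, password, expected):
    # The test itself stays short and high level; all the variation lives in the data.
    assert login(username, password) == (expected == "pass")
```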

So how about actual pair testing?

The next step was to see if we could actually do some pair testing. And we could. This too brought some amazing side effects. We raised more important defects. We generated new and interesting exploratory ideas to run, all managed through a mind map. We had to do so little admin on the test cases that we were both truly surprised by how good they were.

It felt like we'd covered more of the system in less time. But we also covered a lot more of the system than we had test cases for, because as we got to a step, one or both of us would highlight that the documentation hadn't mentioned this, or the test case didn't consider that factor, and so on.


The whole process has left me thinking more and more of us should consider pair testing. Maybe as a short trial process for one day a week. Maybe as a permanent idea. Believe me, the tests, the testing and the camaraderie are all enviable positive effects of pairing up. Let’s not just leave the pairing to the programmers. Let’s help take testing to a new level too.

Stubborn Cat

There’s a cat down the road from me who is so stubborn it’s untrue. He refuses to budge – literally. He saunters out to the middle of the road and sits there with his smug little smile taunting drivers and cyclists. He refuses to move out of the way and it’s not uncommon to have to mount the kerb to get past him.

I don’t know his name but I do know he is stubborn. I generally like cats, my parents have several and one thing I have noted is that they are all fairly stubborn. I think it’s in their nature.

It’s interesting how the testing community seem to think of themselves as stubborn and argumentative. I too believe these to be traits of the majority of testers and often with good reason. We sometimes need to be this way to get the job done. It’s often necessary. There are times when you need to be stubborn, to stand your ground and to hold on to your opinion in the face of pressure and resistance.

However, when we are so stubborn that we refuse to move, we could be endangering the project and ruining our reputations. If we refuse to move and accept new ways of thinking we may become sidetracked, irrelevant and a nuisance. Just like the cat.

I’ve recently been on the receiving end of testers who can’t, or won’t, accommodate new information and who genuinely do believe it is their way or no way. Testers who can be quite nasty and cutting about other testers who don’t subscribe to certain ways of thinking. It’s at times like this that it feels like people are no longer being stubborn and argumentative to be constructive, but are moving ever so close to arrogant and, at times, woefully wrong. But we are here to serve the stakeholders, to offer a service that people get value from, not to be argumentative and stubborn. Not to cause a nuisance. Not to be seen as the awkward one.

It’s a fine line to tread between being focused on quality and downright stubborn. Tread it right and your testing will flourish.

Test Reporting in an agile environment

I did a post over at my work blog a while ago about reporting in an agile environment:
http://blogs.imeta.co.uk/RLambert/archive/2009/01/16/test-reporting-in-an-agile-environment-ndash-low-tech-dashboards.aspx

I centered it around low-tech dashboards, which I still think are extremely valuable.

A low-tech dashboard is a great way of communicating the state of the software mid-sprint. At the end of the sprint the board is fairly meaningless unless you have incomplete stories, but mid-sprint it's a great visual way of showing progress, i.e. "we've hit this feature in depth and it looks OK".

It's another indicator of how we are progressing. Look at it as a quality indicator that complements velocity indicators like burndowns and burnups. It's a clear, visual representation of the "quality" of the software from the tester's point of view. It doesn't need weighty metrics to back it up – although that may help in some environments. It doesn't need to be absolutely accurate, just like the burndown report, and it doesn't need to be complicated.

It needs to be simple, easy to read and easy to understand. It's about communicating to all stakeholders (and other teams) where we are with the software 'quality'.

And when we get to the end of the sprint and we have stories incomplete then the dashboard can be a good way of highlighting where quality is lacking.

A few years ago I created an equivalent: a 'mood board' with smileys, which the testers would put up on paper to show visitors to the team area what mood we were in (happy, sad, nonplussed, ill, bored, tired, giggly, etc.). A visual representation of how we were progressing. It worked wonders, and the management loved it more than the metrics. And believe it or not – that was in a waterfall environment…