Planning for when cows attack

A picture of a cow

A few weeks back at one of the testing conferences I was lucky enough to meet a very interesting man; let’s call him Mr F. He was small, stout and incredibly beardy; a fascinating man who had some truly incredible stories to tell; a charming man with a razor-sharp wit and incredibly strong opinions. I didn’t agree with 99.9% of what he was saying, but that’s beside the point.

Mr F was telling me a rather long-winded but entertaining story around the old saying: “He who fails to plan, plans to fail”.

Mr F had worked in the financial industry for most of his career but had recently moved to a big brand web company and was leading the testing of the latest version of this company’s new website solution. Mr F explained to me how he planned for everything and anything. He told me, with great pride, how he refused to start the project without a risks and issues register. He even chuckled at the stupidity of the project team for thinking they could start without these crucial documents.

He also explained to me how he set up a team-wide calendar entry for everyone to check the risks and issues register daily. He also, with some gusto, explained how he had a wealth of action plans he could refer to when the risks became issues or when an issue was becoming a real showstopping problem. (Note: when I refer to an issue I mean an issue that is present and affecting the team/project, not an issue in the software or a defect.)

He explained how he spent the first one-month iteration finalising the documents, ensuring each team member had input into the list of possible things that could go wrong. He had plans in place in case the first release of software fell below the quality gate set in his test plan, in case any member of the team was off ill, for power failures, for the loss of test environments, for scope creep, for office security breaches, for testers not achieving 10 test cases per day, and so on. He listed many more. It was a really fascinating (but long) story and I have to say, I was impressed with how much planning had gone into it. I was also incredibly impressed with how passionate Mr F was about planning for the known knowns, the known unknowns and the unknown unknowns.

The problem was that Mr F wasn’t a happy man. It turns out that on day three of the second monthly iteration, three testers threatened to hand in their notice because of Mr F’s insistence on planning for everything and anything (which, by the way, is an endless and impossible task).

Mr F was then removed as test manager and effectively given the boot. Something I’m not entirely sure he had planned for.

Mr F explained to me how he had never understood that there were different ways of working and that some teams simply didn’t need to plan for everything. He seemed genuinely surprised that different industries used different ways of working and that he faced so much resistance when he asked the team to plan. He said they kept saying things like “let’s just get something done” and “let’s build software, not write pointless documents”.

He’d heard all the stories of teams just “doing” things with low-cost, high-efficiency tools and frameworks and only just enough documentation, and how some teams simply seem to get stuff done, but he’d never experienced them first-hand. He told me how he had shaken his head in anger at how misguided and misplaced these teams were with their “just do it” attitude, without truly realising that there is a world outside of *his* testing domain.

Mr F was still a naysayer of many ‘new-fangled’ techniques and approaches like Exploratory Testing, Test Driven Development and automated acceptance tests, but he was beginning to realise that there are other ways of working and that planning too much in advance is pointless for some teams. It was a hard lesson for him to learn: that there are other contexts out there.

He was a fascinating man, but the conversation highlighted to me just how many people simply have no idea there are other ways of working. There were a few others at the event where I met Mr F who simply couldn’t comprehend that teams are releasing software each month to a high quality without the need for massive plans up front. And this is not agile versus waterfall; some of the stories were of waterfall teams just getting stuff done – and some of those teams were more agile than most.

Planning, it seems, is something many testers love to do. And I believe all testers plan; after all, “He who fails to plan, plans to fail”. But there is a point at which extra planning buys nothing. It is waste. There are always things we will never plan for. Things we will never think of. There are always things we will plan for that will never happen. That is life. It is the way of the world.

Finding the right balance is tricky. Some plans are good. Too many can be bad. None may be very dangerous indeed. What works for one team might not work for another.

I do a fair amount of running and (maybe because I’m a tester) I’ve got a plan for various things happening whilst out running. Like if I fall and break something (the joys of having brittle bones), if I’m attacked by some hoodies (the joys of living in the UK), if I’m chased by a skulk of foxes (the joys of living in rural Hampshire) or hunted by a massive driverless truck (the joys of watching Duel too many times). But I have never, ever planned for being mugged by a rabbit.

But sure enough, one day a rabbit the size of the England rugby team tripped me up whilst I was running, and I swear it tried to steal my MP3 player and new running shoes. I would never have planned for this. I still struggle to believe it even happened. And apparently it’s not that uncommon to be attacked by animals. There was a story in the news a while back about a man who was surrounded by cows and forced to jump into the River Thames. And another about a man being chased home from the pub by an angry badger. But do we plan for these things?

Anyway. I digress. But the point is there are too many variables involved in software development (and life) for us to effectively plan for everything we think we might encounter, so we need to find the line that exists between careful, necessary planning and time-wasting processes that don’t lead to anything useful.

There are just some things that happen that you simply cannot plan for. So why try? Plan for the basics but don’t spend too long planning for the unknown, for the low probability and for everything you could ever think of. Most of it might not happen and I can guarantee there will be bigger problems you’ve never thought of.

So instead, be ready to accept new ideas and concepts, be open-minded about new ways of working and be flexible in how you approach and deal with problems.

And I leave you with a perfect quote:

“One should not respond to circumstance with artificial and ‘wooden’ prearrangement.”

And do you know who said that? The legendary Kung Fu master Bruce Lee.

 

Rob..

Is software testing really a service?

I’ve always subscribed to the concept that the testing we offer is essentially a service to the business. The business engages with the test team for our services. I hear this phrase mentioned a lot, mainly around how we can improve our service, how best to manage this service and how we can maintain our independent service. We, the test team, maintain our impartiality and cast our critical eye over the software. We are a service. Or so I thought.

 

It sounds good in theory and I’ve worked in a service environment first-hand. But I’ve come to realise recently that the service mentality is not helpful. It creates a mental divide that often becomes a very real divide between the rest of the team and the testers. It reinforces the dreaded wall concept and it somehow marks testing out as being special, different, aloof. And that divide, in my opinion, is not a healthy way of looking at testing.

 

I feel we need to take a step back and look at the picture from a higher level. If we do, don’t we see the whole development team, including testing, as the service? As the service to our business? To our customers?

 

As development teams consist of programmers and testers (and other roles such as PM etc) the notion of the test team being a separate service seems fundamentally flawed. At a basic level each and every one of us is in some form of service agreement with other people – our friends, family and colleagues, for example. There is no refuting that; those are everyday human transactions. But I genuinely believe that describing “testing” as a service misleads people and creates a barrier that isn’t positive.

 

No matter what methodology is being used, doesn’t it make more sense to think of the project team as a whole? As a single unit delivering value? As a group of people brought together for their skills and their ability to deliver good software?

 

Are we really bringing together each service (BA, PM, programming, testing, support, documentation etc) to create bespoke project teams made up of individual service agreements? Maybe this is why some teams get so hung up on documentation, sign-offs, gates and criteria. And yet in all my time working I’ve never heard any other department refer to themselves as a service (no doubt some have – let me know). So why must testing?

 

I can see why I used to believe software testing was a service. It’s because testing was hung on the back of the project: something that happened during a certain time period, something that required a special build to be handed over and then thrown back, something that was troublesome, independent and impartial, something that happened as a phase and not as a principle. And it makes sense in that environment. But is it helpful?

 

As more testers are involved at the start of the project, maybe the service concept will give way to the team concept. Maybe people are realising that the testers are no more important than the programmers, or that the support team has just as much input to the project as testing or project management.

 

I’ve seen the dangers that testing as a service can bring: the late delivery to test, the lack of test input throughout the project, the throwing over the wall, the communicating through the medium of defects, the blame culture, the metric wars, the missed deadlines and the poor quality releases.

 

I’ve seen the masses of documentation, the quality police mentality, the gated entry and exit barriers and the general lack of communication between departments. I’ve seen months wasted on up-front design only to find the pesky testers destroy it through static analysis late in the process. But most importantly I’ve seen the thousands of forum threads from irate testers berating the project team for all of the above. I’ve met them. I was once one of them.

 

And before my critics complain that I’m about to go on extolling the virtues of agile methodologies, this has nothing to do with agile, wagile, waterfall, fragile, lean, mean, bream or any other methodology we can name. It has everything to do with people; more importantly, how those people integrate into a team. Sometimes the blame lies with management for building silos, sometimes with testers for enforcing them and sometimes with the whole team for simply conforming to those norms.

 

But anything we can do, as testers, to break down the barriers between groups within the team should be done. Right? We want to be involved, we want to be asked for our opinion, we want to be delivering good software, we want to be part of a great team, we want to be respected and trusted. Can we truly achieve these things by being a service to the rest of the team? By marking ourselves out as special, different, distant, contracted in?

 

I’m not saying we should conform, be walked over, pushed aside and devalued. We can still be impartial, critical, questioning, creative and communicative. These very traits are why we’ve been selected to be part of the team, aren’t they?

 

And yes, agile does try to reinforce this team mentality, where quality is shared, testing is done first and team collaboration is key, but that doesn’t mean it’s not possible in other methodologies. I’ve seen waterfall teams pull together, utilise testers right at the start, build a team that includes testers and ensure communication and team morale stay high. And they have succeeded.

 

But if we look at the testers from afar we can begin to see how we are just part of the team. Nothing more. Nothing less. We possess a skill that the team needs. A skill that complements the rest of the team. A skill that is testing.

 

The customer and business want a service that delivers great software; and that service is the team, of which testing is just one part.

 

I know some people like the wall. They like bragging about finding hundreds of defects in the first week. They like seeing the rest of the project team squirming around trying to explain why it’s all gone pear-shaped with three weeks to release. I’ve worked with these people. They live and breathe negativity. They thrive in a blame culture. That’s what gets them out of bed in the morning.

 

But for me, it simply doesn’t cut it. The failure of a project is a reflection on the team behind it. And that absolutely includes the testers.

 

A team is a team. And on the face of it, it needn’t be more complicated than that.

 

This is why I see testing as a service to be flawed. It assumes we are outsiders and separate, and that just feels wrong. We do have specialist skills and thinking, but we are not outsiders, aloof or distant. We are part of the team.

 

And if we must keep using the term service to describe our role in the team, then we need to fully understand that many of the problems we spend hours griping about come directly from this view of ourselves….

 

Disagree? Agree? Not bothered either way? Let me know why in the comments. I’m open to fine tuning this view, building on it. Let me know.

 

Rob..

Rapid Acclimatisation Process

I saw the term “rapid acclimatisation process” for the first time on a blog I follow. The term struck an instant chord with me. In the blog, the author Jan Chipchase describes the initial period of time after he lands in a new town or city. He spends that time acclimatising to the new location, the new people, the new society, the new environment and the new time zone. All of these things have an impact on Jan and his work, and he uses the term rapid acclimatisation process to describe this.

 

It intrigued me as I believe the same term can be applied to the first time we see some software as a tester. Or log on to our brand new test environments. Or open our new tool. Or our new defect tracking tool… you get the point.

 

I always spend an initial time acclimatising to what I am testing, under what contexts and in what environments. Just exploring, learning and acclimatising.

 

Rapid Acclimatisation Process. It’s got a nice ring to it.

 

Why not check out Jan’s blog here: http://www.janchipchase.com/blog/archives/2009/10/

 

And by the way, we can learn a lot from Jan. His job is to study how people use Nokia phones in their daily lives (or in experimental conditions) and then build these findings into the next generation of phones, probably between 5 and 10 years further down the line. It’s true consumer research. And something testers should try to embrace where possible, i.e. getting stuck in with your end users to find out how they truly use the software, in their environments and under their own specific contexts.

 

And even if you don’t believe that’s something a tester should do Jan Chipchase’s blog is still a cracking read.

 

Rob..

I’m confused….crowdsourcing

I may well be opening a can of worms with this blog but I am genuinely interested to see what people think about crowdsourcing and in particular services like uTest.

 

I’ve read some glowing reports about uTest and heard nothing negative – which is unusual in our community. We often see past the gloss to reveal the real story. But maybe uTest and crowdsourcing really are the future.

 

So I’m genuinely interested to hear from people about why crowdsourcing is becoming increasingly popular amongst the testing community.

 

I have my own opinions and views on crowdsourcing but they are formed from the little experience I have with uTest. Let’s just say I found the payment, project allocation and script assignment somewhat confusing and unfair – but that was right back in the early days; things have changed since then.

 

I also have a concern that lots of testers are working for nothing. No bugs, no pay. And the stats seem to show that too (on the assumption that everyone registered is testing – which I know is not right). I know it is their decision though.

 

For example.

 

In the UK there are 778 registered testers. They have had 270 test cycles between those that take part, and they have found 1,159 defects. Not bad. But it works out at about 1.5 bugs each (assuming all take part – which we know not to be true).

 

It fares slightly worse in India: 5,096 testers, 430 cycles and just 4,242 defects. On basic and simple maths that’s less than one defect each.

 

Spain has 142 testers, 32 cycles and just 78 defects. Half a defect each. Not brilliant.

 

America fares a bit better, but I know these numbers don’t tell the whole truth. It could be one tester raising hundreds of bugs. But why are so many registered and not taking part?

 

(numbers taken from http://www.utest.com/meet-testers – assuming this is up to date)
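
For what it’s worth, here’s the sum being done above – a trivial Python sketch using only the figures quoted (and remembering the caveat that not every registered tester actually takes part):

```python
# Defects per registered tester, using the uTest figures quoted above.
# Caveat from the post: not every registered tester actually takes part,
# so these averages understate what the active testers achieve.
stats = {
    "UK":    {"testers": 778,  "cycles": 270, "defects": 1159},
    "India": {"testers": 5096, "cycles": 430, "defects": 4242},
    "Spain": {"testers": 142,  "cycles": 32,  "defects": 78},
}

for country, s in stats.items():
    print(f"{country}: {s['defects'] / s['testers']:.1f} defects per registered tester")
```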

 

So please do leave comments and let me know what the benefits are of crowdsourcing services.

 

I am genuinely interested to see how the crowdsourcing model works for those doing the testing.

 

  • Are you making money? (note: please don’t disclose how much you are making – a simple yes/no.)
  • Are you learning more about testing?
  • Is the 20,000+ community really all experienced testers?
  • How long do you test on average for before you find something?
  • How often do your bugs get bounced back to you?
  • Are you simply running basic scripts or being utilised for the creative, analytical and questioning minds that you have?
  • Is the pay-per-bug scheme based on luck (i.e. which test scripts you get) or on skill (i.e. being picked for your ability)?
  • Should more companies be getting involved in crowdsourcing?
  • Does it solve the testing problem? Or simply solve the outsourcing problem?

Those are some suggestions to get you started.

 

I look forward to finding out more.

 

Rob..

Got my test effectiveness results back. I suck. Official.

I’ve just spent the last 10 minutes trying to control what I believe are the emotions of rage, laughter and despair.

 

I’ve just got the results back from my participation in the Advanced System Testing Group’s “Software Testing Skills Assessment” pilot. And boy, did I laugh. I laughed at just how badly I had done. I got just 39%. Maybe I should consider a new career. After 10 years I’m obviously not cut out for the world of software testing. Well, the world of certified, standardised, poorly structured, no-customer-contact and no-business-contact software testing.

 

After I managed to control my laughter I entered a small but perfectly agreeable fit of rage. I’d realised just how little information came back to me from the questions I fired off during the test and just how many assumptions the Advanced System Testing Group had made in creating these supposedly insightful assessments.

 

I then entered a state of despair as it dawned on me that this way of testing the tester could well be adopted and embraced by unknowing industries and create yet another exam and certification.

 

So why do I disagree so strongly? Well, here are my reasons:

  1. The test had no real-world time limit, which resulted in no risk-based testing (even though they claim to have assessed against that… Advanced System Testing Group – how were you assessing my ability to do risk-based testing when I had plenty of time to complete all tests and questions with no commercial pressures or reporting deadlines?)
  2. The pilot had no point of contact for questions or feedback or concerns (I fired some off via email and got nothing back)
  3. The test results sheet only allowed for one defect per feature (even though some testers were reporting more than one, this was not counted (or appeared not to be))
  4. The test application itself was so archaic and old-fashioned (written as an MS Access app) that in today’s modern world it seemed inappropriate, certainly for my context. It also didn’t open properly in my latest version of MS Access.
  5. The instructions were not very clear
  6. There was no opportunity to test or report on performance, networks, security etc etc
  7. They were measuring using ISEB/ISTQB/IEEE techniques, which although very valid techniques, are not the best measure of test effectiveness (I worked with a tester once who spent so much time preparing flow, loop and data state diagrams that he left just 3 days for testing the app….)
  8. There were no measures for good communication, passion and pro-activeness, usability, accessibility, test case quality, exploratory testing charter quality, defect reporting, ability to learn etc
  9. The measures are the assessors’ measures of what makes a competent software tester; not mine, not my peers’ or colleagues’ – well, not those I know well anyway. Is their assessment right? Can anyone truly assess the value of a tester when the industry is so varied and complex, with so many overlapping roles?
  10. They bring each and every person down to one level. They assume a tester on the floor using the assessed techniques to check all day will also be competent in a highly volatile commercial decision meeting where a strong personality and commercial clout are needed. They apply a one-level assessment to every single tester, in every single role, in every single company – and this is flawed.
  11. They ignore the commercial and market pressures found in the environment
  12. They ignore test data, test environments, accessibility, usability, performance and load etc.
  13. They ignore the human traits so very much needed to work in a testing role.
  14. They essentially apply a best practice to software testing. And surely we all know by now this simply does not exist.
  15. They ignore the communication skills needed to truly reflect the defect, report the metrics, communicate to the developer, raise the right level of priority etc
  16. And no doubt more I can’t bring to mind at the moment. Someone want to help me out?

Anyway. Enough ranting. I’ve decided I’m going to create my own assessment, pilot it, sell it, make millions from a certification and then retire safe in the knowledge I’ve brought standards and Best Practices to testing. Or I could just continue doing the best job I can, offer my services for mentoring, continue to offer hands-on exploratory testing sessions and help to build a social community in testing where real value can be gained from sharing experiences and ideas. Millions… or career integrity…

 

Tough one.
Rob..

Agile Testing > Story : Distilling more information in the story

Continuing my exploration of the life of a story in agile testing. Part 1 is here.

 

Each story on the backlog has been defined by the business and the customer. These are capabilities that the customer wants. So the next step is to distill some more information into them in the form of acceptance criteria. These are essentially the markers/gates against which the story will be deemed complete. It’s a list of things this feature/capability must do.

 

Here’s an example story using the Software Testing Club:
As a registered user I would like to create a new forum post so that I can start a discussion

 

And here are some example acceptance criteria:

  • User has the option to create a forum post
  • New Forum post window has X fields
  • User cannot add a forum post without a title
  • All error messages appear at X location
  • User cannot add a forum post with no content
  • Mandatory fields are X
  • Optional fields are X
  • Title field holds X characters
  • Description field holds X characters
  • Tags field only accepts tags separated by commas
  • Clicking submit will post to the forum board
  • Any errors when submitting will be reported to the user
  • User will be able to see forum in list after submission
  • Tags entered will be search-able
  • Other users can view the posts
  • etc

You see the point.

 

If a tester gets involved in the story writing session then the acceptance criteria tend to be more detailed. It’s natural for us to want to question the story early, before we get the software, and the more we can do that, the more accurate the story will be. There are plenty more criteria to add to the example above. Think of distilling more information into the story as fleshing out a test case. The story is the test case. The acceptance criteria are the tests themselves.
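
To make that concrete, here’s a rough sketch in Python/pytest of how a few of the criteria above could map one-to-one onto automated checks. The forum post helper is entirely hypothetical – a stand-in for whatever API or UI driver your application really offers – and is defined inline only so the checks run:

```python
import pytest

# Stand-in for whatever your application really offers (an API client, a UI
# driver, etc). Entirely hypothetical - defined here only so the checks run.
class PostError(Exception):
    pass

_forum = []

def create_forum_post(title: str, content: str) -> dict:
    if not title:
        raise PostError("title is mandatory")
    if not content:
        raise PostError("content is mandatory")
    post = {"title": title, "content": content}
    _forum.append(post)
    return post

def test_post_requires_title():
    # "User cannot add a forum post without a title"
    with pytest.raises(PostError):
        create_forum_post(title="", content="Some content")

def test_post_requires_content():
    # "User cannot add a forum post with no content"
    with pytest.raises(PostError):
        create_forum_post(title="My discussion", content="")

def test_submitted_post_appears_in_list():
    # "User will be able to see forum in list after submission"
    post = create_forum_post(title="My discussion", content="Hello all")
    assert post in _forum
```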

 

In the early days, stories would arrive for me with only one or two vaguely described acceptance criteria. Programmers would code against these and then make the usual assumptions they so often have to make. I would get software that didn’t behave how I thought it should, and the customer would need to be consulted all the time, assuming they were available. If not, they would get a demo that didn’t appeal and a whole new sprint would be dedicated to getting it right. Time wasted.

 

With detailed acceptance criteria that the tester, programmer and customer have all agreed on it is possible to avoid this wasted time. Programmers code against it, testers test against it and the customer accepts the software against it. Anything that changes is agreed between all parties and added to the story. In theory, it’s as simple as that.

 

So this is why it’s important to get involved early and put some test input into story writing sessions: the questions and ideas you will raise (just as you would in a test case or through exploration) are essential for the success of that story. How many times do you hear a programmer or customer say ‘I would never have thought of that’ or ‘You can’t do that, can you?’ – the tests and ideas that generate these statements are the same golden content you need to get into the story… and early.

 

If you are not involved in the story writing sessions then try to shoehorn your way in. After all, most managers will be receptive to ways of wasting less time, and anything that helps avoid bugs in the software is a good thing. If you still can’t get involved then start collecting defects that you are fairly sure the team could have avoided if testers had been consulted earlier. The tester is the one who thinks about how things break, possesses skills that aid bug hunting, has experience and is often the main champion of customers and end users. So not including testers in story distilling sessions could be a major mistake.

Agile Testing > Story : On the backlog

Over the next few posts I’ll be exploring the concept of a story in an agile environment and what it means to us as testers. Over the past few days I’ve been hearing about how testers don’t get involved with story writing sessions, how testers duplicate the acceptance criteria in their tests and how testers don’t fully understand how a story can replace a spec.

 

I’ll hopefully cover all of these and more by breaking a story down into smaller lifecycle steps.

 

My experience is based around Scrum projects. The one thing to consider is that each agile implementation is different, with different teams and customers and different requirements of the team. So no one solution will suit all. But hopefully, by sharing my experiences here, I’ll help you round out your view and decide on the best way of working on your agile project.

 

A story is basically a description of how someone interacts with the software to get a desired response.

 

A story takes the form of:

 

As a [Actor/Person] I would like to [Feature or capability] so that I can [Value of action]

 

At the start of a project there is the tedious but incredibly important job of adding all of the customer’s ideas, requirements and thoughts to the backlog. The backlog is the project’s holding space for stories. Look at it as your requirements document.

 

Getting the stories on the backlog is a process of sitting with the customer and adding their requirements to the system in the form of stories. At this point it is unlikely that the stories will have acceptance criteria (i.e. the details of what is involved in the story).

 

Some of the stories may at this point actually be Epics. An Epic is essentially a story that contains lots of other stories. An example would be “Log In”, which would normally consist of several individual log-in stories – it depends how complex it is. When tackling an Epic it is essential to break it down into manageable stories.

 

A manageable story is essentially a story that can be completed (to the definition of done) within your sprint. A sprint is normally between one and four weeks – although there is no law on that one. If the story is not achievable within a sprint then it is an Epic and needs to be broken down further.

 

The definition of done can be complicated, but essentially it is a set of rules/guidelines that must be adhered to before the sprint, story or task is considered done. For example: the sprint is not done until all code is checked in, all tasks are done, all stories are done, the stories have all been tested, the demo stack is ready for the customer, the deployment scripts are done, the automation suite is started, no defects are outstanding, and so on. A series of gates through which the software and process must pass to be ready.
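
If it helps to picture it, a definition of done is really just an explicit checklist the team evaluates before calling the work finished. A toy sketch in Python (the criteria are invented for illustration – every team writes its own):

```python
# Toy sketch: a definition of done as an explicit checklist.
# The criteria below are invented examples; every team defines its own.
definition_of_done = {
    "all code checked in": True,
    "all tasks complete": True,
    "all stories tested": False,  # still outstanding
    "demo stack ready for the customer": True,
    "no defects outstanding": True,
}

if all(definition_of_done.values()):
    print("Sprint is done.")
else:
    for criterion, met in definition_of_done.items():
        if not met:
            print(f"Not done - blocked on: {criterion}")
```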

 

Right, back to the story. Once all stories are on the backlog the customer should rank them in priority order, i.e. a top-to-bottom ranking of what’s most important to them at that moment in time. This rank order will change as the customer sees the software, gets new information or responds to market/financial pressures – and this is the beauty of agile. The next piece of work is always the highest priority for the customer.

 

Once ranked there are two lines of thought as to what to do next.

 

A: Have the team estimate each story in advance – time-consuming, inaccurate as acceptance criteria will not yet be defined, and tricky to estimate with no information on the emerging system
B: Add acceptance criteria to the first few stories and then have the team estimate.

 

I prefer option B, as estimating the whole backlog often proves fruitless and pointless in my experience, unless you are using it for forward planning. It’s at this point that we, as testers, sit with the customer and programmer and work through the stories adding relevant acceptance criteria (more on this in the next post).

 

Once acceptance criteria are added we then have the whole team estimate. Some teams estimate in time, others in complexity, others a combination of the two. For me, the only one that really matters is complexity, though others would argue against this.

 

Estimating complexity is a process of sitting down with planning poker cards (numbers on each card). The scrum master (the person running the sprint) reads out the story and each team member estimates its complexity, putting their card face down. Once everyone has estimated, the whole team turns the cards over and we negotiate our way to a happy medium.

 

Estimation based on complexity is tricky to understand. It’s not about how long something will take but about how complicated you think it is. An easy way of working it out is to take a story that is neither “really hard” nor “really easy”, write it on a piece of card and stick it on a long wall. Then take every other story, write each one down too and stick them either side of the existing story: right-hand side for more complicated, left-hand side for less complicated. From this you can start to see that each story has a complexity level to which we assign a number. I work on 8 being the average story and work either way from there.

 

Once we agree on an estimate it goes against the story and is then used to work out the team’s velocity. The velocity is essentially how many complexity points we can achieve per sprint. This is why sprints tend to be kept the same length: to maintain a consistent velocity. In the first few sprints you will have no idea of the team’s velocity as there is no historic data, but over time the team will slip into a rhythm or groove which allows a much more accurate velocity to be calculated.
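
As a rough illustration of the maths (a sketch only – the figures are invented, and most teams let their tracking tool do this), velocity is just an average of the complexity points completed over recent sprints:

```python
# Sketch: velocity as a rolling average of completed complexity points.
# The sprint history below is invented for illustration.
completed_points_per_sprint = [21, 34, 29, 31]  # points of "done" stories

def velocity(history: list[int], window: int = 3) -> float:
    """Average points completed over the last `window` sprints."""
    recent = history[-window:]
    return sum(recent) / len(recent)

print(f"Current velocity: {velocity(completed_points_per_sprint):.1f} points per sprint")
# Keeping sprints the same length is what makes these numbers comparable.
```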

 

We have not estimated all the stories at this point, but before each sprint this process needs to take place, so that at the sprint planning meeting the team can assign stories to the sprint which have already been estimated and have acceptance criteria. The customer also needs to check the ranking and the backlog, as there could be new stories and defects to consider. This is a continual process and is often referred to as grooming the backlog.

 

And that’s it really. In a nutshell (and a heavily Scrum-flavoured one at that) we have the basics of stories and backlogs and how they are used in a sprint. The one key point I have missed, though, is distilling the acceptance criteria into each story – something that not only makes estimation easier but also makes programming and testing smoother, cheaper and less dramatic. More on that in the next post.

 

Rob..

Pair Programming and Pair Testing

Our programmers here at iMeta now wax lyrical about pair programming and it’s easy to see why. The quality of the code coming through to the test team is exceptional. There are very few fix-fail episodes and the programmers seem over the moon with how well pair programming is faring. Sure, there were teething issues and some programmers didn’t feel the groove when pairing, but these were soon overcome and they moved forward.

 

It got me thinking, though, about whether or not testers should be pairing when writing test cases. My conclusion is that they should. It brought back memories of when I used to have to send test cases off for peer review at a previous company. I too had to review other people’s tests. It often became a chore, but more importantly it missed the point. And here’s why:

  • The review was more a sanity check on formatting, spelling, ensuring every step had an expected result, test case length was ok, etc
  • It became such a chore that it often ended up being a skim read.
  • The person reviewing often didn’t have the same product knowledge. This meant the test cases weren’t reviewed regarding how well they tested the application.

 

And so, on an after-work development project, I sat down with a fellow tester and did some pair test case writing to start with. It was incredible. The thought patterns and processes we entered were remarkable. As a pair we wrote simple, clean and to-the-point tests. The pointless steps and ideas were left out. And as we went, the tester not doing the main writing would build up a mind map charting all of our ideas so that we didn’t miss any.

 

The tests were succinct and short, written as high-level guidance (i.e. no detailed steps). We kept every single test DRY (don’t repeat yourself), extracting all setup, preconditions and data out to separate Excel documents. It truly was a great experience as each of us brought a different outlook to the table. More than that, we bounced ideas off each other. In terms of time spent it might appear that we were doubling the cost, but the quality of the output was incredible.
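
As an aside, the same DRY idea translates directly to automated tests. Here’s a minimal, hypothetical pytest sketch where the test logic is written once and the data is held separately (for us the data sat in Excel documents; an inline table does the job for illustration):

```python
import pytest

# Hypothetical stand-in for the login behaviour under test.
def attempt_login(username: str, password: str) -> bool:
    return username == "valid_user" and password == "valid_pass"

# DRY: one test body, with the data kept separate from the logic.
login_cases = [
    ("valid_user", "valid_pass", True),   # happy path
    ("valid_user", "wrong_pass", False),  # bad password
    ("",           "valid_pass", False),  # missing username
]

@pytest.mark.parametrize("username,password,should_succeed", login_cases)
def test_login(username, password, should_succeed):
    assert attempt_login(username, password) == should_succeed
```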

 

So how about actual pair testing?

The next step was to see if we could do some pair testing. And we could. This too brought some amazing side effects. We raised more important defects. We generated new and interesting exploratory ideas to run, all managed through a mind map. We had to do so little admin on the test cases that we were both truly surprised with how good they were.

 

It felt like we’d covered more of the system in less time. We also covered a lot more of the system than we had test cases for, because as we got to a step, one or both of us would notice that the documentation hadn’t mentioned something, or that the test case didn’t consider some factor, yada yada.

 

The whole process has left me thinking that more of us should consider pair testing. Maybe as a short trial, one day a week. Maybe as a permanent idea. Believe me, the tests, the testing and the camaraderie are all enviable positive effects of pairing up. Let’s not leave the pairing to just the programmers. Let’s help take testing to a new level too.

Stubborn Cat

There’s a cat down the road from me who is so stubborn it’s untrue. He refuses to budge – literally. He saunters out to the middle of the road and sits there with his smug little smile taunting drivers and cyclists. He refuses to move out of the way and it’s not uncommon to have to mount the kerb to get past him.

I don’t know his name but I do know he is stubborn. I generally like cats, my parents have several and one thing I have noted is that they are all fairly stubborn. I think it’s in their nature.

It’s interesting how the testing community seem to think of themselves as stubborn and argumentative. I too believe these to be traits of the majority of testers and often with good reason. We sometimes need to be this way to get the job done. It’s often necessary. There are times when you need to be stubborn, to stand your ground and to hold on to your opinion in the face of pressure and resistance.

However, when we are so stubborn that we refuse to move we could be endangering the project and ruining our reputations. If we refuse to move and accept new ways of thinking we may become sidetracked, irrelevant and a nuisance. Just like the cat.

I’ve recently been on the receiving end of testers who can’t or won’t accommodate new information and who genuinely believe it is their way or no way. Testers who can be quite nasty and cutting about other testers who don’t subscribe to certain ways of thinking. At times like this it feels like people are no longer being stubborn and argumentative to be constructive, but are moving ever so close to arrogant and, at times, woefully wrong. But we are here to serve the stakeholders, to offer a service that people get value from; not to be argumentative and stubborn, not to cause a nuisance, not to be seen as the awkward one.

It’s a fine line to tread between being focused on quality and being downright stubborn. Tread it right and your testing will flourish.

Test Reporting in an agile environment

 

A low-tech dashboard is a great way of communicating the state of the software mid-sprint. At the end of the sprint the board is fairly meaningless unless you have incomplete stories, but mid-sprint it’s a great visual way of showing progress, i.e. we’ve hit this feature in depth and it looks OK.

 

It’s another indicator of how we are progressing. Look at it as a quality indicator that complements velocity indicators like burndowns and burnups. It’s a clear, visual representation of the “quality” of the software from the testers’ point of view. It doesn’t need weighty metrics to back it up – although that may help in some environments. It doesn’t need to be absolutely accurate, just like the burndown report, and it doesn’t need to be complicated.

 

It needs to be simple, easy to read and easy to understand. It’s about communicating to all stakeholders (and other teams) where we are at with the software’s ‘quality’.

 

And when we get to the end of the sprint and we have stories incomplete then the dashboard can be a good way of highlighting where quality is lacking.

 

A few years ago I created an equivalent: a ‘mood board’ with smileys which the testers would put up on paper to show visitors to the team area what mood we were in (happy, sad, nonplussed, ill, bored, tired, giggly, etc). A visual representation of how we were progressing. It worked wonders and the management loved it more than the metrics. And believe it or not, that was in a waterfall environment…

Easy Tiger – Don’t dismiss record and playback just yet

Again this week I’ve been reading blogs, forums and tweets from people dismissing record and playback as a viable automation option. Which is fine, providing reasons and justifications can be cited for not using it. But empty statements and reiterations of other people’s reasons don’t wash too well, especially in the complicated world of testing where context appears to be king.

 

Sure, it should probably never be used as a long-term automation strategy, but I’ve done a couple of projects recently where simple, low-tech, “dirty” record and playback has been the perfect choice. And here’s why:

  • The project had incredibly tight timescales
  • The project, in the same guise, was unlikely to be re-run in the same way meaning a full considered automation strategy could have been a waste of money
  • The testers didn’t have time to plan, build and utilise a full automation strategy
  • The appropriate skills weren’t available
  • Quick feedback and regression was needed

Given that time was of the essence, I needed a quick and dirty way of smoke testing the UI and using automation to load data. I didn’t need long-term, dynamic, flexible, wide-covering automation, otherwise I would have adopted a different strategy.

 

I needed a simple and quick smoke test that hit some key acceptance criteria, gave me confidence core functionality was still working and loaded some data at the same time.

 

It was right for me. It gave me confidence. It showed up basic functionality that was no longer working. It wasn’t time consuming or difficult to maintain. It took only 5 minutes to run. It did the job.
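
For flavour, the kind of script a record and playback tool exports is nothing grander than this – a hypothetical, Selenium-flavoured sketch of a recorded log-in-and-check flow (the URL and element IDs are invented, not from the actual project):

```python
# Hypothetical sketch of the sort of throwaway script a record-and-playback
# tool exports: rigid and literal, but quick to produce and good enough for
# a five-minute smoke run. The URL and element IDs are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("smoketest")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # One blunt check is all a smoke test needs: is core functionality alive?
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```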

 

Think of record and playback as a tool the manual tester can use to help them achieve their testing goal. In essence, this was a project with no automation where a manual tester used record and playback to lower the regression burden and load data states. Does that make it sound more viable and appealing? The tester was using it to lower the burden and make their testing efficient, not as an automation strategy or plan. Far more appealing now.

 

So when someone says that record and playback is wrong, costly and pointless ask them to qualify why that’s so and under what circumstances. It’s always best to have a balanced view of these things. There’s a time and a place for all types of automation. And if that person has never used it, never worked under the conditions it can be suitable for or simply prefers to spend time manually checking basic tests that a computer could be doing – then maybe their point of view should be taken with a pinch of salt. My guess is, that point of view may also contain the words ‘best’ and ‘practice’.

 

Long-term automation with a framework and a key skill set is the way forward for most projects. However, record and playback still has its place, so don’t dismiss it just yet.

Acceptance Criteria : it’s a good friend

With some careful planning, good use of time and access to your customer (or the customer’s proxy) you can craft and distill stories that will make your job as a tester all the more effective.

 

On an agile project test involvement early in the planning and story writing can add an extra dynamic. Testers often have very critical minds and often ask questions other team members don’t. And it’s this questioning and thinking that is so powerful and effective when writing stories.

 

It’s not just that the customer understands the stories better and thinks more critically about them; the programmers also have more information up front, and the designers and every other team member can see clearly what criteria the story will be judged against. Testers often possess the skills needed to bridge the gap between the customer and the tech teams too. They also tend to put themselves in the shoes of the user, consider usability and accessibility, and are often the ones who raise pertinent questions about non-functional behaviours.

 

Leaving the tester out of the story writing sessions means that when the story moves over to test, the testers will often generate a lot of defects, some of them quite simple. Defects that could (and should) have been found before any code was written.

 

And if the tester is involved to their full capacity, they will find that the story in essence becomes a very effective test case. A case that both manual and automated testers can work from. There is no reason why a story can’t contain a long list of acceptance criteria. In fact, the more the merrier in my eyes; it only helps to make estimation and verification easier. There’s no reason why the acceptance criteria can’t reference or link out to flow diagrams, state transitions and any other supporting documentation. And all of this becomes far more possible when you include a critical thinker in story writing sessions.

 

I’ve been through many sprints that, at first, weren’t successful, but once we started getting key team members involved in each story writing session we soon started shipping code that had fewer defects. With fewer defects, velocity tends to go up, morale remains high and more time is freed up for exploratory testing.

 

So don’t be shy. If you are not actively being invited to story writing sessions, then invite yourself and add your critical thinking early.

It’s not a blame culture but it’s definitely their fault

One of the main things I really like about agile is the fact that the whole team are creating and working towards shippable software at the end of each sprint. Well, that’s the theory anyway.

 

And a positive side effect of this is that you lose the ‘over the wall’ mentality. In a true agile environment there is no ‘them’ and ‘us’. It’s no longer a blame culture. Everyone is responsible for quality. Everyone is responsible for getting the software working. The software is not thrown over the wall to test and then thrown back over for bug fixing.

 

So it becomes a team activity in the truest sense. We are all working towards a common goal. No one person is responsible for quality – we all are. Sure, there are still individual mistakes but the team rally together to solve these.

 

And it is great. There’s no bad-mouthing, sniping or hushed conversations – well, fewer anyway 🙂 It’s all about the product. It’s all about the team. And that, in my eyes, is a really positive thing.

It’s all about the people

As many may have guessed or deduced from my posts, I’m all about the people. I strongly believe that people and their skills, outlook and mindset are what make or break a team. A good team can achieve great things. A bad team will rarely achieve anything above average.

 

But a good team isn’t just about getting a group of geniuses or five-star employees together. It’s about diversity and creativity. And this is exactly why great development teams can churn out huge amounts of software of exceptional quality. It’s why some open source projects are so successful and social media collaborative projects are so exciting, interesting and productive.

 

It’s because of people. It’s because the people in the team are usually self governing, highly motivated, creative and directing their own work in line with the whole team approach. It’s partly because these teams are made up of cross disciplines whose outlooks on life, work and play can be so very different.

 

As a manager or team builder don’t be too hasty to build a team of just one discipline, gender or personality. Instead, search out the creative, accommodating, communicative and motivated individuals and bring these people together irrespective of experience. (Note: obviously some teams require certain unique skill sets which cannot be ignored.)

 

The interconnection of ideas, thoughts and opinions is where real learning and development takes place. It’s where great ideas are born and plans are made.

 

Sir Ken Robinson said that “creativity is the interaction of disciplinary ways of seeing things.”

 

Whenever I build a team I look at the team as a whole, not as individual members. I don’t dictate ideas down to the team. I get them all in a room (of all levels) and we brainstorm and generate ideas together, as a unit.

 

It’s this teamwork that generates the ideas, plans, actions and team unification that are so often missing from many test teams. If you can include programmers as well, then you are on to a winner.

 

Creativity is a core fundamental in software testing. How can I find more bugs? What questions can I ask the software? How can I report my findings in a way my audience will interpret them as I want them to? How can I make myself more efficient? How can I leverage Bob’s skills even though he is not on my team?

 

So a good team is not only about the people (their skills or experience) but about the team’s outlook as well (attitude, understanding, communication skills etc). And don’t become complacent; it’s often the juniors who have the freshest and most interesting take on testing too…

Agile: It will make your face melt and your mind burst

For me one of the most difficult challenges I have faced as a tester is the move from a traditional project methodology to an agile one.

 

The process of adopting agile is tricky for a manual tester. It’s incredibly difficult, and often it is the testers who offer the most resistance when teams make the move. Stories about testers being negative, throwing their toys out of the pram and generally being a bad egg are common.

And I completely understand why.

When I made the transition from traditional to agile it felt like my face was melting and my mind was bursting.

It was the toughest challenge of my career. I hated those first few weeks and wondered whether I had a role in the team or not. I was contemplating a change of career and feeling completely and utterly undervalued. I hated it. I was terrified that this was the future of software testing and I didn’t get on with it.

For a tester, it’s not just about doing the same work in a different order or with tighter time constraints; it’s about changing your outlook on testing and how you fit into the team. It’s about redefining your role (and your skills) and evolving to stay relevant. You need to make a mind shift that at first seems completely alien. A mind shift that seems so very wrong.

In the end I just let go, took the rough with the smooth and worked at seeing what all the fuss was about. And here’s what I found out.

 

 

The focus of the whole team shifts to quality

  • You will become the quality expert. You will no longer be the person who tests just at the end
  • You may need to devise tests with little to no formal documentation…fast
  • You will need to feedback your test results rapidly
  • You will need to be confident, vocal, capable, responsive and communicative, often taking charge and leading on quality
  • The rest of the development team will come to you early for feedback on their tests and code

 

You will bridge the gap between the business and the techies

  • Your role should now mean you liaise closely with the customer. You will need to adopt a customer satisfaction role
  • You will help to define the stories and acceptance criteria – these will become your tests and guidance, so your input is essential
  • You will have to report findings about quality to the customer and stakeholders… fast, timely, accurately and with diplomacy

 

You will need to put your trust in the Product Backlog

  • Traditional projects with 100 requirements often end up delivering a large percentage of that 100, but with poor quality, misunderstandings and often incomplete features
  • Agile projects with 100 requirements at the start *may* end up delivering only 60. But these will be complete, exactly what the customer wanted and, of course, of superb quality.
  • This original number of 100 may grow and shrink with changing markets and business decisions. Trust the backlog.
  • The customer will define and decide the next sprint of work for your team.
    • You will simply advise, manage expectations and communicate
    • This is a tough one – letting the customer decide what to do next….
  • You will need to consider the longer term and bigger picture, but your main focus is the sprint in hand

 

You will need to increase your exploration and automation

  • You will need to replace the tedious, checklist type manual tests with automation if possible.
    • Your regression suite will get too large unless you make the most of automation and get the basics covered.
    • The only other option is to hire a load of undervalued and demotivated testers to simply ‘checklist’ basic functionality.
  • Your automation should be integrated with the continuous integration and automated build deployments.
  • Elisabeth Hendrickson summed up agile testing very nicely indeed (taken from her ruminations blog – http://testobsessed.com/):
    • Checking and Exploring yield different kinds of information.
    • Checking tells us how well an implementation meets explicit expectations.
    • Exploring reveals the unintended consequences of meeting the explicitly defined expectations and gives us a way to uncover implicit expectations. (Systems can work exactly as specified and still represent a catastrophic failure, or PR nightmare.)
    • “Checking: verifying explicit, concrete expectations”
    • “Exploring: discovering the capabilities, limitations, and risks in the emerging system”
    • (There’s a small code example of a “check” in this sense just after this list.)
  • A negative side effect of increased exploration is how you go about managing the test information.
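
To make Elisabeth’s distinction concrete, here’s a minimal, invented example of a ‘check’: it verifies one explicit, concrete expectation and nothing more. Exploring is everything you do around and beyond checks like this:

```python
# A minimal, invented example of a "check" in Hendrickson's sense: it
# verifies one explicit, concrete expectation and nothing more. Exploring,
# by contrast, cannot be scripted like this.
def total_price(unit_price: float, quantity: int) -> float:
    return unit_price * quantity

def test_total_price_meets_explicit_expectation():
    # Explicit expectation from the story: 3 items at 2.50 cost 7.50.
    assert total_price(2.50, 3) == 7.50
```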

 

You will need to drop the concept of test case preparation and spec analysis

  • It’s unlikely you will get a detailed spec.
  • The acceptance criteria become your test cases and design.
  • The software becomes the UI design.
  • If you must write a test plan, plan for the sprint only.
    • Don’t assume you know how or what you will be testing in three sprints time.
  • Prepare to be dynamic in your tool selection, approach and thinking to testing. You may need to change your tools to cater for new information.
    • Don’t be too prescriptive.
    • Add a quality toolsmith to your team. They will save you a fortune in the long run.
    • Invest time in researching free, open source or cheap tools.
    • The more tools you know of, the more likely you will be able to respond to changes.
  • Don’t even consider what are supposedly Best Practices.
    • Do what is right for your team, on that project and at that moment in time.
  • Trust me, letting the stories and software guide the UI and design is a revelation. It’s just tricky changing your mindset to accept this.

 

You will need to get over the defect stats and metrics complex

  • Working software is fundamental. It is the end goal.
    • Each sprint you aim to deliver releasable standard software that meets the acceptance criteria.
    • So along the way there is less emphasis on raising and recording every single defect in a tracking system.
    • It’s more about shouting over to the programmer and getting it sorted between you.
    • Look at low tech dashboards as a way of reporting metrics
  • Defects that relate to the acceptance criteria and story under test mean the story is not done (even if it has been coded and the programmer has moved to a new story).
  • Defects are no longer used to cover our backsides or blame other people.
  • Defects that aren’t related to the story should be on the backlog, where the customer can prioritise.
    • After all a defect is a piece of functionality that either exists and shouldn’t or doesn’t exist and should.
    • Let the customer decide what to do with them.
    • They may be less/more important to the customer than you think.
  • If you truly must report then this needs to be done in the lightest way possible. And my guess is that if you really are having to report each and every defect encountered, along with test case metrics and stats, in a formal way, then someone in the process/system has not truly bought in to agile.
  • Note: I’m not saying be slack with defect tracking and reporting.
    • Far from it, if you need to put a defect on the backlog for the customer then you need to consider how you will describe this successfully for that audience.
    • When shouting to the programmer it’s often easier as you can show them the defect in action. 
    • The people you report to, the information you report and the way you report it changes.

 

After getting my head around these differences and new concepts I noticed a few unexpected side effects:

 

  • I was re-ignited with my passion for software testing
  • I was being consulted far more on quality issues meaning I spent less time complaining and raising obvious bugs after the software was dropped
  • I started to use my creativity and critical thinking in a rapid and responsive way, rather than testing a spec and thinking of a few edge cases up front.
    • I was being engaged and used for my creativity, skill and critical thinking
  • I started to work in teams where the whole team valued quality rather than an ‘over the wall’ mentality.
  • I noticed that the customers were far happier with the process. They were getting to control the focus of the work and ending up with software that meets their needs at that moment in time, not the software they thought they wanted 6 months ago
  • I lost a huge amount of negativity and became more positive, motivated and accommodating.
  • I spent far less time sitting around after raising a barrel load of defects.
    • I no longer waited for the triage, fix, build inclusion, release, retest, close.
    • I got them fixed asap, released asap and retested asap.
  • My job didn’t feel futile. I felt I was adding value.

Now I know there are people with frustrations with agile and there will be teething problems and issues for all new teams. And agile really may not be suitable for all types of work, but there are certainly some awesome principles and techniques we can all learn from agile.

If you have any agile testing stories to share then please let me know in the comments.