One of the most perennial (and misguided) questions testers get asked is “Why did you miss that one?” or “Why didn’t you test for that?” or “Why did we get that live bug?”
It’s a question loaded with accusation and assumption: the assumption that testers somehow hold the key to “perfect” software. With large test combinations, complex operating environments, tight budgets and tight schedules, it’s increasingly important for a test engineer to perform some form of risk-based testing, which will no doubt leave gaps in coverage.
There are the retaliation responses like “why was it coded that way?”, or “if I had more time it would have been tested”, or “why was it designed that way?”. These responses, when used to pass the blame, typically reflect badly on you and rarely help move things toward any sort of resolution. Sometimes comments like these are mindful and truthful observations, but I suspect there is a more helpful and peaceable way of communicating them.
At the end of the day most (if not all) of our testing comes down to some sort of risk-based decision about what to test. And sometimes we get that wrong.
We base our decisions on countless factors, and I don’t pretend to know them all. What I do know is that the decision you made when testing was the decision you made. There’s nothing you can do about it after the event. You can learn from it, adapt, iterate or simply move on, but you can’t change it. Live issue or not, there’s no room for time travel. You made a decision, you tested what you thought was right (at that moment in time), and if you missed something then it’s fair to say it’s too late to change that decision.
You can certainly learn from the experience after you’ve done your testing; in fact, it would be negligent not to. These issues can point to a problem with your testing and/or the choices made, but more often they point to a problem outside of your immediate testing control. They often point to a problem that the whole business needs to look at. A problem that might need people to take a step back, observe and reflect. These could be budget, communication, expectations, hardware, software dependencies, skills, time pressures, commercial issues or a whole host of other factors that affect your ability to do your testing.
Sure, testers make mistakes, but so too do the people who help inform the risk-based decisions, whether through direct information or indirect factors like time, cost, motivational drain or anything else that played a part in the tester’s decision.
Sometimes it’s a straightforward mistake. Hands up, acknowledge it, assimilate and accommodate the feedback, and move on. Other times it requires further analysis and a good look at how things are operating at a higher business level. So as a tester, don’t be disheartened by issues that slip through your net, but don’t be held accountable for all issues either; it’s a team process with lots of contexts and factors involved.
Instead, look for ways to learn and move forward, both at a personal and business level, that are right for your context. And if you’re still being held accountable, blamed and chastised, then maybe it’s time to change your title from software tester to Quality Assurance Manager. (Note and caveat: many testers already have a QA title yet aren’t responsible for QA – it’s a complicated world we live in 🙂 )
Regular followers of this blog will know I like to work in pictures, so I cobbled together the attached diagram. I’m not sure it’s complete, and I’m not even sure it represents my thinking fully, but it felt right to put it out there and see what people think.
Is it a diagram of risk based decision making? Or a diagram of failed choices and tricky paths to tread? Or a diagram of dilemma and regret?
I’m not sure. It just felt like a good way to show the complexities and difficult choices testers and businesses face.