You’ve built a new test team but your bug counts are on the increase in both test and live. Why?
You’re sitting there wondering what went wrong. Why the grief? Why the drama? How can this be?
Well, it’s just a case of SNOT.
S – Safety
N – Net
O – Of
T – Test
Snot. Safety Net Of Test. It’s something I’ve observed many times in my career when new teams are formed, new departments spring up or a new batch of people comes in. Testing often becomes a safety net. The catch-all. The people who will control our quality. But the one aspect of this safety net that always baffles many people is why there are always *more* defects. (Note: “more” is very subjective here, as there is often little empirical evidence to show that “more” have indeed been found or are showing. It’s often based on “gut feeling” <– which might well be right.)
Some food for thought on why I think we often see more bugs:
When Testers are brought into a company and a Test team is beginning to flourish, more people are looking at the software, probably in a more managed / structured / critical / organised way, and probably also with a fresh set of untainted / unbiased eyes. The product is being inspected, explored and investigated by professionals (we hope). This *could* be the reason for more bugs.
Sometimes an easing off of testing can happen on the part of the programmers (and other people who were doing some testing). This is because they now have someone else fulfilling this role and responsibility. This “someone else” might not know the nuances or intricacies of the system just yet. This *could* be the reason for more bugs.
The business as a whole now has a department to “blame” when defects are found in live. Before the Test team, the blame culture was potentially collective; now, with a Test team, it is departmental. This *could* be the reason for more bugs.
The process of bringing in new Testers often means more process and experience are put in place. Test management, defect process flows, exploration, critical thinking, triage, reporting and artefacts are all things that many companies start to see more of when Test teams form. This therefore, at least initially, brings bugs and good / bad existing processes to everyone’s attention. Not only that, but the Testers (if they are of sound skill, knowledge and mind) will begin to champion better processes and thinking, and start to challenge bad practices and existing assumptions about testing. This will bring more focus to the software, and people may start to question why there are bugs in it. These bugs may have always been there (and some may have always been known about), but we’ve raised expectations now… we need to meet them. This *could* be the reason for more bugs.
The project team as a whole now perceives its velocity or work rate to have increased with more people on board, so more code is produced (maybe because they have someone else to cover the testing, and the code may also have fewer checks) and the test team simply cannot keep up. This could mean more code goes out untested, and hence defects slip through the net. This *could* be the reason for more bugs.
It could be that the software itself is not in a “happy place”, hence the initial desire to build out a Test team. The Test team is too late to catch the fallout, and a spike in defects occurs due to legacy issues. Just staying on top of new work could take all the testers’ time, leaving legacy stuff exposed to new code interactions and new ways of being executed, which starts to show vulnerabilities and bugs. This *could* be the reason for more bugs.
The way defects are counted and categorised could have changed, which brings to light defects that were previously, ahem, ignored. This *could* be the reason for more bugs.
Or it could simply be that there is just a plain old spike for some reason. This *could* be the reason for more bugs.
It could be any number of reasons, but from my experience the spike in defects is very real after the forming of a “formal” Test team.
I guess the big questions in my mind as I write this are:
- Do we really care enough to measure these spikes accurately and scientifically? (I suspect someone is already tracking a maturity model of some description)
- Are defect counts really a good indication of influence, impact and effectiveness of any Test team, let alone a newly created one?
- If the spike is temporary do we need to explain it at all?
- Are businesses still assuming Testing is the last line of defence? The safety net? The catch-all group? Could the spike be down to a programming error or an ill-defined requirement?
- At what point does the spike stop being a spike and become the norm?
- If there isn’t a spike, should we be worried about the Test team’s effectiveness?
- Is there a way to maintain collective responsibility for Quality when new Test teams are formed? Do we really have the means to track the many complicated facets involved with potential spikes in bugs (morale, people, process, approaches, environments, features, new tech, etc.)?
- And why am I asking so many questions?
So the next time you see a spike in defects after a newly appointed (or changing) team is in place, I would encourage you to observe and muse on some of the potential reasons for it. But don’t worry too much; things will even out in the end 🙂