Is software testing really a service?

I’ve always subscribed to the idea that the testing we offer is essentially a service to the business. The business engages the test team for our services. I hear this phrase a lot, mainly around how we can improve our service, how best to manage it and how we can maintain our independence. We, the test team, maintain our impartiality and cast our critical eye on the software. We are a service. Or so I thought.


It sounds good in theory and I’ve previously worked in a service environment first hand. But I’ve come to realise recently that the service mentality is not helpful. It creates a mental divide that often becomes a very real divide between the testers and the rest of the team. It reinforces the dreaded wall concept and it somehow marks testing out as special, different, aloof. And that divide, in my opinion, is not a healthy way of looking at testing.


I feel we need to take a step back and look at the picture from a higher level. If we do, don’t we see the whole development team, including testing, as the service? As the service to our business? To our customers?


As development teams consist of programmers and testers (and other roles such as PM etc) the notion of the test team being a separate service seems fundamentally flawed. At a basic level each and every one of us is in some form of service agreement with other people – our friends, family and colleagues, for example. There is no refuting that; those are everyday human transactions. But I genuinely believe that describing “testing” as a service is misleading people and creating a barrier that isn’t positive.


No matter what methodology is being used, doesn’t it make more sense to think of the project team as a whole? As a whole unit delivering value? As a group of people brought together for their skills and ability to deliver good software?


Are we really bringing together each service (BA, PM, programming, testing, support, documentation etc) to create bespoke project teams made up of individual service agreements? Maybe this is why some teams get so hung up on documentation, sign-offs, gates and criteria. And yet in all my time working I’ve never heard any other department refer to themselves as a service (no doubt some have – let me know). So why must testing?


I can see why I used to believe software testing was a service. It’s because testing was hung on the back of the project, something that happened during a certain time period, something that required a special build to be handed over and then thrown back, something that was troublesome, independent and impartial, something that happened as a phase and not as a principle. And it makes sense in that environment. But is it helpful?


As more testers are being involved at the start of the project, maybe the service concept will give way to the team concept. Maybe people are realising that the testers are no more important than the programmers. Or that the support team has just as much input to the project as testing or project management.


I’ve seen the dangers that testing as a service can bring: the late delivery to test, the lack of test input throughout the project, the throwing over the wall, the communicating through the medium of defects, the blame culture, the metric wars, and ultimately the missed deadlines and the poor quality releases.


I’ve seen the masses of documentation, the quality police mentality, the gated entry and exit barriers and the general lack of communication between departments. I’ve seen months wasted on up-front design only to find the pesky testers destroy it through late-in-the-process static analysis. But most importantly I’ve seen the thousands of forum threads from irate testers berating the project team for all of the above. I’ve met them. I was once one of them.


And before my critics complain that I’m about to go extolling the virtues of agile methodologies: this has nothing to do with agile, wagile, waterfall, fragile, lean, mean, bream or any other methodology we can name. It has everything to do with people, and more importantly with how those people integrate into a team. Sometimes the blame lies with management for building silos, sometimes with testers for enforcing them, and sometimes with the whole team simply conforming to testing norms.


But anything we can do, as testers, to break down the barriers between groups within the team should be done. Right? We want to be involved, we want to be asked for our opinion, we want to be delivering good software, we want to be part of a great team, we want to be respected and trusted. Can we truly achieve these things by being a service to the rest of the team? By marking ourselves out as special, different, distant, contracted in?


I’m not saying we should conform, be walked over, pushed aside and devalued. We can still be impartial, critical, questioning, creative and communicative. These very traits are why we’ve been selected to be part of the team, aren’t they?


And yes, agile does try to reinforce this team mentality, where quality is shared, testing is done first and team collaboration is key, but that doesn’t mean it’s not possible in other methodologies. I’ve seen waterfall teams pull together, utilise testers right at the start, build a team including testers and ensure communication and team morale stay high. And they have succeeded.


But if we look at testers from afar we can begin to see how we are just part of the team. Nothing more. Nothing less. We possess a skill that the team needs. A skill that complements the rest of the team. A skill that is testing.


The customer and business want a service that delivers great software; and that service is the team, of which testing is just one part.


I know some people like the wall. They like bragging about finding hundreds of defects in the first week. They like seeing the rest of the project team squirming around trying to explain why it’s all gone pear-shaped with three weeks to release. I’ve worked with these people. They live and breathe negativity. They thrive in a blame culture. That’s what gets them out of bed in the morning.


But for me, it simply doesn’t cut it. The failure of a project is a reflection on the team behind it. And that absolutely includes the testers.


A team is a team. And on the face of it, it needn’t be more complicated than that.


This is why I see testing as a service to be flawed. It assumes we are outsiders and separate, and that just feels wrong. We do have specialist skills and thinking, but we are not outsiders, detached or distant. We are part of the team.


And if we must keep using the term service to describe our role in the team, then we need to fully understand that many of the problems we spend hours griping about come directly from this view of ourselves…


Disagree? Agree? Not bothered either way? Let me know why in the comments. I’m open to fine tuning this view, building on it. Let me know.



Rapid Acclimatisation Process

I saw the term “rapid acclimatisation process” for the first time in a blog I follow. The term struck an instant chord with me. In the blog, the author Jan Chipchase is describing the initial period of time when he lands in a new town or city. He spends that time acclimatising to the new location, the new people, the new society, the new environment and the new time zones. All of these things have an impact on Jan and his work and he uses the term rapid acclimatisation process to describe this.


It intrigued me because I believe the same term can be applied to the first time we see some software as a tester. Or log on to our brand new test environments. Or our new tool. Or our new defect tracking tool… you get the point.


I always spend an initial time acclimatising to what I am testing, under what contexts and in what environments. Just exploring, learning and acclimatising.


Rapid Acclimatisation Process. It’s got a nice ring to it.


Why not check out Jan’s blog here?


And by the way, we can learn a lot from Jan. His job is to study how people use Nokia phones in their daily lives (or experimental conditions) and then build these findings into the next generation of phones, probably between 5 and 10 years further down the line. It’s true consumer research. And something testers should try to embrace where possible, i.e. getting stuck in with your end users to find out how they truly use the software, in their environments and under their own specific contexts.


And even if you don’t believe that’s something a tester should do Jan Chipchase’s blog is still a cracking read.



I’m confused….crowdsourcing

I may well be opening a can of worms with this blog but I am genuinely interested to see what people think about crowdsourcing and in particular services like uTest.


I’ve read some glowing reports about uTest and heard nothing negative – which is unusual in our community. We often see past the gloss to reveal the real story. But maybe uTest and crowdsourcing really is the future.


So I’m genuinely interested to hear from people about why crowdsourcing is becoming increasingly popular amongst the testing community.


I have my own opinions and views on crowdsourcing but they are formed from the little experience I have with uTest. Let’s just say I found the payment, project allocation and script assignment somewhat confusing and unfair – but that was right back in the early days; things have changed since then.


I also have a concern that lots of testers are working for nothing. No bugs, no pay. And the stats seem to show that too (on the assumption that everyone registered is testing – which I know is not right). I know it is their decision though.


For example.


In the UK there are 778 registered testers. Between them they have run 270 test cycles and found 1159 defects. Not bad. But it works out at about 1.5 bugs each (assuming all take part – which we know not to be true).


It fares slightly worse in India: 5096 testers, 430 cycles and just 4242 defects. By simple maths that’s less than one defect each.


Spain has 142 testers, 32 cycles and just 78 defects. Half a defect each. Not brilliant.


America fares a bit better, but I know these numbers don’t tell the whole truth. It could be one tester raising hundreds of bugs. But why are so many registered and not taking part?
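As a quick sanity check on those per-tester averages, here’s the arithmetic as a small sketch. The figures are the ones quoted above (which may well be out of date by now), and dividing defects by registered testers is deliberately naive – it assumes everyone registered actually takes part, which we know isn’t true:

```python
# Naive defects-per-registered-tester calculation, using the
# (possibly out-of-date) uTest figures quoted above.
stats = {
    "UK": {"testers": 778, "cycles": 270, "defects": 1159},
    "India": {"testers": 5096, "cycles": 430, "defects": 4242},
    "Spain": {"testers": 142, "cycles": 32, "defects": 78},
}

for country, s in stats.items():
    per_tester = s["defects"] / s["testers"]
    print(f"{country}: {per_tester:.2f} defects per registered tester")
```

The UK comes out at roughly 1.5, India at under one, and Spain at about half a defect per registered tester – which is the point: averaged across everyone registered, the paid-per-bug numbers look very thin.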


(numbers taken from – assuming this is up to date)


So please do leave comments and let me know what the benefits are of crowdsourcing services.


I am genuinely interested to see how the crowdsourcing model works for those doing the testing.


  • Are you making money? (note: please don’t disclose how much you are making – a simple yes/no.)
  • Are you learning more about testing?
  • Is the 20,000+ community really all experienced testers?
  • How long do you test on average for before you find something?
  • How often do your bugs get bounced back to you?
  • Are you simply running basic scripts or being utilised for the creative, analytical and questioning minds that you have?
  • Is the paid-per-bug scheme based on luck (i.e. which test scripts you get) or on skill (i.e. being picked for your ability)?
  • Should more companies be getting involved in crowdsourcing?
  • Does it solve the testing problem? Or simply solve the outsourcing problem?

Those are some suggestions to get you started.


I look forward to finding out more.