A fair few years ago I encountered a story about an environment where “bean counting” was the priority:
- testers should produce X number of tests per day.
- testers should be raising X number of defects per month
- Monthly prize awarded for most defects raised
- No exploratory testing – no time for it – no need for it – test cases in advance was the norm
- Test cases must have X number of steps regardless of complexity, feature set or the skill set of the tester
- Step by step test cases only – no vagueness or open ended questions. Must have expected outcomes for each step. Nothing should be left to the tester to think about
- No data extraction to a central repository, meaning test cases were a maintenance nightmare
- Detailed test plans up front and then ignored by whole team
- Detailed resource scheduling months in advance – no contingency and all staff working 100% capacity
- Staff treated as resources rather than people
I’m sure this sounds familiar to most testers. It’s certainly a common discussion point on most testing forums.
There was a lot of negativity in this work environment. The atmosphere reeked of contempt, finger pointing and blame. There was a lot of box ticking just to meet quotas. Random checks of “passed” test cases revealed defects that should have been caught. Was the test really run?
People were raising multiple defect reports for the same issue, in all the different flavours it showed (operating systems, locales and so on), just to raise their totals. Don’t forget, there was a prize available for the most defects raised, regardless of how sensible they were. 62 individual reports for one issue was the record. A one-minute, one-line code fix solved all 62 issues. Oh and yes, there was also a prize for most defects resolved by the programmers.
Then something strange happened. Through no serious planning or collaboration, some test leads (and their teams) started flying below the radar. Doing things the best way (to them and their team), not the corporate way.
They continued providing the metrics needed for the managers but let their teams “just get on with it”. So they prepared shells of test cases as guidance and to report metrics on, but encouraged exploration and learning. After all, one of the most difficult challenges facing the team was the lack of communication between departments and the use of bad specs. So they explored and wrote tests during the testing. Constantly learning about the system. Constantly fine tuning the process.
They started to automate the tedious processes and the deployments to test. All off the record of course. Within about 3 weeks the “off the record” teams were motivated, dedicated and passionate about testing. They were also working fast, but disciplined. They were creating their own structure.
They were, as the agile world would call them, self organising, even though this was waterfall through and through. They had automated huge amounts of tedious regression freeing up time for more interesting exploration and scripting.
All the time they ticked boxes to appease management. All the time they flew below the radar. All the time the test leads risked management’s wrath.
Instead of pandering to the enforced corporate structure, they created their own environments and worked to their own strengths and weaknesses.
More importantly though, they started to have fun doing their jobs. They started to find meaning in what they were doing. It made sense and it provided interest. It was now a meaningful job. When something is meaningful we want to do it. We want to get out of bed in the morning. It becomes a passion for us. It is highly motivating.
Eventually the “off the record” teams were exposed. The leads were reprimanded and the teams were forced back into the enforced structures. The leads were moved around or their roles were changed. In essence, the structure took over again. The “trouble makers” were banished from the project and watched more closely than ever before.
And then something obvious happened. The reprimanded team members started to leave, moving off to other jobs. In the space of about two months, 25 people had left.
Despite the real success and improvements these people made, the management couldn’t see the value in good software, motivated teams and rapid delivery. No reports or formal processes were being followed – so to management it was all wasted effort. Null and void in the grand scheme of delivery. Software isn’t tested unless a test case is produced. Reports and charts were more important to the management than good software. Defects don’t count if no test case rooted them out.
Also though, the management saw the testers as resources. The numbers were more important than the people. The numbers were more important than the actual end quality. The numbers were more important than the ethos, environment and vibe. The numbers were deemed to tell the full story. The numbers were mostly faked, twisted or irrelevant. The numbers were valued more than the tester creativity, enthusiasm and flow.
I’m not advocating low flying testing.
I’m also not advocating no reports or metrics. We need some of this in order to record what has happened and to give us an indication of where we are in the process. Management still need to know where we are in the grand scheme of things. But balance is needed. Balance for the good of the product, the customer and the organisation – not for the good of management who can’t see past false numbers.
In the end the test leads essentially lost their roles; ultimately this low flying ended their careers at the organisation, but it also cost the organisation extremely valuable and skilled testers. It showed me clearly that when structures and enforced constraints start to affect morale, people will go to extremes to do a good job.
More importantly, if these people fly below the radar and still can’t get the job done, there are two possible outcomes:
1 – they will leave and the organisation will lose invaluable testers.
2 – their enthusiasm and creativity will be sapped; they will stay but offer little value. Essentially the organisation still loses invaluable testers.
A good manager should be isolating teams from corporate politics, enforced structures and metric-only assessments of quality and testing. We should all be persuading those around us that the effectiveness of the tester is at risk when excessive structure is in place.
A balance should be sought between providing useful indicators of testing (metrics, burndowns, defect numbers) and letting the teams just get on and do the job.
A creative, happy, empowered and proactive tester can be worth a thousand demoralised testers structured out of independent thinking and creativity.
Instead of looking at how to measure, structure and enforce global testing “best practices”, we should be empowering the team to solve the problems. We should grab the talent we trust and give them the platform to grow. And we should be standing on our soapboxes and advocating that:
“This is the team to get it done”
Picture from Jeffk on flickr. http://www.flickr.com/photos/jeffk/92047417/