The UKTMF this week was very good. Lots of interesting discussions to be had.
Being at the event triggered some thoughts about testing that I have been meaning to air for some time now.
Mastered Functional Testing
There were strong arguments by a few influential people that we, as testers, have mastered functional testing.
I don’t agree. I think we’ve got lots to learn.
I don’t believe we can ever master something, as everything is forever changing; software changes, contexts change, we change.
There’s also no single arbiter of what’s right (despite what some certification boards may tell you), which means we’ll never truly know whether we have mastered functional testing.
I think we have a long way to go as testers before we can say we’ve got close to mastering functional testing.
Testing is only done by testers
This is a common misconception we have when talking about testing; that testing is done just by testers.
We are quick to put the tester at the centre of the development lifecycle when in reality we are just part of a wider team. The team will typically do testing of many different types.
The discussion at UKTMF centred around technical testers, and the assumption was that technical testing was done by technical testers. Not so for many companies. We need to separate out the role of tester and the act of testing. Developers test, customers test, product owners test, testers test. Not all testing needs to be done by testers. Therefore not all technical testing needs to be done by technical testers.
All products must have uber-quality
Many statements and discussions centred around product quality and how it must always be amazing. This is not true.
As much as we would like to think all companies need testers (is this us putting testers at the heart of the process?) and need to create high-quality products, it’s simply not true.
There are many companies producing software with no “testers” (although they do testing) and there are many products on the market (doing well) that aren’t (or weren’t) great to use.
Products, like humans, go through life stages, and quality isn’t equally important at every stage. Sometimes market share or a proof of concept is more important than a quality product.
Early adopters of Twitter will know the “Fail Whale”. Years ago about 3 in 6 tweets I posted would result in a Fail Whale – yet Twitter itself is now thriving, and I’m still using it. I’m happy to live with poor quality (for a period of time) if the problem the product is solving is big enough.
We often lose sight of the context the product is used in when talking about testers and testing. We are not the centre of the Universe and we are not always required on all products.
Testers are not generic
Another point I observed was that many people spoke about testers as though they are resources. Just “things” that we can move around and assign to whatever we want and swap as we please.
It was refreshing to hear a few people telling stories about the best teams being made up of a mixed bag of people from many different backgrounds, but they (we) were in the minority. A good team, in my experience, has people with very different backgrounds and approaches. We are not carbon copies of each other.
Testing (or, more to the point, social science) is not technical
Many people seem to be quick to replace the word “technical” with “being able to write code”.
It’s almost as though “programmer” is a synonym of “technical”.
So specialists in exploratory testing, requirements analysis, negotiation, marketing, technical writing (there’s a clue in the title) and design are not technical… really?
It led to a great comment from Richard Neeve, who suggested that a technical tester (and indeed any tester) could be described as a scientific tester. I like the thinking behind this and need to ponder it further.
The whole “technical tester” discussion is interesting as it often takes hours just to agree on a general definition of exactly what a technical tester is – something Stefan Zivanovic facilitated very well.
And finally, I thought it prudent to mention the views I aired at the UKTMF on why the “technical tester” label is becoming increasingly prevalent in our industry.
I believe that the clued-up testers are the ones who realise the future is bleak for those who simply point and click. A computer is replacing this type of testing.
The switched-on testers are learning how to code, specialising in areas like accessibility, training and coaching, UX, performance, security, big data and privacy, or they are expanding their remit into scrum master, support, management, operations and customer-facing roles. They are evolving themselves to remain relevant to the marketplace and to make best use of their skills and abilities.

Some of these people are differentiating themselves by labelling themselves as technical. I believe this is why we are seeing the rise of the technical tester. And as a side note, not all of those who call themselves a Technical Tester are all that technical (in either coding or other technical skills). As with all job labels, there are those who warrant it and those who are over-inflating themselves.
I think it’s important that testers diversify and improve their technical skills (and I don’t mean just coding skills) to remain relevant. After all, if you’re interviewing a candidate for a tester role and they have excellent testing skills, then you’ll likely hire them (assuming all other requirements and cultural fit are right). But what if a similar candidate came along who was also excellent at “testing” but was also skilled and talented in UX, ethnographic research, performance testing or security testing… which one would you want to hire now?
Remaining relevant in the marketplace isn’t about getting another certificate; it’s about evolving your own skills so you can add even more value than simply pointing and clicking. You need to become a technical tester (as defined above) – in fact, scrap that – you just need to become a good tester, because after all, good testers are indeed technical.