Word. Search.

Whenever you encounter a word you’ve never heard before, or one whose meaning you don’t know, go back to your desk and research it.

Do a word search. Find out what it means.

Word Search Image
Image from Sergis Blog “Sopa de letras” October 29, 2007 via Flickr, Creative Commons Attribution.

A key aspect of being a good tester is being able to understand, decipher and communicate in the language of your business. The jargon that your business uses is an important aspect of the way your team members communicate. Embrace the jargon and learn to use it – it often makes communicating within your own social or business group more effective.

As a tester you need to know what other people are talking about.

You need to know the technology they are talking about.

You need to know the approach they are taking and the way they are articulating what they are doing.

The most valuable way to understand the business you work in and the product you are testing is to do your research about the words and phrases being used.

Don’t come away from a meeting with an unknown word and no action to research it. Don’t let yourself encounter a word or phrase you don’t know more than once. Research it and understand it.

You might not need to know the ins and outs of the word’s meaning, or of what the word represents, but you should at least be aware of it.

For example, in a typical meeting here we might mention some or all of the following:

  • API
  • UI
  • Stack
  • SIP
  • SIPp
  • WireShark
  • Trace
  • Cold Transfer
  • Call Leg
  • Proxy
  • WAF
  • App Firewall
  • Burp
  • VoiceXML
  • NewRelic

This is a tiny, tiny subset of the words we use. Some represent technologies; some represent aspects of a system under test.

As testers we need to know what each one means. We need to know how each one works.

So how do I find this out?

Search the web. Find resources and read about each word or phrase.

Ask. Ask your peers. Ask your colleagues.

Decipher. Work out what a term means from the context in which it is used. For example, if you hear the words “run a Wireshark trace” you should be able to decipher that Wireshark is some sort of tool or technique for tracing something. The more you listen, the more clues you’ll get; you’ll soon have a mental picture of what a Wireshark trace is.

You can then join in and understand the conversation – and then confirm your assumptions with some research after (or even better – a question to clarify during the meeting).

I make a point of jotting down any word, phrase or description I hear in any conversation. I then search, ask or decipher.

What do you do?

Find out what the word means? Or ignore it and hope you’ll never need to know?

The rise of the technical tester, and other thoughts from the UKTMF

The UKTMF this week was very good. Lots of interesting discussions to be had.

Being at the event triggered some thoughts about testing that I have been meaning to air for some time now.

Mastered Functional Testing

There were strong arguments by a few influential people that we, as testers, have mastered functional testing.

I don’t agree. I think we’ve got lots to learn.

I don’t believe we can ever master something when everything is forever changing; software changes, contexts change, we change.

There’s also no single source of right (despite what some certification boards may tell you), which means that we’ll never truly know whether we have ever mastered functional testing.

I think we have a long way to go as testers before we can say we’ve got close to mastering functional testing.

Testing is only done by testers

This is a common misconception we have when talking about testing; that testing is done just by testers.

We are quick to put the tester at the centre of the development lifecycle when in reality we are just part of a wider team. The team will typically do testing of many different types.

The discussion at UKTMF centred around technical testers, and the assumption was that technical testing was done by technical testers. Not so for many companies. We need to separate out the role of tester and the act of testing. Developers test, customers test, product owners test, testers test. Not all testing needs to be done by testers. Therefore not all technical testing needs to be done by technical testers.

All products must have uber-quality

Many statements and discussions centred around the product quality and how it must always be amazing. This is not true.

As much as we would like to think that all companies need testers (is this us putting testers at the heart of the process?) and need to create high-quality products, it’s simply not true.

There are many companies producing software with no “testers” (although they do testing) and there are many products on the market (doing well) that aren’t (or weren’t) great to use.

Products, like humans, go through life stages and quality isn’t always important at all life stages of a product. Sometimes market share or a proof of concept is more important than a quality product.

Early adopters of Twitter will know the “Fail Whale”. Years ago about 3 in 6 tweets I posted would result in a Fail Whale – yet Twitter itself is now thriving, and I’m still using it. I’m happy to live with poor quality (for a period of time) if the problem the product is solving is big enough.

We often lose sight of the context the product is used in when talking about testers and testing. We are not the centre of the Universe and we are not always required on all products.

Testers are not generic

Another point I observed was that many people spoke about testers as though they are resources. Just “things” that we can move around and assign to whatever we want and swap as we please.

It was refreshing to hear a few people telling stories about the best teams being made up of a mixed bag of people from many different backgrounds, but they (we) were in the minority. A good team, in my experience, has people with very different backgrounds and approaches. We are not carbon copies of each other.

Testing (Or more to the point, social science) is not technical

Many people seem to be quick to replace the word “technical” with “being able to write code”.

It’s almost as though “programmer” is a synonym of “technical”.

So specialists in exploratory testing, requirements analysis, negotiation, marketing, technical writing (there’s a clue in the title) and design are not technical… really?

It led to a great comment from Richard Neeve, who suggested that a technical tester (and a tester in general) could be described as a scientific tester. I like the thinking behind this and I need to ponder it further.

The whole “technical tester” discussion is interesting as it often takes hours just to agree on a general definition of exactly what a technical tester is – something Stefan Zivanovic facilitated very well.

And finally I thought it prudent to mention the views I aired at the UKTMF on why “technical tester” is becoming increasingly common and prevalent in our industry.

I believe that the clued-up testers are the ones who realise the future is bleak for those who simply point and click. A computer is replacing this type of testing.

The switched-on testers are learning how to code, specialising in areas like accessibility, training and coaching, UX, performance, security, big data and privacy, or they are expanding their remit into scrum master, support, management, operations and customer-facing roles. They are evolving themselves to remain relevant to the marketplace and to make best use of their skills and abilities. Some of these people are differentiating themselves by labelling themselves as technical. I believe this is why we are seeing a rise of the technical tester.

As a side note, not all of those who call themselves a Technical Tester are all that technical (in either coding or other technical skills). As with all job labels there are those who warrant the label, and those who are over-inflating themselves.

I think it’s important that testers diversify and improve their technical skills (and I don’t mean just coding skills) to remain relevant. After all, if you’re interviewing a tester with excellent testing skills then you’ll likely hire them (assuming all other requirements and the cultural fit are right). But what if a similar candidate came along who was also excellent at “testing” but was also skilled and talented in UX, ethnographic research, performance testing or security testing… which one would you want to hire now?

Remaining relevant in the marketplace isn’t about getting another certificate, it’s about evolving your own skills so you can add even more value than simply pointing and clicking. You need to become a technical tester (as defined above) – in fact, scrap that – you just need to become a good tester – because after all, good testers are indeed technical.


Ice Cream Sauce and Cigarettes

I’ve recently been doing my food shopping online. No more wandering around the store and queuing. This week though I swapped online supermarkets in a bid to save some money and all I did was waste a load more time.

Image from kardboard604 “Groceries” October 31st, 2009 via Flickr, Creative Commons Attribution.


In my opinion search is at the heart of everything in the world of online shopping. I search for stuff to buy. I search some more. I search again. I find things to add to my basket via search.

Sure, I could use the menus and structure the company has added to the site, but I usually don’t. Or I try not to. I search.

So it was a great surprise to find that my newly chosen supermarket’s search fell short of my expectations.

Ice Cream Sauce and Cigarettes

The first search I did was for “ice cream sauce”. It’s been crazy hot here recently and my kids are getting bored with plain old vanilla ice cream.

The results of the search surprised me.

There was no ice cream sauce in the top ten results but the returned list did include green tea, yoghurts, dark chocolate and cigarettes.

Search Results

Yep, you read that right. A search for “ice cream sauce” returned two different brands of cigarettes but no ice cream sauce.

Ironically ice cream sauce was listed at the bottom of the page where suggested products (based on what other people bought after searching for “ice cream sauce”) are shown. Are they taunting me?


As you can see, it lists a number of ice cream sauce products, all of which contained (in either the title or the description) the words “ice cream sauce”. Yet they were not returned in the main search… cigarettes were.

I then did a search based on the exact title of the “Toffee Ice Cream Sauce” that other people bought (I also included the brand name in a second search) and still the product wasn’t returned in the results. Eh?

So what is the search algorithm using to find products?

It doesn’t seem to be using the title or the description otherwise it would have returned the sauce…right?

Is it using keywords? Or tags? Has someone got it wrong and labelled cigarettes as “ice” or “cream” or “sauce” in the database?
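As an illustration of how that could happen (this is purely hypothetical; I have no knowledge of this supermarket’s actual implementation, and the product names and tags below are made up), a naive search that ranks only on hand-entered tags, never consulting titles or descriptions, behaves exactly like this: one mistagged product, and cigarettes outrank the toffee sauce.

```python
# Hypothetical sketch of a tag-only search index. One mislabelled
# product is enough to push an irrelevant item above a perfect
# title match, because the titles are never consulted at all.

products = [
    {"title": "Toffee Ice Cream Sauce", "tags": ["dessert", "topping"]},
    {"title": "Brand X Cigarettes", "tags": ["ice", "cream", "sauce"]},  # data-entry error
]

def tag_search(query):
    """Rank products by how many query terms appear in their tags (only)."""
    terms = query.lower().split()
    scored = [(sum(t in p["tags"] for t in terms), p["title"]) for p in products]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

# The mistagged cigarettes match all three terms; the real sauce matches none.
print(tag_search("ice cream sauce"))
```

A search on the title text would of course find the sauce instantly; the point is that with bad tag data, the best-described product in the catalogue can be invisible.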

I then searched for “blueberries”. It didn’t return a direct hit.

I then searched for “eggs”. The first pack of eggs was about 6 rows down in the results grid. Not exactly what I would call a direct hit.

I then searched for “children’s toothbrush” and didn’t find one.

I started using the menus to find the toothbrush.

Children’s toothbrushes didn’t exist under Dental > Toothbrushes, or Dental > Children. Nor did they exist under the Baby and Child section. So they basically don’t appear to sell them online, yet they do sell them in-store.

Should they have tested more?

To say that they should have tested more, or done more of X type of testing, is a statement too far. For all I know they did the testing, found the bugs, and then a product owner said “ship it” anyway.

What I do know though is that the search was not throwing an error. It handled all of the stuff I threw at it, but it didn’t return what I wanted.

Would this have passed a functional test? Maybe… probably. It depends on who ran it, what values they used and what their level of engagement was.

As a user I wasn’t frustrated because of crashes or exceptions or errors; things we typically talk about finding as testers. I was frustrated because I couldn’t do my shopping. The result of this was that I went elsewhere to do my shopping. I went to my usual supermarket where I found 100% of the items I searched for straight away.

If the site is well designed in the background and people are keen on improving the process then I suspect they are tracking the success rate of searches and tweaking the algorithms over time. I hope they are doing this.
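A minimal sketch of what that tracking might look like (the log format and field names here are my own invention, not anything I know about this site): record each search alongside whether the shopper went on to add a result to their basket, then report each query’s failure rate so the worst offenders get fixed first.

```python
from collections import defaultdict

# Hypothetical search log: (query, shopper_added_a_result_to_basket)
search_log = [
    ("ice cream sauce", False),
    ("ice cream sauce", False),
    ("eggs", True),
    ("blueberries", False),
    ("eggs", True),
]

def failed_search_rates(log):
    """Return each query's failure rate: searches that led to no basket add."""
    totals, fails = defaultdict(int), defaultdict(int)
    for query, added in log:
        totals[query] += 1
        if not added:
            fails[query] += 1
    return {q: fails[q] / totals[q] for q in totals}

rates = failed_search_rates(search_log)
# Worst-performing queries first: these are the searches losing customers.
print(sorted(rates.items(), key=lambda kv: -kv[1]))
```

Even a crude metric like this turns “the search feels bad” into a prioritised list of queries worth investigating.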

What is your product’s main purpose?

Your product doesn’t just have to work; it has to serve its main purpose.

It’s much easier now to switch suppliers or providers or services, so as testers we need to make sure that when we test we’re testing that the product is fit for purpose, not just that it accepts £%£@$@££!@@£@@FFDSIFJE21398389198237 as an input to a field.

I see testers digging around in the detail, trying to find the exceptions, the errors and the data misconfiguration, and missing the big picture. It’s good to find the gnarly bugs, but pointless if the basics don’t work.

How useful is a functionally spot-on working product that is used by no-one?

This week when I did my online food shopping I didn’t find any exceptions, or stack overflows, or errors using the search, but I also didn’t find my groceries. Which result is more important?