Let’s say I gave you a giant box to explore

Let’s say I gave you a giant box to explore.

In the box was a machine that performed a number of different functions, along with an instruction manual. The functions the machine can perform are complicated, but ultimately observable and open to more thorough identification than our user guide currently provides.

The machine, however, has lots of unknowns when it comes to feeding it spurious data and connecting it with a wide array of third-party apps. The results of your actions are output to a simple user interface. It’s not pretty, but it does the job.

The above was a test that a company I worked for would give during a job interview.


Image from subsetsum on Flickr.

They provided a laptop with this “test machine” running on it (a Java application) and they provided the instructions (a simple two-page HTML document).

When I sat the test I hated it.

I didn’t feel it dug deep enough around some of the diverse skills I, and others, could offer.

In hindsight though I think this type of test was actually very insightful – although maybe the insights weren’t always used appropriately.

I got the job and soon found myself on the other side of the process giving this test to other testers hoping to come and work with us.

When I was interviewing I was handed a checklist to accompany the test. This checklist was used to measure the candidate and their success at meeting a pre-defined expected set of measures.

It was a “checklist” that told the hiring manager whether or not to hire someone.

The checklist contained a list of known bugs in the “test machine”. There were about 25 “expected” bugs followed by a list of about 15 “bonus” bugs. There were two columns next to these bug lists, one headed junior and one headed senior.

If we were interviewing a ‘junior’ (i.e. less than 3 years’ experience) then the company expected this person to find all of the “expected” bugs.

If we were interviewing a ‘senior’ (i.e. more than 3 years’ experience) then the company expected this person to find all of the “expected” bugs AND some or all of the “bonus” bugs.

There was also space to record other “observations”, like comments about usability, fitness for purpose, etc.

This interview test was aimed at weeding out those who were not very good, in both the junior and senior categories.

I’m assuming many of my readers are jumping up and down in fits of rage about how fundamentally flawed this process is…right?

I immediately spotted the fundamental flaws with interviewing like this, so I questioned the use of the checklists (not necessarily the test itself) and I stumbled across two interesting points.

Firstly, there were some hiring managers that stuck to this list religiously (like a tester might to a test case). If someone didn’t find the bugs they were expected to then they didn’t get the job. Simple as that.

Other hiring managers were more pragmatic and instead balanced the performance across the whole exercise. For example – I didn’t find some of the things I was expected to find, but I still got the job. (It turns out a few managers were ticking the boxes next to the bugs people didn’t find to “prove” to senior management (auditors) that candidates were finding the bugs they should have).

Secondly, this test was remarkably good at exploring the activities, approach and thinking that most testers go through when they test.

Whether someone found any or all of the bugs they were expected to was beside the point for me. I was more interested in the thinking and process the tester went through to find those bugs and assess the “test machine”.

It just so happened that most hiring managers were not measuring, observing or using this rich data (i.e. the tester’s approach, thoughts, ideas, paths, etc.) to form their opinion.

Junior or Senior – it made little difference

It was clear from doing these interviews that there was little correlation between how many years of experience a candidate had and their ability to find the bugs they were expected to.

I actually spotted a very prevalent trend when doing these interviews.

Those that followed the user manual found most of the “expected” bugs no matter what their perceived experience level – junior, senior, guru grade – it made no difference.

If they followed the user guide (and often wrote test cases based on the guide) then they found these “expected” bugs.

Those that skimmed the user guide but went off exploring the system typically (although not always) found most of the “expected” bugs AND some or all of the “bonus” bugs – again, irrespective of their experience (or perceived experience).

This second point highlighted to me that the approach a tester took to testing influenced the types of bugs they found; it also, in this interview case, challenged our assumptions about experience and bug count. It made a few people think about whether they had categorised the known bugs correctly – were the “bonus” bugs too easy to find?

I observed very junior testers finding the “bonus” bugs and senior testers missing them, purely because of their approach and mindset. (Although not a scientific finding, it gave me enough insight to set me on my journey towards exploration and discovery.)

It wasn’t even as simple as all seniors taking a more exploratory approach and all juniors taking a more scripted one. That simply didn’t happen.

It seemed to run deeper than that. It seemed to come down to a curious mindset more than to years of experience.

Obsessed with finding out more

Those that took an exploration and discovery approach appeared to develop an obsession with exploring the “machine”.

They seemed more compelled to discover interesting information about the “test machine” than those who followed a script based on a partial user guide.

Following the guide simply wasn’t enough. They needed to find out the information for themselves. In a sense, they needed “primary” information. Information they had obtained themselves straight from the source (the product). Other testers were happy with using a “secondary” source of information (the user guide) to formulate conclusions. It obviously wasn’t as clear cut as that, as many straddled the middle ground, but the trends did show an interesting difference in approach.

Note Taking & Story Telling

Another dimension to this process, and the thing that kick-started my obsession with note taking, was that those who went off exploring appeared to tell more compelling stories about their testing, often through detailed and interesting notes.

The notes they took enabled them to recite their exploration and discovery story back to others with an astounding level of detail – something most people who followed a script didn’t do (I can’t say whether they “could” do it though).

As these people got stuck in and delved around in the product they started to tell a really interesting story of their progress. Through narration of their approach and interesting notes they got me and the other hiring managers interested, even though we’d seen dozens of people perform this very same test.

They started to pull us along with them as they explored and journeyed through the product. They wrote notes, they drew pictures, they made doodles and wrote keywords (to trigger memory) as they went.

They built up a story. They built up a story of the product using the user guide as a basis, but using their own discoveries to fill in the gaps, or to even challenge the statements made in the user guide (of course…the user guide had bugs in it too).

They wrote the story of their exploration as it developed and they embraced the discovery of new information. They kept notes of other interesting areas to explore if they were given more time.

They told stories (and asked lots of questions) about how users might use the product, or some of the issues users may encounter that warranted further investigation.

They involved others in this testing process also; they described their approach and they made sure they could report back on this compelling story once they’d finished the task.

What intrigued me the most though was that each tester would tell a different story. They would see many of the same things, but also very different things within the product and the user guide.

They built up different stories and different insights.

They built their testing stories in different orders and through different paths but actually they all fundamentally concluded very similar things. The endings of the stories were similar but the plot was different so to speak.

What they explored and what they found also said a lot about themselves and their outlook on testing.

Their actions and outputs told stories of who they were and what their contribution to our company/team would be. They let their interview guard/nerves down and they showed the real them.

Everyone who sat that test told us something about themselves and their testing, however, we (the hiring managers) were pulled along more with those who built their testing stories as they went rather than those who had pre-defined the story and stuck to it.

The explorers appeared to tell more insightful and compelling stories and they also revealed deeper insights about themselves and the product.

As our hiring improved (better job adverts, better phone screens, better recruitment consultants) we started to interview more people who would explore the product and tell us these compelling stories; these were the people we wanted to seek out.

As a result even those hiring managers who were obsessed with their checklist would find themselves being pulled along with the exploration. They found themselves ignoring their precious checklists and focusing on the wider picture.

It didn’t take long for senior management to see the problems with measuring people against lists too.

Did I learn anything?

Oh yes indeed.

We should never hire purely on binary results (i.e. found this, didn’t find this, is this type of person, is that type of person, isn’t this).

Only by testing the system do we find out the real truth about the product (specs are useful, but they are not the system). In a choice between primary information or secondary information I will gravitate to primary.

When we explore we have the opportunity to tell compelling stories about our exploration, the product, the testing and, more importantly, ourselves – though not everyone has these skills.

All testers will draw different insights; sometimes they may find the same bugs as others, sometimes very different ones. Therefore it’s good to pair testers and to share learnings often.

The hiring approach should be consistent across all those with the power to hire and should focus on the person, not the checklist.

Good exploratory testing relies on the ability to explain the testing, insights and findings both during the testing session and after (and often relies on exceptional note taking and other memory aids).

If you’re hiring a tester, observe them doing some testing before making an offer.

I believe an obsession with learning more (and being curious) is one of the most observable traits I can see. Exploration and curiosity do not always come naturally to people. Not everyone is curious about the world around them or the products they test, but for testers I believe this trait alone will send them down the many paths of learning (and this is a good thing).

Someone with 10 years of experience is not always better than someone with 2 years – especially so if they have had 10 of the same year.

Dividing people into junior and senior camps is not helpful. Where is the cut-off mark (years of experience? – see the point above)? We are all on a journey and our skills will change and develop over time – the person is what you are hiring, not the years that they have worked.

The big question I now ask is “Does this person want to be on a journey to mastery?” If they don’t, it’s very hard to encourage them to improve.


Note: This post may come across as me suggesting exploratory testing approaches (and therefore exploratory testers) are better than scripted testing approaches (and hence testers who follow scripts). That’s not my intention.

How you perceive this post will depend very heavily on how you perceive the role of testing and the role of the tester to be in your organisation.

You know your organisation and the needs that you and your team have. Some organisations desire people to follow scripts, some desire people to explore the product, some desire something in between, and some desire other things altogether.



Scrum Master Training

Last week we had the very clever peeps from Neuri Consulting in to give us a special one day course on being a scrum master.

It was David Evans and Brindusa Gabur that delivered our training, and what a great day it was. We had around 10 people in the course ranging from those with lots of agile experience, those currently in scrum master roles, one of our product owners, those who have expressed an interest and those who’ve done scrum master roles previously.

As it was a mixed bag of experience and expectations we focused heavily on story breakdown and estimation; two areas we’ve yet to master. David and Brindusa targeted their training on these two points whilst also covering a lot of other ground.

We played a couple of games to illustrate points and the misconceptions that we brought to the games. We also worked through some ideas which we thought were truths or lies regarding the scrum master role. We ran a retrospective of the training which highlighted some interesting points and some good take-aways for the teams to work on.

It was a really good day and I think we all took a lot away from it. From my own point of view I feel we need a more consistent approach across teams, but in reality we’re doing pretty well.

With a little tweaking on how we measure cycle time and more emphasis on quick estimations for the backlog I think we’ll start to see more throughput in terms of stories and features.

What was great to see though was the banter and friendships that have formed here at work. It was a lighthearted affair yet focused and in tune with our core ethos of learning being central to all we do.

The only thing to disappoint us was that we all wanted more. We should have had a two/three day course with the peeps from Neuri as we felt we didn’t cover everything we could have.

The key take-aways from the training seemed to be about having more off-site retrospectives and limiting the retrospectives to a shorter period of time. This gives the retrospectives more focus and the opportunity to move away from problems that are lingering in people’s minds but aren’t actually currently a problem.

Superb course. Best get saving up for the next one 🙂


What can you do with a brick?

Last week one of our team, Simon, ran a really fun session with the whole test team on our Exploratory Testing process.

We started by discussing some of the thinking that we’ve identified happens when we plan exploratory testing sessions. We talked through a diagram we created a few years back; although it’s pretty accurate in identifying some high-level ideas of our Exploratory Testing process, it’s by no means complete, but it served its purpose as an aid to explaining.

Once we’d been through the diagram Simon introduced the team to one of my favourite games that explores creative thinking.

It’s a common game where people are asked to come up with as many uses as they can find for an item.

It’s a really good way of understanding how people think up ideas and how diverse the thinking is within our group, and it’s also a good way of injecting some discussion and banter into our session.

Simon gave the group 5 minutes to write their ideas on post-it notes. The item of choice today was a “brick”.

We then affinity mapped them in to rough categories on the white board and discussed a little around each category as a group.



We then discussed the process of how people came up with their ideas.

It’s always tricky analyzing your own thinking, especially so in retrospect, but we did spot a couple of patterns emerging.

Firstly, whether consciously or not, we all envisioned a brick and started from this image we’d constructed. As it turned out we’d all thought of a standard house brick; some people saw in their minds the one with holes in it, others the bricks with a recess. Either way we started with the standard form of a typical house brick (here in England).



Here’s where our thinking appeared to diverge. After writing down all of the basic uses of a brick (building stuff, throwing at things, weighing things down) we headed off in the following three directions:

  1. Thinking of other sizes, shapes and forms that a brick could take
  2. Thinking of different contexts and locations where a brick could be used in its original form (outside of those we naturally thought of straight away)
  3. Thinking of everyday objects that we could replace with a brick.

For example:

  • Grind the brick down to create sand
  • Use the brick as book ends
  • Take the brick to a philosopher (and/or someone who had never seen a brick before) and try to explain what it was used for
  • Use the brick as a house for small animals, insects and little people
  • Use the holes in the brick as a spaghetti measurer
  • Put the brick in a toilet cistern to save water
  • Use it as a projectile missile or other weapon
  • Use it to draw right angles
  • Use it as a paperweight
  • Use it as a booster seat in a car
  • Use it as a holder for fireworks
  • Use it as a bird bath.

And many, many more.


As you can see we explored a number of uses and we created a decent number of categories in which a brick would fit.

What was most important though was that we all took time to reflect on where our ideas were coming from.

We also noted that not all of us think in the same fashion. Some people remained true to the form and shape of a brick but explored different contexts.

Others ignored the standard shape of a brick and explored different elements and uses of the materials within a brick.

This realisation that we all think differently about something as simple as a brick triggered some further discussions about how we come up with ideas for testing our new features.

It ultimately led us to conclude that it would make sense to pair with others during and after our story kick-off meeting. It might generate a broader and deeper set of test ideas. It might not. We’ll have to experiment.

For now though we’re running with two testers attending story chats followed by a brainstorming and ideas exchange meeting. We decided it would make sense to not do the brainstorming in the story chat as that will sidetrack the purpose of that meeting, but we will be sharing our wider test ideas with the team as well as writing acceptance tests in SpecFlow for static checks.
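To make that concrete, a SpecFlow acceptance test for this kind of check might look something like the sketch below. The feature, the wording and the steps are entirely hypothetical – they illustrate the Gherkin format that SpecFlow consumes, and are not taken from our real suite.

```gherkin
# Hypothetical feature file – the feature and steps are invented
# for illustration, not from an actual SpecFlow suite.
Feature: Saving a draft story
  As a team member
  I want my story edits saved as a draft
  So that I can come back to them later

  Scenario: An edited story is stored as a draft
    Given I have opened an existing story
    When I change the title and close the editor
    Then the story is saved as a draft
    And the original published story is unchanged
```

Each Given/When/Then line is bound to a C# step definition in SpecFlow, so a file like this can serve both as the shared, readable record of a test idea from the story chat and as an executable acceptance check.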

Let’s see how it goes.

It was a really good session and we took away a direct change we could try. It’s good to get the team together every week to chat through the ways we are working, thinking and solving tricky testing problems.

We made a recruitment video!!

We’re undergoing rapid growth here at NewVoiceMedia as we strive to expand our development team to cater for our growing roadmaps and customer growth. As such we’ve been working hard trying to articulate a little more about what it’s like to work here.

We spent a day a few months back with a cameraman pointing his digital camera at us at various times of the day. It was actually really good to see the final version as I think it’s a good reflection of the kind of company we are. There’s always a worry that the video will be too hyped up or too distant from reality, but the film crew did a great job.

Don’t forget, if you’re a developer or developer in test and you want to come and work for us, then drop me an email or apply online.

P.S – For those that seem to be interested – I am only briefly in the video as the camera pans past one of our stand-ups…

Recruiting is hard work

Despite what many people believe, recruiting developers and testers is hard work.

Trust me.

It’s tricky finding good people.

It’s tricky interviewing and finding out whether both parties are a good match for each other.

It’s tricky onboarding people into the teams with the minimal amount of disruption.

It’s tricky scaling our development team to meet the needs of a growing company doing great work and pushing boundaries.

As such I’m reaching out to ask for help…again.

I need your help to find us talented people to join our growing team. I can’t offer much in return but should we ever meet at a conference I’ll be sure to buy you a drink, but only if there’s a free bar 🙂

Jesting aside we’re on the hunt for the following roles:

Developer (Permanent and Contract)

Must be able to write simple, maintainable and readable C# code. You must be passionate about learning and developing your skills and sharing this learning with those who work with you. You must be able to work as part of a team and within an agile methodology. You must be able to hit the ground running.

Technical Tester / Developer In Test (Permanent)

You’ll be a proficient programmer who’s obsessed with testing. You’ll also need to be able to perform exploratory testing, and to infect all of those around you with an obsession for writing automated tests.

In return you’ll get to work with some of the brightest minds in the industry in a learning centred development team. We’re growing fast and we need top notch people to join us and help to create something amazing.

No agencies please.

Do you think you can help? Do you know anyone who might want to work with us here? Do you yourself want to join us?

Drop me an email to rob (@) thesocialtester (dot) co (dot) uk or DM me on Twitter @rob_lambert



Another Crowdsourced Testing Service – with gamification

I was hanging out at Google Campus Cafe yesterday in London and checking out the notice board when my eyes were drawn to a sign that said:

“Testers wanted”

It turns out the sign was for another crowdsourcing community of testers called Testlio. The crowdsourced business model is growing in popularity and can be a good fit for companies wanting something tested in the wild, on a wide variety of devices, or sporadically (and so having no need for a dedicated test resource).


I do quite like the gamification aspects of Testlio though, and the fact the testing is pitched as “challenges”. Points are awarded for these challenges as well as for doing other things like asking questions and responding to questions. This helps to build a community aspect and engagement.

There are lots of crowdsourcing sites popping up, and some consultancy companies have started to offer this model as a way of catering for the need for this approach to testing.


Screening Candidates Via Video – good idea?

I stumbled across this interesting new tool and concept called Ziggeo via the Swiss Miss blog. I’ve seen something similar before; a tool for recording candidate videos and then reviewing them prior to any phone or face to face meeting.

In some respects I can see tools and services like this being quite useful, particularly if you are recruiting for someone who may have to do presentations, webinars, videos etc as part of their role. But I’d suggest caution with adopting something like this in full swing for your recruiting of testers and developers.

Not everyone will feel comfortable recording a video of themselves answering questions. Is it fair to put people under these conditions if they’re unlikely to ever encounter this type of work in the role you are offering? Will you end up losing out on candidates because they don’t feel comfortable? Is being able to record a video of yourself essential for the role you have?

It’s also a dangerous way to let your preconceptions and biases, consciously or unconsciously, play a part before you’ve even spoken to someone. You could find yourself rejecting people who fall into certain camps before you’ve even got to know them.

Ever heard the story of how women are more likely to be selected for orchestras if they do “blind” auditions? Do you believe you’ll not be party to certain biases when reviewing a video?

You could argue though that the same is true when doing Phone/Skype interviews and even face to face. Yep – there is some element to that, but at least you’re speaking to the person and having a conversation; they have more control over their responses and ability to change your mind.

The text on the Ziggeo page states:

“Have you ever interviewed someone and knew within the first 20 seconds that the meeting was a complete waste of time? “

Yes I have, but then I’ve had my mind changed by these very same people.

Nerves, poor questions, bad interview environments and misaligned expectations on both sides can all lead to wanting to reject candidates before you’ve got to know them.

First impressions make a big difference – don’t get me wrong – but so too does the way a candidate answers a question, or the attitude they have, or the connections you might sense during an interview. Much of what you see in the first impressions in an interview might not be the “baseline” or “normal” behaviour of the candidate. Only when they relax do you start to see the real person.

Would a video show the real side of someone? Or the side that is nervous, uncomfortable and feeling really uneasy about the whole process?

However, I could see this service being very useful for certain roles and jobs, and some people would have no qualms about the medium. I think it’s an idea worth exploring during your recruitment and it could give you some insights and pre-filtering to help you. It could also result in you losing out on good candidates.

What do you think?

Would you feel comfortable recording yourself answering questions?


Note: I should also point out that the service is also aimed at people finding room mates, baby sitters etc.



Opposite Views

At EuroSTAR 2012 I got talking to someone who had polar opposite views to mine on Testing and Agile implementation.

Despite his opposite views, and the fact I could counter almost anything he said from my own experience, deep down inside I knew that he was right.

His solution, albeit not something I would label agile, worked for his clients. He was passionate about the work he did and the people he helped. He held different views, but he was contributing goodness to others in the industry and, more importantly, he was getting results for his clients.

He was helping people succeed.

No matter what my opinions are there will always be people who hold different opinions, ideas and experiences to me.

In software development there are very few process level ideas and actions that are black and white. It’s difficult to quantify, measure and then compare two different approaches as there are so many variables…like budget, people, product, customers, skills and experience etc.

One approach that works over here, might fail over there.

A lot of people avoid talking to those who think differently to them, but I believe that only by opening our dialogue with others who think differently will we truly learn about ourselves and about alternative ways of doing things.

Isn’t it in this collision of ideas where true innovation and learning comes from?

I had a lot in common with the tester I met at EuroSTAR 2012. We’ve kept in touch since and despite his continued promotion of ideas at odds with mine we’ve become good friends.

At EuroSTAR he told me that he had found his calling in life.

Who am I to say his calling is worse than mine?