Let’s say I gave you a giant box to explore

Let’s say I gave you a giant box to explore.

In the box was a machine that performed a number of different functions, along with an instruction manual. The functions the machine can perform are complicated, but they are ultimately observable and open to being identified more thoroughly than we currently have done in our user guide.

The machine, however, has lots of unknowns when it comes to feeding in spurious data and connecting with a wide array of third-party apps. The results of your actions are output to a simple user interface. It’s not pretty but it does the job. The above was a test that a company I worked for would give during a job interview.

(Image: a box, from subsetsum on Flickr.)

They provided a laptop with this “test machine” running on it (a Java application) and they provided the instructions (a simple two-page HTML document).

When I sat the test I hated it.

I didn’t feel it dug deep enough into some of the diverse skills I, and others, could offer.

In hindsight though I think this type of test was actually very insightful – although maybe the insights weren’t always used appropriately.

I got the job and soon found myself on the other side of the process giving this test to other testers hoping to come and work with us.

When I was interviewing candidates I was handed a checklist to accompany the test. This checklist was used to measure the candidate’s success at meeting a pre-defined set of expected measures.

It was a “checklist” that told the hiring manager whether or not to hire someone.

The checklist contained a list of known bugs in the “test machine”. There were about 25 “expected” bugs followed by a list of about 15 “bonus” bugs. There were two columns next to these bug lists, one headed junior and one headed senior.

If we were interviewing a ‘junior’ (i.e. less than 3 years’ experience) then the company expected this person to find all of the “expected” bugs.

If we were interviewing a ‘senior’ (i.e. more than 3 years’ experience) then the company expected this person to find all of the “expected” bugs AND some or all of the “bonus” bugs.

There was also space to record other “observations”, like comments about usability, fitness for purpose, etc.

This interview test was aimed at weeding out those who were not very good, in both the junior and senior categories.

I’m assuming many of my readers are jumping up and down in fits of rage about how fundamentally flawed this process is…right?

I immediately spotted the fundamental flaws with interviewing like this, so I questioned the use of the checklists (not necessarily the test itself) and I stumbled across two interesting points.

Firstly, there were some hiring managers that stuck to this list religiously (like a tester might to a test case). If someone didn’t find the bugs they were expected to then they didn’t get the job. Simple as that.

Other hiring managers were more pragmatic and instead balanced the performance across the whole exercise. For example – I didn’t find some of the things I was expected to find, but I still got the job. (It turns out a few managers were ticking the boxes next to the bugs people didn’t find to “prove” to senior management (auditors) that candidates were finding the bugs they should have).

Secondly, this test was remarkably good at exploring the activities, approach and thinking that most testers go through when they test.

Whether someone found any or all of the bugs they were expected to was beside the point for me. I was more interested in the thinking and process the tester went through to find those bugs and assess the “test machine”.

It just so happened that most hiring managers were not measuring/observing/using this rich data (i.e. the tester’s approach, thoughts, ideas, paths, etc.) to form their opinion.

Junior or Senior – it made little difference

It was clear from doing these interviews that there was little relationship between how many years of experience a candidate had and their ability to find the bugs they were expected to find.

I actually spotted a very clear trend when doing these interviews.

Those that followed the user manual found most of the “expected” bugs no matter what their perceived experience level: juniors, seniors, guru grade – it made no difference.

If they followed the user guide (and often wrote test cases based on the guide) then they found these “expected” bugs.

Those that skimmed the user guide but went off exploring the system typically (although not always) found most of the expected bugs AND some or all of the “bonus” bugs – again, irrespective of their experience (or perceived experience).

This second point highlighted to me that the approach a tester took to testing influenced the types of bugs they found. In this interview setting it also challenged our assumptions about experience and bug count, and it made a few people question whether they had categorised the known bugs correctly – were the bonus bugs too easy to find?

I observed very junior testers finding the “bonus” bugs and senior testers missing them, purely because of their approach and mindset. (Although not a scientific finding, it gave me enough insight to set me on my journey towards exploration and discovery.)

It wasn’t even as simple as all seniors taking a more exploratory approach and all juniors taking a more scripted one. That simply didn’t happen.

It seemed to run deeper than that. It seemed to hinge more on a curious mindset than on years of experience.

Obsessed with finding out more

Those that took an exploration and discovery approach appeared to develop an obsession with exploring the “machine”.

They seemed more compelled to discover interesting information about the “test machine” than those who followed a script based on a partial user guide.

Following the guide simply wasn’t enough. They needed to find out the information for themselves. In a sense, they needed “primary” information: information they had obtained themselves, straight from the source (the product). Other testers were happy using a “secondary” source of information (the user guide) to formulate conclusions. It obviously wasn’t as clear-cut as that, as many straddled the middle ground, but the trends did show an interesting difference in approach.

Note Taking & Story Telling

Another dimension to this process, and the one that kick-started my obsession with note taking, was that those who went off exploring appeared to tell more compelling stories about their testing, often through detailed and interesting notes.

The notes they took enabled them to recount their exploration and discovery story back to others with an astounding level of detail – something most people who followed a script didn’t do (I can’t say whether they “could” do it though).

As these people got stuck in and delved around in the product they started to tell a really interesting story of their progress. Through narrating their approach and taking engaging notes they drew me and the other hiring managers in, even though we’d seen dozens of people perform this very same test.

They started to pull us along with them as they explored and journeyed through the product. They wrote notes, they drew pictures, they made doodles and wrote keywords (to trigger memory) as they went.

They built up a story. They built up a story of the product using the user guide as a basis, but using their own discoveries to fill in the gaps, or to even challenge the statements made in the user guide (of course…the user guide had bugs in it too).

They wrote the story of their exploration as it developed and they embraced the discovery of new information. They kept notes of other interesting areas to explore if they were given more time.

They told stories (and asked lots of questions) about how users might use the product, or some of the issues users may encounter that warranted further investigation.

They involved others in this testing process also; they described their approach and they made sure they could report back on this compelling story once they’d finished the task.

What intrigued me the most though was that each tester would tell a different story. They would see many of the same things, but also very different things within the product and the user guide.

They built up different stories and different insights.

They built their testing stories in different orders and through different paths but actually they all fundamentally concluded very similar things. The endings of the stories were similar but the plot was different so to speak.

What they explored and what they found also said a lot about themselves and their outlook on testing.

Their actions and outputs told stories of who they were and what their contribution to our company/team would be. They let their interview guard and nerves down and showed us the real them.

Everyone who sat that test told us something about themselves and their testing; however, we (the hiring managers) were pulled along more with those who built their testing stories as they went than with those who had pre-defined the story and stuck to it.

The explorers appeared to tell more insightful and compelling stories and they also revealed deeper insights about themselves and the product.

As our hiring improved (better job adverts, better phone screens, better recruitment consultants) we started to interview more people who would explore the product and tell us these compelling stories; these were the people we wanted to seek out.

As a result even those hiring managers who were obsessed with their checklist would find themselves being pulled along with the exploration. They found themselves ignoring their precious checklists and focusing on the wider picture.

It didn’t take long for senior management to see the problems with measuring people against lists too.

Did I learn anything?

Oh yes indeed.

We should never hire purely on binary results (i.e. found this bug, didn’t find that one; is this type of person, isn’t that type of person).

Only by testing the system do we find out the real truth about the product (specs are useful, but they are not the system). In a choice between primary information or secondary information I will gravitate to primary.

When we explore we have the opportunity to tell compelling stories about our exploration, the product, the testing and, more importantly, ourselves – though not everyone has these skills.

All testers will draw different insights; sometimes they may find the same bugs as others, sometimes very different ones. Therefore it’s good to pair testers and to share learnings often.

The hiring approach should be consistent across all those with the power to hire and should focus on the person, not the checklist.

Good exploratory testing relies on the ability to explain the testing, insights and findings both during the testing session and after (and often relies on exceptional note taking and other memory aids).

If you’re hiring a tester, observe them doing some testing before making an offer.

I believe an obsession with learning more (and being curious) is one of the most observable traits there is. Exploration and curiosity do not always come naturally to people. Not everyone is curious about the world around them or the products they test, but for testers I believe this trait alone will send them down the many paths of learning. (And this is a good thing.)

Someone with 10 years of experience is not always better than someone with 2 years – especially so if they have had 10 of the same year.

Dividing people into junior and senior camps is not helpful. Where is the cut-off mark (years of experience? – see the point above)? We are all on a journey and our skills will change and develop over time – the person is what you are hiring, not the years that they have worked.

The big question I now ask is “Does this person want to be on a journey to mastery?” If they don’t, it’s very hard to encourage them to improve.

 

Note: This post may come across as me suggesting exploratory testing approaches (and therefore exploratory testers) are better than scripted testing approaches (and hence testers who follow scripts). That’s not my intention. How you perceive this post will depend very heavily on how you perceive the role of testing, and of the tester, in your organisation. You know your organisation and the needs that you and your team have. Some organisations desire people to follow scripts, some desire people to explore the product, some desire something in between and some desire other things altogether.

 

Scrum Master Training

Last week we had the very clever peeps from Neuri Consulting in to give us a special one-day course on being a scrum master.

It was David Evans and Brindusa Gabur who delivered our training, and what a great day it was. We had around 10 people on the course, ranging from those with lots of agile experience, those currently in scrum master roles, one of our product owners, those who have expressed an interest and those who’ve done scrum master roles previously.

As it was a mixed bag of experience and expectations, we focused heavily on story breakdown and estimation: two areas we’ve yet to master. David and Brindusa targeted their training on these two points whilst also covering a lot of other ground.

We played a couple of games to illustrate points and the misconceptions we brought to them. We also worked through some ideas regarding the scrum master role, deciding which were truths and which were lies. We ran a retrospective of the training which highlighted some interesting points and some good takeaways for the teams to work on.

It was a really good day and I think we all took a lot away from it. From my own point of view I feel we need a more consistent approach across teams, but in reality we’re doing pretty well.

With a little tweaking on how we measure cycle time and more emphasis on quick estimations for the backlog I think we’ll start to see more throughput in terms of stories and features.

What was great to see though was the banter and friendships that have formed here at work. It was a lighthearted affair yet focused and in tune with our core ethos of learning being central to all we do.

The only thing to disappoint us was that we all wanted more. We should have had a two- or three-day course with the peeps from Neuri, as we felt we didn’t cover everything we could have.

The key takeaways from the training seemed to be about having more off-site retrospectives and limiting the retrospectives to a shorter period of time. This gives the retrospectives more focus and the opportunity to move away from problems that linger in people’s minds but aren’t actually a problem right now.

Superb course. Best get saving up for the next one 🙂


What can you do with a brick?

Last week one of our team, Simon, ran a really fun session with the whole test team on our Exploratory Testing process.

We started by discussing some of the thinking that we’ve identified happens when we plan exploratory testing sessions. We talked through a diagram we created a few years back; although it’s pretty accurate in identifying some high-level ideas of our Exploratory Testing process it’s by no means complete, but it served its purpose as an aid to explaining.

Once we’d been through the diagram Simon introduced the team to one of my favourite games that explores creative thinking.

It’s a common game where people are asked to come up with as many uses as they can for an item.

It’s a really good way of understanding how people think up ideas and how diverse the thinking is within our group, and it’s also a good way of injecting some discussion and banter into our session.

Simon gave the group 5 minutes to write their ideas on post-it notes. The item of choice today was a “brick”.

We then affinity mapped them into rough categories on the whiteboard and discussed a little around each category as a group.


We then discussed the process of how people came up with their ideas.

It’s always tricky analysing your own thinking, especially so in retrospect, but we did spot a couple of patterns emerging.

Firstly, whether consciously or not, we all envisioned a brick and started from this image we’d constructed. As it turned out we’d all thought of a standard house brick; some people saw in their minds the one with holes in it, others the bricks with a recess. Either way we started with the standard form of a typical house brick (here in England).


Here’s where our thinking appeared to diverge. After writing down all of the basic uses of a brick (building stuff, throwing at things, weighing things down) we headed off in the following three directions:

  1. Thinking of other sizes, shapes and forms that a brick could take
  2. Thinking of different contexts and locations where a brick could be used in its original form (outside of those we naturally thought of straight away)
  3. Thinking of everyday objects that we could replace with a brick.

For example:

  • Grind the brick down to create sand
  • Use the brick as a book end
  • Take the brick to a philosopher (and/or someone who had never seen a brick before) and try to explain what it was used for
  • Use the brick as a house for small animals, insects and little people
  • Use the holes in the brick as a spaghetti measurer
  • Put the brick in a toilet cistern to save water
  • Use it as a projectile missile or other weapon
  • Use it to draw right angles
  • Use it as a paperweight
  • Use it as a booster seat in a car
  • Use it as a holder for fireworks
  • Use it as a bird bath.

And many, many more.


As you can see we explored a number of uses and created a decent number of categories in which a brick would fit.

What was most important though was that we all took time to reflect on where our ideas were coming from.

We also noted that not all of us think in the same fashion. Some people remained true to the form and shape of a brick but explored different contexts.

Others ignored the standard shape of a brick and explored different elements and uses of the materials within a brick.

This realisation that we all think differently about something as simple as a brick triggered some further discussions about how we come up with ideas for testing our new features.

It ultimately led us to conclude that it would make sense to pair with others during and after our story kick-off meeting. It might generate a broader and deeper set of test ideas. It might not. We’ll have to experiment.

For now though we’re running with two testers attending story chats, followed by a brainstorming and ideas-exchange meeting. We decided it would make sense not to do the brainstorming in the story chat, as that would sidetrack the purpose of that meeting, but we will be sharing our wider test ideas with the team as well as writing acceptance tests in SpecFlow for static checks (a sketch of what those can look like is below).
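For anyone curious what that looks like in practice, here’s a minimal sketch of the kind of SpecFlow acceptance test I mean. The feature, the step wording and the SavedSearchPage helper are all hypothetical (they’re not from our product), but the shape – a Gherkin scenario bound to C# step definitions via TechTalk.SpecFlow – is the standard SpecFlow pattern:

```gherkin
Feature: Saved searches
  As a user I want to save my searches
  So that I can re-run them later

  Scenario: Reject a search name that is too long
    Given I am on the saved search page
    When I save a search named "a name that is far, far, far too long to be accepted by the form"
    Then I should see the validation message "Search names must be 50 characters or fewer"
```

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class SavedSearchSteps
{
    // Hypothetical page-object stand-in; in a real suite this would
    // drive the UI (e.g. via Selenium) rather than hold state itself.
    private readonly SavedSearchPage _page = new SavedSearchPage();

    [Given(@"I am on the saved search page")]
    public void GivenIAmOnTheSavedSearchPage()
    {
        _page.Open();
    }

    [When(@"I save a search named ""(.*)""")]
    public void WhenISaveASearchNamed(string name)
    {
        _page.SaveSearch(name);
    }

    [Then(@"I should see the validation message ""(.*)""")]
    public void ThenIShouldSeeTheValidationMessage(string expected)
    {
        Assert.AreEqual(expected, _page.ValidationMessage);
    }
}

// Minimal stub so the sketch compiles; the validation rule here is
// invented purely for illustration.
public class SavedSearchPage
{
    public string ValidationMessage { get; private set; } = "";

    public void Open() { /* navigate to the page */ }

    public void SaveSearch(string name)
    {
        if (name.Length > 50)
            ValidationMessage = "Search names must be 50 characters or fewer";
    }
}
```

The nice side effect is that the Gherkin text doubles as a shared record of the test ideas from the story chat, so the brainstorming output and the automated checks stay in step.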

Let’s see how it goes.

It was a really good session and we took away a direct change we could try. It’s good to get the team together every week to chat through the ways we are working, thinking and solving tricky testing problems.

Opposite Views

At EuroSTAR 2012 I got talking to someone who had polar opposite views to mine on Testing and Agile implementation.

Despite his opposite views, and the fact that I could counter almost anything he said from my own experience, I knew deep down that he knew he was right.

His solution, albeit not something I would label agile, worked for his clients. He was passionate about the work he did and the people he helped. He held different views, but he was contributing goodness to others in the industry and, more importantly, he was getting results for his clients.

He was helping people succeed.

No matter what my opinions are there will always be people who hold different opinions, ideas and experiences to me.

In software development there are very few process level ideas and actions that are black and white. It’s difficult to quantify, measure and then compare two different approaches as there are so many variables…like budget, people, product, customers, skills and experience etc.

One approach that works over here, might fail over there.

A lot of people avoid talking to those who think differently to them, but I believe that only by opening a dialogue with people who think differently will we truly learn about ourselves and alternative ways of doing things.

Isn’t it in this collision of ideas where true innovation and learning comes from?

I had a lot in common with the tester I met at EuroSTAR 2012. We’ve kept in touch since and despite his continued promotion of ideas at odds with mine we’ve become good friends.

At EuroSTAR he told me that he had found his calling in life.

Who am I to say his calling is worse than mine?