Managing exploratory testing

One of the perennial challenges I have faced with exploratory testing is how best to manage it and then report on it.

I’ve hacked around with many different systems for years and never quite felt happy with any of them.

As most of the testing that the team do here at NewVoiceMedia is exploratory testing, I’ve needed to find ways to understand how much testing we are doing and which bits of the product we are spending our time testing. The numbers alone won’t say we’ve got stuff covered, but when combined with metrics from across the entire Dev process I can start to piece together where our testing is focused and where we might need to make changes.

An Idea

The other month, though, I stumbled across an idea that felt right. I spiked it out and gathered some initial feedback from some of the team. I concluded it was worth a go and, so far, when used in anger it seems to be good. I thought I’d give it a few weeks before I blogged about it – so here it is – a hacked-together system for managing the reporting of our exploratory testing.

The Beginning

During a story chat we look to put as much of the checking under automated test as possible. However, during the chat the testers will be doodling, mind-mapping or jotting down ideas about what to test – ideas that often warrant a little further exploration. These become exploratory test charters.

We write our charters in the Explore [something] WITH [something] TO DISCOVER [something] format, as described perfectly in Elisabeth Hendrickson’s awesome Explore It! book.

These charters are just wishes at the moment – they haven’t been run – and truth be told not all of them will get run. We’ll also add further charters, or delete existing ones, as we progress through the life of a story.

Running an Exploratory session


Each tester will run a session in a different way, to suit their own preferences and style. They will also store their notes in different ways. Most of the team are using Rapid Reporter; I personally use Evernote, and a couple of peeps are using Word and Notepad++. No matter what program they use, they are creating detailed notes of the session they are running. These notes are personal to them, but public for others in the business to view. They may not make sense to others, but as long as they make sense to those who created them, that’s cool.

The notes from the session, in whatever format they are in, will eventually make their way into our social wiki system, Confluence. In Confluence we have a space dedicated to Exploratory Testing. In this space we have a simple hierarchy of Year > Month > Test Charter.

The charters are named [TesterInitials]_[DDMMYYYY]_[SessionNumber]. For example: RL_16062013_1 and RL_16062013_2.

When creating this charter page we use a template so that all charter pages in Confluence are consistent and easy to navigate around. On this page in Confluence is a table which simply requires the charter name, tester’s name and date.
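If you wanted to script that housekeeping rather than do it by hand, a small helper along these lines would build the page title and a bare-bones body. This is only a sketch – the function names and the plain-text template are illustrative, not our actual Confluence template:

```python
from datetime import date

def charter_page_title(initials: str, session_number: int, on: date) -> str:
    """Build a page title in the [TesterInitials]_[DDMMYYYY]_[SessionNumber] format."""
    return f"{initials}_{on.strftime('%d%m%Y')}_{session_number}"

def charter_page_body(charter: str, tester: str, on: date) -> str:
    """A bare-bones stand-in for the template table: charter name, tester and date."""
    return (
        f"Charter: {charter}\n"
        f"Tester:  {tester}\n"
        f"Date:    {on.strftime('%d/%m/%Y')}\n\n"
        "Session notes:\n"
    )

print(charter_page_title("RL", 1, date(2013, 6, 16)))  # -> RL_16062013_1
```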

The rest of the page is then free for us to paste our session notes into, or to attach files such as CSV or txt files to. This Confluence page essentially becomes the final record of the exploratory session. It becomes a reference point to link Pivotal Tracker stories to, and it also becomes an audit point (version and permission controlled). We might never look back at it – but we keep it just in case we should ever need to.

Part one done – we’ve got the session recorded in a consistent manner. Now we need to get some metrics recorded.

Reporting the session

I created a very simple Google Form for recording results.

Once the team have completed the Confluence page they can open the form and fill it in.

The form asks for basic information such as tester name, main area of test, charter title, Confluence link (back to the session notes), how many hours of testing was done (to the nearest hour), any story links, browsers, operating system and environment. All of the fields are mandatory.

Once submitted the details are stored in a Google spreadsheet. Bam – basic test reporting.
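Because it all ends up in an ordinary spreadsheet, the responses are easy to pull out programmatically if we ever want to slice them further. A minimal sketch using the gspread Python library; the sheet name and credentials file here are illustrative:

```python
import gspread

# Assumes a Google service account that has access to the responses spreadsheet.
gc = gspread.service_account(filename="service-account.json")
worksheet = gc.open("ET Sessions (Responses)").sheet1

# Each row comes back as a dict keyed by the form's column headers.
sessions = worksheet.get_all_records()
print(f"{len(sessions)} exploratory sessions recorded so far")
```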

I can now see how many sessions we have run against which main functional areas of the product. I can see the links to Confluence. I can see which browsers we are exploring against, and I can see how many sessions we are running across different time scales. I have created a series of graphs showing sessions per tester, sessions per browser, sessions per functional area and average sessions per day.
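For what it’s worth, the roll-ups behind those graphs are nothing more than simple counts. A sketch in Python with pandas, assuming the responses have been exported to CSV; the column names are illustrative and would need to match whatever the form actually produces:

```python
import pandas as pd

# Exported form responses; the Timestamp column is added automatically by Google Forms.
df = pd.read_csv("et_sessions.csv", parse_dates=["Timestamp"])

sessions_per_tester = df["Tester"].value_counts()
sessions_per_browser = df["Browser"].value_counts()
sessions_per_area = df["Main area of test"].value_counts()
avg_sessions_per_day = df.groupby(df["Timestamp"].dt.date).size().mean()

print(sessions_per_tester)
print(f"Average sessions per day: {avg_sessions_per_day:.1f}")
```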

The next step is to pull this data down into our bigger management dashboard and make it visible. This can then be tied to cycle time, velocity, scripted test case reporting (from a different system) and the automated tests that are running.

With all of this data we should be able to see trends and patterns which will inform our next moves. Are we covering enough of the product? Are we moving too fast? Too slow? Are we covering the right browsers? How can we improve the testing? Is the data itself providing value in decision making?

The numbers alone don’t mean anything – but as a whole they may give us enough information to ensure we keep delivering the right thing for our customers.

Friction

It’s not a perfect system, but we’re using it and it was super simple to get rolling. We’ve struggled with other systems and other hacks to organise our ET and, although this one means we’re hopping around a couple of different systems, it is indeed suiting our work. We are in these systems anyway so it’s not a major overhead.

This system will be changed up and hacked around further for sure – but right now we’re testing it and seeing what value it’s giving us.

What system are you using and is it working for you?