Bear with me – it’s a rambling post I’ve had in draft for about 3 years now. I’m having a clear out. Enjoy.
One of the things I’ve noticed over the years is that anyone is capable of doing Exploratory Testing. Anyone at all. It just happens that some do it better than others. Some want to do it. Some don’t want to do it. Some don’t realise they are doing it. Some don’t know what to label it.
We all do it to some extent in our own personal lives, maybe with slightly different expectations of what we’ll get out of it, and almost certainly under a different label.
- Have you ever opened a new electronic device and explored around what it can do?
- Or tried to get something working without reading the instructions?
- Or tried to use a system that has below-par instructions (like a ticket machine, or a doctor’s surgery booking system, for example)?
I think we are all blessed with the ability to explore and deduce information about our surroundings and the objects we are looking at. Some people utilise and develop this ability more than others. Some practise more than others. You may argue some are born with a more natural ability to explore.
In our testing world, though, I’ve observed a great many people attaching a stigma to Exploratory Testing; it’s often deemed inferior, or something to do rarely, or less important than specification-based scripted testing.
It’s seen as an afterthought to a scripted testing strategy: a phase done at the end if we have time; a phase done if everything else goes to plan; a phase done to let people chill out after the hard slog of test case execution.
I think much of this stigma, or resistance to exploration, comes about from many testers feeling (or believing) that Exploratory Testing (ET) is unstructured or random; a belief I think many “standards” schemes, certifications and badly informed Test Managers perpetuate.
I’m a curious kind of person and it’s always intrigued me why people think this. So whenever I got the chance to pick the brains of someone who had condemned or dismissed ET, I would ask them for their reasons and experiences around the subject.
The general gist I took from these chats (although not scientific) was that people felt they couldn’t audit/trace/regulate/analyse what actual testing took place during Exploratory Testing sessions.
It became apparent to me that one of the reasons for people not adopting Exploratory Testing (ET) was because of the perceived lack of structure, lack of identifiable actions and poor communication of outcomes to management and other stakeholders.
It’s clear to me that the stories we tell about our testing, and the way we explain our actions, directly affect the confidence and trust other people have in our testing. In some organisations, any doubt in the approach means the approach is not adopted.
This might go some way to explain why many managers struggle to find comfort in exploratory testing and why they too struggle to articulate to their management teams the value or benefits of exploratory testing.
Early Days of Someone Doing Exploratory Testing
Through my own observations I’ve noticed that in the early days of someone doing Exploratory Testing much of it is indeed ad hoc and mostly unstructured; some might even suggest that this type of testing isn’t true Exploratory Testing at all, but it shares much in common with it.
The notes are scrappy, the charters are ill-defined and the journey through the SUT is somewhat sporadic (typically because they follow hunches and inclinations that are off-charter – assuming they had a good charter to start with). The reports afterwards lack cohesion, structure, consistency and detail about what was tested. There is often limited tracing, if any, back to stories, requirements or features. This early stage is quickly overcome by some, but can last longer for others (even with guidance).
Experienced Exploratory Testers
I believe that the more advanced a practitioner becomes in Exploratory Testing, the more they are able to structure that exploration and, more importantly to me, the more they are able to explain to themselves and others what they plan to do, are doing and have done.
It is this explanation of our exploration that I feel holds the key to helping those sceptical of ET see the real value it can add. Those new to ET can sometimes lack rigour and structure; it’s understandable – I suspect they’ve never been encouraged to understand it further, taught anything about it or been mentored in the art of Exploratory Testing.
This unstructured approach can appear risky and unquantifiable. It can appear unauditable and unmanageable. This could be where much of the resistance comes from; a sense of not being able to articulate the real value of exploration.
Through trial and error, and with some guidance, people can quickly understand where they can add structure and traceability. In my own experience, I was unable to articulate to others what I had tested because I myself hadn’t documented it well enough.
Each time I struggled to communicate something about my testing I would work out why that was and fix it. Whether that was lack of supporting evidence (logs, screenshots, stack traces), lack of an accurate trail through the product, missing error messages, forgotten questions I’d wanted to ask stakeholders, missed opportunities of new session charters, lack of information for decision making or even lack of clarity about the main goal of the session – I would work hard to make sure I knew the answer myself.
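To make that list concrete, here is a minimal sketch in Python of how those recurring gaps might be turned into a skeleton session sheet. It’s purely illustrative – the section names and file layout are my own invention for this post, not a standard format or any particular tool’s output:

```python
from datetime import datetime
from pathlib import Path

# Section headings mirror the gaps described above: the session's goal,
# the trail through the product, supporting evidence, and so on.
SECTIONS = [
    "Charter / main goal of the session",
    "Environment and build under test",
    "Trail through the product (timestamped steps)",
    "Supporting evidence (logs, screenshots, stack traces)",
    "Error messages observed",
    "Questions for stakeholders",
    "Ideas for follow-up session charters",
    "Information relevant to decision making",
]

def new_session_sheet(directory="sessions"):
    """Create an empty, timestamped session sheet and return its path."""
    Path(directory).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
    sheet = Path(directory) / f"session_{stamp}.md"
    lines = [f"# Exploratory session {stamp}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", ""]
    sheet.write_text("\n".join(lines), encoding="utf-8")
    return sheet

if __name__ == "__main__":
    print(f"New sheet: {new_session_sheet()}")
```

Starting every session from the same skeleton means the gaps announce themselves: an empty “Questions for stakeholders” section at the end of a session is a prompt, not an accident.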
After doing this analysis, and seeing the same thing in others, I realised that one of the most obvious and important aspects of good ET is note-taking.
Note Taking
Advanced practitioners often make extensive notes that not only capture information and insights around the time of an exploratory session, but also remain relevant in the months and years after the session took place (at least to themselves).
Many testers can look back through their testing sessions and articulate why they did the exploration in the first place, what exploration took place and what the findings/outputs were. Some even keep a record of the decisions the testing helped inform (e.g. release information, feature expansion, team changes, redesign etc.).
Seasoned explorers use a number of mechanisms to record and then communicate their findings. They often bring meticulous attention to detail to their notes, built around the realisation that even the tiniest detail can make the difference between trapping a bug and failing to convey to others how effective their testing was.
Some use videos to record their testing sessions, some rely on log files to piece together a story of their exploration, some use tools like mind maps, Rapid Reporter and QTrace. Others use notes and screenshots, and some use a combination of many methods of note capture.
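In the spirit of timestamped, tagged note tools like Rapid Reporter, here is a small sketch in Python of what the simplest possible capture mechanism might look like. The tags and output format are my own for illustration, not Rapid Reporter’s actual behaviour:

```python
from datetime import datetime

# Tags loosely based on the kinds of things worth capturing in a session:
# plain observations, bugs trapped, and questions or clues to follow up.
TAGS = {"NOTE", "BUG", "QUESTION", "FOLLOW-UP"}

def log(notes_file, tag, text):
    """Append one timestamped, tagged entry to the session's notes file."""
    if tag not in TAGS:
        raise ValueError(f"Unknown tag: {tag}")
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(notes_file, "a", encoding="utf-8") as f:
        f.write(f"{stamp}  {tag:<9} {text}\n")

# Example session (the charter and entries are hypothetical):
log("session-notes.txt", "NOTE", "Charter: explore the ticket machine refund flow")
log("session-notes.txt", "BUG", "Refund of £0.00 accepted with no error message")
log("session-notes.txt", "QUESTION", "Is a zero-value refund ever valid? Ask stakeholders")
```

Because every entry carries a timestamp, the notes can later be lined up against server logs or screen recordings to reconstruct cause and effect.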
It’s this note-taking (or other capture mechanism) that not only allows them to do good exploratory testing, but also to explain that testing well to others.
Too often I have seen good insights and information discovered during testing and then squandered through poor communication and poor note-taking.
Testing is about discovering information, but that is of little use if you cannot articulate that information to the right audience, in the right way and with the right purpose.
In my early days of testing I found interesting information which I failed to communicate in a timely fashion, to the right people or in the right way. I would often lose track of what exploration had actually taken place, meaning I would have to re-run the same charter, or spend a long time capturing information I should have had in the first place. It was from having to run the testing a second time that I learned new lessons in observation, note-taking and information gathering (logs, Firebug, Fiddler traces etc.).
From talking to other testers and managers, I believe good exploratory testing is being done, but the information communicated from that testing is often left to the tester’s recollection. Some testers have excellent recollection, some don’t, but in either case I feel it would be prudent to rely on accurate notes depicting actions, inputs, consequences and all other notable observations from the system(s) under test, rather than on your own memory.
We often remember things the way we want to, often to reinforce a message or conform to our own thinking. I fall into this trap often. Accurate notes and other captures can guard against this. Seeing the unbiased facts *can* help you to see the truth. Our minds are complex though, and even facts can end up being twisted and bent to tell a story. We should try to avoid this bias where possible – the starting point is to acknowledge that we fall foul of it in the first place.
Being able to explain what your testing has, or has not, uncovered is a great skill, and one I see too rarely in testers.
Good storytelling, fact analysis and journalistic-style reporting are skills we can all learn. Some people are natural storytellers, capable of recalling and making sense of trails of actions, data and other information and, importantly, putting these facts into a context that others can relate to.
To watch a seasoned tester in action is to watch an experienced and well-practised storyteller. Try to get to the Test Lab at EuroSTAR (or other conferences) and watch a tester performing exploratory testing (encourage them to talk through their thinking and reasoning).
During a session a good exploratory tester will often narrate the process, all the while making notes and observations and documenting the journey through the product. This level of note-taking allows the tester to recall cause and effect, questions to ask and clues to follow up in further sessions.
We can all do this with practice.
We can all find our own way of supporting our memory and our ability to explain what we tested. We can all use the tools we have at our disposal to aid in our explanation of our exploratory testing.
I believe that the lack of explanation about exploratory testing can lead people to believe it is inferior to scripted test cases. I would challenge that, and challenge it strongly.
Good exploratory testing is searchable, auditable, insightful and can adhere to many compliance regulations. Good exploratory testing should be trusted.
Being able to do good exploratory testing is one thing; being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are neglecting the communication side of their skills, and it’s a shame, because it may be directly affecting the opportunities they have to do exploratory testing in the first place.
Did you ever get around to publishing the results of your note-taking research? I’d sure be interested in seeing those!
It too is in draft! I’ve got so many posts that are almost done. I’m slowly working through them over the next few months. Thanks!
Hi Rob,
It’s a very interesting topic. What I mostly do is create high-level test cases (test case names in the test management tool) and test exploratorily with them as a guide. Whenever I discover new paths I just add a new test case name so I can trace what I explored. At the end I have done my exploratory testing and I have my results ready to communicate to management/the feature team. I don’t like to follow the details of the test cases. You will not find a lot of bugs via them. This way of testing has given me very good results in my last 4-5 years as a tester. I was able to find more bugs than testers who followed the test cases step by step. And once we have a new release I know where I need to explore, based on the fixed bugs and hot spots.
BTW: I love your blogs, I see a lot of similarity with my way of thinking/acting/empowerment etc. 🙂
Kind regards
Thomas
Hi Thomas,
Thank you for your kind words and sharing how you organise your testing. That sounds like a very sensible way of approaching the challenge and it keeps you free to explore whilst still having the high level test case names. Great stuff.
Cheers
Rob