Thanks for the information, I’ll make up my own mind though

As a tester it’s important to thank people for any information and advice on how to test, where to test and what to test, but then to make up your own mind as to what to do.

This is true whether it’s a specification, an email, a conversation, a user story or any other form of information. Testing is often one of those activities that everyone believes they can do, and do well. It’s not hard to test…right?

We are professional skeptics. That doesn’t mean we are skeptical only of the software, but of everything else produced during the development and use of the system: user guides, marketing briefs, claims, advertising and anything else. The only really accurate information about what the product should do is gained from working out what the system actually does (i.e. by testing).

As professional skeptics we need to make up our own minds and come to our own conclusions. That should be done using any supporting material we can, but ultimately from our own information, decisions and activities.

Moving from 8-month releases to weekly

The other week I facilitated a session at the UK Test Management Summit.

I presented the topic of moving to weekly releases from 8-month releases.

I talked about some of the challenges we faced, the support we had, the outcomes and the reasons for needing to make this change.

It actually turned into a question and answer session, and despite my efforts to facilitate a discussion it continued down that route. It seems people were very interested in some of the technicalities of how we made this move with a product with a large code base of both new and legacy code (and that my facilitation skills need some fine-tuning).

Here are some of the ideas.

We had a vision

Our vision was weekly releases. It was a vision that everyone in the team (the wider team of more than just development) knew about and was fundamentally working towards.

This vision was clear and tangible.

We could measure whether we achieved it or not and we could clearly articulate the reasons behind moving to weekly releases.

We knew where we were

We knew exactly where we were and we knew where we were going. We just had to identify and break down the obstacles and head towards our destination.

We had a mantra (or guiding principle)

The mantra was “if it hurts – keep doing it”. We knew that pain was inevitable but suffering was optional.

We could endure the pain and do nothing about it (or turn around) or we could endure the pain until we made it stop by moving past it. We knew the journey would be painful but we believed in the vision and kept going to overcome a number of giant hurdles.

Why would we do it?

We needed to release our product more frequently because we operate in a fast moving environment.

Our markets can shift quickly and we needed to remain responsive.

We also hated major releases. Major feature and product releases are typically painful, in a way that doesn’t lead to a better world for us or our customers. There are almost always issues or mismatched expectations with major releases, some bigger than others. So we decided to stop doing them.

The feedback loop between building a feature and the customer using it was measured in months, not days, meaning we had long gaps between coding and validation of our designs and implementations.

What hurdles did we face?

The major challenge when moving to more frequent releases (we didn’t move from 8 months to weekly overnight btw) was working out what needed to be built. This meant us re-organising to ensure we always had a good customer and business steer on what was important.

It took a few months to get that clarity, but it has been an exceptional help in being able to release our product to our customers.

We also had a challenge in adopting agile across all teams and ensuring we had a consistent approach to what we did. It wasn’t plain sailing but we pushed through and were able to run a fairly smooth agile operation. We’re probably more scrumban than scrum now but we’re still learning and still evolving and still working towards reducing waste.

We had a major challenge in releasing what we had built. We were a business based around large releases and it required strong relationships to form between Dev and Ops to ensure we could flow software out to live.

What enablers did we have?

We had a major architectural and service design decision that aided rapid deployments: our business offering of true cloud. This meant the system had just one multi-tenanted version. We had no bespoke versions of the product to support, which enabled us to offer a great service, but also gave us a great mechanism for rolling the product out.

We owned all of our own code and the clouds we deploy to. This enabled us to make the changes we needed to without relying on third party suppliers. We could also roll software to our own clouds and architect these clouds to allow for web balancing and clever routing.

We had a growing DevOps relationship, meaning we could consider these perspectives of the business together and prepare our plans in unison, allowing smoother roll-outs and bringing a growing mix of skills and opinions into the designs.

What changes took place to testing?

One of my main drivers leading the testing was to ensure that everyone took the responsibility of testing seriously.

Everyone in the development team tests. We started to build frameworks and implementations that allowed Selenium and SpecFlow testing to be done during development. We encouraged pairing between devs and testers and we ensured that each team (typically four or five programmers and a tester) would work through the stories together. Testing is everyone’s responsibility.
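
As a rough illustration, here is the kind of story-level check a dev/tester pair might automate during development. It’s a minimal sketch in Python with Selenium rather than our actual .NET/SpecFlow stack, and the URL, element IDs and credentials are made up.

    # Minimal sketch of an automated acceptance check run during development.
    # Hypothetical test URL and element IDs; our real framework used SpecFlow.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_agent_can_log_in():
        driver = webdriver.Chrome()
        try:
            driver.get("https://test.example.com/login")
            driver.find_element(By.ID, "username").send_keys("test.agent")
            driver.find_element(By.ID, "password").send_keys("not-a-real-password")
            driver.find_element(By.ID, "login-button").click()
            # Acceptance criterion for the story: a logged-in agent sees the dashboard.
            assert "Dashboard" in driver.title
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_agent_can_log_in()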

Testing is done at all stages in the lifecycle. We do TDD, Acceptance Test Driven Development and lots of exploratory testing.

We do a couple of days of pre-production testing with the wider business to prove the features and catch issues. We also test our system in live using automation to ensure the user experience is as good as it can be. We started to publish these results to our website so our customers (and prospective customers) could see the state of our system and the experience they would be getting.

We started to use techniques like keystoning to ensure bigger features could be worked on across deployments. This changed the approach to testing because testers had to adapt their mindsets from testing entire features to testing small incremental changes.
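
A minimal sketch of the idea, with hypothetical feature and route names: the bulk of a large feature ships dark across several weekly releases, and the “keystone” (the link that exposes it to users) goes in last.

    # Keystoning sketch (hypothetical names). The wallboard view below is deployed
    # and testable by URL over several weekly releases, but customers can't reach
    # it until the keystone navigation entry is uncommented in the final release.

    def build_agent_nav():
        return [
            ("Home", "/home"),
            ("Calls", "/calls"),
            # ("Wallboard", "/wallboard"),  # keystone: added only in the last release
        ]

    def wallboard_view(account_id):
        # Each weekly release adds a bit more behaviour here, tested incrementally.
        return {"account": account_id, "calls_waiting": 0, "longest_wait_secs": 0}

Each weekly slice can be tested on its own, which is the mindset shift mentioned above.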

Why we love it

Releasing often is demanding but in a good way. The pressure is there to produce. The challenge we have is in balancing this pressure so as not to push too hard too often but have enough pressure to deliver. We don’t want to burn out but we want to ship.

We exceed the expectations of our customers and we can deliver value quickly. In an industry that has releases measured in months (sometimes years) we’re bucking the trend.

As a development team we get to see our work in production. This gives us validation that we are building something that is being used. Ever worked on a project that never actually shipped? Me too. We now see none of that.

 

It’s been tough getting to where we are now but we’ve had amazing support from inside and outside of the business which has helped us to really push ahead and set new markers of excellence in our business domain. We’ve still got lots to get done and lots to learn but that’s why we come to work in the mornings.

 

These are just a few of the factors that have helped us to push forward. There are companies releasing more often, and some releasing less often to good effect. Each business has a release cadence that works for them and their customers.

Did I mention We’re Recruiting?

 

Side Notes:

I got asked the other day how I come up with ideas for talks/blogs, how I think through these ideas and how I go about preparing for talks. I’ll take this opportunity to add a short side note of how I do this. This approach may not work for you.

I firstly create a central topic idea in a mind map (I use XMind).

I then brainstorm ideas around the central topic. After the brainstorm I go through the map and re-arrange, delete, add and rename until I feel I have a story to tell.

Moving to weekly releases

I then start planning the order and structure of the story. Every story has a beginning, a middle and an end.

I start by writing the beginning and then the end. The middle is the detail of the presentation.

 

I then doodle, sketch and plot.


 

I then move to my presentation tool of choice. In this case it is PowerPoint – sometimes it is Prezi.

The presentation typically takes a long time to prep, even for a very short intro like this. This is because I don’t like including too much text in my slides and also because I think simple but attractive slides can add some impact to the topic. So I spend some time making sure they are right. That said, no amount of gloss in the slides will help with a bad/poor/boring story.

 

 

Explaining Exploratory Testing Relies On Good Notes

 Bear with me – it’s a rambling post I’ve had in draft for about 3 years now. I’m having a clear out. Enjoy.

One of the things I’ve noticed over the years is that anyone is capable of doing Exploratory Testing. Anyone at all. It just happens some do it better than others. Some want to do it. Some don’t want to do it. Some don’t realise they are doing it. Some don’t know what to label it.

We all do it to some extent in our own personal lives maybe with slightly different expectations of what we’ll get out of it and almost certainly under a different label.

  • Have you ever opened a new electronic device and explored around what it can do?
  • Or tried to get something working without reading the instructions?
  • Or tried to use a system that has below-par instructions (like a ticket machine or a doctor’s surgery booking system, for example)?

I think we are all blessed with the ability to explore and deduce information about our surroundings and the objects we are looking at. Some people utilise and develop this ability more than others. Some practice more than others. You may argue some are born with a more natural ability to explore.

In our testing world though I’ve observed a great many people attaching a stigma to Exploratory Testing; it’s often deemed as inferior, or something to do rarely, or it’s less important than specification based scripted testing.

It’s seen as an afterthought to a scripted testing strategy; a phase done at the end if we have time; a phase done if everything else goes to plan; a phase done to let people chill out after the hard slog of test case execution.

I think much of this stigma or resistance to exploration comes about from many testers feeling (or believing) Exploratory Testing (ET) is unstructured or random; a belief I think many “standards” schemes, certifications and badly informed Test Managers perpetuate.

I’m a curious kind of person and it’s always intrigued me as to why people think this. So whenever I got the chance to pick the brains of someone who had condemned or dismissed ET I would ask them for their reasons and experiences around the subject.

The general gist I took from these chats (although not scientific) was that people felt they couldn’t audit/trace/regulate/analyse what actual testing took place during Exploratory Testing sessions.

It became apparent to me that one of the reasons for people not adopting Exploratory Testing (ET) was because of the perceived lack of structure, lack of identifiable actions and poor communication of outcomes to management and other stakeholders.

It’s clear to me that the stories we tell about our testing and the way we explain our actions directly affects the confidence and trust other people have in our testing. In some organisations any doubt in the approach means the approach is not adopted.

This might go some way to explain why many managers struggle to find comfort in exploratory testing and why they too struggle to articulate to their management teams the value or benefits of exploratory testing.

Early Days of someone doing Exploratory Testing

Through my own observations I’ve noticed that in the early days of someone doing Exploratory Testing much of it is indeed ad-hoc and mostly unstructured; some might even suggest that this type of testing isn’t true Exploratory Testing at all, though it shares much in common with it.

The notes are scrappy, the charters are ill defined and the journey through the SUT is somewhat sporadic (typically due to them following hunches and inclinations that are off charter – assuming they had a good one to start with). The reports afterwards lack cohesion, structure, consistency and deep detail over what was tested. There is often limited, if any, tracing back to stories/requirements or features. This early stage is quickly overcome by some, but can last longer for others (even with guidance).

Experienced Exploratory Testers

I believe that the more advanced a practitioner becomes in Exploratory Testing the more they are able to structure that exploration, but more importantly to me, the more they are able to explain to themselves and others what they plan to do, are doing and have done.

It is this explanation of our exploration that I feel holds the key to helping those skeptical of ET see the real value it can add. Those new to ET can sometimes lack the rigor and structure; it’s understandable – I suspect they’ve never been encouraged to understand it further, taught anything about it or been mentored in the art of Exploratory Testing.

This unstructured approach can appear risky and unquantifiable. It can appear un-auditable and unmanageable. This could be where much of the resistance comes from; a sense of not being able to articulate the real values of exploration.

From trial and error and with some guidance people can quickly understand where they can add structure and traceability. In my own experience I was unable to articulate to others what I had tested because I myself didn’t document it well enough.

Each time I struggled to communicate something about my testing I would work out why that was and fix it. Whether that was lack of supporting evidence (logs, screenshots, stack traces), lack of an accurate trail through the product, missing error messages, forgotten questions I’d wanted to ask stakeholders, missed opportunities of new session charters, lack of information for decision making or even lack of clarity about the main goal of the session – I would work hard to make sure I knew the answer myself.

After doing this analysis and seeing this same thing in others I realised that one of the most obvious and important aspects of good ET is note taking.

Note Taking

Advanced practitioners often make extensive notes that not only capture information and insights in the immediate time proximity of an exploratory session, but are also relevant in the months and years after the session took place (at least to themselves).

Many testers can look back through their testing sessions and articulate why they did the exploration in the first place, what exploration took place and what the findings/outputs were. Some even keep a record of the decisions the testing helped inform (i.e. release information, feature expansion, team changes, redesign etc).


Seasoned explorers use a number of mechanisms to record and then communicate their findings. They often have a meticulous attention to detail in their notes, built around the realisation that even the tiniest detail can lead to the trapping of a bug or a failed report to others on the effectiveness of their testing.

Some use videos to record their testing session, some rely on log files to piece together a story of their exploration, some use tools like mind maps, Rapid Reporter and QTrace. Others use notes and screenshots and some use a combination of many methods of note capture.
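
For what it’s worth, the capture mechanism doesn’t need to be sophisticated. Here is a minimal sketch (not one of the tools named above) of timestamped session notes, so every observation carries a time and a tag and the session can be reconstructed later; the file name, charter number and tags are invented.

    # Tiny timestamped session-notes helper (illustrative only).
    from datetime import datetime

    SESSION_LOG = "charter-042-notes.txt"  # hypothetical charter number

    def note(tag, text):
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        with open(SESSION_LOG, "a") as log:
            log.write(f"{stamp} [{tag}] {text}\n")

    note("SETUP", "Build 1.4.2 on the pre-production stack, Chrome")
    note("BUG?", "Second submit of the same form returns a 500 - server log saved")
    note("QUESTION", "Ask the product owner whether duplicate submits should be blocked")
    note("FOLLOW-UP", "New charter idea: what happens to queued calls during failover?")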

It’s this note-taking (or other capture mechanism) that not only allows them to do good exploratory testing but also to give good explanations of that testing to others.

Too often I have seen good insights and information being discovered when testing and then subsequently squandered through poor communication and poor note-taking.

Testing is about discovering information, but that is of little use if you cannot articulate that information to the right audience, in the right way and with the right purpose.

In my early days of testing I found interesting information which I failed to communicate in a timely fashion, to the right people and in the right way. I would often lose track of what exploration actually took place, meaning I would have to re-run the same charter again, or spend a long time capturing some of the information I should have had in the first place. It was from having to run the testing a second time through that I learned new lessons in observation, note-taking and information gathering (like logs, Firebug, Fiddler traces etc).

From talking to other testers and managers I believe good exploratory testing is being done, but the information communicated from that testing is often left to the tester’s recollection. Some testers have excellent recollection, some don’t, but in either case I feel it would be prudent to rely on accurate notes depicting actions, inputs, consequences and all other notable observations from the system(s) under test rather than on your own memory.

We often remember things in a way we want to, often to reinforce a message or conform with our own thinking. I fall into this trap often. Accurate notes and other captures can guard against this. Seeing the unbiased facts *can* help you to see the truth. Our minds are complex though, and even facts can end up being twisted and bent to tell a story. We should try to avoid this bias where possible – the starting point to avoiding it is to acknowledge that we fall foul of it in the first place.

Being able to explain what your testing has, or has not, uncovered is a great skill I see too rarely in testers.

Good storytelling, fact analysis and journalistic-style reporting are skills we can all learn. Some people are natural storytellers, capable of recalling and making sense of trails of actions, data and other information and, importantly, of putting these facts into a context that others can relate to.

To watch a seasoned tester in action is to watch an experienced and well-practised storyteller. Try to get to the Test Lab at EuroSTAR (or other conferences) and watch a tester performing exploratory testing (encourage them to talk through their thinking and reasoning).

During a session a good exploratory tester will often narrate the process; all the time making notes, observations and documenting the journey through the product. This level of note taking allows the tester to recall cause and effect, questions to ask and clues to follow up in further sessions.

We can all do this with practice.

We can all find our own way of supporting our memory and our ability to explain what we tested. We can all use the tools we have at our disposal to aid in our explanation of our exploratory testing.

I believe that the lack of explanation about exploratory testing can lead people to believe it is inferior to scripted test cases. I would challenge that, and challenge it strongly.

Good exploratory testing is searchable, auditable, insightful and can adhere to many compliance regulations. Good exploratory testing should be trusted.

Being able to do good exploratory testing is one thing, being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and it’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place.

Staple Yourself To It

In a test team meeting the other week I was reminded of a technique/game I’d been able to label after reading the amazing book Gamestorming. The technique/game is called “Staple Yourself To It”.

In a nutshell it is about finding an object, message or aspect of your product and stapling yourself to it so that you can map out its journey through a system.

A Stapler

Image from – BY-YOUR-⌘’s “Vampire Stapler” May 12, 2009 via Flickr, Creative Commons Attribution.

For example, in a business you may decide to Staple Yourself to a customer raised Case and follow the case through a journey (or variety of journeys).

Once you have it mapped out (I’d recommend a visual map) then you can start to look for areas to optimise, question and improve.

The same is true for a product under test; you might find a message, or a user action, or a form submission and decide to follow this through the system to look for interesting things.

I use the phrase interesting things because you might not always be looking for areas to test.

  • You might be looking for awkward interfaces for example between people and software.
  • You might be looking for parts of the process that really slow the journey down, or move too quickly for the overall system, or leave you sat waiting; these are classic waste areas which might become the focus of your efforts.
  • You might be looking for usability issues, accessibility problems, reliance on third party software/services or performance bottlenecks.
  • You might be looking for security holes or exploits.
  • Of course, you might be looking for areas to probe further with exploration.

As an example:

We build and deploy cloud based contact centres. One of the fundamental actions a contact centre must do is route a piece of communication to a person (agent) to be dealt with.

In this example let’s use a phone call.

A phone call must reach our cloud from the caller’s phone and be routed to the right person so that they can have their query/question dealt with.

Staple yourself to it:

  • The caller makes a call via their phone (What sort of phone? Who are they? Why are they phoning?)
  • The call travels through the phone network to our cloud (Which telephony carrier? Which account? What number? International or local?)
  • The call is routed by our Call Centre product (Which account? Which Line? What configuration do they have? How is the call plan configured? Is there an agent available to take the call?)
  • The call is delivered to an agent (Who are they and can they solve the problem? What browser are they using? What phone system are they using? What configurations are there on the UI?)

In a relatively simple journey I can already start to dive down and ask questions of the journey and the processes involved.

Imagine the journey for a call that moved from department to department or got transferred to another system, or went through an IVR system, or got hung up at various points.

Documenting the journey is a good way to see where you can focus your energy and where there could be areas to explore.
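
One simple way to document it, as a sketch, is to capture each step of the journey alongside the questions it raises. The steps below echo the call example above; everything else is illustrative.

    # The call journey captured as data so the team can review it, question it
    # and mark areas to explore (illustrative sketch).
    call_journey = [
        ("Caller dials from their phone",
         ["What sort of phone?", "Who are they?", "Why are they phoning?"]),
        ("Call crosses the phone network to our cloud",
         ["Which carrier?", "Which account?", "International or local number?"]),
        ("Call is routed by the contact centre product",
         ["How is the call plan configured?", "Is an agent available?"]),
        ("Call is delivered to an agent",
         ["Which browser?", "Which phone system?", "Can they solve the problem?"]),
    ]

    for step, questions in call_journey:
        print(step)
        for question in questions:
            print("   -", question)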

Stapling yourself to something and analysing the journey can lead you right to the heart of the problem, or at least give you giant sized clues to help guide you. Without knowing the journey you could be prodding around in the wrong place for days.

Stapling yourself to a journey won’t guarantee you will find the sweet spot, but it’s another technique you can use to drive out information and the visuals that can help you to identify areas to explore.

Note: Apologies if this idea has been blogged about before in the testing context. I haven’t read anything about it but I know many people are talking about tours and this is not too far away from that idea.

Let’s say I gave you a giant box to explore

Let’s say I gave you a giant box to explore.

In the box was a machine that did a number of different functions along with an instruction manual. The functions that the machine can perform are complicated but ultimately observable and open to identifying more thoroughly than we currently have done in our user guide.

The machine, however, has lots of unknowns when it comes to putting spurious data in and connecting with a wide array of third-party apps. The results of your actions are output to a simple user interface. It’s not pretty but it does the job.

The above was a test that a company I worked for would give during a job interview.

Image from subsetsum on Flickr.

They provided a laptop with this “test machine” running on it (a Java application) and they provided the instructions (a simple two page html document).

When I sat the test I hated it.

I didn’t feel it dug deep enough around some of the diverse skills I, and others, could offer.

In hindsight though I think this type of test was actually very insightful – although maybe the insights weren’t always used appropriately.

I got the job and soon found myself on the other side of the process giving this test to other testers hoping to come and work with us.

When I was interviewing I was handed a checklist to accompany the test. This checklist was used to measure the candidate and their success at meeting a pre-defined expected set of measures.

It was a “checklist” that told the hiring manager whether or not to hire someone.

The checklist contained a list of known bugs in the “test machine”. There were about 25 “expected” bugs followed by a list of about 15 “bonus” bugs. There were two columns next to these bug lists, one headed junior and one headed senior.

If we were interviewing a ‘junior’ (i.e. less than 3 years’ experience) then the company expected this person to find all of the “expected” bugs.

If we were interviewing a ‘senior’ (i.e. more than 3 years’ experience) then the company expected this person to find all of the “expected” bugs AND all/some of the “bonus” bugs.

There was also space to record other “observations” like comments about usability, fit for purpose, etc.

This interview test was aimed at flushing out those who were not very good in both junior and senior categories.

I’m assuming many of my readers are jumping up and down in fits of rage about how fundamentally flawed this process is…right?

I immediately spotted the fundamental flaws with interviewing like this, so I questioned the use of the checklists (not necessarily the test itself) and I stumbled across two interesting points.

Firstly, there were some hiring managers that stuck to this list religiously (like a tester might to a test case). If someone didn’t find the bugs they were expected to then they didn’t get the job. Simple as that.

Other hiring managers were more pragmatic and instead balanced the performance across the whole exercise. For example – I didn’t find some of the things I was expected to find, but I still got the job. (It turns out a few managers were ticking the boxes next to the bugs people didn’t find to “prove” to senior management (auditors) that candidates were finding the bugs they should have).

Secondly, this test was remarkably good at exploring the activities, approach and thinking that most testers go through when they test.

Whether someone got any or all of the bugs they were expected to was beside the point for me. I was more interested by the thinking and process the tester went through to find those bugs and assess the “test machine”.

It just so happened that most hiring managers were not measuring/observing/using this rich data (i.e. the testers approach, thoughts, ideas, paths, etc) to form their opinion.

Junior or Senior – it made little difference

It was clear from doing these interviews that the number of years of experience a candidate had made little difference to their ability to find the bugs they were expected to.

I actually spotted a very prevalent trend when doing these interviews.

Those that followed the user manual found most of the “expected” bugs no matter what their perceived experience level; juniors, seniors, guru grade – it made no difference.

If they followed the user guide (and often wrote test cases based on the guide) then they found these “expected” bugs.

Those that skimmed the user guide but went off exploring the system typically (although not always) found most of the expected bugs AND some/all of the “bonus” bugs – again, irrespective of their experience (or perceived experience).

This second point highlighted to me that the approach a tester took to testing influenced the types of bugs they found, but also, in this interview case, it challenged our assumptions about experience and bug count. It also made a few people think whether they had categorised the known bugs correctly – were the bonus bugs too easy to find?

I observed very junior testers finding the “bonus” bugs and senior testers missing them purely because of their approach and mindset. (although not a scientific finding it did give me enough insights to set me on my journey towards exploration and discovery).

It wasn’t even as simple as all seniors taking a more exploratory approach and all juniors taking a more scripted approach. That simply didn’t happen.

It seemed to run deeper than that. It seemed to focus more around a curious mindset than years of experience.

Obsessed with finding out more

Those that took an exploration and discovery approach appeared to develop an obsession with exploring the “machine”.

They seemed more compelled to discover interesting information about the “test machine” than those who followed a script based on a partial user guide.

Following the guide simply wasn’t enough. They needed to find out the information for themselves. In a sense, they needed “primary” information. Information they had obtained themselves straight from the source (the product). Other testers were happy with using a “secondary” source of information (the user guide) to formulate conclusions. It obviously wasn’t as clear cut as that, as many straddled the middle ground, but the trends did show an interesting difference in approach.

Note Taking & Story Telling

Another dimension to this process, which was what kick started my obsession with note taking, was that those who went off exploring appeared to tell more compelling stories about their testing, often through detailed and interesting notes.

The notes they took enabled them to recite their exploration and discovery story back to others with an astounding level of detail – something most people who followed a script didn’t do ( I can’t say whether they “could” do it though).

As these people got stuck in and delved around in the product they started to tell a really interesting story of their progress. Through narration of their approach and interesting notes they got me and the other hiring managers interested, even though we’d seen dozens of people perform this very same test.

They started to pull us along with them as they explored and journeyed through the product. They wrote notes, they drew pictures, they made doodles and wrote keywords (to trigger memory) as they went.

They built up a story. They built up a story of the product using the user guide as a basis, but using their own discoveries to fill in the gaps, or to even challenge the statements made in the user guide (of course…the user guide had bugs in it too).

They wrote the story of their exploration as it developed and they embraced the discovery of new information. They kept notes of other interesting areas to explore if they were given more time.

They told stories (and asked lots of questions) about how users might use the product, or some of the issues users may encounter that warranted further investigation.

They involved others in this testing process also; they described their approach and they made sure they could report back on this compelling story once they’d finished the task.

What intrigued me the most though was that each tester would tell a different story. They would see many of the same things, but also very different things within the product and the user guide.

They built up different stories and different insights.

They built their testing stories in different orders and through different paths but actually they all fundamentally concluded very similar things. The endings of the stories were similar but the plot was different so to speak.

What they explored and what they found also said a lot about themselves and their outlook on testing.

Their actions and outputs told stories of who they were and what their contribution to our company/team would be. They let their interview guard/nerves down and they showed the real them.

Everyone who sat that test told us something about themselves and their testing, however, we (the hiring managers) were pulled along more with those who built their testing stories as they went rather than those who had pre-defined the story and stuck to it.

The explorers appeared to tell more insightful and compelling stories and they also revealed deeper insights about themselves and the product.

As our hiring improved (better job adverts, better phone screens, better recruitment consultants) we started to interview more people who would explore the product and tell us these compelling stories; these were the people we wanted to seek out.

As a result even those hiring managers who were obsessed with their checklist would find themselves being pulled along with the exploration. They found themselves ignoring their precious checklists and focusing on the wider picture.

It didn’t take long for senior management to see the problems with measuring people against lists too.

Did I learn anything?

Oh yes indeed.

We should never hire purely on binary results (i.e. found this, didn’t find this, is this type of person, is that type of person, isn’t this).

Only by testing the system do we find out the real truth about the product (specs are useful, but they are not the system). In a choice between primary information or secondary information I will gravitate to primary.

When we explore we have the opportunity to tell compelling stories about our exploration, the product, the testing and more importantly, ourselves <– not everyone has these skills though.

All testers will conclude different insights, sometimes they may find the same bugs as others, sometimes very different ones. Therefore it’s good to pair testers and to share learnings often.

The hiring approach should be consistent across all those with the power to hire and should focus on the person, not the checklist.

Good exploratory testing relies on the ability to explain the testing, insights and findings both during the testing session and after (and often relies on exceptional note taking and other memory aids).

If you’re hiring a tester, observe them doing some testing before making an offer.

I believe an obsession with learning more (and being curious) is one of the most telling traits I can observe. Exploration and curiosity do not always come naturally to people. Not everyone is curious about the world around them or the products they test, but for testers I believe this trait alone will send them down the many paths of learning. (and this is a good thing)

Someone with 10 years of experience is not always better than someone with 2 years – especially so if they have had 10 of the same year.

Dividing people into junior and senior camps is not helpful. Where is the cut-off mark (years of experience? – see the point above)? We are all on a journey and our skills will change and develop over time – the person is what you are hiring, not the years that they have worked.

The big question I now ask is “Does this person want to be on a journey to mastery?” If they don’t, it’s very hard to encourage them to improve.

 

Note: This post may come across as me suggesting exploratory testing approaches (and therefore exploratory testers) are better than scripted testing approaches (and hence testers who follow scripts). That’s not my intention. How you perceive this post will depend very heavily on how you perceive the role of testing and the role of the tester to be in your organisation. You know your organisation and the needs that you and your team have. Some organisations desire people to follow scripts, some organisations desire people to explore the product, some desire something in between and some desire other things all together.

 

Scrum Master Training

Last week we had the very clever peeps from Neuri Consulting in to give us a special one day course on being a scrum master.

It was David Evans and Brindusa Gabur who delivered our training, and what a great day it was. We had around 10 people on the course, ranging from those with lots of agile experience and those currently in scrum master roles, to one of our product owners, those who have expressed an interest and those who’ve done scrum master roles previously.

As it was a mixed bag of experience and expectations we focused heavily on story breakdown and estimation; two areas we’ve yet to master. David and Brindusa targeted their training on these two points whilst also covering a lot of other ground.

We played a couple of games to illustrate points and the misconceptions we brought to the games. We also worked through some statements about the scrum master role, deciding which we thought were truths and which were lies. We ran a retrospective of the training which highlighted some interesting points and some good take-aways for the teams to work on.

It was a really good day and I think we all took a lot away from it. From my own point of view I feel we need a more consistent approach across teams, but in reality we’re doing pretty well.

With a little tweaking on how we measure cycle time and more emphasis on quick estimations for the backlog I think we’ll start to see more throughput in terms of stories and features.

What was great to see though was the banter and friendships that have formed here at work. It was a lighthearted affair yet focused and in tune with our core ethos of learning being central to all we do.

The only thing to disappoint us was that we all wanted more. We should have had a two/three day course with the peeps from Neuri as we felt we didn’t cover everything we could have.

The key take-aways from the training seemed to be about having more off-site retrospectives and limiting the retrospectives to a shorter period of time. This gives the retrospectives more focus and the opportunity to move away from problems that are lingering in people’s minds but aren’t actually currently a problem.

Superb course. Best get saving up for the next one 🙂


What can you do with a brick?

Last week one of our team, Simon, ran a really fun session with the whole test team on our Exploratory Testing process.

We started by discussing some of the thinking that we’ve identified happens when we plan exploratory testing sessions. We talked through a diagram we created a few years back, and although it’s pretty accurate in identifying some high-level ideas of our Exploratory Testing process, it’s by no means complete, but it served its purpose as an aid to explanation.

Once we’d been through the diagram Simon introduced the team to one of my favourite games that explores creative thinking.

It’s a common game where people are asked to come up with as many uses as they can find for an item.

It’s a really good way of understanding how people think up ideas and how diverse the thinking is within our group, and it’s also a good way of injecting some discussion and banter into our session.

Simon gave the group 5 minutes to write their ideas on post-it notes. The item of choice today was a “brick”.

We then affinity mapped them into rough categories on the whiteboard and discussed a little around each category as a group.


We then discussed the process of how people came up with their ideas.

It’s always tricky analysing your own thinking, especially so in retrospect, but we did spot a couple of patterns emerging.

Firstly, whether consciously or not, we all envisioned a brick and started from this image we’d constructed. As it turned out we’d all thought of a standard house brick; some people saw in their minds the one with holes in it, others the bricks with a recess. Either way we started with the standard form of a typical house brick (here in England).


Here’s where we appeared to head off in slightly different directions of thinking. After writing down all of the basic ideas that a brick is used for (building stuff, throwing at things, weighing things down) we started to head off in the following three directions:

  1. Thinking of other sizes, shapes and forms that a brick could take
  2. Thinking of different contexts and locations that a brick could be used in its original form (outside of those we naturally thought of straight away)
  3. Thinking of everyday objects that we could replace with a brick.

For example:

  • Grind the brick down to create sand
  • Use the brick as book ends
  • Take the brick to a philosopher (and/or someone who had never seen a brick before) and try to explain what it was used for
  • Use the brick as a house for small animals, insects and little people
  • Use the holes in the brick as a spaghetti measurer
  • Putting the brick in a toilet cistern to save water
  • Projectile missile and other weapons
  • Use it to draw right angles
  • Use it as a paperweight
  • Use it as a booster seat in a car
  • Use it as a holder for fireworks
  • Use it as a bird bath.

And many, many more.


As you can see we explored a number of uses and we created a decent number of categories in which a brick would fit.

What was most important though was that we all took time to reflect on where our ideas were coming from.

We also noted that not all of us think in the same fashion. Some people remained true to the form and shape of a brick but explored different contexts.

Others ignored the standard shape of a brick and explored different elements and uses of the materials within a brick.

This realisation that we all think differently about something as simple as a brick triggered some further discussions about how we come up with ideas for testing our new features.

It ultimately led us to conclude that it would make sense to pair with others during and after our story kick-off meeting. It might generate a broader and deeper set of test ideas. It might not. We’ll have to experiment.

For now though we’re running with two testers attending story chats followed by a brainstorming and ideas exchange meeting. We decided it would make sense to not do the brainstorming in the story chat as that will sidetrack the purpose of that meeting, but we will be sharing our wider test ideas with the team as well as writing acceptance tests in SpecFlow for static checks.

Let’s see how it goes.

It was a really good session and we took away a direct change we could try. It’s good to get the team together every week to chat through the ways we are working, thinking and solving tricky testing problems.

Opposite Views

At EuroSTAR 2012 I got talking to someone who had polar opposite views to mine on Testing and Agile implementation.

Despite his opposite views and the fact I could counter almost anything he said from my own experience I knew deep down inside that he knew he was right.

His solution, albeit not something I would label agile, worked for his clients. He was passionate about the work he does and the people he helps. He held different views, but was contributing goodness to others in the industry and more importantly, he was getting results for his clients.

He was helping people succeed.

No matter what my opinions are there will always be people who hold different opinions, ideas and experiences to me.

In software development there are very few process level ideas and actions that are black and white. It’s difficult to quantify, measure and then compare two different approaches as there are so many variables…like budget, people, product, customers, skills and experience etc.

One approach that works over here, might fail over there.

A lot of people avoid talking to people who think differently to them but I believe that only by opening our dialogue with others who think differently will we truly learn about ourselves and alternative ways of doing things.

Isn’t it in this collision of ideas where true innovation and learning comes from?

I had a lot in common with the tester I met at EuroSTAR 2012. We’ve kept in touch since and despite his continued promotion of ideas at odds with mine we’ve become good friends.

At EuroSTAR he told me that he had found his calling in life.

Who am I to say his calling is worse than mine?

Monitoring with NewRelic

Over the years I’ve come to rely on information radiators during testing to get immediate (or as quick as possible) feedback from the systems I’m testing.

Firebug, log files, event logs and many other sources of information are all very useful to a tester. They can give you insights into what is happening in the system under test.

We’ve just taken this a step further by rolling out NewRelic on our test servers.

NewRelic is what’s termed an “Application Performance Management” solution.

I’ve been talking about this internally as a system that can give us three distinct insights:

  • User Experience Information
  • Server Information
  • Product Performance Information

I’ve probably over-simplified the tool and am doing it an injustice, but it allows me to clearly explain the value we’re seeing from it.

User Experience Information

NewRelic gives us all sorts of data around how the experience is for end users when they use our product.

We can use this to ascertain how our product is being experienced by our customers, but we can also use it to understand how the experience is stacking up for our testers.

If we are testing and we observe a slow-down, we can check with NewRelic whether it really was a product slow-down and, more importantly, see what’s actually happening on the stack.

We can use NewRelic to work out what browsers are being used across all of our environments. We can see the test coverage we have across browsers and we can also see what browsers our own business use from our pre-production test environments (where we test all kits before live deploy).

We can also then see which browsers are faster than others. We can see which versions are used and which browser is our most heavily used. Interesting stuff to help guide and tune our testing.
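
As a rough cross-check on those user experience numbers, a tester can also pull browser timing data during a test run. This is a minimal sketch using the W3C Navigation Timing API through Selenium, not NewRelic’s own API, and the URL is hypothetical.

    # Read page-load timing from the browser during a test run (illustrative).
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://test.example.com/agent")  # hypothetical test environment
    load_ms = driver.execute_script(
        "return window.performance.timing.loadEventEnd"
        " - window.performance.timing.navigationStart")
    print(f"Full page load: {load_ms} ms in {driver.capabilities.get('browserName')}")
    driver.quit()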

Server Information

NewRelic monitors the actual servers, giving all sorts of information such as memory, CPU and process usage. This is great information on our test servers, especially during perceived slow-downs or during a load test.

We have other mechanisms for measuring this also so this is the least used function in NewRelic when testing.

Product Performance Information

For me, this is the greatest information tools like NewRelic offer; they show you what the product is actually doing.

It includes what pages are being dished, how fast they are being dished, where they may be slow (in the DOM? Network?), what queries are being run, what part of the code is running them and how often they are being called.

When we dig around in the data we can find traces that NewRelic stores which give an amazing level of detail about what the product is/was doing when the trace was run.

It’s going to become a tester’s best friend.

In a nutshell what it allows us to do is provide an accurate picture of what the product is doing when we are testing. This means we can now log supremely accurate defect reports including traces and metrics about the product at the moment any bugs were found.

The programmers can also dig straight into any errors and be given the exact code that is generating the error.

We can see which queries are running meaning that if we encounter an error, a slow down or something worth digging in to we have the details to hand.

It’s still early days using the tool, but already we’ve had deep insight into how the product runs in our environments, which I’ve never been able to get from just one place.

It’s immediate also. Test – check NewRelic – move on.

Imagine how powerful this could be on your live systems too.

Imagine the richness of information you could retrieve and imagine how fast you could get to the root cause of any problems. It’s powerful stuff. Expect to hear further posts on how tools like this can inform tests, provide a depth of supporting information and provide help to performance testing.

Some notes:

  • There are alternatives to NewRelic.
  • It’s still early days but tools like this are proving invaluable for accurate and timely troubleshooting and information gathering.
  • I’m not affiliated to NewRelic in any way – I’m just a fan.

Failure Demand

One of the highlight talks from EuroSTAR 2012 was the keynote by John Seddon.

 

It wasn’t even a testing talk. It was a talk about value, waste and failure demand. The talk was about Vanguard’s (John’s company) work with Aviva Insurance to improve their system to provide value to the customer. It was an interesting talk from my perspective because it was centred around the call centre aspect of Aviva. As I work on call centre products I had more interest than some of those around me.

 

I saw good parallels to testing and software development but I don’t believe all did. I think it’s a shame because had many people seen the connections I believe they may have been as inspired as I was after the talk.

 

In a nutshell John told the story of how Aviva was being run based on cost. Management were looking at the system (the business process) as a cost centre and working to reduce costs rather than looking at the root causes of why costs were high.

Aviva started to receive large numbers of calls to their call centres. So they started to build more call centres to cater for demand. The call centres started to be moved to areas in Britain and abroad where the cost per centre was cheaper. They were looking at the costs of the call centres and were optimising and reducing cost where they could.

 

The problem was, though, that the costs in the call centres were an effect of customers not getting value at the beginning of the cycle. When a customer interacted with Aviva they would not get their problem or request dealt with 100%. They would then call the call centre again. And again, not get it resolved. So they would call back. The managers took this to mean that people liked to speak to Aviva, hence more call centres. The real reason was that they were not solving the problem correctly first time, hence they were spending more trying to solve the problem later.

 

John coined the term “Failure Demand” to explain this. Failures in the system were creating demand elsewhere. In this instance it was calls to a call centre.

 

He worked with Aviva to increase the chances of satisfying the customer 100% on their first interaction, thereby reducing the need for further call centres. Customer satisfaction went through the roof and savings were made.

 

The problem Aviva had was that they were managing based on cost, rather than the value they provide to their customers. Switching this focus means a significant mindset change, but the results are incredible.

 

What’s this got to do with testing?

A lot. When we manage our development process by cost we start to ignore the value that we are adding. We use metrics to make decisions, we look for cheaper ways of doing things and we optimise the wrong parts of the system.

 

I immediately saw lots of parallels with software development. Rework, bug fixes, refactoring, enhancements and any other work which could have been avoided are, I believe, failure demand. We are spending more money correcting things than we would have spent getting it right in the first place.
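
To make that concrete, here is a hypothetical back-of-the-envelope sketch: classify each completed work item as value demand or failure demand and see what share of the team’s effort was avoidable. The items and numbers are invented purely for illustration.

    # Hypothetical worked example: what share of recent effort was failure demand?
    completed_work = [
        ("New call-routing rule", "value", 5),       # (item, demand type, days)
        ("Fix bug missed in last release", "failure", 3),
        ("Rework screen after late requirement change", "failure", 4),
        ("New wallboard widget", "value", 6),
    ]

    failure_days = sum(days for _, kind, days in completed_work if kind == "failure")
    total_days = sum(days for _, _, days in completed_work)
    print(f"Failure demand: {failure_days}/{total_days} days "
          f"({100 * failure_days / total_days:.0f}% of effort)")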

 

With software development though there will always be times when we need to refactor, change something or fix bugs. The question for me is at just what point natural change crosses over into failure demand.

 

Did we not define the requirements well enough and are now having to change the product because it’s not right for the customer?

Did we not include the right people at the start and some tough questions get asked too late in the process?

Did we not have sufficient testing in place early enough to catch obvious bugs which now require rework?

Did we not have the right skills in place to make sound technical decisions which now mean we have to re-write bits of the product?

Did we not spend enough time understanding the problem domain before jumping in and building something?

 

Agile helps to reduce this somewhat by making the feedback loop smaller, but as John mentioned in his talk “Failing fast is still failing”.

 

It was a really good talk. It made me really think about what elements of a tester’s work could be failure demand. It reinforced my ideas that optimising parts of the system doesn’t often address the root cause, and it gave me renewed energy to look at value rather than cost.

 

If you’re interested in the talk, here is a similar one (without the Aviva part) from Oredev and here is the Aviva video that John showed during the presentation.

Re-thinking IT – John Seddon from Øredev on Vimeo.

Interesting stuff. His company website is here:

http://www.systemsthinking.co.uk/

Geeks hiring geeks

For those that are hiring managers there is a book I would most definitely recommend you read. It’s a book called Hiring Geeks That Fit by Johanna Rothman.

We’re not doing too badly at all at recruiting (we are recruiting again, by the way!) but there are always lessons to be learned and advice to be sought out.

Johanna’s book is a great read packed full of useful insights, experience and nuggets of gold that may just change the way you recruit. It’s great to read a book that is pragmatic about recruitment and open to the scary reality that hiring geeks that fit can be challenging and demanding and may require managers to step outside of their comfort zone.

It’s a book designed for those that want the right candidate, not just the best candidate they can find within 30 miles of the office.

It’s also full of practical advice like how to make an offer that will be tempting, about how to be sure you are “right on the money” at offer stage and how to make a great first day impression. I liked the chapters about sourcing and seeking out candidates.

I imagine it’s not comfortable reading for those who expect generic adverts to attract top talent or for a consultant to do all of the work for them, but that’s why the book is so good. Johanna spends a nice amount of time talking about personal networks, and of course, social networks as a way to recruit. I’ve had major success from both personal and social networks so can testify to how powerful they are becoming.

“One thing you cannot do is avoid Twitter. Not if you want the best technical candidates. Not if you want people who use social media. But you can keep your Twitter use to 15 minutes a day while you are sourcing candidates. That, you can do.”

Most tasks that are worth doing involve an investment of time. This is a theme I believe runs throughout the whole book. Johanna makes it clear that the process is time consuming, but it’s an investment. To get good candidates takes a great deal of time and effort.

“You don’t have to spend gobs of money to find great candidates, but if you don’t, you probably will need to spend time. Remember, potential candidates may not all look in one place to learn about the great job you have open, so you need to use a variety of sourcing techniques to reach them.”

Or you could just throw money at recruiting:

 “If you have a substantial budget but not a lot of time, consider using a combination of the more costly sourcing techniques—such as print and media ads, external contingency recruiters, external retained-search recruiters, headhunters, and numerous nontraditional approaches—along with the time-intensive techniques.”

It’s a really balanced book to read. I took loads from the book and would definitely recommend it to anyone recruiting.

In fact, it’s a good book for those seeking a new position also – it certainly gives insights into how managers may be recruiting.

The templates included in the book are very useful indeed, especially for refining your requirements further and understanding the value your company could offer a candidate.

It’s an easy book to read too, with clear language and stories that illustrate the key points by way of example.

During our recruiting I am digging into the book and putting into practice many of the ideas. Good book indeed.

https://leanpub.com/hiringgeeks

T-Shaped Testers and their role in a team

Stick with it… it’s a long, rambling essay of a post, and I may be way off the mark.

I’ve never been comfortable with the concept of a separate test team and associated “phases” of testing. I spent about 8 years working in these environments and kept struggling to answer questions like:

  • “Why are we involved so late in the project?”
  • “Why are there so many obvious bugs or flaws?”
  • “Why does the product not meet the spec?”
  • “Why do we always follow these scripts and assume the product is good?”
  • “Why don’t we use the questioning skills of testers earlier in the process?”
  • “Why are the tester’s skills in design, organisation and critical thinking not valued at the end of the cycle?”
  • “Why do we have some specialist testers, like performance testers, but a load of other testers who just do ‘any old functional script’?”
  • “Why does everyone keep complaining about this way of working, but do nothing about it?”

And a whole load more questions along the same lines.

These questions are common in the industry, check out any forum or conference and you will find many similar questions being asked, and a plethora of tools, services and consultants willing and able to solve these problems.

It’s taken me many years and much analysis to come to an idea about testing that I feel more comfortable with, and in truth, it wasn’t even my idea, but I’ll get to that bit.

The more people I speak to about this, the more I realise that others feel comfortable with it too. Comfortable because they are operating in these contexts, or, more crucially, would love to operate in a context like this. Of course, some don’t agree and many simply don’t care… but that’s another post.

I believe that finding bugs is just one aspect of a tester’s role.

I don’t think finding bugs is just the responsibility of the tester either.

I also believe that testers should use their skills in other parts of the project cycle, whether that cycle is two weeks or two months or two years.

The idea I am presenting here is the T-Shaped people idea. It’s not mine; I believe Tim Brown (CEO of IDEO) coined it in the 1990s to describe a new breed of worker.

Imagine the letter T as a representation of a person’s skills (or of a role, as I like to use it). The vertical part of the T represents the core skill or expertise; in testing I would naturally suggest this is the core skill of testing (of which there are many variations and sub-skills). The horizontal part of the T represents the person’s ability to work across multiple disciplines and bring in skills and expertise from outside the core.

The more I talk to people about T-Shaped testers, the more the idea strikes a chord. It really does seem to sum up a growing number of testers in the community: skilled testers who also bring skills from a number of supporting domains.

Many of my peers in the community are T-shaped testers. They excel at their specific element of testing yet they bring in skills from other areas, or they use their skills to fulfill other roles within the business.

In start-ups or fast moving companies the ability to work across multiple disciplines has some obvious benefits.

One person capable of fulfilling a few roles reasonably well seems like good value and a real asset to delivering value. Even in traditional environments with more structured roles, T-Shaped people can be found serving multiple roles.

However, a lot of the time people don’t see themselves as contributing to something outside of testing (i.e. fulfilling other roles), or bringing other skills they have to the role. Some simply don’t have the opportunity.

Some great testers in the community fulfill other roles within their business. For example, without naming names:

  • There is a great test manager I know who is also a support manager.
  • There is a great tester I know who is also a product owner.
  • There is a great tester I know who is also a scrum master.
  • There is a great tester I know who is also responsible for market research for the company.
  • There is a great tester I know who also does all of the hiring interviews for **every** role.
  • There is a great tester I know who also runs conferences, sells advertising, builds his own product, markets his own product and consults to big clients. (how many different skills do you need to achieve that?)

These are just some examples. There are countless others.

Then there are those who are testers but have supreme skills outside of work that aren’t utilised in their main role. We have musicians, artists, designers, writers, mechanics, engineers, carpenters, social media advocates, printers, networkers and anything else you can think of that someone might do out of work. Could a company not utilise and encourage the use of these skills to help solve business problems? Of course they could.

Sadly, many people (not just testers) are pigeonholed into their role, despite having a lot more to offer.

As a short side story, I was ready to leave testing a few years back, mainly due to being unable to answer the questions I posed earlier. I was thinking “Is This It?”. What about the skills I had and the passions outside of work? Why can’t I use these? What job could I get that does use them? Would I have to re-train? Why are my other skills ignored in the workplace? Then I found blogging, consulting, agile coaching, systems thinking and ultimately people management, and it all fell into place… Anyway – I digress.

I believe that testers, actually – anyone, can contribute a lot more to the business than their standard role traditionally dictates. The tester’s critical and skeptical thinking can be used earlier in the process. Their other skills can be used to solve other problems within the business. Their role can stretch to include other aspects that intrigue them and keep them interested.

With Acceptance Test Driven Development, Test Driven Development and a whole host of other automation approaches comes the need for testers to be involved earlier but, crucially, not so tied down later in the cycle running confirmation checks. Exploration, curiosity and intrigue are what drive testers in these environments. The checks are taken care of; what remains is to understand what the product actually does and provide insights into risk, uncertainty, user experience and the market’s (customer, end user, competition, industry) expectations of the product, plus the stuff we might not have thought about earlier.

They can help to discover what the product is meant to be, not just give judgment on whether it meets the requirements or not.

Finding bugs is what we do, but I don’t believe that this should be an end goal. Bugs are a side effect of discovering more about the product…maybe.

I believe everyone has the capacity to do a lot more towards the goal of shipping great products outside of their stereotype role. It’s something we’ve embraced here at NewVoiceMedia.

We have testers who write product documentation, are scrum masters, are building infrastructure to support rapid release, are taking ownership for security and compliance to standards, are presenting the development process to customers, are visiting customer sites to research how people are using the product, are writing social media content, are devising internal communication strategies, are doing agile coaching, are creating personas and are using their natural skills and abilities where they are best suited to help move the business forward.

We’re still working on the balance between roles and expectations, and the balance shifts, typically in response to the market.

Don’t get me wrong. Many people don’t have this opportunity but if you’re in a position to make changes then utilising your wider skills and the skills of those in your team could be a great approach to solving problems.

This is clearly not restricted to testers either. Programmers, product owners, support, sales, accounts etc etc – everyone is a T-Shaped person, or at least has the potential to be T-Shaped.

I think the future of testing is going to be a future of both specialists and generalists. There is always a need to have specialists in security, performance, accessibility etc, but there is also a need to have generalists; testers who can fulfil a number of different roles across the organisation whilst still maintaining a core skill of testing.

Being a T-Shaped person means having skills that can be useful across other domains. Having T-Shaped tester roles means encouraging testers to fulfil a number of roles. Learning the skills needed, or already having the skills in place (i.e. already being a T-Shaped person), means people can either slip straight into the role, or they may have to seek out learning, coaching and mentoring. And that’s where good management, teams and community engagement can come in.

I’m still exploring this idea, but I already know that T-Shaped people gives me a really good model for describing the testing and testers I feel comfortable with: the testing that suits me, the companies I seek out and the markets I work in.

I believe testing is more than finding bugs; it’s about exploring the product, discovering what the product needs to be, discovering the market needs (i.e. A/B Testing), discovering what the product actually does, working out whether the product is suitable for the context of use, questioning the process, improving the process, helping to design the product, improving the product, helping to support it, helping to promote it and ultimately working with the team to deliver value.

And all of the above might explain why I (and those peers who appreciate or demonstrate the T-Shaped model) find it so hard to recruit great testers for our contexts, yet other managers I speak to can find “good” candidates on every street corner.

We demand more than just testing skills. We demand many other skills that complement a testing mindset. Skills that help us deliver value.

Of course, these are just my thoughts based on testing in my context. You’ll work in another context and appreciate other skills. At the moment this is just an idea, and like all ideas, it might be wrong. But I thought I would share it anyway.

 

Image courtesy of: http://www.flickr.com/photos/chrisinplymouth

Mixing it up with Personas

We had our team-wide sprint demo yesterday, where each team presented what they have been working on in the sprint.

These meetings are a great opportunity to share and learn about the world outside our immediate teams.

In this meeting one team decided to use personas to present their work.

It’s not the first time they’ve used visuals and role play to demonstrate the work, but it is the first time they’ve used print-outs of people and given them names and background details.

Each team has settled into their own way of presenting. Some are able to use a demo and talk through the stories they have finished, whilst others talk through what they have done, as their work often has no visuals to show (our test infrastructure team). No way is better than any other, as each is characteristic of the work and the teams.

From these meetings, we all learn a lot about the product, the work each team does and to some extent, ourselves, as we share with the wider team what we are doing.

Our product is Contact Centre (Call Centre) software and as such we can use personas to visualise the flow of calls, the states and the interactions. Assigning roles, personalities and context to each person in the scenario allows us to clearly understand the motives, user story and operational context.

Applying personas in this way allows us to seek empathy with each user in the system and to understand a little more about why the product is working the way it is. After all, I’m sure most of us have interacted with a call centre at some point in our lives; some experiences good, some bad and some very much dependent on our own contexts (in a rush, bad mood, upset, angry, anxious).

 

 

The demo (presented by scrum master Dan and tester Andy) involved the wider team too.

  • One product owner played the role of Mary (a tech savvy stay at home mum awaiting a call back about an issue)
  • Andy played the role of Claire (a call centre agent who was new to the role and was making the return call to Mary)
  • Another team’s scrum master/dev played the role of Clive (a senior call centre agent to whom the call was to be transferred in order to solve the problem)

Getting people to play roles in the demo proved to be a good way of articulating interactions and stories too. This approach might not suit everyone, but it went down well with the team yesterday.

Can personas be helpful for testing?

Absolutely.

We’ve started to dabble with personas for testing also.

By having a core set of user interactions/capabilities we can start to apply different personas to each interaction shedding a different light on it.

For example, one of the most fundamental aspects of our product is the ability to connect callers to agents.

We could break this capability out in to many different scenarios for testing, for example:

  • There is no agent available immediately, the call is queued, then an agent becomes available and takes the call. (testing – queuing, dequeuing and call allocation, stats, call recording, etc)
  • The agent can deal with the call immediately. (testing – straightforward two-party call, stats, call recording, etc)
  • The agent cannot deal with the call immediately and needs to consult a colleague (testing – hold, consult, retrieve, 3-party call, call recording, stats etc)
  • The agent cannot deal with the call at all and has to transfer to a colleague (testing – hold, transfer, retrieve (by second agent), call recording, stats etc)

I could go on with many different scenarios….

Now let’s apply personas.

For each scenario we could approach the interactions with different personas.

  • We could have agents who are experienced, stressed, new to the job, coming to the end of a shift, working under strict call handling times, etc.
  • We could have callers who are relaxed, angry, frustrated, fed up of being transferred, knowledgeable in the subject of the call, not knowledgeable in the subject of a call.
  • We could be in a large consumer driven call centre, or a small inbound support desk.

I could go on.
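To make that concrete, here’s a tiny sketch of the idea in Python. It’s purely illustrative: the persona and scenario descriptions below are made up for this example, and we don’t generate our charters this way in practice, but it shows how pairing every persona with every scenario quickly produces a rich set of exploration ideas.

```python
from itertools import product

# Illustrative personas and scenarios only; swap in your own.
personas = [
    "experienced agent nearing the end of a shift",
    "new agent working under strict call-handling times",
    "angry caller who has already been transferred twice",
    "relaxed caller who knows the subject of the call well",
]

scenarios = [
    "call queued, then allocated when an agent becomes available",
    "agent takes the call immediately",
    "agent consults a colleague (hold, consult, retrieve)",
    "agent transfers the call to a colleague",
]

# Every persona/scenario pairing becomes a candidate exploratory charter.
for persona, scenario in product(personas, scenarios):
    print(f"Explore: {scenario} - as {persona} "
          f"(watch queuing, routing, stats, call recording, audit)")
```

That’s sixteen charters from four personas and four scenarios. The point isn’t to run them all; it’s to give the tester a much richer set of viewpoints to choose from.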

What do personas bring to the testing?

Well, personas allow us to understand how and why someone is using the system.

They allow us to seek empathy with the agent, caller, supervisor and any other persona in the mix. They help us think clearly about waiting times, queues, effective use of routing, call quality, expectations and the sub-systems that support the interaction (stats, call recordings, audit), and they let us fine-tune our own exploration to look for aspects of the interaction we might not have considered previously.

There are some fundamental expectations that the system must meet.

Those expectations will vary in scope, though, depending a lot on the context. Personas allow us to look at the product differently and see if it’s still meeting expectations from that view.

Elisabeth Hendrickson, in her excellent book Explore It! (What? You’ve not read it yet?), refers to personas as a great tool for exploring the product and gives a really neat insight:

“Just as personas are useful for designing systems, they’re also useful for exploring systems. Adopting the mantle of a persona prompts you to interact with the software with a self-consistent set of assumptions, expectations, desires, and quirks.”

We’re finding Personas useful for design, testing and now presenting work back to the team.

As with anything though, there is no silver bullet solution; personas are another tool and technique we can use to achieve our goals, not the only one.

Some things are best presented and tested in other ways, but sometimes personas can be quite helpful.

 

Further Links:

  • http://www.cooper.com/journal/2008/05/the_origin_of_personas.html
  • http://pragprog.com/book/ehxta/explore-it
  • http://janetgregory.blogspot.co.uk/2011/05/power-of-personas-in-exploratory.html
  • http://www.uxbooth.com/articles/personas-putting-the-focus-back-on-the-user/

Testers need to learn to code

Testers need to learn to code… or any number of other ways of paraphrasing Simon Stewart’s comments in his keynote at EuroSTAR 2012.

I’m not entirely sure on the exact phrase he used but I heard a number of renditions after.

They all alluded to the same outcome:

“Testers need to code or they will have no job”

Speaking with Simon briefly after the event it’s clear his message was somewhat taken out of context.

I get the impression he was talking about those testers who simply sit and tick boxes. Checkers, as I think they are becoming known.

Simon suggested that these people should learn to code, otherwise they will be replaced by a machine, or by someone who can code. There were some who took great distaste to the general sentiment, some who were in total agreement, and of course probably some in between and some who didn’t care.

Simon himself was greatly pragmatic about it and suggested that learning to code will free people up to do good testing, a sentiment I can’t imagine many arguing with.

Those who know me have often commented that I often sit on the fence and try to see the positive in each side of an argument. It will therefore come as no surprise that I agree, and disagree about the message Simon was communicating.

I agree that Testers who do purely checking will (and should) be replaced by machines to perform the same action. Some of these people, (with the right peers and support, the right company and willing market conditions) could become great “Testers” whilst the machines automate the tedium. Some won’t and may find themselves out-skilled in the market.

But I don’t believe that all Testers have to learn to code. I know of a great many who don’t but are doing exceptional testing. Saying that, I think each team needs to have the ability to code. This could be a programmer who helps out, or a dedicated “technical tester” who can help to automate, dig deep and understand the underlying product.

I’m an advocate of encouraging Testers to learn to code, especially those new to the industry who are looking for early encouragement to tread down certain career paths. I’m also an advocate of using the wider team to solve team wide problems (and automating tests, reducing pointless activities and having technical assistance to testing are team wide problems).

When you hear a statement like “Testers need to learn to code” don’t always assume it tells the whole story. Don’t always assume it means learning enough code to build a product from scratch. Don’t always take these statements at face value. Sometimes the true message can become lost in the outrage in yours (or someone else’s) head, sometimes it might not have been communicated clearly in the first place and sometimes it might just be true; but too hard for people to accept.

  • Learn to understand code and your job as a Tester may become easier.
  • Learn to understand how the world around us works and your job as a Tester may become easier.
  • Learn to understand how you, your team and your company works and your job as a Tester may become easier.
  • Learn how to sell, convince, persuade and articulate problems to others and your job as a Tester may become easier.

There are many things which could help you become a good, reliable, future proof(ish) tester. You can’t learn all of them. You can’t do everything. Coding is just one of these elements just like understanding social sciences, usability, performance or critical thinking are some others.

Learning to code can be valuable, but you can also be a good tester and not code. It’s not as black and white as that.

Evernote As a Test Case Management Tool?

Could I realistically use Evernote as a Test Management Tool?

That’s the question I asked myself the other month after finishing writing an eBook on Evernote for one of my other blogs. I’d become convinced that Evernote could be used for almost any requirement (within reason). Could it be used for test cases?

I did a quick proof of concept the other week looking at whether or not we could use Evernote as a Test Case Management tool with some interesting outcomes, and a great deal of learning along the way. I’ll share this with you here.

I floated the idea with my team after the initial spike and a few of us did a quick brainstorming session to talk through the process flow and major problems with the model. We drew it out and talked about some of the pros and cons. We concluded it was possible, but not without some potentially major workarounds (metrics, reporting, master copies etc). I’ll talk more about that in this post.

Why Evernote?

Evernote, if you’ve ever used it, is a truly awesome way of capturing information.

I use it now for all of my Exploratory Testing notes.

It’s super quick to add stuff, tag it, re-categorise/shuffle it around and edit it.

There are several ways of getting information in to Evernote and once it’s in there it’s super easy to search for stuff.

It’s a perfect companion for me when exploring.

We also have a number of mobile devices which we use for testing. As Evernote is available on almost any device, it seemed like a perfect companion.

A great feature of Evernote is that you can share a notebook. This meant we could create a series of NewVoiceMedia testing notebooks and share them across the entire team. Any device, any tester, anywhere.

Requirements of a Test Case Management Tool

Despite being in an agile environment and heavily automating tests we will always have a number of test cases for legacy bits of the products, tests which are not valuable to automate and for compliance/reporting reasons.

We also have some very lightweight (I actually consider them checklists) test cases that we run to check over each kit, as well as regression tests.

All of the team make copious amounts of exploratory testing notes in their favourite system (rapid reporter, notepad++, Evernote) which ultimately end up as txt files in SVN too.

How Evernote might work

I created notebooks which I populated with notes. As a basic guide, a “note” is a test case and a notebook can be considered a container or “suite”. I tagged each note with the relevant functional area and type of test (e.g. regression, call control). I created the following notebooks:

  • MasterCopy (contained the master copy of each test case)
  • ToBeRun (containing a copy of each test case needed to be run)
  • Complete (containing all test cases completed – moved from the ToBeRun after it was executed.)
  • Exploratory (containing all exploratory testing session notes)
  • Areas to explore (a notebook containing any area we deemed was worthy of further exploration)

Creating Test Cases

I like to encourage exploration, with the tester’s knowledge and insight leading the testing, so where possible we all tend to write our tests in a very checklist-oriented fashion. This means we don’t have “step by step” instructions and we assume that the person running the test case has some prior knowledge of the system and a good grip of testing.

Therefore our tests are very much like checklists and we leave it to the tester (with their own skill, experience, knowledge and insight) to test against the checklist where relevant. I don’t mandate that each checklist item is tested against either. It could be that we only run 30% of the test case. That might be ok. The Tester is in charge based on their insights and local knowledge.

Evernote supports checklists in a very simple form. It needn’t be more complicated than a checkbox against each item so Evernote worked well for creating and storing test cases.
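For anyone curious what that looks like under the hood, here is a rough sketch of creating a checklist-style test case note from code. It assumes the official Evernote Python SDK and a developer token; the class and method names are my recollection of that SDK and the checklist items are invented for illustration, so treat it as an outline rather than a definitive implementation.

```python
from evernote.api.client import EvernoteClient
import evernote.edam.type.ttypes as Types

# Assumption: a developer token for your Evernote account.
client = EvernoteClient(token="YOUR_DEV_TOKEN", sandbox=True)
note_store = client.get_note_store()

# Each <en-todo/> element renders as a checkbox in the Evernote clients.
checklist_items = [
    "Caller is queued when no agent is available",
    "Queued call is allocated when an agent frees up",
    "Call recording is present and audible",
    "Stats reflect queue time and talk time",
]
body = "".join(f"<div><en-todo/>{item}</div>" for item in checklist_items)

note = Types.Note()
note.title = "Checklist: connect caller to agent"
note.content = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
    f"<en-note>{body}</en-note>"
)
note_store.createNote(note)
```

Of course, you can just type the checklists straight into any Evernote client; the sketch is only to show the checkbox structure (ENML’s en-todo element) that sits underneath.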

There are exceptions though. Some of the areas we test against are incredibly complex and require absolute precision to check for known outcomes. Therefore we do have some test cases that detail each step someone must take and the expected outcome. These are mainly focused around statistics, percentage reports and other numbers based outcomes that require precise timing and action. Automation helps greatly here, but sometimes a legacy code stack can hinder that automation.

These types of detailed tests can still be accommodated in Evernote easily. I used something called Kustom Note to create a template for these types of test ensuring I captured all relevant information.

The only thing Evernote clearly does not do is report on metrics of completion or coverage. I knew this going in to the experiment. That was ok when I started out.

Automatic Creation of Tests In Evernote

So far, so good. Evernote clearly does allow for basic test management where metrics are not the sole output. This suits us nicely. Our metrics are taken at an automation level which is dealt with by other systems and tools.

One of the really great uses I found when exploring Evernote was how I could trigger a new “test note” to be created when a new feature/story/workitem was created in a different system.

I did this by grabbing the RSS feed from our instance of Pivotal Tracker.

I then turned to If This Then That (IFTTT), a great automator of the web.

I configured IFTTT to grab the RSS feed from Pivotal Tracker and automatically create a new note in the Evernote notebook.

Kapow. A new Exploratory Testing placeholder for a new story. Sweet.

I got giddy with where this could go now.

I then hooked Evernote up to the RSS output of a social media aggregation tool I use. I could therefore collect social media mentions of a product and create exploratory (or investigatory) sessions from it. Interesting.

For example, if there is a mention of something like a bug, or slow down, or any other query about how something might work we can automatically create a note in the relevant notebook for a tester to explore, or even for someone to respond to it.

But it gets more interesting still.

Let’s look at some other potential sources.

  • Case/Support management tools could generate a session when a new case is raised.
  • Bug tracking tools could trigger a session and include bug details when a new bug is fixed.
  • Texts, emails, facebook updates, tweets…..all used to create a new session/test.
  • Evernote changes could trigger further new sessions also.
  • Delicious feeds could create new sessions based on new ideas/techniques or approaches that someone else has written about. Bookmark the page (and therefore idea) and create a new session based around that idea.
  • Dropbox (or SkyDrive etc) updates could trigger a new session. New screenshots, files or shared resources that need exploring for example.
  • If you use Chatter (or Yammer) you could automate a new session based on a post to chatter from someone in your company.

There are many uses I can think of where we would want to create a new test based on the content or changes in another system. IFTTT can help you with this. RSS feeds from other systems greatly increase the ease with which you can do this, especially if that other system is not supported by IFTTT.
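And if IFTTT ever falls short, the same trick is only a handful of lines of code. Here’s a hedged sketch assuming the feedparser library; the feed URL is a placeholder, and create_placeholder_note is a hypothetical stand-in for the kind of createNote call sketched earlier.

```python
import feedparser

# Placeholder URL; point this at the RSS/Atom feed of the system you care about.
FEED_URL = "https://example.com/activity.rss"

def create_placeholder_note(title, body):
    # Stand-in for creating a note in the "ToBeRun" or "Exploratory" notebook
    # (see the earlier Evernote sketch). Here it just prints what would happen.
    print(f"Would create note: {title}\n  {body}")

def poll_feed(seen):
    """Turn each new feed entry into an exploratory session placeholder."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        entry_id = getattr(entry, "id", entry.link)
        if entry_id in seen:
            continue  # already turned into a session
        seen.add(entry_id)
        create_placeholder_note(
            title=f"Explore: {entry.title}",
            body=f"Source: {entry.link}",
        )

if __name__ == "__main__":
    poll_feed(set())  # in a real script, persist `seen` between runs
```

Run something like that on a schedule (cron, a CI job, whatever you have) and you’ve got a poor man’s IFTTT for any system that exposes a feed.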

Or you could step further in to the realms of beta tools and check out Zapier. It looks truly awesome.

It supports Campaign Monitor, Basecamp/Campfire, Agile Zen, Aim, Github, Jabber, HipChat, Google Drive, Google Tasks, PayPal, MailChimp, Pivotal Tracker (direct), Salesforce, UserVoice, ZenDesk….the list goes on.

The options are plentiful, and not just for creating sessions from other work streams and items. How about the other way around? How about updating another system on completion of a session or test?

Or how about stepping outside of the test case world?

How about using Evernote and IFTTT to create an amazing notebook of testing ideas and content? Each time someone in your team shares something, bookmarks something or creates some other content it could be collated in one notebook for the rest of the team.

With a little time and some deep creative thinking I suspect there is very little a test management tool can do that you couldn’t do with some hacking and mashing of apps like Evernote… with the possible exception of metrics.

But why would you?

Why not? Why not use the tools and systems that are naturally good at their specific niche and make your testing process more fluid and contextual?

If you have a test case management tool that suits your needs then that’s cool. Stick with it. If you don’t, and can’t find one, then why not get creative? Most of the tools that you can hook together are free and could solve your problem.

You’ll also probably learn a lot about your own needs and requirements through the process of exploring, have a good laugh hacking systems together and probably learn about a load of cool tools that could help you in ways you’d never imagined before.

It’s good to explore the new tools on the market, even if they aren’t strictly test management tools. The tech world is moving fast and sometimes it’s worth exploring new ways of doing things. Sometimes it’s best to stay with what you’ve got also, but only you will know what the best course of action is.

After all of this exploring and trialling we are sticking with what we have right now though.

After investigating these tools I gained deep insight into our needs and requirements, and realised that we do need some metrics around our testing for compliance and for reporting coverage.

I’m not done with the exploration of Evernote as a test management tool though. I think it has massive potential. What do you think?