Moving from 8-month releases to weekly

The other week I facilitated a session at the UK Test Management Summit.

I presented the topic of moving from 8-month releases to weekly releases.

I talked about some of the challenges we faced, the support we had, the outcomes and the reasons for needing to make this change.

It actually turned into a question-and-answer session, and despite my efforts to facilitate a discussion it continued down that route. It seems people were very interested in some of the technicalities of how we made this move with a product that has a large code base of both new and legacy code (and that my facilitation skills need some fine-tuning).

Here are some of the ideas.

We had a vision

Our vision was weekly releases.
It was a vision that everyone in the team (the wider team, not just development) knew about and was fundamentally working towards.

This vision was clear and tangible.

We could measure whether we achieved it or not and we could clearly articulate the reasons behind moving to weekly releases.

We knew where we were
We knew exactly where we were and we knew where we were going. We just had to identify and break down the obstacles and head towards our destination.

We had a mantra (or guiding principle)

The mantra was “if it hurts – keep doing it”
We knew that pain was inevitable but suffering was optional.

We could endure the pain and do nothing about it (or turn back), or we could endure it until we made it stop by moving past it.
We knew the journey would be painful but we believed in the vision and kept going to overcome a number of giant hurdles.

Why would we do it?

We needed to release our product more frequently because we operate in a fast moving environment.

Our markets can shift quickly and we needed to remain responsive.

We also hated major releases. Major feature and product releases are typically painful, in a way that doesn’t lead to a better world for us or our customers. There are almost always issues or mismatched expectations with major releases, some bigger than others. So we decided to stop doing them.

The feedback loop between building a feature and the customer using it was measured in months, not days, meaning we had long gaps between coding and the validation of our designs and implementations.

What hurdles did we face?

The major challenge when moving to more frequent releases (we didn’t move from 8 months to weekly overnight, by the way) was working out what needed to be built. This meant reorganising to ensure we always had a good customer and business steer on what was important.

It took a few months to get that clarity, but it has been an exceptional help in getting our product out to our customers.

We also had a challenge in adopting agile across all teams and ensuring a consistent approach to what we did. It wasn’t plain sailing, but we pushed through and were able to run a fairly smooth agile operation. We’re probably more Scrumban than Scrum now, but we’re still learning, still evolving and still working towards reducing waste.

We had a major challenge in releasing what we had built. We were a business based around large releases and it required strong relationships to form between Dev and Ops to ensure we could flow software out to live.

What enablers did we have?

We had a major architectural and service design that aided rapid deployments: our business offering of true cloud. This meant the system had just one multi-tenanted version. We had no bespoke versions of the product to support, which enabled us to offer a great service, but also a great mechanism for rolling the product out.

We owned all of our own code and the clouds we deployed to. This enabled us to make the changes we needed without relying on third-party suppliers. We could also roll software out to our own clouds and architect those clouds to allow for web balancing and clever routing.

We had a growing DevOps relationship, meaning we could consider both perspectives of the business together and prepare our plans in unison, allowing smoother roll-outs and bringing a growing mix of skills and opinions into the designs.

What changes took place to testing?

One of my main drivers leading the testing was to ensure that everyone took the responsibility of testing seriously.

Everyone in the development team tests. We started to build frameworks and implementations that allowed Selenium and SpecFlow testing to be done during development. We encouraged pairing between devs and testers, and we ensured that each team (typically four or five programmers and a tester) would work through the stories together. Testing is everyone’s responsibility.
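To give a flavour of that style of check (our actual framework was SpecFlow and Selenium on .NET, so this is only an illustrative sketch using behave, a comparable Gherkin runner for Python, with Selenium's Python bindings; the page URL, element IDs and credentials are all made up):

```python
# Illustrative only. The feature file might look like:
#
#   Feature: Agent login
#     Scenario: Agent signs in with valid credentials
#       Given the login page is open
#       When the agent signs in as "test.agent"
#       Then the agent dashboard is shown

from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By


@given("the login page is open")
def step_open_login(context):
    # Hypothetical URL; the real product pages are not shown in this post.
    context.driver = webdriver.Firefox()
    context.driver.get("https://login.example.com")


@when('the agent signs in as "{username}"')
def step_sign_in(context, username):
    # Hypothetical element IDs and a placeholder password.
    context.driver.find_element(By.ID, "username").send_keys(username)
    context.driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    context.driver.find_element(By.ID, "sign-in").click()


@then("the agent dashboard is shown")
def step_see_dashboard(context):
    assert "Dashboard" in context.driver.title
    context.driver.quit()
```

The point is less the tooling and more that the acceptance criteria become executable scenarios the whole team can read, write and run while the feature is still being developed.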

Testing is done at all stages in the lifecycle. We do TDD, Acceptance Test Driven Development and lots of exploratory testing.

We do a couple of days of pre-production testing with the wider business to prove the features and catch issues. We also test our system in live using automation to ensure the user experience is as good as it can be. We started to publish these results to our website so our customers (and prospective customers) could see the state of our system and the experience they would be getting.
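For the live checks, think of something along these lines. This is a hedged sketch rather than our actual monitoring code, with a hypothetical endpoint and output file that a status page could read:

```python
# A rough sketch of a live "synthetic" check: hit a key page, time it, and
# append the result somewhere a public status page can read it.
import json
import time
from datetime import datetime, timezone

import requests

CHECK_URL = "https://app.example.com/health"   # hypothetical endpoint
RESULTS_FILE = "live_check_results.json"       # hypothetical status-page feed


def run_check():
    started = time.monotonic()
    try:
        response = requests.get(CHECK_URL, timeout=10)
        ok = response.status_code == 200
    except requests.RequestException:
        ok = False
    return {
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "ok": ok,
        "response_seconds": round(time.monotonic() - started, 3),
    }


if __name__ == "__main__":
    result = run_check()
    with open(RESULTS_FILE, "a") as f:
        f.write(json.dumps(result) + "\n")
```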

We started to use techniques like keystoning to ensure bigger features could be worked on across deployments. This changed the approach to testing because testers have to adapt their mindsets from testing entire features to testing small incremental changes.
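For anyone unfamiliar with the idea, here is a minimal, hypothetical sketch (not our product code): the feature logic ships dark across several weekly releases, and only the final release adds the "keystone", the UI entry point that makes it visible to customers. Testers verify each small slice as it lands rather than waiting for the whole feature.

```python
# Hypothetical example of keystoning a new "transfer call" feature.

SHOW_CALL_TRANSFER = False  # flipped to True only in the final (keystone) release


def build_agent_menu():
    menu = ["Answer call", "Hold call", "End call"]
    if SHOW_CALL_TRANSFER:
        # The keystone: the only line that exposes the new feature to users.
        menu.append("Transfer call")
    return menu


def transfer_call(call_id, target_agent):
    # Built, deployed and tested in earlier weekly releases, even though
    # nothing in the UI calls it yet.
    return "call {0} transferred to {1}".format(call_id, target_agent)
```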

Why we love it
Releasing often is demanding but in a good way. The pressure is there to produce. The challenge we have is in balancing this pressure so as not to push too hard too often but have enough pressure to deliver. We don’t want to burn out but we want to ship.

We exceed the expectations of our customers and we can deliver value quickly. In an industry that has releases measured in months (sometimes years) we’re bucking the trend.

As a development team we get to see our work in production. This gives us validation that we are building something that is being used. Ever worked on a project that never actually shipped? Me too. We now see none of that.

 

It’s been tough getting to where we are now but we’ve had amazing support from inside and outside of the business which has helped us to really push ahead and set new markers of excellence in our business domain. We’ve still got lots to get done and lots to learn but that’s why we come to work in the mornings.

 

These are just a few of the factors that have helped us to push forward. There are companies releasing more often, and some releasing less often to good effect. Each business has a release cadence that works for them and their customers.

Did I mention We’re Recruiting?

 

Side Notes:

I got asked the other day how I come up with ideas for talks/blogs, how I think through these ideas and how I go about preparing for talks. I’ll take this opportunity to add a short side note on how I do this. This approach may not work for you.

I first create a central topic idea in a mind map (I use XMind).

I then brainstorm ideas around the central topic. After the brainstorm I go through the map and re-arrange, delete, add and rename until I feel I have a story to tell.

[Mind map: moving to weekly releases]

I then start planning the order and structure of the story. Every story has a beginning, a middle and an end.

I start by writing the beginning and then the end. The middle is the detail of the presentation.

 

I then doodle, sketch and plot.


 

I then move to my presentation tool of choice. In this case it is PowerPoint – sometimes it is Prezi.

The presentation typically takes a long time to prepare, even for a very short intro like this. This is because I don’t like including too much text in my slides, and because I think simple but attractive slides can add impact to the topic. So I spend some time making sure they are right. That said, no amount of gloss in the slides will help a bad or boring story.

 

 

 

QTrace and Exploratory Testing

I was exploring QTrace the other day and I liked what I saw. It seems like a really powerful tool for keeping track of what you’re doing in an Exploratory Testing session. It’s Windows-only at the moment, but hopefully they will release Mac and Linux clients sometime soon.

The tool adds a little control panel to the right of the screen. You can then start the recording and choose the area to record. The tool records the screenshots and the mouse clicks as static images, not as a video.

As it’s recording you can also add notes via the tool, although I found it easier to add notes to my digital notebook and then add them to the .doc output file created at the end of the session.

The tool also grabs the browser information which is pretty neat.

Much of the testing we do here at NVM requires phone calls to be made, so it’s good that we can add contextual information to the screen recordings, for example “call received” or “call hung up”, since no screen recording tool can capture that context on its own.

The example here is from me clicking quickly through this site.

[Screenshots: QTrace “Session with Notes” output]

The only downside to the screenshot export is that the notes are not included, but they are included in the PDF and Word exports. I was using the screenshot export to grab screens for defect reports, and the PDF/Word document for the ET session summary.

 

It’s also free.

QTrace

Explaining Exploratory Testing Relies on Good Notes

Bear with me – it’s a rambling post I’ve had in draft for about three years now. I’m having a clear-out. Enjoy.

One of the things I’ve noticed over the years is that anyone is capable of doing Exploratory Testing. Anyone at all. It just happens some do it better than others. Some want to do it. Some don’t want to do it. Some don’t realise they are doing it. Some don’t know what to label it.

We all do it to some extent in our own personal lives maybe with slightly different expectations of what we’ll get out of it and almost certainly under a different label.

  • Have you ever opened a new electronic device and explored around what it can do?
  • Or tried to get something working without reading the instructions?
  • Or tried to use a system that has below-par instructions (like a ticket machine or a doctor’s surgery booking system, for example)?

I think we are all blessed with the ability to explore and deduce information about our surroundings and the objects we are looking at. Some people utilise and develop this ability more than others. Some practice more than others. You may argue some are born with a more natural ability to explore.

In our testing world, though, I’ve observed a great many people attaching a stigma to Exploratory Testing; it’s often deemed inferior, or something to do rarely, or less important than specification-based scripted testing.

It’s seen as an afterthought to a scripted testing strategy; a phase done at the end if we have time; a phase done if everything else goes to plan; a phase done to let people chill out after the hard slog of test case execution.

I think much of this stigma or resistance to exploration comes from many testers feeling (or believing) that Exploratory Testing (ET) is unstructured or random; a belief I think many “standards” schemes, certifications and badly informed Test Managers perpetuate.

I’m a curious kind of person and it’s always intrigued me as to why people think this. So whenever I got the chance to pick the brains of someone who had condemned or dismissed ET I would ask them for their reasons and experiences around the subject.

The general gist I took from these chats (although not scientific) was that people felt they couldn’t audit/trace/regulate/analyse what actual testing took place during Exploratory Testing sessions.

It became apparent to me that one of the reasons for people not adopting Exploratory Testing (ET) was because of the perceived lack of structure, lack of identifiable actions and poor communication of outcomes to management and other stakeholders.

It’s clear to me that the stories we tell about our testing and the way we explain our actions directly affects the confidence and trust other people have in our testing. In some organisations any doubt in the approach means the approach is not adopted.

This might go some way to explain why many managers struggle to find comfort in exploratory testing and why they too struggle to articulate to their management teams the value or benefits of exploratory testing.

Early Days of someone doing Exploratory Testing

Through my own observations I’ve noticed that in the early days of someone doing Exploratory Testing, much of it is indeed ad hoc and mostly unstructured. Some might even suggest that this type of testing isn’t true Exploratory Testing at all, but it shares much in common with it.

The notes are scrappy, the charters are ill-defined and the journey through the SUT is somewhat sporadic (typically because they follow hunches and inclinations that are off charter, assuming they had a good one to start with). The reports afterwards lack cohesion, structure, consistency and detail about what was tested. There is often limited, if any, tracing back to stories, requirements or features. Some move past this early stage quickly, but for others it can last much longer (even with guidance).

Experienced Exploratory Testers

I believe that the more advanced a practitioner becomes in Exploratory Testing the more they are able to structure that exploration, but more importantly to me, the more they are able to explain to themselves and others what they plan to do, are doing and have done.

It is this explanation of our exploration that I feel holds the key to helping those skeptical of ET see the real value it can add. Those new to ET can sometimes lack the rigor and structure; it’s understandable – I suspect they’ve never been encouraged to understand it further, taught anything about it or been mentored in the art of Exploratory Testing.

This unstructured approach can appear risky and unquantifiable. It can appear un-auditable and unmanageable. This could be where much of the resistance comes from; a sense of not being able to articulate the real values of exploration.

From trial and error and with some guidance people can quickly understand where they can add structure and traceability. In my own experience I was unable to articulate to others what I had tested because I myself didn’t document it well enough.

Each time I struggled to communicate something about my testing I would work out why that was and fix it. Whether that was lack of supporting evidence (logs, screenshots, stack traces), lack of an accurate trail through the product, missing error messages, forgotten questions I’d wanted to ask stakeholders, missed opportunities of new session charters, lack of information for decision making or even lack of clarity about the main goal of the session – I would work hard to make sure I knew the answer myself.

After doing this analysis and seeing this same thing in others I realised that one of the most obvious and important aspects of good ET is note taking.

Note Taking

Advanced practitioners often make extensive notes that not only capture information and insights in the immediate time proximity of an exploratory session, but are also relevant in the months and years after the session took place (at least to themselves).

Many testers can look back through their testing sessions and articulate why they did the exploration in the first place, what exploration took place and what the finding/outputs were. Some even keep a record of the decisions the testing helped inform (i.e. release information, feature expansion, team changes, redesign etc).


Seasoned explorers use a number of mechanisms to record and then communicate their findings. They often pay meticulous attention to detail in their notes, built around the realisation that even the tiniest detail can be what traps a bug, or what makes the difference between a convincing report to others and a failed one.

Some use videos to record their testing session, some rely on log files to piece together a story of their exploration, and some use tools like mind maps, Rapid Reporter and QTrace. Others use notes and screenshots, and some use a combination of many methods of note capture.

It’s this notetaking (or other capture mechanism) that not only allows them to do good exploratory testing but also to do good explanations of that testing to others.

Too often I have seen good insights and information being discovered when testing and then subsequently squandered through poor communication and poor note-taking.

Testing is about discovering information, but that is of little use if you cannot articulate that information to the right audience, in the right way and with the right purpose.

In my early days of testing I found interesting information which I failed to communicate in a timely fashion, to the right people and in the right way. I would often lose track of what exploration actually took place meaning I would have to re-run the same charter again, or spend a long time capturing some of the information I should have had in the first place. It was from having to run the testing a second time through that I learned new lessons in observation, note-taking and information gathering (like logs, firebug, fiddler traces etc).

From talking to other testers and managers I believe good exploratory testing is being done, but the information communicated from that testing is often left to the tester’s recollection. Some testers have excellent recollection, some don’t, but in either case I feel it is more prudent to rely on accurate notes depicting actions, inputs, consequences and all other notable observations from the system(s) under test than on your own memory.

We often remember things in a way we want to, often to reinforce a message or conform with our own thinking. I fall into this trap often. Accurate notes and other captures can guard against this. Seeing the unbiased facts *can* help you to see the truth. Our minds are complex though, and even facts can end up being twisted and bent to tell a story. We should try to avoid this bias where possible; the starting point is to acknowledge that we fall foul of it in the first place.

Being able to explain what your testing has, or has not, uncovered is a great skill, and one I see too rarely in testers.

Good storytelling, fact analysis and journalistic-style reporting are skills we can all learn. Some people are natural storytellers, capable of recalling and making sense of trails of actions, data and other information and, importantly, of putting these facts into a context that others can relate to.

To watch a seasoned tester in action is to watch an experienced and well-practised storyteller. Try to get to the Test Lab at EuroSTAR (or other conferences) and watch a tester performing exploratory testing (encourage them to talk through their thinking and reasoning).

During a session a good exploratory tester will often narrate the process, all the time making notes, recording observations and documenting the journey through the product. This level of note taking allows the tester to recall cause and effect, questions to ask and clues to follow up in further sessions.

We can all do this with practice.

We can all find our own way of supporting our memory and our ability to explain what we tested. We can all use the tools we have at our disposal to aid in our explanation of our exploratory testing.

I believe that the lack of explanation about exploratory testing can lead people to believe it is inferior to scripted test cases. I would challenge that, and challenge it strongly.

Good exploratory testing is searchable, auditable, insightful and can adhere to many compliance regulations. Good exploratory testing should be trusted.

Being able to do good exploratory testing is one thing, being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and it’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place.

 

Staple Yourself To It

In a test team meeting the other week I was reminded of a technique/game I’d been able to label after reading the amazing book Gamestorming. The technique/game is called “Staple Yourself To It”.

In a nutshell it is about finding an object, message or aspect of your product and stapling yourself to it so that you can map out its journey through a system.

A Stapler

Image from BY-YOUR-⌘’s “Vampire Stapler”, May 12, 2009, via Flickr, Creative Commons Attribution.

For example, in a business you may decide to Staple Yourself to a customer-raised case and follow the case through a journey (or a variety of journeys).

Once you have it mapped out (I’d recommend a visual map) then you can start to look for areas to optimise, question and improve.

The same is true for a product under test; you might find a message, or a user action, or a form submission and decide to follow this through the system to look for interesting things.

I use the phrase interesting things because you might not always be looking for areas to test.

  • You might be looking for awkward interfaces, for example between people and software.
  • You might be looking for parts of the process that really slow the journey down, or move too quickly for the overall system, or leave you sat waiting; these are classic waste areas which might become the focus of your efforts.
  • You might be looking for usability issues, accessibility problems, reliance on third party software/services or performance bottlenecks.
  • You might be looking for security holes or exploits.
  • Of course, you might be looking for areas to probe further with exploration.

As an example:

We build and deploy cloud-based contact centres. One of the fundamental things a contact centre must do is route a piece of communication to a person (an agent) to be dealt with.

In this example let’s use a phone call.

A phone call must reach our cloud from the caller’s phone and be routed to the right person so that they can have their query or question dealt with.

Staple yourself to it:

  • The caller makes a call via their phone (What sort of phone? Who are they? Why are they phoning?)
  • The call travels through the phone network to our cloud (Which telephony carrier? Which account? What number? International or local?)
  • The call is routed by our Call Centre product (Which account? Which Line? What configuration do they have? How is the call plan configured? Is there an agent available to take the call?)
  • The call is delivered to an agent (Who are they and can they solve the problem? What browser are they using? What phone system are they using? What configurations are there on the UI?)

In a relatively simple journey I can already start to dive down and ask questions of the journey and the processes involved.

Imagine the journey for a call that moved from department to department or got transferred to another system, or went through an IVR system, or got hung up at various points.

Documenting the journey is a good way to see where you can focus your energy and where there could be areas to explore.

Stapling yourself to something and analysing the journey can lead you right to the heart of the problem, or at least give you giant sized clues to help guide you. Without knowing the journey you could be prodding around in the wrong place for days.

Stapling yourself to a journey won’t guarantee you will find the sweet spot, but it’s another technique you can use to drive out information and build the visuals that can help you identify areas to explore.

Note: Apologies if this idea has been blogged about before in the testing context. I haven’t read anything about it but I know many people are talking about tours and this is not too far away from that idea.