Test Reporting in an agile environment

I did a post over at my work blog a while ago about reporting in an agile environment:
http://blogs.imeta.co.uk/RLambert/archive/2009/01/16/test-reporting-in-an-agile-environment-ndash-low-tech-dashboards.aspx

I centered it around low tech dashboards, which I still think are extremely valuable.

A low tech dashboard is a great way of communicating the state of the software mid-sprint. At the end of the sprint, the board is fairly meaningless unless you have stories incomplete. But mid-sprint it's a great visual way of showing progress, i.e. "we've hit this feature in depth and it looks ok".

It's another indicator of how we are progressing. Look at it as a quality indicator that complements the velocity indicators like burndowns and burnups. It's a clear, visual representation of the "quality" of the software from the tester's point of view. It doesn't need weighty metrics to back it up – although that may help in some environments. It doesn't need to be absolutely accurate, just like the burndown report, and it doesn't need to be complicated.

It needs to be simple, easy to read and easy to understand. It's about communicating to all stakeholders (and other teams) where we are at with the software 'quality'.

And when we get to the end of the sprint with stories incomplete, the dashboard can be a good way of highlighting where quality is lacking.
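If it helps to picture what goes on the board, here's a tiny sketch of the sort of grid I mean, written as a throwaway script purely so the columns are easy to see. The product areas, effort levels and smileys are invented for illustration; on a real board this is just marker pen on a whiteboard.

```python
# A rough, invented example of the information a low-tech dashboard carries.
# None of these areas or ratings come from a real project.

AREAS = [
    # (product area, test effort, coverage so far, tester's gut feel)
    ("Login & security",  "high",   "deep",    ":)"),
    ("Order processing",  "medium", "partial", ":|"),
    ("Reporting screens", "low",    "touched", ":("),
    ("Data import",       "none",   "none",    "?"),
]

def print_dashboard(areas):
    """Print a plain-text version of the whiteboard grid."""
    header = f"{'Area':<20}{'Effort':<10}{'Coverage':<12}{'Feel'}"
    print(header)
    print("-" * len(header))
    for area, effort, coverage, feel in areas:
        print(f"{area:<20}{effort:<10}{coverage:<12}{feel}")

if __name__ == "__main__":
    print_dashboard(AREAS)
```

That's all it is: what we've looked at, how hard we've looked and how it feels right now.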

A few years ago I created an equivalent that was a 'mood board' with smileys which the testers would put up on paper to show visitors to the team area what mood we were in (happy, sad, nonplussed, ill, bored, tired, giggly, etc). A visual representation of how we were progressing. And it worked wonders and the management loved it more than the metrics. And believe it or not – that was in a waterfall environment…

Acceptance Criteria: it’s a good friend

With some careful planning, a good use of time and access to your customer (or the customer's proxy) you can craft and distill stories that will make your job as a tester all the more effective.

On an agile project, test involvement early in the planning and story writing can add an extra dynamic. Testers often have very critical minds and ask questions other team members don't. And it's this questioning and thinking that is so powerful and effective when writing stories.

It's not just that the customer understands the stories more and thinks more critically about them; the programmers also have more information up front, and the designers and any other team member can see clearly what criteria the story will be judged against. Testers often possess the skills needed to bridge the gap between the customer and the tech teams too. They also tend to put themselves in the shoes of the user, consider usability and accessibility, and are often the ones who raise pertinent questions about non-functional behaviors.

Leaving the tester out of the story writing sessions means that when the story moves over to test, the testers will often generate a lot of defects, some of them quite simple. Defects that could (and should) have been found before any code was written.

And if the tester is involved to their full capacity they too will find that the story in essence becomes a very effective test case. A case that both manual and automated testers can work from. There is no reason why a story can't contain a long list of acceptance criteria. In fact, the more the merrier in my eyes; it only helps to make estimation and verification easier. There's no reason why the acceptance criteria can't reference or jump out to flow diagrams, state transitions and any other supporting documentation. And all of this becomes far more possible when you include a critical thinker in story writing sessions.
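To make that concrete, here's a small, made-up sketch of how a story's acceptance criteria can map straight onto automated checks. The story, the discount rules and the function names are all invented for illustration, not taken from any real project.

```python
# Made-up story: "As a shopper I can apply a percentage discount at checkout."
# Acceptance criteria (invented for illustration):
#   1. A valid discount reduces the order total by the stated percentage.
#   2. A discount outside 0-100% is rejected.
#   3. The discounted total never drops below zero.

import pytest


def apply_discount(total, percent):
    """Hypothetical production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return max(total - total * percent / 100, 0)


def test_valid_discount_reduces_total():
    # Criterion 1: 10% off a 50.00 order leaves 45.00
    assert apply_discount(50.00, 10) == pytest.approx(45.00)


def test_out_of_range_discount_is_rejected():
    # Criterion 2: a 150% discount is refused outright
    with pytest.raises(ValueError):
        apply_discount(50.00, 150)


def test_total_never_negative():
    # Criterion 3: a 100% discount bottoms out at zero, never below
    assert apply_discount(50.00, 100) == 0
```

Each criterion reads as a plain sentence the customer can agree to, and each one becomes a check the whole team can run against every build.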

I've been through many sprints that, at first, weren't successful, but once we started getting key team members involved in each story writing session we started delivering code that had fewer defects. With fewer defects velocity tends to go up, morale remains high and more time is freed up for exploratory testing.

So don't be shy. If you are not actively being invited to story writing sessions, then invite yourself and add your critical thinking early.

It’s not a blame culture but it’s definitely their fault

One of the main things I really like about agile is the fact that the whole team are creating and working towards shippable software at the end of each sprint. Well, that's the theory anyway.

And a positive side effect of this is that you lose the 'over the wall' mentality. In a true agile environment there is no 'them' and 'us'. It's no longer a blame culture. Everyone is responsible for quality. Everyone is responsible for getting the software working. The software is not thrown over the wall to test and then thrown back over for bug fixing.

So it becomes a team activity in the truest sense. We are all working towards a common goal. No one person is responsible for quality – we all are. Sure, there are still individual mistakes but the team rally together to solve these.

And it is great. There's no bad mouthing, sniping or hushed conversations – well fewer anyway 🙂 It's all about the product. It's all about the team. And that, in my eyes, is a really positive thing.

Agile: It will make your face melt and your mind burst

For me one of the most difficult challenges I have faced as a tester is the
move from a traditional project methodology to an agile one.

The process of adopting agile for a manual tester is tricky. It's
incredibly difficult and often it is the testers who offer the most resistance
when teams make the move. Stories about testers being negative, throwing their
toys out of the pram and generally being a bad egg are common.

And I completely understand why.

When I made the transition from traditional to agile it felt like my face was
melting and my mind was bursting.

It was the toughest challenge of my career. I
hated those first few weeks and wondered whether I had a role in the team or
not. I was contemplating a change of career and feeling completely and utterly
undervalued. I hated it. I was terrified that this was the future of software testing and I didn't get on with it.

For a tester, it's not just about doing the same work in a different order or
with tighter time constraints, it's about changing your outlook on testing
and how you fit into the team. It's about redefining your role (and your skills) and evolving to
stay relevant. You need to do a mind shift that at first seems completely alien.
A mind shift that seems so very wrong.

In the end I just let go, took the rough with the smooth and worked at seeing what all the fuss was about. And here's what I found out.

The focus of the whole team shifts to quality

  • You will become the quality expert. You will no longer be the person who
    tests just at the end
  • You may need to devise tests with little to no formal documentation…fast
  • You will need to feed back your test results rapidly
  • You will need to be confident, vocal, capable, responsive and communicative,
    often taking charge and leading on quality
  • The rest of the development team will come to you for early feedback on their
    tests and code

You will bridge the gap between the business and the techies

  • Your role should now mean you liaise closely with the customer. You will need to
    adopt a customer satisfaction role
  • You will help to define the stories and acceptance criteria – these will become your tests and guidance, so your input is essential
  • You will have to report findings about quality to the customer and
    stakeholders… fast, timely, accurately and with diplomacy

You will need to put your trust in the Product Backlog

  • Traditional projects with 100 requirements often end up delivering a large
    percentage of that 100, but with poor quality, misunderstandings and incomplete features
  • Agile projects with 100 requirements at the start *may* end up delivering only 60. But these will be complete, exactly what the customer wanted and, of course, of superb quality.
  • This original number of 100 may grow and shrink with changing markets and business decisions. Trust the backlog.
  • The customer will define and decide the next sprint of work for your team.
    • You will simply advise, manage expectations and communicate
    • This is a tough one – letting the customer decide what to do next….
  • You will need to consider the longer term and bigger picture, but your main focus
    is the sprint in hand

You will need to increase your exploration and automation

  • You will need to replace the tedious, checklist type manual tests with automation if possible.
    • Your regression suite will get too large unless you make the most of
      automation and get the basics covered.
    • The only other option is to hire a load
      of undervalued and demotivated testers to simply 'checklist' basic
      functionality.
  • Your automation should be integrated with the continuous integration and
    automated build deployments (see the sketch after this list).
  • Elisabeth Hendrickson summed up agile testing very nicely indeed (taken from
    her ruminations blog – http://testobsessed.com/):
    • Checking and Exploring yield different kinds of information.
    • Checking tells
      us how well an implementation meets explicit expectations.
    • Exploring reveals the
      unintended consequences of meeting the explicitly defined expectations and gives
      us a way to uncover implicit expectations. (Systems can work exactly as
      specified and still represent a catastrophic failure, or PR nightmare.)
    • "Checking: verifying explicit, concrete expectations"
    • "Exploring: discovering the capabilities, limitations, and risks in the
      emerging system"
  • One side effect of increased exploration is that you need to think about how
    you will capture and manage the test information it generates.
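As a rough illustration of the 'wire your automation into the build' point above, here's a minimal smoke-check sketch that a CI server could run against a freshly deployed build. The health-check URL and the exit-code convention are assumptions; substitute whatever your own deployment and CI tool actually expose.

```python
# A minimal, hypothetical smoke check for a CI pipeline. The URL below is a
# placeholder for whatever health endpoint your deployed build offers.

import sys
import urllib.request

SMOKE_URL = "http://localhost:8080/health"  # assumed health-check endpoint


def smoke_check(url=SMOKE_URL, timeout=5):
    """Return True if the freshly deployed build answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    # Exit non-zero so the CI job fails the build when the check fails.
    ok = smoke_check()
    print("smoke check passed" if ok else "smoke check FAILED")
    sys.exit(0 if ok else 1)
```

The value isn't this particular script; it's that checks like it run unattended on every build, so the basic 'does it even start?' questions are answered before a tester spends any time on the sprint's new work.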

You will need to drop the concept of test case preparation and spec analysis

  • It's unlikely you will get a detailed spec.
  • The acceptance criteria become your test cases and design.
  • The software becomes the UI design.
  • If you must write a test plan, plan for the sprint only.
    • Don't assume you
      know how or what you will be testing in three sprints time.
  • Prepare to be dynamic in your tool selection, your approach and your thinking about testing. You may need to
    change your tools to cater for new information.
    • Don't be too prescriptive.
    • Add a
      quality toolsmith to your team. They will save you a fortune in the long run.
    • Invest time in researching free, open source or cheap tools.
    • The more tools you know of, the more likely you will be able to respond to changes.
  • Don't even consider what are supposedly Best Practices.
    • Do what is right for your team, on that
      project and at that moment in time.
  • Trust me, letting the stories and software guide the UI and design is a
    revelation. It's just tricky changing your mindset to accept this.

You will need to get over the defect stats and metrics obsession

  • Working software is fundamental. It's the end goal.
    • Each sprint you aim to deliver software of a releasable standard that meets the
      acceptance criteria.
    • So along the way there is less emphasis on raising and
      recording every single defect in a tracking system.
    • It's more about shouting
      over to the programmer and getting it sorted between you.
    • Look at low tech dashboards as a way of reporting metrics
  • Defects that relate to the acceptance criteria and story under test mean the
    story is not done (even if it has been coded and the programmer has moved to a new
    story).
  • Defects are no longer used to cover our backsides or blame other people.
  • Defects that aren't related to the story should be on the backlog, where the
    customer can prioritise.
    • After all a defect is a piece of functionality that
      either exists and shouldn't or doesn't exist and should.
    • Let the customer decide
      what to do with them.
    • They may be less (or more) important to the customer than you think.
  • If you truly must report then this needs to be done in the lightest way
    possible. And my guess is that if you really are having to report each and
    every defect encountered, along with test case metrics and stats, in a formal way then someone in the
    process/system has not truly bought in to agile.
  • Note: I'm not saying be slack with defect tracking and reporting.
    • Far from it, if you need to put a defect on the backlog for the customer then you need to consider how you will describe this successfully for that audience.
    • When shouting to the programmer it's often easier as you can show them the defect in action. 
    • The people you report to, the information you report and the way you report it changes.

After getting my head around these differences and new concepts I noticed a few unexpected side effects:

  • My passion for software testing was re-ignited
  • I was being consulted far more on quality issues, meaning I spent less time complaining and raising obvious bugs after the software was dropped
  • I started to use my creativity and critical thinking in a rapid and responsive way, rather than testing a spec and thinking of a few edge cases up front.
    • I was being engaged and used for my creativity, skill and critical thinking
  • I started to work in teams where the whole team valued quality rather than an 'over the wall' mentality.
  • I noticed that the customers were far happier with the process. They were getting to control the focus of the work and ending up with software that meets their needs at that moment in time, not the software they thought they wanted 6 months ago
  • I lost a huge amount of negativity and became more positive, motivated and accommodating.
  • I spent far less time sitting around after raising a barrel load of defects.
    • I no longer waited for the triage, fix, build inclusion, release, retest, close cycle.
    • I got them fixed asap, released asap and retested asap.
  • My job didn't feel futile. I felt I was adding value.

Now I know there are people who are frustrated with agile, and there will be teething
problems and issues for all new teams. And agile really may not be suitable for all types of work, but there are certainly some awesome principles and techniques we can all learn from it.

If you have any agile testing stories to share then please let me know
in the comments.