No Test Case, No Bug

A while ago I remember Phil Kirkham mentioning that he’d found a bug that “fell” outside of the Test Cases he’d been given and someone was arguing that it wasn’t a bug. I found it incredibly interesting that someone would dismiss a serious bug because it was not found as part of a test case.

The test case is an abstract, it’s something we create to guide us (mostly), yet too many people take the test case to be the testing. The testing is a human process. The test case is a guide, an artefact, a crutch, something we may *have* to create, a container of information, a checklist or any other supporting element to our testing. The test case itself…is not testing.

About a year or so ago I wrote a short eBook on the problems with Testing. I set the main character, a Test Manager, in a company that valued process over actual results. That process-over-results culture was similar to what I believe Phil saw.

This was originally going to be a decent sized blog post about process and test case management. Instead I thought I’d spend 5 minutes hacking together an Xtra Normal video to see if I could get the point across with animated characters instead (I also wanted an excuse to have a go at an Xtra Normal vid). (BTW – The Evil Tester did an Xtra Normal video here : http://www.eviltester.com/index.php/2010/03/03/hey-you-are-you-a-tester-the-m…

My video is by no means an award winning video, but it was actually quicker than writing out my thoughts. I think it conveys a similar meaning.

I’ve always wanted to be a film director too. I’ve got a lot of work to do to make that a paying reality 🙂

http://www.xtranormal.com/xtraplayr/13134755/no-test-case-no-bug

Estimations, Approximations and a Million Dollars

I’m loving this YouTube channel explaining cool and interesting scientific stuff : Minute Physics

I especially liked this video on how to weigh a million dollars with your mind. It struck a chord with me because it reminded me of how we sometimes come up with estimates for unknown future features or capabilities.

http://www.youtube.com/watch?v=-zexOIGlrFo
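The gist of that estimate works as a quick back-of-the-envelope (Fermi) calculation too. A minimal Python sketch, assuming the commonly quoted figure of roughly one gram per banknote:

```python
# Fermi estimate: how much does a million dollars weigh?
# Assumption: a banknote weighs roughly 1 gram (a commonly quoted figure).

def weight_of_cash_kg(amount, denomination, grams_per_bill=1.0):
    """Approximate weight in kilograms of `amount` made up
    entirely of bills of the given denomination."""
    bills = amount / denomination
    return bills * grams_per_bill / 1000  # grams -> kilograms

print(weight_of_cash_kg(1_000_000, 100))  # 10,000 bills -> 10.0 kg
print(weight_of_cash_kg(1_000_000, 1))    # a million singles -> 1000.0 kg
```

Rough numbers, crude model, but good enough to sanity-check an answer, which is exactly the skill we lean on when estimating unknown features.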

 

Volunteering – a good way to learn Testing

I often get asked by people new to Software Testing what the best approach to learning more about Testing is.

Is it certification, books, blogs or other courses? These are the usual categories that get listed.

I very rarely hear people ask whether “practice” is a good approach to learning. I think this stems from many people thinking that “practice” and “work” are the same thing. That one cannot exist without the other.

How can I get experience in Testing unless I have a job Testing?

I think this view is out-dated. I practice my testing outside of work. I practice my testing inside of work. If you want to get good at something… practice.

Over the last year I’ve been working with a great organisation who are building wonderful technology, mainly aimed at people without access to the basic infrastructure that many people take for granted. It’s got a great user base and is making a positive change to many people’s lives.

Over the last year I have been trying to organise some Testing for this organisation. The group of Testers I’d gathered are great; a mixture of domain experience, general testing experience and a few new to the scene.

Last week it all came together and hopefully this year we’ll start adding a load of value for our client. And all of this is free, outside of work and purely voluntary.

My stock response for the last few years to the question “How can I learn more about Testing?” has been “Volunteer your time”.


There are so many organisations out there building software that *could* benefit from your enthusiasm and skills.

You’ll potentially learn loads about testing, working with people, time management, commitments, decision making, technology, reporting/articulating your testing and how to work within constraints.

It’s all great for your CV.

So how to do it?

  • Find an organisation that you feel some synergy towards.
  • Get in touch with them and volunteer your time and energy.
  • Be sure to set expectations about levels of testing, time and experience you are offering.
  • If you need a more experienced person to help you get it started then head to The Software Testing Club and ask for help, or drop me a line.
  • Persevere – communication can often be tough, expectations may need aligning and relationships will need to be built.
  • Commit and deliver on your promise.
  • Ask for help in the community if you get stuck or you realise there is too much work for you.
  • Document and audit your learning. (any good interviewer will ask you about your experience)

There are so many organisations, open source projects, charities and not for profits that would benefit from some volunteer help….so what are you waiting for?

Image courtesy of wblj – http://www.flickr.com/photos/imageclectic/

Visualising Changes in Your Product

There are so many ways of working out what is changing in your product. As a Tester I look for this information from any source. It helps me filter my test ideas.

One of the tools we’ve been exploring recently here at NewVoiceMedia is a cool tool called Gource. One of our programmers Wyndham has been experimenting with it and been tempting me with the insights this tool could offer our Testers.

There simply isn’t time to test everything so any technique or information source that will help me filter and target my Testing is very welcome.

Gource is a tool that “visualises” SVN check-ins. We can visualise where the check-ins are happening, with what frequency and with what intensity. We can get access to check-in information anyway, but what this tool gives you is a timeline of change and that all important visualisation.
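Gource reads version control history (for SVN, typically the output of `svn log --xml --verbose`). As a rough illustration of the same underlying idea, using check-in data to target testing, here is a hypothetical Python sketch that tallies changed paths per component from that XML shape. The log content below is invented:

```python
# Tally SVN check-ins per top-level component from `svn log --xml --verbose`
# style output. The repository paths and authors are invented sample data.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE_LOG = """\
<log>
  <logentry revision="101">
    <author>alice</author>
    <paths><path action="M">/trunk/billing/invoice.py</path></paths>
  </logentry>
  <logentry revision="102">
    <author>bob</author>
    <paths>
      <path action="M">/trunk/billing/tax.py</path>
      <path action="A">/trunk/reporting/summary.py</path>
    </paths>
  </logentry>
</log>
"""

def checkins_per_component(log_xml, depth=2):
    """Count changed paths grouped by their first `depth` path segments."""
    counts = Counter()
    for path in ET.fromstring(log_xml).iter("path"):
        segments = path.text.strip("/").split("/")
        counts["/".join(segments[:depth])] += 1
    return counts

print(checkins_per_component(SAMPLE_LOG))
# -> Counter({'trunk/billing': 2, 'trunk/reporting': 1})
```

A heat map like this won’t tell you *what* changed, but it’s a cheap first filter for where to point exploratory testing.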

Here’s a cool vid of Gource in action for Flickr:

 

http://www.vimeo.com/11876335

Flickr/Gource from Daniel Bogan on Vimeo.

So we’ve been playing with this and straight away I can see areas of code check-in activity that aren’t obvious from story cards or other workflow tools.

We’re going to be exploring this tool more over the next few months to see what insights it can give us.

I’ve also been having crazy ideas about how I can use the SVN check-in process and Gource for visualising where we are targeting our exploratory testing. We could see how much testing is going on around which component…..but I digress. I’ll explore that and let you know.

Here’s Wyndham’s blog on how to get Gource set up and running.

 

Observing to help our Testing

Whilst on the train a few months back I spent some time observing how people were using technology.

Some were using the tech as I assume it was intended, some were “street hacking” the products, whilst others had adopted unique ways of utilising technology (and other devices) to fit the context they found themselves in.

I often wonder how effective testing can be (and overall design and development) when we have little insight or understanding of how our applications/systems are used in everyday life.

We can make sure it’s “functionally complete” or “performant” or “meeting requirements” but that doesn’t mean we’ve helped to create a great product right for the audience it’s aimed at.

I think Testers can help greatly in this respect, both by asking questions at all stages of the design and build process and by getting out and seeing end users in situ (if you can).

Many of us assume, predict or speculate about how our end users are using the software; it’s often all we can do, but I wonder how much of this speculation is accurate. I often hear it said that Testers “pretend” to be the end user, which is fine if you know who your end user is and in what context they will use your product, but this approach should never be treated as an absolute point of view.

No-one will ever be able to use your applications in exactly the same way as your real end users, but we can certainly work hard to get close to it.

Here are some of the things I observed:

  • Several people were using multiple mobile phone handsets. Work and Play? Feature versus Feature? Or simply taking advantage of the best tariff? 
    • What about data on each phone? Did it need to be synched?
  • One person was swapping sim-cards in and out of one handset to make use of the best tariffs.
  • Two people were using a technique called “flashing” or “beeping” to communicate with someone else via mobile phone. (flashing is where you ring a phone and then hang up before they answer. There are all sorts of social norms growing around this practice – good article here: http://mobileactive.org/mobile-phones-beeping-call-me)
  • One person was trying desperately to get a good photo out of the window on his mobile phone whilst the train was moving. I’m not sure he was happy with any of them. Misplaced expectations?
  • One lady was using a USB expansion device blue-tacked to her laptop lid. I “assume” she was doing this to expand the capacity and/or maybe to reduce the overall width of the machine (the two USBs she was extending were bigger than the expansion USB itself). Street hack? Do we need to test any usages like this (i.e. overloading the original intention)?
  • The person next to me complained that his phone was running out of power too quickly. He’d been playing Angry Birds for the entire journey whilst using the phone to play music.
    • Expectations of battery life needing to be longer to support modern multi-usage?
    • How did both apps perform?
    • Which one used the most power?
    • Should we be thinking about how much power our apps consume on mobile devices?
    • Can we realistically even measure it?
    • Should we rely on the underlying platform to manage such usages?
  • Many of us live in an age of “smart phones”. How does this “one for all” device fit with other electronic devices like eBook readers, Tablets, Laptops etc
    • Can we read something on a Kindle and start reading where we left off on an iPad at home or Laptop at work? 
    • Can we seamlessly share information between devices?
    • How is our information distributed, stored, secured, managed, protected? Do we care?
  • Another lady caused minor fits of giggles as she took out her “early designed” portable DVD player. It was pretty huge by modern standards. She proceeded to play episodes of Friends on it as she slept for the journey. Old tech and usages are a major challenge for many Testers.
    • At what point do we stop supporting old tech or old versions of our software?
    • Do we need to test all versions?
    • Are there techniques for testing many platforms/versions at once?
    • At what point does this old tech stop being “acceptable” in society….which society?
    • How do we know what versions our customers use?
  • One man was reading his Kindle on the train. He had created a hook for his Kindle. The hook was made from a coat hanger. This hook was suspended around the ceiling hand rail so he could essentially suspend the Kindle at eye height. At the bottom of this bizarre Kindle concoction was a ribbon so he could stop the Kindle swinging around with one of his hands. Street Hack?
    • Could we have ever envisaged it being used this way?
    • Would it have changed the way we tested or designed or implemented?
  • Whilst on the train it is inevitable that the phone signal will come and go. With bridges, buildings and shady coverage it would be ambitious to assume total connection for extended periods. Yet I observed a number of people “hating” (i.e. swearing and getting annoyed with) their devices when they lost connectivity.
    • Two people physically threw their phones on the tray table when they lost a voice signal. Others uttered offensive and angry rants at their devices.
    • Yet the problem existed outside of the device or software in use; it existed in the infrastructure. Are expectations outgrowing reality?
    • The infrastructure no longer supports the norms and expectations of device usage. This is an interesting challenge for those creating devices that rely on technology infrastructure like broadband and mobile networking.
    • What’s the minimum connectivity? What’s the risk of it going down? What happens if it does go down? Does it recover? Do we lose data? Does it matter?
  • One person was drinking heavily whilst attempting to use a Smart Phone keyboard. I won’t repeat what he said, but he struggled to type his SMS. One interesting point he made was that he used to be able to type whilst drunk on his old phone. Loss of capability? Extreme contexts? Trustworthy observation?
  • Two people were “chatting” to each other via their mobile phones (maybe Skype or other chat system). They were sat opposite each other.
    • Are cultural changes in communication reflected in your products?
  • I was sat in a quiet carriage. As usual there were a few complaints and confrontations about noise. One of which was about the noise being made by the keyboard of someone’s laptop.
    • At what point do we pass social thresholds of acceptability? Do we need to do more Inclusive Design?
    • Should we consider more extreme contexts when testing or write these off as edge cases?
    • Could we ever imagine all of the contexts our products may be used in?
  • I kept turning on my wireless hotspot on my phone to sync my field notes (Evernote) as the train stopped at each station. This raises questions of synchronisation, offline working issues, data storage on the cloud and a whole host of privacy issues. There are some interesting examples of how people are using tools like Evernote in the wild.
    • Do you have stories of how people are using your applications?
    • Do you actively seek stories from end users?
    • Do you use this data to build user profiles and system expectations?

Does this really have anything to do with Testing?

There are a few people starting to talk about social sciences in Testing, and I believe the application of many areas of social science will grow and diversify as Testers seek to find out more about people and technology, and how our products can find the right audience(s).

Social sciences can give us a deep insight into ourselves, culture, mass media, communication, research, language and a whole lot more.

Observing people is just one element of research. Research will give you insights, clues and information about the things you research.

Observations and research will help you to make decisions on what to focus on, and what to overlook.

Testers are also natural sceptics; we should help to experiment, prototype and challenge simple assumptions and social categorisation (e.g. all young people use Twitter, wear hoodies and listen to hip-hop) *

Observing people in everyday life is often the biggest eye opener for any Tester wanting to learn more about people and tech.

Why not have a go? Focus on the world around you. I bet you’ll see stuff you never saw before. I bet you’ll see people using tech in ways you’d never seen before. I bet you’ll learn something new.

The challenge is how you can bring what you learn to your Testing.

As usual, if you have a go, let me know what you think.

* This is a real generalisation I heard someone say at a technology, culture and local council meeting!

Under Test


Always good to test systems, but testing the live system….for a long time…….at peak bus journey time…….at a busy bus station? Panic ensued.

Yet in this instance, when people were desperate to see what time the buses were running, mobile technology became the hub of activity. Two others with mobile internet and I were tracking the bus times. As more people gathered to find out the times, this in turn attracted even more people.

A few thoughts from this:

At what point do systems we’ve traditionally relied on become obsolete, and what do we replace them with? (we can’t assume everyone will have mobile internet)

Why is testing of these types of systems done during peak time (assuming that the message is honest)?

At what point do we improve the service we offer so we no longer need to track bus journeys and estimate the time of arrival? (I’m thinking of the Japanese train system, which is legendary in its speed, accuracy and reliability – but I guess they still track train times???)

Why, when systems are taken down, are alternatives not put in place?

As testers how can we “test” in live whilst still maintaining a service?

And a million more questions that are flying through my head.

 

Isn’t it better to try and change something than settle for mediocre?

At EuroSTAR last week it was sad to see a “them versus us” culture still thriving in the software development community. I thought things were changing, especially with the onset of Agile heading mainstream, but it seems not.

I got embroiled in a conversation which stole an hour of my life. An hour in which I heard the virtues of “them versus us”. An hour in which this “Test Manager” extolled the positives around an “Independent” test team, who “distrusted” everyone and treated programmers with “contempt”.

It boosted Testers’ morale, apparently. It made the team function as it should: as a separate, impartial and hated department. A department that would ruin projects. But it was never the Test Manager’s (or the team’s) fault; it was the project team’s or management’s.

I got the following vibe:

 

The Testers were frightened of the Management.
The Management didn’t like the Programmers or the Project Team, though they could live with the Testers.
The Programmers were indifferent to the Project Team but were terrified of the Testers and hated the Management.
The Management were seriously affected by the Programmers’ terror of the Testers.
The Project Team were nervous of the possibility of a Management – Tester alliance, spurred on by the indifference of the Programmers, and they shared everybody else’s dislike of the Management.
Or something like that.

Releasing software seemed to be a constant struggle for this chap. Testing was always an afterthought.

This was a scarily common theme and the blame was always put on other people.

Is change that difficult?

Isn’t it better to try and change something (relationships, approach, team, people, environment, structure, etc), than settle for mediocre? What are your thoughts?

EuroSTAR roundup

It’s been a mad few weeks so I’m looking forward to getting back to normality again, both at work and at home.

Here are some thoughts from EuroSTAR 2011:

I was attending EuroSTAR 2011 with a different lens on my views as I sought out fresh and interesting glimpses of where we are heading as an industry. I was sadly disappointed. There were some interesting things happening but it was mostly a very common story.

A theme seemed to emerge from the event around the future of testing, with both Gojko Adzic and James Whittaker suggesting there would be no testing phase, James even going so far as to say there would be no testers either.

Other than that, it was business as usual. Metrics, certifications, Best Practices, Agile Testing and an interesting “people” theme too. Nothing too controversial and an all round good conference, but very little to really inspire me that our craft is changing.

I mostly agreed with both Gojko and James in their prediction of the demise of Testing. It became so talked about that Paul Gerrard organised an open forum one evening to discuss where Testing is heading. It seemed, though, that all of this talk about the future of testing relied on us all having a unified agreement of what Testing actually was. And you know how hard that is.

There were a few things though that gave me great insight and hope that we are still changing.

  • Michael Bolton was talking about dashboards and reporting for Exploratory Testing. 
  • People were aware of what Exploratory Testing was and many were practising it.
  • Agile wasn’t as scary to many as at most Testing conferences.
  • Adam Knight was talking about Specification by Example and people were intrigued.
  • uTest were talking about 10 emerging technologies to change testing. This talk was the only one that felt like it really shone a light on the future of testing. I mind mapped it here (and below).
  • There were one or two cloud test tool vendors who stood out for pushing the boundaries of tools and their uses. SOASTA and CloudFlex (by Intechnica) were two highlights for me. 
  • There were a growing number of Software Testing Club members at the event.
  • The Testing Planet feedback was immense. Many thanks.
  • The evening socials were busy with people talking about the conference.
  • The Test Lab was there.
  • There was a talk on mind mapping.

I actually walked out of one talk with a number of other people, because of the loose comments being made, the assumptions being driven from some research and because it had a general feeling of us versus them (test v programmer). It felt wrong. But hey, it seemed like a popular session for many.

I actually added some mind maps of the talks here for public consumption:

http://xmind.net/share/rob_lambert/

 

There was a common thread of being embarrassed about being a Tester, and a general need to prove yourself with little consideration for the team. I just wish we’d stand proud of what we do and show respect for the industry and the teams we work in. It sometimes feels like we just concentrate on the “good old days of Waterfall” and the stereotype we have created for ourselves. There is vastly more going on in the industry than many even realise. Wouldn’t it be great to stand tall and talk about it, share it and learn from it?

One thing became evidently clear from EuroSTAR: the future is mixed and uncertain and unknown – but we knew that already. That’s no different now to one year ago or one year from now. But I genuinely believe that in the future we will rely more and more on communities of Testers.

 

It was a good conference though, and it felt more balanced than last year, but it still felt heavy on metrics and best practices. And what an excellent choice for next year’s Programme Chair – Zeger Van Hese – awesome choice in fact 🙂

Gravitate to people like you

I believe that one of the biggest mistakes a Hiring Manager (Test Manager etc) can make for a team is to hire in people with the same set of views and opinions. I’m not talking about “Yes” people who don’t have the confidence or inclination to disagree. I’m talking about people who are pretty much in line with your thoughts and thinking, or don’t have any particular views they want to air.

It’s natural to gravitate to those who support your ideas. It’s natural to want support for your ideas and respect for your approach; but that’s only valuable if your approach and ideas are right.

Instead I think it’s important to hire in people who are in alignment with your Goals and Objectives, but don’t necessarily hold the same views and opinions on how to get there. Sure, you need some form of alignment in your approach, but to have no criticism, no objections and no discussions about the ‘right’ way is to fall into the trap of Group Think. To have a team incapable of thinking up new and interesting ways to solve problems is to create a group who will forever need your guidance. And is that really what you want?

So when recruiting a team it’s always a challenge to take a step back from your point of view on Testing and hire in the “right” people for the job. People who have a point of view, but have the personality to discuss constructively.

I worked somewhere where the Team Lead said something along the lines of

 

“We tell them what to do, and they will do it”.

 

I disagreed with this approach then, and I still do now. As a manager (or Leader?) it’s important to be able to ask your team for their points of view, thoughts and opinions.

Each team member will have a different view, a different perspective, different experiences and skills and a different way of thinking about a problem. Truly diverse and creative solutions will emerge (assuming you have an environment that supports that process). Diversity is a good thing. Right?

I think it was Adam Knight who said on Twitter the other week that he is struggling to find Testers with diverse enough skills and thoughts. I absolutely agree. It’s one thing to standardise yourself to conform to the mainstream testing industry (a stereotype?), but for many jobs now the fact you have something different to offer is a positive trait.

So my advice is don’t hire people who are just like you; seek diversity.

And as a Tester, don’t be just like everyone else.

Of course, I may not be right at all and may spend all of my day arguing…time will tell. 🙂

Cloud as a Test enabler

One of the interesting changes I see in the Testing industry is that many new companies, with newly formed Development teams (i.e. Programmers, Testers, Product etc), are automatically looking to the cloud for Testing solutions and tools.

It’s a natural process as many of these companies often power their entire infrastructure through Cloud tech. It’s an interesting change to a market which I expect to keep accelerating as teams look to cheaper, more flexible and more process agnostic tools to aid testing.

I have seen this happening for a number of years now and even see the same trend happening in the adoption of the Cloud product I test on. It seems new companies are using Cloud as an enabler and so automatically look to the Cloud for most of their software stack.

There could be a number of reasons for this but I suspect some of the main reasons are cost, the ability to scale, the freedom to change quickly and to also enable the formation of a distributed team. A central office is becoming less common and some companies are picking the “best” people for the job, rather than the “best” people within 20 miles of the office.

Reaching out to Free and Open Source software is also trending at the moment as companies strive to achieve their Testing goals with a number of considerations in mind: ease of use, support from the community, reliable SLAs from commercial companies supporting Open Source, short term tie-in, process agnostic solutions, niche tool solutions and a general lowering of overall operating and purchasing costs. Some of the most exciting, interesting and useful products come from the community.

It’s also worth bearing in mind that with cloud you are buying a service, which is considered an operating cost (i.e. OpEx), and not a product to add to the capital expense list (i.e. CapEx).

OpEx versus CapEx is an over-simplified way to frame the difference between cloud and an outright purchased tool, but it does have a number of positive financial points. A good article on the costing can be found here. Well worth a read if you’re interested in the financial side of Cloud.
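To make the OpEx versus CapEx point concrete, here is a toy Python sketch of the break-even arithmetic. Every figure below is invented for illustration:

```python
# Toy break-even arithmetic for OpEx (subscription) versus CapEx
# (up-front licence). All figures are invented for illustration.

def breakeven_months(licence_cost, annual_maintenance, monthly_fee):
    """Return the first month in which cumulative subscription fees
    exceed the up-front licence plus pro-rated annual maintenance."""
    month = 0
    while True:
        month += 1
        capex_total = licence_cost + annual_maintenance * month / 12
        opex_total = monthly_fee * month
        if opex_total > capex_total:
            return month

# A hypothetical £12,000 licence with £2,400/year maintenance,
# against a £500/month subscription:
print(breakeven_months(12_000, 2_400, 500))  # -> 41
```

In this made-up scenario the subscription is cheaper for over three years; the real decision also involves scaling, cancellation and cash flow, which a toy model like this ignores.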

There are many companies reaching out to more flexible and cost effective solutions, predominantly hosted in the cloud and offering a SaaS pricing and adoption model.

Test Management players Testuff and Practitest instantly spring to mind, along with a host of bug tracking solutions and services like Pivotal Tracker and Fogbugz.

NOTE: There are plenty of other providers too. There is a growing number of tools listed in The Software Testing Club wiki and a growing number of companies listed in The Testing Planet directory too.

Cloud technology has enabled these companies to provide a service at a cost effective price point, which, when backed with a SaaS payment model, means Test teams can scale up or down when they need to. The tools are also flexible, offer data storage online and integrate well with other cloud based systems. These tools often have fewer features than the mainstream on-premise or client based solutions, but this too is often a positive selling point.

Many big vendors are stuck with large software products requiring on-premise kit, all bundled with large licence fees. They are unable to roll out new features to their products very easily, and the turnaround between customer requests and new features in the systems can be measured in years, not days. Flexibility, change and adaptive processes are the key to success for many businesses, which is why the free, Open Source and/or Cloud based products are gathering momentum.

That’s not to say the mainstream players aren’t good or successful; there are many Test teams using these tools very effectively. I’m just observing a trend. A trend I suspect will be played out in some of the offerings we will see at EuroSTAR next month in Manchester. We’re already seeing some excellent cloud based tools and systems disrupting the market.

Not all teams can take advantage, or want to take advantage of cloud though. Existing hardware and in-house processes that work are good reasons not to jump.

But if you’re starting from scratch, wouldn’t the cloud be an obvious choice?

Let me know your thoughts on this. I’m obviously fairly biased on this through the product I test and the markets I work in.

Is cloud an enabler?
Do you use cloud products?
Are they cheaper?
Are they more flexible?
What are the downsides of the cloud?

Push The Button

During a conversation with a group of testers at an event I soon found myself outnumbered in my views around “enhancements” to the product. I was the only one who saw a Tester’s role as more than just verification.

I was a little amazed at how this group of Testers (or shall we call them Checkers?) narrowed the focus of their roles down to simply following a script, clicking buttons and ticking boxes. They were also incredibly passionate about not suggesting improvements to the process or product. It’s not so clear to me though where the line is between enhancement and bug. It’s not always easy to categorise.

Through sheer persistence and the use of persuasive language and examples of real bug stories I convinced them that there was more to Software Testing than checking. I also bought them all a drink, but I don’t suspect that had anything to do with it. 🙂 I’m not saying they weren’t doing a good job, far from it, but to not look at the wider context around your checks is to ignore the purpose of the system.

I promised to blog about the things we talked about, and here it is, about a year later than it should have been.

This post stemmed from a point one of the guys made: “A button is a button. It’s the same in every system. We test it does what the spec says it should do.”

At a basic level this is absolutely correct, but to look at a button with this narrow focus is to miss the point of Testing (as I see it). The button is just one action, or element, or interaction in a much bigger process or system. It may fire off something or trigger an event but the interesting questions for me are:

  • Why would I click the button?
  • What are my expectations?
  • Do I have any choices around clicking the button?
  • Will I like what happens?
  • How will you encourage me, force me or convince me to click the button?

And a whole lot more. To ignore the context of the button is to not question the system it lives in. To check the button does what the spec says, is to put too much trust in the spec. So I blurted out a few ideas about testing, which actually cross over in to design. I’ve included as many as I could remember here in this post. Some may be helpful for you, some may not.

Is the button accessible to all users?

  • Should it be?
  • Do certain users get more or fewer buttons? Why?
  • For example, admin users or permissions based systems.
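The permissions example above can be made concrete with a minimal sketch of role-based button visibility. The roles and button names here are invented:

```python
# A minimal sketch of permissions-based button visibility.
# Button names and roles are invented for illustration.

ALL_BUTTONS = {
    "save":   {"user", "admin"},
    "export": {"user", "admin"},
    "delete": {"admin"},  # destructive action: admins only
}

def visible_buttons(role):
    """Return the sorted list of buttons a user with `role` should see."""
    return sorted(name for name, roles in ALL_BUTTONS.items() if role in roles)

print(visible_buttons("user"))   # -> ['export', 'save']
print(visible_buttons("admin"))  # -> ['delete', 'export', 'save']
```

A Tester asking “should it be accessible?” would probe both directions: buttons missing for admins, and (worse) the delete button leaking through to ordinary users.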

 

Why would someone click this button?

  • Are there other choices?
  • What will drive someone to click this button?
  • What happens in the grand scheme of things if they don’t click it?

 

How will your users “feel” before and after clicking the button?

  • Do you know?
  • Does it matter?
  • Can you redesign to cater for this “feeling”?
  • For example, submitting your tax returns will give your users a different feeling to submitting a newsletter subscription form.

 

Is this behaviour monitored or detected or regulated?

  • Does this change the button form?
  • Does this change the behaviour?
  • Who detects it and why?
  • Are there privacy issues or data protection concerns?
  • What is done with this data?
  • For example, audit trails in systems.

 

If your users access the site via a mobile device, does the button still have the same look and feel?

  • Does it still do the same thing?
  • In the same way?
  • Should it?

Do previous choices or data entered alter the button choice and action?

  • Should it alter it?
  • Could it alter it to get a better outcome for all?
  • When does it alter it, and when doesn’t it?

 

Does the button have the potential to do damage and therefore require more consideration or a confirmation?

  • Status updates?
  • Sending personal details?
  • Firing missiles?
  • Turning off life saving machines or devices?

Do you need further steps in the process or is clicking this button enough?

 

Is it possible to design the overall page and the button to deter or encourage certain behaviour?

  • For example, to eliminate the “donkey vote”, candidate voting lists are often randomised.

Does anything surrounding the button affect the choice or decision?

  • Should it?
  • Could it?
  • Can we affect choice by anchoring adverts or information or other users choices?
  • For example, peer review scoring on sites like Amazon

Could the button or process be disabled but visible to encourage upgrades, add ons or extra steps?

  • Should it?
  • Does it fit with the business model?
  • Would your users appreciate this?

Is the action the button performs pleasurable for the user? Or negative?

  • Could this process be sweetened if negative?
  • Or rewarded if good?

Is the button consistent with other buttons in the process or system?

  • If not, why?
  • If yes, should it be?
  • Would it be better with a different design, to stand out maybe?

Does it matter whether you are the first person to click this button?

  • Do you get extra features?
  • Do you disable a process for other users?
  • Is the outcome clearly communicated?
  • For example, one time downloads or data collections.

Can you click the button more than once?

  • Why?
  • How?
  • Does it function consistently?
  • How does this affect the system or data or process?
  • For example, submitting a form more than once
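One common server-side guard against the “submitted twice” problem is a one-time token carried by the form. A minimal sketch (the class, method names and messages are all hypothetical, purely to illustrate the idea):

```python
import uuid

class FormHandler:
    """Sketch of server-side protection against duplicate submissions:
    each rendered form carries a one-time token that can be spent once."""

    def __init__(self):
        self.issued = set()   # tokens handed out with rendered forms
        self.orders = []      # stand-in for the database of accepted submissions

    def render_form(self):
        token = str(uuid.uuid4())
        self.issued.add(token)
        return token

    def submit(self, token, payload):
        # A token can be spent exactly once; repeats are rejected.
        if token not in self.issued:
            return "duplicate or invalid submission"
        self.issued.remove(token)
        self.orders.append(payload)
        return "accepted"

handler = FormHandler()
token = handler.render_form()
print(handler.submit(token, {"item": "book"}))  # accepted
print(handler.submit(token, {"item": "book"}))  # duplicate or invalid submission
```

Testing around this means asking when the token is issued, whether the back button re-uses it, and what the user sees on the second click.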

 

Does the button support multiple users?

  • Can more than one user click their own button at the same time?
  • At different times?
  • At all?

Can you double click the button?

  • What does it do?
  • Should it be possible?
  • Could you deter this behaviour with warnings or informational text?
  • For example, some users click hyperlinks twice.
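One way to deter the accidental double click is debouncing: ignore a second click that arrives within a short interval of the first. A toy sketch, with the clock injected so the behaviour is deterministic (the interval and names are illustrative, not from any particular framework):

```python
class DebouncedButton:
    """Sketch of debouncing: ignore a click arriving within `interval`
    seconds of the last accepted click."""

    def __init__(self, interval, clock):
        self.interval = interval
        self.clock = clock
        self.last_accepted = None
        self.actions = 0

    def click(self):
        now = self.clock()
        if self.last_accepted is not None and now - self.last_accepted < self.interval:
            return False            # too soon: treated as an accidental repeat
        self.last_accepted = now
        self.actions += 1
        return True

# Simulated clicks at t=0.0s, t=0.1s (accidental double click) and t=2.0s
times = iter([0.0, 0.1, 2.0])
button = DebouncedButton(interval=0.5, clock=lambda: next(times))
print([button.click() for _ in range(3)])  # [True, False, True]
```

Whether the second click should be silently swallowed, or acknowledged with informational text, is exactly the kind of design question above.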

Is the design of the button modern, or old?

  • Can it be skinned?
  • Does it look old for a reason (functional, to encourage upgrades, etc.)?

Do you need supporting help text?

  • Is it obvious it’s help text?
  • Can it be on or off?
  • Or on all the time?
  • Is it obvious the text is relating to this button or process?
  • Is the help system being designed against a recommended compliance standard, like the Elmer guidelines for form design?

Can you create an emotional attachment when the user clicks on the button?

  • Can you get buy in, repeat business or a brand fan?
  • Do you want to?
  • Do you need to?
  • For example, the Like button on Facebook

What action does the button perform and is that consistent with what your users want to do?

  • Submit, next, back, details, open, close, print, etc
  • For example, the final submission button in a wizard should not be called continue, unless it has supporting text to inform the user what that button will do.

Is there a time limit surrounding the button?

  • A limit before clicking or does it start a timer?
  • How would this affect the design and testing?

Is the button grouped with other like-minded buttons, or dejected and lonely?

  • Is the like-minded group logical?
  • Does it convey the right message?
  • Is the lonely button there for a reason, to draw attention?
  • For example, all text formatting buttons should be logically grouped, rather than spread across a busy toolbar.

How does the button perform with no supporting visuals?

  • Text only
  • Screen readers
  • Accessibility

Are you using colour to symbolise or signify meaning?

  • Red for danger
  • Green for go
  • Are these signs common amongst your users?
  • Ambiguous?
  • Should you be using colour? Accessibility? Colour blind?
  • Do they display well on different monitors?
  • If you had no colour could you still use the system?

Does the button actually do what the user expects?

  • Regardless of the spec, is the button action logical?
  • Are the expectations wrong?
  • Is the action wrong?
  • Is your oracle wrong?
  • Is it by design, fake affordance maybe?

Is the button part of a sequence?

  • How is the sequence represented?
  • Is the sequence logical?
  • Can the sequence be tricked, skipped or broken?
  • Is the sequence communicated well?

Does the action of the button mirror actions or features in other systems?

  • Will users see a link between systems, for example a status update in Facebook?
  • Could you redesign to take advantage of that association?
  • Would you want the association?
  • Is it worth redesigning a common process, or will the baby duck syndrome kick in?

Is the process surrounding the button common?

  • For example installers.
  • Does the process require much thought, or is it an almost natural process of unthinking clicking?
  • For example, the browser add-on tools hidden in the middle of a software installer. They work on the basis that you will not read the screen and hence install the browser add-on.
  • What could you do to encourage more thought around the process? Make it harder?

Does the button itself represent the action?

  • Delete icon, rubbish bin icon, cross, plus sign.
  • Are they logical?
  • Are they repeated on the same screen adding ambiguity?
  • Are the symbols universal to your audience?

If there is a breadcrumb trail, how will clicking this button affect that?

Does the button require data fields to be completed?

  • Should the button be visible until all fields are complete?
  • Is that clearly communicated?
  • Greyed out? Disabled? Button text change? Disappear/Reappear
  • When does it validate? Where? How?
  • Is it consistent?
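The gating logic behind “should the button be enabled yet?” can be as simple as checking that every required field is non-blank. A minimal sketch, with entirely hypothetical field names:

```python
REQUIRED_FIELDS = ["name", "email", "address"]   # hypothetical required fields

def submit_enabled(form):
    """The submit button is enabled only when every required field
    has a non-blank value."""
    return all(form.get(field, "").strip() for field in REQUIRED_FIELDS)

print(submit_enabled({"name": "Ada"}))  # False
print(submit_enabled({"name": "Ada", "email": "ada@example.com",
                      "address": "1 Engine St"}))  # True
```

Even with logic this small, the questions above still apply: is the disabled state communicated, and does validation happen as you type or only on submit?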

Does the button recall previous data?

  • What happens when clicked more than once?
  • How is the data represented or affected?

Is the button generating real time feedback?

  • How real time?
  • Does this affect performance?
  • Is it required?

Does the button work under multiple instances?

  • Multiple browsers and/or tabs?

What lies beneath the button?

  • Can you glean any further information about the button by using a tool like Firebug?
  • Does it have alt text?
  • Is it logically labelled and suitable for automation?
  • Are there any comments in the code or other clues?
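You don’t necessarily need Firebug to peek beneath the button; even a standard library parser can pull out the attributes you would want as automation hooks. A small sketch using Python’s html.parser (the markup is made up for illustration):

```python
from html.parser import HTMLParser

class ButtonInspector(HTMLParser):
    """Collect the attributes of <button> and <input> elements so you can
    check for ids, labels and other hooks useful for automation."""

    def __init__(self):
        super().__init__()
        self.buttons = []

    def handle_starttag(self, tag, attrs):
        if tag in ("button", "input"):
            self.buttons.append(dict(attrs))

page = '<form><input type="submit" id="checkout" value="Buy"/><button>OK</button></form>'
inspector = ButtonInspector()
inspector.feed(page)
print(inspector.buttons)
# [{'type': 'submit', 'id': 'checkout', 'value': 'Buy'}, {}]
```

The second button coming back with no attributes at all is itself a finding: nothing for a screen reader, and nothing stable for an automated check to hang on to.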

Is the button the final action?

  • Does the user need to know anything before they click the button?
  • Do they get a confirmation?
  • Are they reassured enough to click it?

If the button is part of a process, would it be better to push the process through a wizard?

  • Will the user experience be better? Less prone to mistakes?
  • Will the process be done in one go, or can they save and return?

Does the button need an undo?

  • Does the action need a reverse or opposite action?

Is the button multilingual?

  • What happens to the button when it tries to show different languages or locales?
  • Does it still make sense?
  • Does it hold longer text?

Can the user press this button without using the mouse?

  • Mobile devices, no right-clicks, etc.
  • Shortcut keys? Tab order?
  • Should they be able to?

Can it be made easier or harder to press the button depending on your objectives?

  • Many large organisations appear to be making it hard to find the button, which leads the user to the telephone numbers or listings. Self-service is cheaper, but is it always better?
  • Big red buttons with lots of “click me” appeal
  • Button in the middle of the page, or hidden in the top corner.

If you removed the button and that part of the process how would it affect your system?

  • Would it make it better?
  • Unshippable?
  • Unusable?

A lot of these ideas came about while reading about design ethnography and UX, particularly around lenses, but for each one I’ve been able to relate it back to a bug or “enhancement” I’ve seen in systems I test.

Questions about the bigger process will lead to more test ideas about the button (and the bigger process also). The button is a part of a bigger system, which is operating in a context. I think it is at this level that individual actions start to make more sense as a whole, and you can begin to question who, what, why, where and when.

If you find yourself working in an environment where you’re not allowed to raise questions about the checks you are running and you really want to (and attempts have fallen on deaf ears), then it may be time to move on and see what else is out there.

 

Professional Skeptics, Dispeller of Illusions and Questioner

Bear with me as I clear a few posts out of draft. This is my last one for a few weeks. Promise.

 

This one came about as a response to James Bach’s excellent Open Lecture presentation. I took away a number of lessons from that lecture (including the title – Professional Skeptics, Dispeller of Illusions and Questioner).

 

Off the back of that I decided to post some questions about “Testing the Spec”, as an intriguing LinkedIn forum post got me thinking about why “Testing the Spec” is so common. “Testing the Spec” is where a Tester takes the spec and the system and then validates/verifies that the spec is correct. Yes, you read that correctly… that the spec is correct.

It got me thinking and asking a couple of questions:

  1. What benefits will you receive by testing the system against the spec?
  2. What don’t you know about the system? Will the spec help you?
  3. Do you have any other information sources or Oracles?
  4. Is the information sufficient? Or is it insufficient? Or redundant? Or contradictory?
  5. Would a system diagram suffice?
  6. At what point do you know you are complete? Once every page of the Spec has been “tested” – what about other parts of the system not covered by the spec?
  7. Have you seen a system like this before? Will it help you?
  8. Have you seen a slightly different system? Will it help you?
  9. Who has asked you to do this?
  10. What value do they see in you doing this?
  11. Does the spec accurately depict the system? (I guess this is what they were testing for….)
  12. Would it matter if the spec didn’t match the system? (which one would give you most value; the spec or the system?)
  13. Is it essential to bring the spec up-to-date to match the system?
  14. Will the spec tell you everything you need to know about the system under test?
  15. Would the spec tell you anything that the system wouldn’t?
  16. Do you know how out-of-date the spec is?
  17. Is the spec important as a communication medium? Or just something that gets produced?
  18. How would you communicate your findings? In Bug Reports? Or in a report?
  19. How would you know what was a bug and what wasn’t?
  20. Could the spec confuse you more than simply exploring the system?
  21. Are you using the spec as a crutch or a guide or an Oracle?
  22. How much of the unknown can you determine?
  23. Can you derive something useful from the information you have?
  24. Do you have enough information to model the system in your mind?
  25. Have you used all the information?
  26. Have you taken into account all essential notions in the problem?
  27. How can you visualise or report the results and progress?
  28. Can you see the result? How many different kinds of results can you see?
  29. How many different ways have you tried to solve the problem?
  30. What have others done in the past that might help?
  31. Where should you do your Testing?
  32. When should it be done and by whom?
  33. Who will be responsible for what?
  34. What milestones can best mark your progress?
  35. How will you know when you are successful?
  36. How will you know when you are done?

I’m not doubting that testing a spec against a system can be valuable. It *could* be the best thing you can do. But I would ask a lot of questions first. The system is always different to the spec, but in context, your spec could be used as the main reference point, so only you will know whether “Testing the Spec” is valuable for you.

 

I rattled out this list using a combination of the Phoenix Checklist and the very excellent video of James Bach doing an Open Lesson on Testing.

 

As with all things, I tend to sketch and draw as both a record of my thoughts, but also as a way of distilling some ideas. I read better in visuals so these rubbish doodles aid my learning and increase the chance I will revisit these points. You might struggle to see some of the text in the image but there are plenty of tools for opening images and zooming.

 

Note: The diagram is my take-aways from James’ lecture. There are many more lessons which I’ve taken away earlier from James’ blogs and talks. I *may* have interpreted some things differently to how you would, or even how James intended, but they are my take-aways. I share them here just for completeness and urge you to watch the video.

Please replace me, let me go

One of my favourite blogs (http://www.experientia.com/blog/) carried an essay/article on humans and machines by Marina Gorbis. It’s a breezy article but makes some awesome points and each point Marina made rang true for what we are seeing in the Testing world.


Image courtesy of The Software Testing Club – The Automator “Tester Type”

Marina makes a point that machines are replacing the mechanistic jobs traditionally done by humans, which is allowing humans to concentrate on critical thinking and situational response and other “thinking” activities.

This is true in the Testing world where many people utilise machines to perform the jobs machines are good at, leaving the humans to do exploratory testing, critical thinking and creative endeavours.

I believe Michael Bolton summed this up perfectly when he made the critical distinction between Testing and Checking. Checking could be considered the mechanistic tasks, potentially best performed by a machine. Testing is the human task best performed by a….human.

Marina lists out a number of ways in which Machines are becoming included in everyday work. I’d like to briefly list her ideas out here and draw the parallel to what I see happening in the Testing world.

So what are machines good at?

Machines are awesome at repetitive, mechanistic tasks.
Those tasks that are mechanistic and repeatable. The tasks you do daily by rote are perfect for a machine.

If you can programmatically check something, aim at getting that code written.

So if you spend your day ticking boxes, checking hyperlinks work or simply clicking through the same User Interface over and over again, then you need to think about using a machine to do this.
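Checking hyperlinks is a classic example of a check worth handing to a machine. A minimal sketch; the fetch function is injectable so the example runs without a network, and the URLs are made up for illustration:

```python
from urllib.request import urlopen

def check_links(urls, fetch=None):
    """Report which URLs are broken - the kind of rote check a machine
    does well. `fetch` is injectable so the sketch runs offline."""
    if fetch is None:
        fetch = lambda url: urlopen(url, timeout=10).status  # real HTTP check
    broken = []
    for url in urls:
        try:
            status = fetch(url)
        except Exception:
            broken.append(url)
            continue
        if status >= 400:
            broken.append(url)
    return broken

# Stubbed transport with made-up URLs so the example is self-contained.
fake = {"http://example.com/ok": 200, "http://example.com/gone": 404}
print(check_links(list(fake), fetch=lambda url: fake[url]))
# ['http://example.com/gone']
```

Run something like this on a schedule and the human never has to click those links by rote again.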

Machines are awesome at doing the things humans cannot do.
You want to deliver 100,000 requests to a server at one go – Use a machine.

You want to check how well a device works in sub zero temperatures whilst being bombarded with Acid Rain – use a machine. (believe me, someone I know tests products in these conditions)

If you need to “test” something but it’s impossible for a human, then a machine could aid you in achieving your goal.
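A toy illustration of handing scale to a machine: fan out n “requests” across a thread pool and tally the responses. The transport here is a stub returning HTTP 200; a real load test would swap in an actual HTTP call (and a proper tool would add ramp-up, timing and reporting):

```python
from concurrent.futures import ThreadPoolExecutor

def fire_requests(n, send):
    """Fan out `n` requests across a thread pool and collect the
    responses - a toy version of what a load-generation tool automates."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        return list(pool.map(send, range(n)))

# Stub transport; replace the lambda with a real HTTP call to load-test.
responses = fire_requests(1000, send=lambda i: 200)
print(len(responses), set(responses))  # 1000 {200}
```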

Machines are awesome at analysing complex data and applying rationality
A quote from Marina’s article :

“For centuries, economists have built models based on the assumption that humans behave as rational economic actors. But thanks to advances in neuroscience and behavioral economics, we’ve come to realize that humans aren’t good at thinking through probabilities and risks and making rational economic choices based on those probabilities. While we don’t want to use pure rationality when making moral or ethical decisions, more rationality would be helpful in situations such as when making financial decisions.”

Should we start using machines to aid us in decision making?

Could we start using machines to inform our decisions about what and where to focus testing? You don’t already?

How about using Pairwise tools to filter down your combinations?

How about using machines to number crunch previous system usage or bug counts to find high risk areas to test?

How about using machines to analyse performance metrics gathered from a Load Test?
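Pairwise tools do something like the greedy reduction below: walk the full cartesian product and keep only the combinations that cover a value pair not yet seen. A rough sketch with hypothetical parameters (real tools such as PICT or AllPairs do this far more cleverly):

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedy sketch of pairwise reduction: keep a case only if it covers
    at least one pair of parameter values not yet covered."""
    names = list(params)
    # Every pair of values (across two different parameters) that must
    # appear together in at least one kept case.
    needed = {((a, va), (b, vb))
              for a, b in combinations(names, 2)
              for va in params[a]
              for vb in params[b]}
    cases = []
    for values in product(*params.values()):
        case = dict(zip(names, values))
        covered = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
        if covered & needed:        # this case covers something new, keep it
            cases.append(case)
            needed -= covered
        if not needed:
            break
    return cases

# Hypothetical parameters: 3 browsers x 2 operating systems x 2 locales
params = {"browser": ["firefox", "chrome", "ie"],
          "os": ["windows", "linux"],
          "locale": ["en", "fr"]}
cases = pairwise_cases(params)
print(len(cases), "cases instead of the 12 exhaustive combinations")
```

The machine shrinks the combinations; the human still decides which parameters and values matter in the first place.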

 

Machines are awesome at doing tasks which are too large or too small
Those pesky tasks that are really small or really large could be completed by a machine.

Filling in compliance reports with test data.

Adding all of the change sets to the release notes.

Producing reports on Test Coverage and Results.

So what are humans good at?

Humans are awesome at Thinking
A quote from Marina’s article:

“Thinking and computation are different processes, and machines are not good at thinking”

So we have these machines that can do loads of cool things, but as of today, they cannot really think. Testing is a very human process as we aim to discover more about the system under test.

In order to understand more, we need to peel back layers to discover new insights. These insights require a human to think about them. To understand them. To respond to them. To base new Tests on them.

Humans are awesome at Social and emotional intelligence
Being able to connect to your domain and subject matter, in my opinion, leads to better software testing. I’m a firm believer that a Tester can do great testing in any domain and on any product, but I believe they will do awesome testing when they have an interest and an emotional connection to the product or systems they are testing.

When you are emotionally connected, at any level, I believe your senses are heightened and you become more channelled and focussed. As of today, there are some emotionally aware machines, but they are still in early development.

“Feeling is as complicated as thinking, if not more so, and just as the machines we’re building aren’t thinking machines, the emotional and social robots we’re building aren’t feeling machines—at least not yet.”

Without emotions how can you understand why your end user is finding the UI so frustrating?
How can you connect and build relationships with your colleagues and stakeholders?
How can you begin to understand the contexts in which your customers work?

Do your automated tests design themselves in this way?

Do they have an awareness of feelings?

Humans are awesome at creativity, intuition, and improvisation

A quote from Marina’s article:

“……. the comparative advantage of humans is in doing things spontaneously, responding to unique circumstances of the moment, and making decisions accordingly.”

 

Do your automated checks respond to a new error and design a new test on the fly?
Do your automated checks create a new hypothesis on spotting a path through the system not previously taken?
Do your automated checks make use of other tools (that you hadn’t told them to use) to test a new path of interest?

Replacing the Tester

A manual tester can be replaced by a machine. They absolutely can. But that wouldn’t be a good strategy if that Tester thinks, connects emotionally and, in general, uses their skills, experiences and, crucially, their brain.

So you were worried about your job being replaced by a machine? You should be if you do rote checking with no thinking, but not if you do Testing.

A great quote from the essay tells a better story than anything I could cobble together with tenuous links and parallels.

There hasn’t yet been a technology that has resulted in our working less. This is because machines don’t just replace what we do, they change the nature of what we do: by extending our capabilities, they set new expectations for what’s possible and create new performance standards and needs.

 

Although I’m not so sure that sentence is absolutely true (look at how the unemployment rate went up when robots entered the car industry), it does offer a really interesting way to frame the use of machines: as amplifiers of our skills, abilities and approaches.

But does the risk of replacing menial, rote jobs mean we shouldn’t advance the use of machines and robots in more domains, like Testing? I’ll leave that for your own judgment and feelings but it’s clear machines are empowering many to achieve new heights of Testing ability.

I’ll leave you with a great quote from the article, which I would encourage you to read.

 

“The combination of humans partnering with machines and using superior strategies opens up new worlds for exploration.”

The basement is much crowded, but there is plenty of room up-stairs

The other day someone sent me an email asking me how to stand out from the masses in the Testing world. I responded by suggesting they engage in the community, join groups that interest them, read about any other subject that interests them but isn’t directly considered a Testing information source and to start learning and practicing Testing.


Image courtesy of : http://www.flickr.com/photos/andreialexandru/

There was some quote or reference nagging me, though, which I felt would sum up this dilemma.

The following popped into my head this morning at around 4am; I believe this was the quote I was searching for. It’s from P. T. Barnum’s book “Art of Money Getting Or, Golden Rules for Making Money”, and I think it sums up a commonly held view about Testing. It is a conversation between two people.

“I have not yet decided which profession I will follow. Is your profession full?”

“The basement is much crowded, but there is plenty of room up-stairs,” was the witty and truthful reply.

As more people flood into this industry we are seeing a devaluing of our role and are awash with testers, many of whom appear to have the same skillset.

I know there will always be a market for interchangeable test “resource” but believe me, many people I speak to are having serious problems recruiting good testers.

It’s not from a shortage of applicants, but more from a shortage of talent.

So as we see the “basement” being filled with even more testers, with an even more bewildering set of certifications and best practices I can’t help but wonder what people will start to do to differentiate themselves from the masses in the future.

Do you think our profession is getting full?

Do you think there is a lack of talent or are companies becoming more specific?

Do you think it’s a lack of talent, or an unwillingness to be open minded about Testing?

Do you agree that there is a shortage of excellent testers? (I know this is subjective)

How do you think Testers could start to make a difference to their skillsets?

Work Experience and Work Placement

I’ve always tried to appreciate ‘context’ when I talk about Testing and also when I Test because ‘context’ is a very real thing. I’m an avid campaigner against Best Practices in Testing and I take every opportunity possible to question blatant “context unaware” statements about testing, especially so when they are communicated as “law”.

Yet it’s frightening how few Testers are ‘context aware’ when fine tuning ideas and talking about Testing. There may be many who never need to care about any other contexts or environments. There are some who know about other contexts but really don’t care. Then there are those who know about other contexts but don’t have a chance to explore any further than that.

Awareness is a good start. Appreciation is a further step. But what about actual “experience”?

Wouldn’t that be cool if you could “experience” another context for a number of days, maybe weeks or months? Experience first hand what it’s like to Test in this environment alien to yours, whilst you work alongside other testers who work in these contexts everyday? A kind of Tester swap?

Here’s why it *could* work.

  • Testers could gain a massive insight in to other contexts and other ways of working
  • Testers could gain experience working in contexts that they would never normally get to work in
  • Both sides would benefit. One side would get experience. The other side would get an extra set of hands, or some fresh eyes for a few days.
  • It would be a fun and interesting way of sharing knowledge
  • If you have a large company with many Test teams then it could work internally as some sort of exchange process. (Thanks to David, in the comments, for suggesting this one)

Here’s why it *might not* work

  • Confidentiality, privacy and the fact it’s a new (and potentially scary) idea for some
  • The costs involved (travel, accommodation, etc.) – or should this be self-funded?
  • Regulatory (induction, health and safety, security compliance etc)
  • Logistics (self organised, run by a community like The Software Testing Club or ad-hoc?)
  • Some companies *could* take advantage of the scheme to get extra resource for a period of time
  • Lack of benefit from those placed if they get lumbered with sketchy jobs and checkbox testing
  • The system under test may be complex and/or complicated enough that a Tester may not have the chance in just a few days to add any real value. (Thanks to Kate, in the comments, for suggesting this) – I still believe the Tester would get to see another context, but the company offering them this opportunity may see little value..maybe.

No doubt I’ve missed some blindingly obvious pros and cons, and I suspect a project like this would be bigger than I expect to get started… but it would be cool, right?