Gravitate to people like you

I believe that one of the biggest mistakes a Hiring Manager (Test Manager etc.) can make for a team is to hire in people with the same set of views and opinions. I’m not talking about “Yes” people who don’t have the confidence or inclination to disagree. I’m talking about people who are pretty much in line with your own thinking, or who don’t have any particular views they want to air.

It’s natural to gravitate to those who support your ideas, and to want support for your ideas and respect for your approach; but that support is only valuable if your approach and ideas are right.

Instead I think it’s important to hire in people who are in alignment with your Goals and Objectives, but who don’t necessarily hold the same views and opinions on how to get there. Sure, you need some form of alignment in your approach, but to have no criticism, no objections and no discussions about the ‘right’ way is to fall into the trap of groupthink. To have a team incapable of thinking up new and interesting ways to solve problems is to create a group who will forever need your guidance. And is that really what you want?

So when recruiting a team it’s always a challenge to take a step back from your point of view on Testing and hire in the “right” people for the job. People who have a point of view, but also the personality to discuss it constructively.

I worked somewhere where the Team Lead said something along the lines of:

“We tell them what to do, and they will do it”.

I disagreed with this approach then, and I still do now. As a manager (or Leader?) it’s important to be able to ask your team for their points of view, thoughts and opinions.

Each team member will have a different view, a different perspective, different experiences and skills and a different way of thinking about a problem. Truly diverse and creative solutions will emerge (assuming you have an environment that supports that process). Diversity is a good thing. Right?

I think it was Adam Knight the other week who said on Twitter that he is struggling to find Testers with diverse enough skills and thoughts. I absolutely agree. It’s one thing to standardise yourself to conform to the mainstream testing industry (a stereotype?), but for many jobs now the fact you have something different to offer is a positive trait.

So my advice is don’t hire people who are just like you; seek diversity.

And as a Tester, don’t be just like everyone else.

Of course, I may not be right at all and may spend all of my day arguing…time will tell. 🙂

Cloud is not the silver bullet

I’ve blogged a little recently on Cloud Testing and the future it may hold for the industry. From this post I got involved in a number of chats and discussions, mainly with Testers and Managers looking to move to the cloud.

One of the recurring assumptions in these discussions was that Cloud Testing would solve Testing problems and challenges. A silver bullet.

I’m not sure where this myth comes from but the cloud brings a number of challenges too.

At the end of the day, if it’s a bad testing strategy or approach, it makes no difference whether it’s in the cloud or not. The cloud is not a silver-bullet, best-practice solution to Testing problems (of which there could be millions).

My advice would be to take a step back and look at the problems you face. Write them down and try to work out the root causes. Only then can you start to solve each problem, one at a time. Some may be solved by adopting Cloud Testing, some not; but without the analysis you will never know.

Cloud as a Test enabler

One of the interesting changes I see in the Testing industry is that many new companies, with newly formed Development teams (i.e. Programmers, Testers, Product etc), are automatically looking to the cloud for Testing solutions and tools.

It’s a natural process, as many of these companies power their entire infrastructure through Cloud tech. It’s an interesting change to the market, and one I expect to keep accelerating as teams look to cheaper, more flexible and more process-agnostic tools to aid testing.

I have seen this happening for a number of years now and even see the same trend happening in the adoption of the Cloud product I test on. It seems new companies are using Cloud as an enabler and so automatically look to the Cloud for most of their software stack.

There could be a number of reasons for this but I suspect some of the main reasons are cost, the ability to scale, the freedom to change quickly and to also enable the formation of a distributed team. A central office is becoming less common and some companies are picking the “best” people for the job, rather than the “best” people within 20 miles of the office.

Reaching out to Free and Open Source software is also trending at the moment as companies strive to achieve their Testing goals with a number of considerations in mind: ease of use, support from the community, reliable SLAs from commercial companies supporting Open Source, short-term tie-in, process-agnostic solutions, niche tool solutions and a general lowering of overall operating and purchasing costs. Some of the most exciting, interesting and useful products come from the community.

It’s also worth bearing in mind that with cloud you are buying a service, which is considered an operating cost (i.e. OpEx), and not a product to add to the capital expense list (i.e. CapEx).

OpEx versus CapEx is an oversimplified way to describe the difference between cloud and a tool purchased outright, but it does have a number of positive financial points. A good article on the costing can be found here. Well worth a read if you’re interested in the financial side of Cloud.
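
To make the OpEx versus CapEx point concrete, here is a toy comparison in Python. Every figure is invented purely for illustration (they don’t come from any real vendor); the point is only that a subscription spreads cost over time while an outright purchase front-loads it.

```python
# Toy OpEx vs CapEx comparison -- every figure below is invented for illustration.

def saas_cost(monthly_fee, users, months):
    """Subscription (OpEx): pay as you go, scale the user count up or down."""
    return monthly_fee * users * months

def on_premise_cost(licence, maintenance_rate, years, hardware=0):
    """Outright purchase (CapEx) plus yearly maintenance and any hardware."""
    return licence + hardware + licence * maintenance_rate * years

years = 3
saas = saas_cost(monthly_fee=25, users=10, months=years * 12)
onprem = on_premise_cost(licence=12000, maintenance_rate=0.2, years=years, hardware=3000)

print(f"SaaS over {years} years:       £{saas:,.0f}")    # £9,000
print(f"On-premise over {years} years: £{onprem:,.0f}")  # £22,200
```

Real comparisons are messier (discounts, data migration, integration effort, what happens at renewal), which is exactly why the costing articles are worth reading.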

There are many companies reaching out to more flexible and cost effective solutions, predominantly hosted in the cloud and offering a SaaS pricing and adoption model.

Test Management players Testuff and Practitest instantly spring to mind, along with a host of bug tracking solutions and services like Pivotal Tracker and Fogbugz.

NOTE: There are plenty of other providers too. There is a growing number of tools listed in The Software Testing Club wiki and a growing number of companies listed in The Testing Planet directory too.

Cloud technology has enabled these companies to provide a service at a cost-effective price point, which, when backed with a SaaS payment model, means Test teams can scale up or down when they need to. The tools are also flexible, offer data storage online and integrate well with other cloud-based systems. These tools often have fewer features than the mainstream on-premise or client-based solutions, but this too is often a positive selling point.

Many big vendors are stuck with large software products requiring on-premise kit, all bundled with large licence fees. They are unable to roll out new features to their products very easily, and the turnaround between customer requests and new features in the systems can be measured in years, not days. Flexibility, change and adaptive processes are the key to success for many businesses, which is why the free, Open Source and/or Cloud-based products are gathering momentum.

That’s not to say the mainstream players aren’t good or successful; there are many Test teams using these tools very effectively. I’m just observing a trend. A trend I suspect will be played out in some of the offerings we will see at EuroSTAR next month in Manchester. We’re already seeing some excellent cloud-based tools and systems disrupting the market.

Not all teams can take advantage of the cloud, or want to, though. Existing hardware and in-house processes that work are good reasons not to jump.

But if you’re starting from scratch, wouldn’t the cloud be an obvious choice?

Let me know your thoughts on this. I’m obviously fairly biased on this through the product I test and the markets I work in.

Is cloud an enabler?
Do you use cloud products?
Are they cheaper?
Are they more flexible?
What are the downsides of the cloud?

Book Review: “The Myths of Innovation” by Scott Berkun; O’Reilly Media

I have read The Myths of Innovation twice. The first time around I wasn’t especially enamored with this book. It felt too lightweight in its structure and the language felt too comfortable. It was easy to read, and as such, it didn’t feel like most of the books out there discussing the dreaded “I” word, Innovation.

I then realized I’d made a number of significant and interesting notes and decided to re-read the book. I actually really enjoyed it the second time around. I guess my expectations were that the book would be a scientific heavyweight (not sure where that expectation came from).

The main concept in the book is along the lines of “Do cool work, accept that ideas come from other ideas and if the timing’s right, cool things will happen”. And throughout the book that message is reinforced with good examples and stories.

There was a point in the middle where I started to lose focus somewhat, but the examples brought me back in and it felt good to have finished the book. Scott picked some really well-known innovations to tell a story about how ideas come about, all of which revolved around the concept that no idea is brand new. All ideas come from other ideas. All Innovations can be broken down and traced back to several other ideas.

Scott suggests that many ideas are beyond our control and exist outside of us, and he also talks about how ideas and innovations gain traction in society and culture. He makes a point of suggesting that myths and marketing spin are more effective at promotion than education, something we clearly see in many products and services.

He makes some very interesting points about historians being able to tell a story by deciding which facts to include and which ones to leave out. A great element of storytelling. Scott is also clear that history always contains a viewpoint and an interpretation.

I enjoyed the book, and its easy style makes it very accessible and readable. I think anyone who is interested in idea creation, creativity and where ideas come from would enjoy this book. So too would anyone interested in marketing or entrepreneurship. It’s got them all covered. Although it won’t give you concrete advice, it will sow some interesting seeds of thought in your mind.

The Myths of Innovation <– Amazon.co.uk link.

Note: The above link is an affiliate link. Feel free to search directly on Amazon.
This post is part of the blogger review scheme with O’Reilly Media.

Consistent User Actions

This morning I was in my local Tesco supermarket and noticed a classic case of inconsistency between the message and the action.

I am a seriously big fan of the self-service checkouts in Supermarkets. Not only have they reduced queues, but they’ve made it possible to dehumanise the entire experience of food shopping, which, when the customer service is often so bad, can actually be a positive thing.

I’m also a fan of using my own bags, so I press the “Are You Using Your Own Bags?” option. At this point I receive two messages which don’t match the action:

  • A voice suggests I place my bags on the packing section and then press “Go”.
  • The screen suggests I place my bags on the packing section and then press “Go”, but shows me a button labelled “Continue”.

It may seem trivial to point out that both mediums suggest “Go” while the actual next step says “Continue”, but it had me scanning the real estate for a few seconds before realising it was a mistake. Other users may not find it so trivial.

And in many contexts, this simply doesn’t matter, but as a Tester it’s important to check for consistency between prompts, advice, help, information, guidance and the actual action or process.

In some contexts this consistency could be very important.
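
As a rough illustration of checking this kind of consistency in a web UI, here is a minimal sketch using Selenium WebDriver in Python. The URL and the element IDs (“instruction-text”, “next-button”) are hypothetical placeholders; the idea is simply to assert that the label the prompt tells the user to press matches the label actually shown on the button.

```python
# Minimal sketch: do the on-screen instructions and the actual button label agree?
# Assumes Selenium WebDriver; the URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # placeholder URL

    # The prompt that tells the user what to press, e.g. 'Place your bags and press "Go"'.
    prompt = driver.find_element(By.ID, "instruction-text").text

    # The label on the button the user actually has to press.
    button_label = driver.find_element(By.ID, "next-button").text

    # The instruction should refer to the button by its real label.
    assert button_label in prompt, (
        f'Prompt says "{prompt}" but the button is labelled "{button_label}"'
    )
finally:
    driver.quit()
```

The same idea extends to voice prompts, help text and tooltips: extract whatever label the guidance refers to and compare it with the control the user actually sees.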

Push The Button

During a conversation with a group of testers at an event I soon found myself outnumbered in my views around “enhancements” to the product. I was the only one who saw a Tester’s role as more than just verification.

I was a little amazed at how this group of Testers (or shall we call them Checkers?) narrowed the focus of their roles down to simply following a script, clicking buttons and ticking boxes. They were also incredibly passionate about not suggesting improvements to the process or product. It’s not so clear to me, though, where the line is between an enhancement and a bug. It’s not always easy to categorise.

Through sheer persistence and the use of persuasive language and examples of real bug stories, I convinced them that there was more to Software Testing than checking. I also bought them all a drink, but I don’t suspect that had anything to do with it. 🙂 I’m not saying they weren’t doing a good job, far from it, but to not look at the wider context around your checks is to ignore the purpose of the system.

I promised to blog about the things we talked about, and here it is, about a year later than it should have been.

This post stemmed from a point one of the guys made: “A button is a button. It’s the same in every system. We test it does what the spec says it should do.”

At a basic level this is absolutely correct, but to look at a button with this narrow focus is to miss the point of Testing (as I see it). The button is just one action, or element, or interaction in a much bigger process or system. It may fire off something or trigger an event but the interesting questions for me are:

  • Why would I click the button?
  • What are my expectations?
  • Do I have any choices around clicking the button?
  • Will I like what happens?
  • How will you encourage me, force me or convince me to click the button?

And a whole lot more. To ignore the context of the button is to not question the system it lives in. To check the button does what the spec says is to put too much trust in the spec. So I blurted out a few ideas about testing, which actually cross over into design. I’ve included as many as I could remember here in this post. Some may be helpful for you, some may not.

Is the button accessible to all users?

  • Should it be?
  • Do certain users get more or fewer buttons? Why?
  • For example, admin users or permissions-based systems (see the sketch below).
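
For the permissions example above, here is a rough sketch of how such a check might look in practice. Everything in it is hypothetical: the URLs, the login field IDs and the “delete-user-button” ID stand in for whatever your own system uses; the point is only that the same button should appear for an admin and not for a standard user.

```python
# Rough sketch: an admin-only button should be visible to admins and absent for everyone else.
# The URLs, element IDs and credentials below are all hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def button_visible_for(username, password):
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login").click()

        # find_elements returns an empty list (rather than raising) if nothing matches.
        buttons = driver.find_elements(By.ID, "delete-user-button")
        return any(button.is_displayed() for button in buttons)
    finally:
        driver.quit()

assert button_visible_for("admin", "admin-password") is True
assert button_visible_for("standard-user", "user-password") is False
```

Hiding the button is only half the story, of course; the underlying action should also be blocked server-side for users without the permission.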

Why would someone click this button?

  • Are there other choices?
  • What will drive someone to click this button?
  • What happens in the grand scheme of things if they don’t click it?

How will your users “feel” before and after clicking the button?

  • Do you know?
  • Does it matter?
  • Can you redesign to cater for this “feeling”?
  • For example, submitting your tax returns will give your users a different feeling to submitting a newsletter subscription form

Is this behaviour monitored or detected or regulated?

  • Does this change the button form?
  • Does this change the behaviour?
  • Who detects it and why?
  • Are there privacy issues or data protection concerns?
  • What is done with this data?
  • For example, audit trails in systems.

If your users access the site via a mobile device, does the button still have the same look and feel?

  • Does it still do the same thing?
  • In the same way?
  • Should it?

Do previous choices or data entered alter the button choice and action?

  • Should it alter it?
  • Could it alter it to get a better outcome for all?
  • When does it alter it, and when doesn’t it?

Does the button have the potential to do damage and therefore require more consideration or a confirmation?

  • Status updates?
  • Sending personal details?
  • Firing missiles?
  • Turning off life saving machines or devices?

Do you need further steps in the process or is clicking this button enough?

Is it possible to design the overall page and the button to deter or encourage certain behaviour?

  • For example, to eliminate the “Donkey” vote the candidate voting lists are often randomised

Does anything surrounding the button affect the choice or decision?

  • Should it?
  • Could it?
  • Can we affect choice by anchoring adverts or information or other users’ choices?
  • For example, peer review scoring on sites like Amazon

Could the button or process be disabled but visible to encourage upgrades, add-ons or extra steps?

  • Should it?
  • Does it fit with the business model?
  • Would your users appreciate this?

Is the action the button performs pleasurable for the user? Or negative?

  • Could this process be sweetened if negative?
  • Or rewarded if good?

Is the button consistent with other buttons in the process or system?

  • If not, why?
  • If yes, should it be?
  • Would it be better with a different design..to stand out maybe?

Does it matter whether you are the first person to click this button?

  • Do you get extra features?
  • Do you disable a process for other users?
  • Is the outcome clearly communicated?
  • For example, one time downloads or data collections.

Can you click the button more than once?

  • Why?
  • How?
  • Does it function consistently?
  • How does this affect the system or data or process?
  • For example, submitting a form more than once
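
As a sketch of the repeated-submission question above: submit a (hypothetical) form once, go back and submit it again as an impatient user might, then look at what actually ended up in the system. The URLs, element IDs and the order-history table are all placeholders, and whether a duplicate is a bug depends entirely on what the system is supposed to do.

```python
# Sketch: what happens if the same form is submitted more than once?
# The URLs and element IDs below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/order-form")  # placeholder URL
    driver.find_element(By.ID, "quantity").send_keys("1")
    driver.find_element(By.ID, "submit-order").click()

    # Go back and submit the same form again, as an impatient user might.
    driver.back()  # depending on the app, the form may need re-filling here
    driver.find_element(By.ID, "submit-order").click()

    # Inspect the result: count the rows in a hypothetical order-history table.
    driver.get("https://example.com/order-history")
    orders = driver.find_elements(By.CSS_SELECTOR, "table#orders tr.order-row")
    print(f"Orders created: {len(orders)}")  # one? two? is that what the business expects?
finally:
    driver.quit()
```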

Does the button support multiple users?

  • Can more than one user click their own button at the same time?
  • At different times?
  • At all?

Can you double click the button?

  • What does it do?
  • Should it be possible?
  • Could you deter this behaviour with warnings or informational text?
  • For example, some users click hyperlinks twice.

Is the design of the button modern, or old?

  • Can it be skinned?
  • Does it look old for a reason (functional, to encourage upgrades etc.)?

Do you need supporting help text?

  • Is it obvious it’s help text?
  • Can it be on or off?
  • Or on all the time?
  • Is it obvious the text is relating to this button or process?
  • Is the help system being designed against a recommended standard, like the ELMER guidelines for form design?

Can you create an emotional attachment when the user clicks on the button?

  • Can you get buy in, repeat business or a brand fan?
  • Do you want to?
  • Do you need to?
  • For example, the Like button on Facebook

What action does the button perform and is that consistent with what your users want to do?

  • Submit, next, back, details, open, close, print, etc
  • For example, the final submission button in a wizard should not be called continue, unless it has supporting text to inform the user what that button will do.

Is there a time limit surrounding the button?

  • A limit before clicking or does it start a timer?
  • How would this affect the design and testing?

Is the button grouped with other like minded buttons, or dejected and lonely?

  • Is the like minded group logical?
  • Does it convey the right message?
  • Is the lonely button there for a reason..to draw attention?
  • For example, all text formatting buttons should be logically grouped, rather than spread across a busy toolbar.

How does the button perform with no supporting visuals?

  • Text only
  • Screen readers
  • Accessibility
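
Some of the basics behind the question above can be checked by inspecting the button’s attributes. This is a minimal sketch with hypothetical IDs, and it is no substitute for real accessibility testing (screen readers, WCAG audits, colour-contrast checks); it only looks for the obvious attribute-level clues.

```python
# Sketch: attribute-level accessibility clues for a button (hypothetical URL and ID).
# Not a substitute for testing with an actual screen reader.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # placeholder URL
    button = driver.find_element(By.ID, "next-button")

    visible_text = button.text
    aria_label = button.get_attribute("aria-label")
    alt_text = button.get_attribute("alt")  # relevant if the "button" is really an image

    # A screen reader needs *something* it can announce for this control.
    assert any([visible_text, aria_label, alt_text]), "Button has no accessible name"
finally:
    driver.quit()
```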

Are you using colour to symbolise or signify meaning?

  • Red for danger
  • Green for go
  • Are these signs common amongst your users?
  • Ambiguous?
  • Should you be using colour? Accessibility? Colour blind?
  • Do they display well on different monitors?
  • If you had no colour could you still use the system?

Does the button actually do what the user expects?

  • Regardless of the spec, is the button action logical?
  • Are the expectations wrong?
  • Is the action wrong?
  • Is your oracle wrong?
  • Is it by design, fake affordance maybe?

Is the button part of a sequence?

  • How is the sequence represented?
  • Is the sequence logical?
  • Can the sequence be tricked, skipped or broken?
  • Is the sequence communicated well?

Does the action of the button mirror actions or features in other systems?

  • Will users see a link between systems, for example a status update in Facebook?
  • Could you redesign to take advantage of that association?
  • Would you want the association?
  • Is it worth redesigning a common process, or will the baby duck syndrome kick in?

Is the process surrounding the button common?

  • For example installers.
  • Does the process require much thought, or is it an almost natural process of unthinking clicking?
  • For example, the browser add-on tools that are hidden in the middle of a software installer. They work on the basis that you will not read the screen and hence install the browser add-on.
  • What could you do to encourage more thought around the process? Make it harder?

Does the button itself represent the action?

  • Delete icon, rubbish bin icon, cross, plus sign.
  • Are they logical?
  • Are they repeated on the same screen adding ambiguity?
  • Are the symbols universal to your audience?

If there is a breadcrumb trail, how will clicking this button affect that?

Does the button require data fields to be completed?

  • Should the button be visible until all fields are complete?
  • Is that clearly communicated?
  • Greyed out? Disabled? Button text change? Disappear/reappear?
  • When does it validate? Where? How?
  • Is it consistent?
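
One way to exercise the required-fields question above, again with hypothetical IDs. The assumption baked into the asserts is that the submit button is meant to stay disabled until the mandatory fields are filled in; your system may be designed differently (validation on click, inline error messages), in which case the checks change accordingly.

```python
# Sketch: submit button assumed to be disabled until mandatory fields are completed.
# The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/register")  # placeholder URL

    # With the form empty, the button should not be clickable (if that is the design intent).
    assert not driver.find_element(By.ID, "register-button").is_enabled(), \
        "Submit is enabled before any fields are completed"

    driver.find_element(By.ID, "name").send_keys("Test User")
    driver.find_element(By.ID, "email").send_keys("test@example.com")

    # Once the mandatory fields are filled, the button should become clickable.
    assert driver.find_element(By.ID, "register-button").is_enabled(), \
        "Submit is still disabled after completing the mandatory fields"
finally:
    driver.quit()
```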

Does the button recall previous data?

  • What happens when clicked more than once?
  • How is the data represented or affected?

Is the button generating real time feedback?

  • How real time?
  • Does this affect performance?
  • Is it required?

Does the button work under multiple instances?

  • Multiple browsers and/or tabs?
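
A rough sketch of the multiple-instances question: two independent WebDriver sessions acting on the same (hypothetical) page at roughly the same time. Real concurrency testing needs far more care over timing and data isolation; this only shows the shape of the check.

```python
# Sketch: two independent browser sessions pressing the same button.
# The URL and element IDs are hypothetical; the timing here is only approximate.
from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://example.com/claim-voucher"  # placeholder URL

first = webdriver.Chrome()
second = webdriver.Chrome()
try:
    first.get(URL)
    second.get(URL)

    first.find_element(By.ID, "claim-button").click()
    second.find_element(By.ID, "claim-button").click()

    # What does each user see? One success and one "already claimed"? Two successes?
    print("Session 1:", first.find_element(By.ID, "result-message").text)
    print("Session 2:", second.find_element(By.ID, "result-message").text)
finally:
    first.quit()
    second.quit()
```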

What lies beneath the button?

  • Can you glean any further information about the button by using a tool like Firebug?
  • Does it have alt text?
  • Is it logically labelled and suitable for automation?
  • Are there any comments in the code or other clues?

Is the button the final action?

  • Does the user need to know anything before they click the button?
  • Do they get a confirmation?
  • Are they reassured enough to click it?

If the button is part of a process, would it be better to push the process through a wizard?

  • Will the user experience be better? Less prone to mistakes?
  • Will the process be done in one go, or can they save and return?

Does the button need an undo?

  • Does the action need a reverse or opposite action?

Is the button multilingual?

  • What happens to the button when it tries to show different languages or locales?
  • Does it still make sense?
  • Does it hold longer text?
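
One cheap check for the multilingual question above: compare the button label across locales and flag translations that are dramatically longer than the baseline the design was built around. The locale query parameter, URL and element ID are hypothetical, and a length ratio is only a crude proxy for “does it still fit?”; you still need to look at the rendered page.

```python
# Sketch: does the button label survive other locales without obviously overflowing?
# The locale query parameter, URL and element ID below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

LOCALES = ["en-GB", "de-DE", "fr-FR", "ja-JP"]

driver = webdriver.Chrome()
try:
    labels = {}
    for locale in LOCALES:
        driver.get(f"https://example.com/checkout?lang={locale}")  # placeholder URL scheme
        button = driver.find_element(By.ID, "next-button")
        labels[locale] = button.text
        # The rendered size says more than the string length alone.
        print(locale, repr(button.text), button.size)

    # Crude flag: any translation much longer than the English baseline deserves a look.
    baseline = len(labels["en-GB"]) or 1
    for locale, text in labels.items():
        if len(text) > 2 * baseline:
            print(f"Check the layout for {locale}: {len(text)} chars vs {baseline} in en-GB")
finally:
    driver.quit()
```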

Can the user press this button without using the mouse?

  • Mobile devices. No right clicks. Etc
  • Shortcut keys? Tab order?
  • Should they be able to?
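
And for the keyboard question, a sketch of driving the button without a mouse: tab through the page and see whether focus ever reaches the button, then activate it with Enter. The page and ID are placeholders again; on a real product you would also check that the tab order is sensible and that keyboard focus is clearly visible.

```python
# Sketch: can the button be reached and activated using only the keyboard?
# The URL and element ID are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # placeholder URL
    target = driver.find_element(By.ID, "next-button")

    reached = False
    for _ in range(25):  # arbitrary cap on how far we are prepared to tab
        driver.switch_to.active_element.send_keys(Keys.TAB)
        if driver.switch_to.active_element == target:
            reached = True
            break

    assert reached, "Button cannot be reached via the keyboard tab order"
    driver.switch_to.active_element.send_keys(Keys.ENTER)  # activate without the mouse
finally:
    driver.quit()
```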

Can it be made easier or harder to press the button depending on your objectives?

  • Many large organisations appear to be making it hard to find the button which leads the user to the telephone numbers or listings. Self-service is cheaper, but is it always better?
  • Big red buttons with lots of “click me” appeal
  • Button in the middle of the page, or hidden in the top corner.

If you removed the button and that part of the process how would it affect your system?

  • Would it make it better?
  • Unshippable?
  • Unusable?
  • Better?

A lot of these ideas came about from reading about design ethnography and UX, particularly around lenses, but for each one I’ve always been able to relate it back to a bug or “enhancement” I’ve seen in systems I test.

Questions about the bigger process will lead to more test ideas about the button (and the bigger process also). The button is a part of a bigger system, which is operating in a context. I think it is at this level that individual actions start to make more sense as a whole, and you can begin to question who, what, why, where and when.

If you find yourself working in an environment where you’re not allowed to raise questions about the checks you are running and you really want to (and attempts have fallen on deaf ears), then it may be time to move on and see what else is out there.