Do you need more than a certification?

I received quite a few messages and comments about my future of software testing post and, for those who took the time to respond, thank you. One that intrigued me was an email from an anonymous tester.

It wasn’t negative, nor positive, but instead extolled the virtues of certification and how certification schemes will guide testers through the pitfalls of the future and the challenges we face ahead. Now, this could be a wind-up email from people who know my outspoken views on certification, or it could be genuine. I suspect the latter. Either way, it encouraged me to dig out this old blog post from my drafts folder and give it some air time (with minor edits).



This is not an anti-certification post. Nor is it a pro-certification post. It’s just some thoughts on other areas I draw on that I believe (though I don’t know for sure) are NOT covered by certifications or the courses that lead to the final awarding of the certification. Please do correct my assumptions if they are wrong (which they may well be).


Before I continue though, I really do want to point out a value I very firmly believe in: “No one is responsible for your career other than you”.


So don’t go relying on your company, your friends, certification boards, family, community or any other source to move your career forward. It is your responsibility.


A friend of mine, Markus Gärtner, summed it up well at Agile Testing Days last year when he said “If you find yourself unemployed a year from now, who do you think will be responsible for your education today?” I would add a variant to that:


“If you find yourself unemployed a year from now, what would separate you from everyone else in the market for a testing job?” 


Why would someone employ you over someone else? 
What skills would you need a year from now? 
What skills don’t you have right now that you need to remain employable?
Where do you want your career to be?


So here are my thoughts on why testers need more than a certification.


The Lifecycles are changing
The project methodology lifecycles are changing. Feedback is demanded much earlier in the cycle. I believe that many companies are realising that long, drawn-out projects, where the requirements get frozen for months and different teams work on different elements of the product, are bringing about poor quality, broken delivery against expectations and demoralised staff.


For testers this means that we need to find ways of making our testing count without relying on heavily scripted tests (created months before we see the product) with a massive amount of locked-in assumptions. Change is inevitable in a project and the more a business embraces change, the more I believe some testers will struggle.


Accessibility and Usability Testing IS important
If you work in the world of web then you really should be learning about accessibility and usability. It’s a good domain to understand anyway, but for web testers, these two elements should be a “must” for all testing. I’m not saying know them inside and out, but an awareness would be good.


Start here maybe:


[but many more sources are available – I have a delicious feed with more here :]


Security is paramount
Just like Accessibility and Usability, Security should be considered a default testing activity. Security is paramount. A good place to start is “The Web Application Hacker’s Handbook” and Burp Suite. Check out the OWASP site also. [Note: Other tools are available]


Added: Alan Richardson (Evil Tester) is doing a series of Burp Suite video tutorials.


People make a successful business
In my experience the business is successful because of the people. In almost every job on the planet, you need to work with other people.


Building your interpersonal skills, learning how to express your opinions in an assertive, but friendly manner and learning how to show your personality in the work place are crucial.


As many businesses are realising the importance of good team spirit and a good personal fit, it should no longer be a case of just a bum on a seat. You need to shine. You need to impress. You need to let your personality show. You need people to want to work with you…right?


Exploratory Testing Skills are essential
I’m fairly confident all testers perform some type of Exploratory Testing. I think many don’t know what it’s called, many aren’t self-aware of it and others do it, but maybe lack some deeper awareness of it. If someone throws some software at you and says “test it” – will you flounder or flourish? Will you need a spec to move? Or will you get stuck in and add value? Will you explore, or spend 3 weeks writing a detailed plan?


Communication Skills Add Value
Being a good communicator is essential, but it also adds kudos and value to you and your work. If you burst into tears when questioned, can’t explain how you found a bug, can’t justify why you need more resource/time/money, can’t talk to the customer, can’t discuss your ideas and concepts sensibly and aren’t sociable with your team, then you instantly lose credibility.


Be confident, be assertive, be personable, think about the language you use, take control over your non-verbal leakage/clues and always be aware of your Purpose, your Audience and your Context of the communication and you’ll start to see some very positive results.


People are people wherever you go
As a tester you are typically building some software for someone to use (in rare cases maybe not) and building some software with other people or for someone else (sales/marketing/customer/etc). As such, it pays to understand people. People are complicated. Interactions between people are complicated. So trying to learn more about people, their environments, thinking, history, incentives, needs, location, health, language, understanding of their world, culture and many other things will be invaluable for your role as a tester, a team member and a person.


I’d suggest you start looking to the social sciences for insights, thoughts and inspiration:




Commercial Awareness
Being commercially aware could stop you being that person who holds up a release because of a bug that’s bugging just you. There are always other factors involved in the release process and operation of the business. Other factors that other people have more knowledge about. Mostly these are commercial decisions. So having an understanding of commerce and commercial operations is crucial to your activity as a tester, but it’s also a nice way to stop you going mad when other people keep shouting “ship it” despite the showstopper.



Efficiency, Effectiveness and workspace ergonomics
The best testers I know of are the ones who work effectively but carefully. They are the ones who know exactly where their tools are stored, located and accessed. They know their way around the operating system, tools and browsers. They know about addons, plugins and other aids to help them in their testing. They know about stuff that helps them test.


Information is never more than a few clicks or turns of a page away for these people. They can access stuff fast. Stuff they need, when they need it. They are engaged in what they are doing.


Their desk layout is practical and effective. They do their best to get themselves in the “flow” easily and readily. They can zone out and tune in fast. 


In a sense, being aware of your surroundings, any limitations you have and how you can work within them is crucial to success. 


How many times have you observed someone who doesn’t use any shortcut keys, has to page through weeks’ worth of notes to find everyday crucial information, doesn’t use information radiators, isn’t effective at recreating issues and doesn’t understand some of the basics of the systems they use?


Note: There is a risk that when you become a super user of the system under test that you start to miss issues. They are not obvious to you, but they are to new users. Care is needed to balance these poles out. 



Taking control of learning
If you aren’t learning anything new then I’m worried for you. We all need to feel like we are learning something or on a road towards mastery (which, by the way, is not achievable). It’s human nature…right? According to a lot of social research, including Maslow’s hierarchy of needs, we seek self-fulfilment after our basic human needs are met (food, drink, shelter, love). The science is typically stacking up towards self-fulfilment as the main motivator at work (which could help explain charity work and Open Source contributions)…it’s very interesting and powerful stuff.


Yet, despite the emphasis on learning, mastery and self-fulfilment, do you get training on how to learn? Or how to structure your career towards mastery? Do you receive training on how to approach learning and how to get the most from it?


I received some informal mentions at school, but we certainly haven’t (at least not here in the UK) continued to teach this fundamental skill as we go through our working lives. We are all learning. But learning how to learn is just as fundamental.


Think about the following:
  • Note taking
  • Information distillation
  • Accommodating and assimilating information
  • Sharing your learning
  • Pushing your learning in a logical direction
  • Stopping information overload
  • Pacing your learning
  • Using your learning in practice
  • Describing your learning
  • Widening your learning
  • Restricting your learning
  • Developing core skills
There’s a massive amount we can learn about learning.



A bit of a rant. But some thoughts on aspects I don’t believe a certification scheme teaches.


That’s not to say certification schemes aren’t valuable, but I think that there is a lot more to our roles than many people realise.


There are many more aspects that affect our testing that simply don’t get mentioned or covered. So if you think certifications are important and the only way to learn, then I’m afraid the future looks sketchy for you. Certifications will not guide you through the trials and tribulations the unpredictable future will hold. They could be one part of your learning path, but they shouldn’t be the only one.


What do you think are some of the most important skills that a tester needs outside of certification? 
Do certifications give you any of the above? 
Do you even think the above are valuable? 
Do you think a certification *should* offer any of the above?



Fighting Layout Bugs…Fight

I’ve mentioned a few times via Twitter (mainly from India) a neat little tool Julian Harty talked about at the Step_Auto conference: FightingLayoutBugs. It’s a Java code project that checks for layout bugs, and it’s all Open Source.

So here is what FightingLayoutBugs does out of the tin:


  • Scans the HTML for <img> elements with a missing or invalid src attribute
  • Scans the CSS (all style attributes and <style> elements in the HTML as well as directly linked and indirectly imported CSS files) for invalid image URLs.
  • Checks if the URL to the favicon is valid.


You can configure the minimal supported screen resolution for your web page like this: 

FightingLayoutBugs flb = new FightingLayoutBugs();
flb.configure(DetectNeedsHorizontalScrolling.class)
   .setMinimalSupportedScreenResolution(800, 600);

The default screen resolution is 1024 x 768.



It also ships with detectors that:
  • detect text which is very near or overlaps a horizontal edge
  • detect text which is very near or overlaps a vertical edge
  • detect text which is not readable because of too low contrast


A super simple bit of code creates a very simple test:

import java.io.File;
import java.util.Collection;
import org.openqa.selenium.firefox.FirefoxDriver;
// Package names below assume the fighting-layout-bugs project layout; adjust to match the jar you built.
import de.michaeltamm.fightinglayoutbugs.FightingLayoutBugs;
import de.michaeltamm.fightinglayoutbugs.LayoutBug;

public class FirstTestClass {

    public void testGetRectangularRegions() {
        FirefoxDriver driver = new FirefoxDriver();
        try {
            String testPageUrl = "";
            driver.get(testPageUrl); // load the page you want to analyse
            FightingLayoutBugs flb = new FightingLayoutBugs();
            flb.setScreenshotDir(new File(".")); // screenshots of problems land here
            final Collection<LayoutBug> layoutBugs = flb.findLayoutBugsIn(driver);
            System.out.println("Found " + layoutBugs.size() + " layout bug(s)");
            for (LayoutBug bug : layoutBugs) {
                System.out.println(bug);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            driver.quit(); // always close the browser
        }
    }
}

Enter your website URL in the String testPageUrl = ""; line and run the test. It will open up Firefox, load the website and then do some magic with the CSS (and other stuff) to check your layout. It puts out screenshots of the potential errors too. Very cool.


To get this working you will need Java installed, some form of IDE, an SVN client and the source code to build the jar file (or to contribute to the project if you like). Once you’ve built the project, add the .jar as a reference and, hey presto, you can write your tests.


Look out….snot

You’ve built a new test team but your bug counts are on the increase in both test and live. Why?

You are sat there wondering what went wrong. Why the grief? Why the drama? How can this be?


Well, it’s just a case of SNOT.


S – Safety
N – Net
O – Of
T – Test


Snot. Safety Net Of Test. It’s something I’ve observed many times in my career when new teams are formed, new departments spring up or a new batch of people comes in. Testing often becomes a safety net. The catch-all. The people who will control our quality. But the one aspect of this safety net that always baffles many people is why there are always *more* defects. (Note: “more” is very subjective here as often there is little empirical evidence to show that “more” have indeed been found or are showing. Often it’s based on “gut feeling” – which might well be right.)


Some food for thought on why I think we often see more bugs:


When Testers are brought into a company and a Test team is beginning to flourish, more people are looking at the software, probably in a more managed / structured / critical / organised way, and probably also with a fresh set of untainted, unbiased eyes. The product is being inspected, explored and investigated by professionals (we hope). This *could* be the reason for more bugs.


Sometimes an easing off of testing can happen by the programmers (and other people who were doing some testing). This is because they now have someone else fulfilling this role and responsibility. This “someone else” might not know the nuances or intricacies of the system just yet. This *could* be the reason for more bugs.


The business as a whole now has a department to “blame” when defects are found in live. Before the Test team, the blame culture was potentially collective; now, with a Test team, it is departmental. This *could* be the reason for more bugs.


The process of bringing in new Testers often means more process and experience is put in place. Test management, defect process flows, exploration, critical thinking, triage, reporting and artefacts are all things that many companies start to see more of when Test teams form. This, at least initially, brings bugs and good / bad existing processes to everyone’s attention. Not only that, but the Testers (if they are of sound skill, knowledge and mind) will begin to champion better processes and thinking, and start to challenge bad practices and existing assumptions about testing. This will bring more focus to the software and people may start to question why there are bugs in it. These bugs may have always been there (and some may have always been known about) but we’ve raised expectations now…we need to meet them. This *could* be the reason for more bugs.


The project team as a whole now perceive their velocity or work rate to have increased with more people on board, therefore more code is produced (maybe because they have someone else to cover the testing, and the code may also potentially have fewer checks) and the test team simply cannot keep up. This could mean more code goes out untested and hence defects slip through the net. This *could* be the reason for more bugs.


It could be that the software itself is not in a “happy place”, hence the initial desire to build out a Test team. The Test team are too late to catch the fallout and a spike in defects occurs due to legacy issues. Just staying on top of new work could take all the testers’ time, leaving legacy stuff exposed to new code interactions and new ways of being executed, which starts to show vulnerabilities and bugs. This *could* be the reason for more bugs.


The way defects are counted and categorised could have changed, which brings to light defects that were previously, ahem, ignored. This *could* be the reason for more bugs.


Or it could simply be that there is just a plain old spike for some reason. This *could* be the reason for more bugs.


It could be any number of reasons, but from my experience the spike in defects is very real after the forming of a “formal” Test team. 
I guess the big questions in my mind as I write this are:


  1. Do we really care enough to measure these spikes accurately and scientifically? (I suspect someone is already tracking a maturity model of some description)
  2. Are defect counts really a good indication of influence, impact and effectiveness of any Test team, let alone a newly created one?
  3. If the spike is temporary do we need to explain it at all?
  4. Are businesses still assuming Testing is the last line of defence? The safety net? The catch all group? Could the spike be down to a programming error or an ill defined requirement?
  5. At what point does the spike continue and become the norm?
  6. If there isn’t a spike, should we be worried about the Test team’s effectiveness?
  7. Is there a way to maintain collective responsibility for Quality when new Test teams are formed? Do we really have the means to track the many complicated facets involved with potential spikes in bugs (morale, people, process, approaches, environments, features, new tech, etc)?
  8. And why am I asking so many questions?
So next time you see a spike in defects after a newly appointed (or changing) team is in place then I would encourage you to observe and muse on some of the potential reasons for this. But don’t worry too much; things will even out in the end 🙂


Image courtesy of: swimparallel

You know something I don’t know, but if you don’t share then we can’t grow

I’ve been getting to a few conferences recently and meeting lots of interesting people. One thing that is common amongst all of the conferences and user groups I get along to is that there are always people at these events who have one of the two following problems (and any number more that I won’t delve into):


1. They work for someone who cannot, does not or will not share their knowledge
2. They are someone who cannot, does not or will not share their knowledge


It’s scary stuff. A lot of people in our testing community seem reluctant to share knowledge, skills or learning advice, even if they are at conferences. I’ve no concrete evidence of why but I suspect it could be any of the following:


1. They don’t realise other people might not know what they know
2. They don’t realise other people might know other stuff that they don’t know
3. They want to hog the knowledge and information in a belief they are more employable and less likely to be made redundant
4. They don’t know how to share their information
5. They don’t think people will want to learn from them
6. They lack the confidence to share information
7. They don’t value collaboration on test approaches and learning
8. They are scared people will become more knowledgeable than themselves (see point 3)
9. They don’t like other people
10. They don’t like communicating with others


No doubt there are thousands more reasons but I think it’s something we need to address as a community. There are lots of lessons and learning out there that many people could benefit from. We could all learn from each other. We could all improve our knowledge, understanding and skills.


I’m also surprised at how many stories I hear of Test Managers and Test Directors not sharing their wealth of experience (assuming they have it) with their direct team. The team is their key to success. Build the knowledge, share the knowledge, avoid the silos and encourage mastery amongst your team, and I have no doubt you’ll see lots of success. So why don’t people do it?


So how can you help to share the knowledge and tease out the learning?


1. Take ownership of learning within your business / group.

Organise some learning sessions (lunchtime learning, after-work learning, an internal blog, a wiki, a weekly training meeting).

Think about the Purpose, Audience and Context of your communication and choose channels and environments that complement that.

For example, if someone in your group is unbelievably shy then presentations might not be the right choice. Maybe an internal company blog or wiki would be better.

If someone is terrible at writing and refuses to share their work in written form, then maybe a lunchtime round-table session might work. Experiment and keep adapting.

2. Join an online community focused on learning and sharing

For example, The Software Testing Club has an active forum, friendly people and a whole wealth of groups available.

The Weeknight and Weekend Testers are very welcoming and friendly and have excellent testing sessions.

There are countless forums and social groups online who are all very welcoming. Find the one that you like the feel of and sign up.

3. Join a larger social network and become part of the bigger community

Try Twitter (follow the #testing #softwaretesting #qa hashtags for a steady stream of new information) or maybe check out the softwaretesting tag on WeFollow.

LinkedIn has some good groups too, but be careful: it has become the stomping ground of many “Best Practice Practitioners”.

4. Create a local user group / meetup

Create yourself a local user group or meetup.

The Software Testing Club have some meetups throughout the year and can help you get one off the ground.

There are the very excellent London Tester Gatherings (expanding north to Leeds also) and loads of other local meetups.

5. Slowly but surely explain and demonstrate the value of sharing and learning to those who are resistant

For example, run a training session with those who are open to sharing, on some tech or some technique that you can all go away and use. Then go away, use it and report back your findings.

Maybe you started doing some security testing and found a SQL Injection vulnerability, or you did some accessibility testing and found that your site doesn’t achieve even W3C single-A compliance.


There are many other ways to promote a culture of learning and to tease out information from those with a wealth of experience. It could be that they simply don’t realise how much knowledge they hold, or maybe they’ve just not found the right medium to communicate it in. Keep chipping away. Keep seeking new ways to share. Keep learning.


After all, someone knows something that you don’t know. And you know something they don’t know. Wouldn’t it be beautiful to bring that together and share?


What a lot of tests

One of the perennial (and misguided) questions most testers get asked is “Why did you miss that one?” or “Why did you not test for that?” or “Why did we get that live bug?”


It’s a question loaded with accusation and assumption: the assumption that testers somehow hold the key to “perfect” software. With large test combinations, complex operating environments, tight budgets and tight schedules, it’s increasingly important for a test engineer to perform some form of risk-based testing, which will no doubt leave gaps in coverage.


There are the retaliation responses like “why was it coded that way?”, or “if I had more time it would have been tested”, or “why was it designed that way?”. These responses, when used to pass the blame, typically reflect badly on you and often don’t help in moving forward to any sort of resolution. Sometimes comments like these may be mindful and truthful observations, but I suspect there is a more helpful and peaceable way of communicating them.


At the end of the day most (if not all) of our testing comes down to some sort of risk-based decision about what to test. And sometimes we get that wrong.


We base our decisions on countless factors, and I don’t pretend to know all of them. What I do know though is that the decision you made when testing was the decision you made. There’s nothing you can do about that decision after the event. You can learn from it, adapt, iterate or simply move on, but you can’t change it. Live issue or not, there’s no room for time travel. You made a decision, you tested what you thought was right (at that moment in time) and if you missed something then it’s fair to say it’s too late to change that decision.


You can certainly learn from the experience after you’ve done your testing; in fact it would be negligent not to. These issues can point to a problem with your testing and/or the choices made, but more often they point to a problem outside of your immediate testing control. They often point to a problem that the whole business needs to look at. A problem that might need people to take a step back, observe and reflect on. These could be budget, communication, expectations, hardware, software dependencies, skills, time pressures, other commercial issues and a whole host of other factors that affect your ability to do your testing.


Sure, testers make mistakes, but so too do the people who help inform the risk-based decisions, either through direct information or indirect factors like time, cost, motivational drain or any other factor that played a part in the tester’s decision.

Sometimes, it’s a straightforward mistake. Hands up, acknowledge it, assimilate and accommodate the feedback and move on. Other times it requires further analysis and a good look at how things are operating at a higher business level. So as a tester don’t be disheartened by issues that slip through your net, but don’t be held accountable for all issues either; it’s a team process with lots of contexts and factors involved.

Instead look for ways to learn and move forward both at a personal and business level that are right for your context. And if you’re still being held accountable and blamed and chastised then maybe it’s time to change your title from software tester to Quality Assurance Manager. (note and caveat: many testers already have a QA title, yet aren’t responsible for QA – it’s a complicated world we live in 🙂 )


Regular followers of this blog will know I like to work in pictures. So I cobbled together the attached diagram. I’m not sure it’s complete and I’m not even sure it represents my thinking fully but it felt right to put it out there and see what people think.


Is it a diagram of risk based decision making? Or a diagram of failed choices and tricky paths to tread? Or a diagram of dilemma and regret?

I’m not sure. It just felt like a good way to show the complexities and difficult choices testers and businesses face.



Mind the Map

I’ve been using mind maps on and off for many years now for a variety of uses. Sometimes just for ideas, sometimes for test ideas, sometimes for my learning.


Recently though I took a step back to observe the underlying purposes to see whether other audiences might gain value from the maps. I quickly realised that in some contexts mind maps actually make a very good communication medium. So here are two ways I’ve recently used mind maps for communicating to other team members. Both of these ways worked well in my context. They might work for you. Or they might not. I’m offering no guarantees.

Sharing of Test Ideas


During any planning meeting I usually whip open XMind on my laptop and get typing. As the conversations flow about requirements and stories I jot down ideas in XMind. The linking of “areas” or “streams” is done on the fly as I see obvious links emerging. It’s a brain dump of test ideas, thoughts, questions and data characteristics. After the meeting I’ll refactor the mind map and tighten up the “areas” or “streams”. This mind map will guide my testing. I won’t be bound by it, but it will lead me. In a sense it could be my “coverage” or “Test Plan”.


But I realised a while back that this mind map can serve another purpose too. It can serve a communication purpose. It’s perfect for the rest of the team. And so I’ve started sharing these mindmaps to give the programmers (and soon the project team) an idea of things I’m going to be looking for.


Some of the content in the mind maps is checks, some is tests and some is speculation or “edge” thoughts. Some will be vague or left open for future edits as the final architecture and design emerge. I will add many more ideas. I will remove others.


In this context I use them firstly to drop ideas into, secondly to communicate my test planning and thirdly to organise and help me manage my testing. They serve at least three functions from just one document. But they aren’t always the answer for every context.

Mapping of usability reviews


One thing I am always keen on doing is getting out and about to meet the customers, but more importantly, to meet the end users (if you know who they are). The more end users I can meet, in their own environment, the better. I’d like to embed myself further in the teams and do some serious ethnographic research, but obviously time issues and deadlines often prevent this. Even so, one or two hours with an end user in situ using your product can offer a wealth of insights. I use mind mapping as a way to track how our users use our application.


Whilst I’m observing the end user I scribble notes. Lots of them. I then re-enter them into digital format using XMind. It’s important for me to do this as soon as I can after the observation, whilst things are fresh in my mind. The reason for mind maps is that they offer a low-tech and high-visibility way to organise observations whilst also communicating to many audiences. The feedback gathered in the map can then be brought back to the office to be shared with the team.

Here’s the standard starting map I use to guide me (but again, not bind me) when observing end users.



The feedback from both uses has so far been great, which has inspired me to keep using mind maps in these ways.


It’s another way to communicate and reduce long feedback loops. I also find mind maps easy to read (with a reasonably small amount of data) and a good representation of thoughts and flow. They can be delved into in detail or glanced at. They can be stored, printed, hacked around, mashed up with other mediums and can, in themselves, contain other formats such as images and documents. For some of my contexts, they are the preferred mind-dumping tool of choice. I know I’m not alone and, in fact, I can thank Darren McMillan and Albert Gareev for getting me hooked on mind maps again.



We need some big ideas

I’ve come to realise recently that the way to fine tune an idea or theory is to share it with the wider testing community. I shared an idea I had at The Software Testing Club meetup in Oxford and I got lots of feedback. I have re-evaluated that idea since and fine tuned it. I shall present this idea again at some other peer meetup and again, get the feedback and then adapt and iterate.


It is scary the first time you share your ideas in a public way via any medium. And sure, there are people in the community who like to argue, to destroy ideas and to belittle others. These people exist in every community. But if you can hold your head high and ignore the naysayers, then there are a whole bunch of very insightful people offering constructive feedback. The feedback you get on your ideas from these people is invaluable. The scary part is over. You’ve shared your idea. Now to gather the feedback and fine-tune it.


It’s important to keep sharing your ideas as they grow so you can keep getting feedback. It’s a great way to develop a peer reviewed theory. Sadly too few people share their ideas before the “big launch”, meaning they leave the “big launch” feeling disappointed and destroyed. Unless of course the “big launch” is your ideal platform to get feedback.


The testing world is changing fast. Our testing domains are changing fast and to keep up and remain relevant in the software industry we need big ideas. We need to experiment and try things out. We need more sharing. We need more creativity.


But don’t expect it to be an easy ride. Putting theories and ideas out into the wild is asking for feedback, good and bad. Don’t be offended by the feedback. Ignore the Trolls. Don’t take it to heart. Don’t let it grind you down. If you believe in your idea then keep pushing and seek out the audience you need. Gather their feedback, refactor your idea and keep moving forward. Take the rough with the smooth.

The community needs it.

We need change. We need big ideas. We need you to share.

What could be wrong with the old way?

Here are my slides from my talk at Step_Auto in Bangalore last week.

When I wrote the presentation I was working to the topic of requirements in an agile environment so there is obviously an agile slant.

Yet the talk became more about testing in an old and new way. It became a talk about feedback loops. It became a talk about why we shouldn’t leave testing until the end. It became a talk about why we need to rethink how we test.

Not necessarily Waterfall versus Agile, but an old way of working and some new thinking. It’s about challenging the heavily scripted, metric heavy way of working and instead, focussing on adaptability, change, productivity but more importantly, the human mind. It’s about releasing ourselves from the bounds of metrics and tool restriction. It’s about choosing to communicate with the people that matter using the right medium. It’s about making our feedback loops as small as possible.

Of course, the slides on their own might not convey all the meaning, but that’s the point, they accompany the talk instead.

[Slides removed since publication]

Software Testing, The Future and Some Thoughts

The Future of Software Testing


I’ve been thinking recently about the future of software testing.


I’ve been wondering why some people are still testing the same way they did 10 years ago and why others are trying new ways, pushing boundaries or at least experimenting slightly. I’ve been wondering why some people are complaining about the way testing is happening, yet are doing nothing about it. I’ve been wondering why we aren’t, as a community, pooling resources more than we currently are. Why we aren’t open sourcing our learning or collaborating more on solving tricky testing problems.


The Business world is moving fast and we need to move with them. We need to remain relevant. We need to add value, not hinder delivery. 


This post isn’t about them and us. Or Agile versus Traditional. Or scripted versus Exploratory. It’s about testing. It’s about what challenges face testing now and in the next few years.


Before I list my thoughts though I will add the simple caveat that these are my *feelings* about what will happen. You may not feel the same. You may agree / disagree / not care either way. This list is not scientific. It’s not complete. Some or all of it may not come true or be relevant or even right. I can’t predict the future, but I can lay down some thoughts to start a conversation.


An Observation

One thing I have noticed over the last few years is that we seem incredibly reluctant in many corners of the Testing community to talk about the future of testing. We *appear* to be stuck in the past. We don’t want to discuss the future (or the present for some), or maybe we feel we don’t need to talk about it. 


Maybe those that care are busy getting on with it, forging the future and moving on; always pushing the boundaries, not looking back. Maybe some people don’t even see any challenges to their cosy existence and can quite comfortably meander on with retirement in mind. Maybe some simply don’t have the engagement in testing to care.

We seem reluctant to experiment, to day-dream, to try things out and to open our minds to what the future may hold. We close ourselves in our testing box and berate anyone who tries to change that. I wonder whether people are clinging to the past because they are worried that any changes we make might not work, and that we might end up no better off than we are now. We’ll never know unless we try though.


But I firmly believe that the world is changing fast around us. The way we interact, communicate, do business, use and create software is changing rapidly. Most apps are now web only, many are cloud hosted, some are purely mobile apps and almost all of them are being built and released in an increasingly fast moving market. The streets and our cities are rapidly becoming technology platforms in themselves. There is a bewildering array of devices to support, all with different quirks and nuances. For many people, we have no single demographically identifiable “end user” so our test combinations and usage patterns are in the gazillions. 


Yet we seem reluctant to ask where we fit in to this future. We seem reluctant to experiment with ideas, tools and techniques en masse. Very few people are having sensible conversations about how to push/pull testing forward to meet the demands of businesses. Fewer people still are even aware such change could be coming.


For such a big question as “how do we fit in to the future?” I think we need new ideas, we need to make new mistakes, do big experiments, create new tools, collaborate and discuss, but ultimately start to make changes. Big changes. Small changes. Some changes… and share our findings.


So what will face us in the future…?

User Experience, Accessibility and Adoption of Tech in Society
Pretty much everything is moving online or to the digital world (certainly in the markets I work in). The rates of online adoption vary depending on who you speak to, but there is no doubt our worlds are becoming focussed around communication via mobile networks and the Internet.


More and more products are building in social channels and wanting to utilise the benefits of multi channel communication (the channels and platforms themselves will come and go, but the concept is here to stay and evolve). It might not be now, or 2 years or even 5 years, but it will happen.


With more online adoption though we run the risk of marginalizing people who are unable to interact with the web in an efficient and effective way. Physical and cognitive disabilities, user interaction problems and mindset changes *could* make the adoption of products and applications a hurdle for many in society. 


Testers can play a huge role in identifying these pitfalls, championing good design and accessible sites/applications as well as figuring out how to test a bewildering array of devices and platforms, for a bewildering cross section of users in a bewildering series of contexts and uses.


Search and Discovery
Search and discovery for personal growth is the most neglected element of testing. I’m not talking about the kind of Search & Discovery where you use Google to find your answer, then copy it, paste it and pass it off as your own.


I’m talking about Search and Discovery as a way to learn, self improve and grow our skills and experience. This is a key skill neglected on almost every training course available to testers (there are thankfully a few exceptions). To an extent many rely heavily on certifications and our day jobs for information, learning and knowledge. Not so bad if your day job challenges you…and you like certifications.

Many testers haven’t heard of the thought leaders in the community, some testers are only at conferences because they’ve been told to be (this really surprised me) and many see testing as a lower grade career choice or a stepping stone to programming/development. 


Many testers are becoming so ingrained in big vendor tools that a conversation around testing can’t happen unless it’s about the tool.


Many testers have no awareness (or acknowledgement) of Exploratory Testing, Accessibility, Acceptance Test Driven Development, Security Testing or UX. Many don’t read trade publications or blogs, join local user groups or read books on testing. Some may never need to know more than they know now…but how do they know that?


Testing isn’t a career for many, it’s a job. For those of us serious about testing it represents a massive challenge. 


How do we convince people to do search & discovery? To self teach? To teach others? To mentor? To be a mentor? To build their awareness fields? 


How do we encourage people to make testing a career and not just a job? Or how do we separate the test heads and lifers from the people who just want to get paid? How do we further our craft without reliance on standardisation techniques and exams? How do we offer companies and markets the right people, for the right job with the right mindset?


I believe we can do more. I believe we can provide a cheap (or free) worldwide learning network to encourage growth and development for the mainstream. I believe we can offer those who want to learn a safe and welcoming environment to express their opinions and share knowledge with more like minded people. 


There are courses and learning resources available but they aren’t mainstream. Maybe it’s best we keep it that way; going mainstream might ruin what we have. Surely not?


There are some great examples of learning sources here 


Those that have followed me over the last few years know how strongly I feel about communication skills for testers. It’s my number 1 requirement for any tester joining my team. I firmly believe a tester needs to be able to articulate their views on testing. They need to be able to persuade, to be assertive, to be a good listener and to be able to communicate their findings in clear and concise ways.


Yet communication skills are one of those soft skills that many believe aren’t essential. Most people have over inflated views of their own abilities which results in zero time spent on improving their communication. Most people aren’t self aware about their communication in the first place. Being a good communicator is one of the most powerful skills you can learn.


I feel we need more people writing, presenting and talking about testing. The future of testing will need “test champions”; people who can talk about testing to a variety of audiences, using a variety of mediums, extolling the virtues of great testing.


Remember – being able to find a bug is great, but not so great if you can’t describe it, explain how you find bugs and persuade someone to take your views seriously. A good communicator will go far. Very far.


The future of testing needs us to break the stereotype of testers – one of the most persistent is the Checklister


We don’t need checklists to work. 
We can fit in to agile teams. 
We are flexible and dynamic with our test strategies. 
We don’t run tedious, boring, repetitive tests. 
We can do……


We are more than that. Aren’t we?

Our sense of community and our need for places to hang out with like minded people are essential to our future growth. It’s why we do so much work at The Software Testing Club to make people feel welcome. It’s essential for a culture of learning and sharing to grow.


The barriers to entry for any testing community are now very low. You don’t even have to take part if you don’t want to (simply listen to the conversations), but I’d encourage more people to join any testing community that “feels” right. Sharing knowledge and experience is a great way to learn, it’s also a great way to build ties with like minded people. It’s also a great way to have a global and collaborative conversation about the future of testing or your everyday testing problems, or just a chat. It’s a great way of belonging. But be careful, it’s also the stomping ground of many “Best Practice, my way or no way” testers.


I think the challenge for the future is encouraging more people to join these communities and to share their views and opinions in a constructive manner. 

We all know small iterations of work, with regular and rapid feedback, are a good thing. We can call this agile if you like (or common sense software development) or a flavour of iterative development, but Testers continually struggle to see how they fit in to this process. I see it as a logical move, but the future will hold many challenges for many testers who need to drop the “long haul planning” mindset for the “short term releasing” mindset.


Oh yeah, and coupled with this is the inevitable question about automation of tests. Agile is sure to continue to scare some people, fox others and liberate others. Some people love it. Some people hate it (even though they may not have tried it). The question is “how can companies remain competitive in changing markets unless they adopt some sort of shortened iterative development cycle? And if they must adopt shorter release schedules to market, how can testers (and the team as a whole) help them?”



Above are some of my thoughts on testing and the challenges we may face. What are your thoughts on the future of testing? 


More crowdsourcing? More certification? Even bigger divides in testing domains (heavy structured, highly scripted, agile, exploratory, schools of testing)? What about our Managers and leaders of the future? Do they need new skills?


Feel free to let me know.

We need more browser versions….

One of the big challenges for anyone working on web based projects is the age old challenge of browser compatibility. The number of browsers supported typically depends on a number of factors: your strategy as a business, your customer base (i.e. what browsers they are using) and your required use of the browser (do you only make use of newer features?)…plus many more factors that feed into which versions you support.


There are several tools out there to help you test against various versions of each browser, like Browsershots and Selenium Grid, but in reality many of these strategies have hurdles to overcome. Sometimes you can’t beat that human element 🙂


This problem for web testers supporting a wide variety of browsers is going to be made slightly worse with the announcement that all three of the major browsers (Internet Explorer, Firefox and Chrome) are now doing more frequent releases. So in a sense we could be doing a huge amount of compatibility testing across an even bigger set of browser versions (and growing). This news also affects other departments like support, sales and marketing. What do we support and to what extent?


This could mean we need even smarter ways to support and test against these growing versions, and ultimately find out what our customers are using or our product supports. It’s a challenge, but I’m sure we’ll overcome it.
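If you do have access to server logs or analytics, a crude first pass at “what are our customers actually using?” can be as simple as tallying browser families and major versions from user-agent strings. Here is a minimal sketch in Python; the regex patterns and the coverage helper are illustrative assumptions, not production-grade user-agent parsing (real UA strings are messy, so a dedicated parsing library is safer for anything serious):

```python
import re
from collections import Counter

# Very rough user-agent patterns. Real UA parsing needs a proper
# library; these are illustrative only. Order matters: Chrome UAs
# also mention Safari, so we take the first pattern that matches.
PATTERNS = [
    ("Chrome", re.compile(r"Chrome/(\d+)")),
    ("Firefox", re.compile(r"Firefox/(\d+)")),
    ("IE", re.compile(r"MSIE (\d+)")),
]


def tally_browsers(user_agents):
    """Count (browser, major version) pairs seen in a list of UA strings."""
    counts = Counter()
    for ua in user_agents:
        for name, pattern in PATTERNS:
            match = pattern.search(ua)
            if match:
                counts[(name, match.group(1))] += 1
                break  # first matching family wins
    return counts


def combos_covering(counts, threshold=0.9):
    """Smallest set of combos (most popular first) covering the
    given share of observed traffic."""
    if not counts:
        return []
    total = sum(counts.values())
    covered, chosen = 0, []
    for combo, n in counts.most_common():
        chosen.append(combo)
        covered += n
        if covered / total >= threshold:
            break
    return chosen
```

With counts in hand, `combos_covering(counts, 0.9)` gives a rough priority list: the smallest set of browser/version combos that covers, say, 90% of your observed traffic. That makes a reasonable starting point for deciding where the human compatibility testing effort goes first.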


Change Agent

I was re-reading my old blogs from a few years back and realised that my views have changed a lot since then.

In fact, when I look back several years, I realise just how dramatically I’ve shifted my thoughts and ideas.

The reasons for these changes were personal decisions (goals and ambitions), chance encounters (we could call it serendipity) and opportunities I sought out myself. Furthermore my blogging, writing and speaking at conferences has also widened my awareness field. I’m meeting more people and learning new tricks.

That’s not to say that these old ideas, experiences and ways of working were worthless or wrong. They were moments in time, they were valuable experience, they have made me who I am today and they were most likely the best choice at that time with the information I had available. I am grateful for all of my opportunities and the people I have met. I truly am.


But I will change. In fact, I must keep changing.
•    When I started out in Testing I believed everything was controlled by a master test plan, some predefined test cases and a standard set of expected measurements.
•    I worked for a forward thinking Test Manager who bought the whole team a copy of “Testing Computer Software” by Cem Kaner.
•    I was “encouraged” to sit my ISTQB foundation exam which opened my eyes to the standardisation of talented people.
•    I was given a great opportunity to lead teams.
•    I was encouraged to blog about Testing, attend conferences and write for trade magazines.
•    I started speaking at conferences.
•    I got to speak face to face with well known Testing practitioners (they are very approachable).
•    I got a massively lucky break to stumble across the Software Testing Club and finally end up working alongside Rosie and team.
•    I got free rein of my Testing to ensure delighted customers rather than numbers in a spreadsheet.
•    I learned more and more about Exploratory Testing.
•    I re-found my passion for social sciences like communication and ethnography which I have managed to bring to my testing world.
•    I was also lucky enough to find a reader base for this blog and my other writing through The Software Testing Club (a big thanks to you all).
•    But most important of all; my experience, work, friends, family, network, goals and chance encounters have all changed my mind and my views as I journeyed through my testing career.

This post is beginning to sound very self centred, but there is a point to all of this.

I strongly believe in the human capacity for change. I believe we shouldn’t write someone off without exploring what we can learn from each other. I believe we should never assume we know it all and never assume other people know less than we do. I think we need to appreciate that people can and will change, and that over time we must all change.

And if our insights and views on testing really haven’t changed in the last 10 years, then we probably need to look closer at why not.


The Impossible Job


I’ve been recruiting recently and whilst researching the typical job ads and roles available here in the UK for style and content I noticed a worrying trend. Well, to be honest I noticed several worrying trends but don’t get me started on why certification should NOT appear as the number 1 requirement for any tester…


The trend I noticed was the impossible job.


The impossible job requires you to predict the future.
“Test plans should be prepared in advance to cover all eventualities”
“Your Test Plan should explain in detail what will and will not be tested prior to testing, to ensure no defects escape to live”
“Your test cases will cover all user interactions”
“Your ability to release zero defect software will be exceptional”


The impossible job requires you to do exhaustive testing.
“No defects will be released to live”
“You must cover all features and functions in full depth”
“You will be expected to provide complete test coverage”
“Your ability to release zero defect software will be exceptional”

The impossible job requires you to be the Quality Control.
“The product quality will be exceptional because of your expert testing ability”
“The product quality issues will be solved before release through excellent testing”
“You will ensure that the product is of exceptional quality”
“Your ability to release zero defect software will be exceptional”

The impossible job requires you to be the master of everything.
“You will be an experienced Test Manager, expert Programmer, talented Performance Test Engineer and amazingly experienced electrician”
“You will be an experienced test lead, have lead large and small teams, have every certification possible, be well versed in C#, Java, Python, Ruby, and C++ whilst also being a talented performance tester and capable of seriously great exploratory testing. You will also be required to help out on support, do sales pitches, prepare legal paperwork, set up and configure the internal IT infrastructure and have some knowledge of agile principles”
“Your ability to release zero defect software will be exceptional”


Always be wary of the impossible Job.


(Disclaimer : The text in speech marks was paraphrased and didn’t exist in this form in the original job adverts – I may also have made some of it up)
(Disclaimer : From first hand experience, you don’t need to set impossible job roles to get the right people)
(Thought : Could the impossible job be the cause of misused metrics…or the effect of misused metrics…or both?)

60 Day Proof

Every time you write something down think about making that writing 60 day proof.


The 60 day proof concept goes something like this:


“If I were to re-read what I wrote in 60 days’ time, would I know what it meant and would it still make sense?”


I’ve spent the last few years fine tuning my own writing based around this simple rule with lots of success. But also with lots of gaps and holes and at times I simply forget. I first encountered this 60 day proof idea in Michael Bernstein’s thesis on scrapnotes (an excellent read for people who make lots of notes) in which a subject he was observing made all of their notes 60 day proof. I was struck with how simple, yet powerful, this concept could be when applied to my everyday work, from note taking to defect reporting. It embodies my promotion of Purpose, Audience and Context principles for communication.


For everything I wrote down I tried my hardest to use this principle. This covered emails, notes, messages and any other form of communication I could…and it made a massive difference.


Adopting this simple concept meant I struggled less with ambiguous notes and reports, had fewer of those “give me a minute to get my head around it” moments and it also made my communication even easier for other people to understand.

I still find notes in my pad that make little or no sense. Typically one time reference notes such as IP Addresses, temporary snippets of product information, blog ideas and key words that trigger thoughts in my head. But on the whole I’ve really started to take the care and time to write my notes with recall as a clear objective or purpose. It’s made a huge difference to my defect reporting as it’s made me think about making these defect reports as readable, understandable and future proof as possible.


Why not give it a go and see if it changes your writing? It may be you already write your notes, emails, reports and defect reports with clarity in mind, but I thought the same also. Only on inspecting my writing did I find plenty of areas for improvement. Writing is a process of continuous improvement and the 60 day proof concept, when considered constantly, has made a marked difference to my testing world. Let me know if you use a similar process or whether this concept might work for you too.

Disturb the peace

I subscribe to an excellent blog called “Manage Your Writing”. And today’s absolute gem of a post included a quote which hit home with me. The quote could almost have been written for testers.


In her book Leadership and the New Science, Margaret J. Wheatley wrote:

For a system to remain alive, for the universe to move onward, information must be continually generated. If there is nothing new, or if the information that exists merely confirms what is, then the result will be death. Isolated systems wind down and decay, victims of the law of entropy. The fuel of life is new information–novelty–ordered into new structures. We need to have information coursing through our systems, disturbing the peace, imbuing everything it touches with new life.


“We need to have information coursing through our systems, disturbing the peace, imbuing everything it touches with new life.” <– I love that line.


I’ve been accused of “disturbing the peace” of projects in the past. I’ve been accused of generating too much information, too many bugs, too many risks, too many…..yada yada. But could that be the essence of what testers do? We move projects forward by generating information, feedback and insights for the stakeholders? Isn’t it our job to seek out new information, to NOT merely confirm X is X, to push the boundaries of learning and information gathering? Yes, No, Maybe?






Communicating Testing using Visuals

Stumbled across this excellent little interview with Dave Gray from Communication Nation talking about communicating ideas using Visuals. Dave talks about organisations having a cool head, a warm heart and a firm hand.


Which roughly translates as Senior Management (the head) communicating using complicated diagrams, org charts and bar charts (abstracts); Supervisors and leads (the heart) communicating using more metaphoric, understandable visuals that motivate and inspire; and the workers (firm hands) themselves communicating using clear outlines and achievable steps (concrete actions).


So what has this got to do with testing? Well, not only is it a fascinating introduction to using Visuals to communicate, create and think, but it also goes some way to explaining our desire for metrics and how they are communicated. The more remote we are from the shop floor, the more often we prefer graphs and bar charts. That’s a sweeping statement but it does go some way to explaining why certain people work with (and expect) certain reports and communications.


A more pertinent point Dave makes though is that we are cultured (educated??) out of using simple drawings to communicate ideas. Our society expects slick, glossy, high flying diagrams and visuals. We become less confident using fun, interesting and down to earth visuals to explain things.


At work I use drawings a lot to record user scenarios and complex problems. There’s lots of talk about using mind maps to explore ideas and heat maps to point out areas of high bug counts, for example. And of course, the work we do at The Software Testing Club has never let go of the idea that fun, interesting and down to earth images in an interesting layout are a great way of communicating.


The video is incredibly insightful: