
Nordic Testing Days – The next big event?

I had the pleasure of attending Nordic Testing Days in Estonia last week. Two of my team, Dan Billing (@thetestdoctor) and Raji Bhamidipati (@peppytester), were speaking at the event, so I tagged along to support them. And they did an awesome job – well done.

Should you go to the conference next year?

Yes.
It was great.

In Raji’s session on note taking
Dan Billing mingling with the organisers

 

The Pros:

Estonia! – What a fabulous country to attend a conference in. The city of Tallinn is beautiful and the old town is a must-see. For some photos of Tallinn, visit my Google+ photos here

The organisation – it was a very well organised conference. The team behind it are a great laugh and worked really hard before and during the event – it went very smoothly indeed. They are passionate about delivering an outstanding conference and it shows in how good the conference was.

The speakers – what a good line-up. The event’s website lists them all, but the highlights for me were Stephen Janaway, Raji and Dan (obviously!), the Matt Heusser and Pete Walen keynote, and Gitte Ottosen.

What I liked about Nordic Testing Days was that it was a “Context Driven” focused event that celebrated and encouraged discussion about interesting ways of working; it felt very much community based. It didn’t feel like I’d stepped into a private club of context-driven testers unwilling to listen to other people’s contexts, which is how some events have started to feel recently.

The Venue – the venue was exceptional. It was clean, well staffed and very modern. Bizarrely enough the wifi even worked well (unheard of for most conferences) and the conference rooms were all close together keeping the crowds mingling.

The Entertainment and Conversations – There was a nice meal after the first main day of the conference, along with a bartender doing tricks and a comedian. The Old Town in Tallinn is amazing and lots of good conversations about testing were had in the bars on the main square.

For a short video of the bartender:

The Cons:

No conference is ever perfect but Nordic Testing Days was very close in my experience.

The Food – there didn’t seem to be enough food and it took ages to get what food there was. There was also very little choice for vegetarians, meaning some people were left eating just salad for two days.

The purchasing of tickets – It took me ages to buy the tickets. I couldn’t find a way to buy a ticket online with a credit card, so I had to work with invoices and money transfers. There needs to be a simple and easy way for someone to sign up and then pay for the event.

—–

Nordic Testing Days felt like the early years of Agile Testing Days where people were coming together to create an event based around enthusiasm and a new way of talking about testing.

I have a strong feeling Nordic Testing Days will keep their momentum as the people behind the event are driven, focused, ambitious, talented and intent on creating the best testing conference on the planet. If this year’s event is anything to go by, expect Nordic Testing Days to become an event that will be the highlight of your testing calendar.

See you at……Nordic Testing Days

I’m heading to Nordic Testing Days (NTD) in a few weeks.

It’s not a conference that normally pops up on my radar. But this year is kind of different. I’ve got two of my team speaking at NTD.
Both Dan Billing and Raji Bhamidipati are presenting, and I’m heading along to support them both (not that they need it) but also to learn from what looks like an interesting line-up of topics.

It’s funny how conferences can slip past you, yet on closer inspection turn out to be a great place to meet people and learn more about testing.

I’m very much looking forward to attending both Dan and Raji’s sessions and I have no doubt both of them will deliver insightful and informative presentations. No pressure 🙂

It’s still not too late to register for Nordic Testing Days….

 

Collective Noun for Testers?

The other day I tweeted a picture of our test team testing together as a group (for the first time) in our new test lab.


I asked “What do you call a collection of testers in a test lab?”

Some interesting responses below. Are we heading towards a collective noun for testers? No doubt someone’s done a collective noun post before 🙂

—–

Stephen Newton @SteveAN01 – a play group. Or an informative.

@paul_gerrard – A “Newsroom of testers”

AQtime @aqtimepro – An argument waiting to happen.

George Dinwiddie @gdinwiddie – an exploration?

Darren Hails @rw_testing – correctly located!

mubbashir @mubbashir – Breaking Bad 😉

James Salt @saltpy – happy 🙂

Mark Keats @mkeats – A crash of testers.

neill mccarthy @MccarthyNeill – a curious of…?

Amy Phillips @ItJustBroke – Dangerous

James Lyndsay @workroomprds – an expedition

—-

Thanks to all who responded.

Test Case Completion – A Story

I did a EuroSTAR webinar last week on shipping products and talked about how shipping products using test case metrics is bad news. It reminded me of this story which I share now.

It’s very common for testers and teams to rely on a number of test case metrics and measures to work out when they are done, or to plan the work, or to measure the progress.

This can be very misleading.

A common metric I used to rely on, and see many people rely on often, is the classic “Test Case Completion” metric.

This metric is often used for planning and for measuring completion but more scarily for working out when to release a product or service.

It goes like this:

Let’s say we have 10 testers. We also have 1000 test cases. With a bit of magic, and maybe some past history, we can predict that each tester should complete 10 test cases per day.

So, that gives us an elapsed time of 10 days to complete all of these test cases. Right?

So now we can plan.

“It’s going to take 10 days to complete our testing”

This happens on almost every single testing project. Test cases and test completion rates are often the guiding factor for schedules and release planning.

We can also use this metric to measure progress.

On day 1 we should have completed 100 test cases. Day 5 we should have done 500 test cases.

If we don’t see these numbers trending in this way (or close to it) then we can adjust. We could *make* people work more hours, maybe achieving 15 test cases per day.

We could add more testers to the mix. Or we could even just not run some test cases.
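To make the arithmetic concrete, here’s a minimal sketch in Python of the plan this metric produces. The numbers are the made-up ones from the example above, not real project data:

```python
# A naive "test case completion" plan, using the made-up numbers from the
# example above (10 testers, 1000 test cases, 10 cases per tester per day).

total_test_cases = 1000
testers = 10
cases_per_tester_per_day = 10  # the "bit of magic" / past-history figure

daily_capacity = testers * cases_per_tester_per_day   # 100 test cases per day
planned_days = total_test_cases // daily_capacity     # 10 days

for day in range(1, planned_days + 1):
    expected_complete = day * daily_capacity
    print(f"Day {day}: expect {expected_complete} test cases complete")

# The tidy numbers rely on every test case taking the same effort, every tester
# performing identically every day, and no bugs interrupting the run.
```

The plan looks precise precisely because it ignores everything that makes testing messy – which is the problem described next.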

The Problem

There’s a very obvious problem with this approach. In fact, there are lots of problems, yet that doesn’t stop this being the de facto way of planning testing.

One problem is that not all test cases are created equal. Some will take hours to run, some maybe even days and some a few minutes.

Another problem is that there is an assumption that the only testing that needs to be done is contained within the test case.

Another problem is that there is an assumption that testers are like robots who will perform the same each and every day. We all have bad days.

There’s also an assumption that the tester won’t find any problems and hence delay the running of a test case in order to investigate a bug.

A Story

A company once used to run a giant regression phase where all test cases would be run again on the “final” build.

They would print out all 3000+ test cases and stack them on a giant table in the office.

The expectation was that each tester would complete 10 test cases per day – this would allow them to hit the magic release marker of 100% tests run.

Here’s what happened.

At about 5:30am on the day of the regression a group of testers would arrive at the office and rifle through the test cases.

They would pick the really easy ones; the ones that took just a few minutes to run.

They would pick about 50% more than they had to complete.

They did this for two reasons.

Number 1 – they figured that they would be asked to work longer hours to complete more tests – so they already had a stash of easy ones to do whilst eating pizza.

Number 2 – even if they didn’t get asked to stay late they could excel by completing more tests than other people running up to the last few days of the phase.


A second group of testers would come in at 8 am and be left with the really hard test cases. Some of these test cases would require a day’s worth of setup and config just to run.

Each day the testers would mark how many tests they had done that day on a giant matrix stuck to a wall behind the manager. The first group would mark in the number 10. The second group would be lucky to register 2 or 3.

Some of the first group would surf the web in their spare time, some would help the other testers, some would do exploration, some would go home early.

All of the first group would game the system for a variety of reasons. Yet all of them would be doing what was asked of them according to a simple metric like test case completion.

Not so strangely, the second group of testers would simply not run all of the steps of the test cases (or even mark entire test cases as done without running the checks) in order to try and run 10 per day. When faced with the reality of a one-day environment build to check one thing… what would you do?

The project shipped late. And it returned to be worked on further.

The scary thing is that this behaviour happens all the time.

When simple measures like Test Case Completion are used to measure progress or to plan projects you’re already skewing the process and opening it up for gaming, abuse and a false start.

What’s the alternative?

I’ve no doubt there are many alternatives to this problem and no system or measure is exempt from being skewed, gamed or misused.

My suggestion would be to move your testing nearer the code by pushing for more behaviour-driven testing and unit testing, which drives out the design, the code and some of the behavioural tests, and then to deliver into your test environments as soon as possible. If this process works it means you no longer need test cases (or as many of them) as the checking is automated, and therefore you don’t need test case completion metrics. Your automated checking becomes a set of results and it frees you up to explore the product and find the things the test cases (or checks) would never have caught… in other words, freeing you up to do testing.
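As a rough illustration of what I mean by automated checks living near the code, here’s a minimal sketch written in a pytest style. The `Basket` class and its rules are entirely made up for the example – the point is that the expected behaviour is captured as checks that run on every build, rather than as test cases to be counted:

```python
import pytest

# A hypothetical, deliberately tiny piece of product code.
class Basket:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        if price < 0:
            raise ValueError("price cannot be negative")
        self._items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self._items)


# Behaviour captured as automated checks, not as counted test cases.
def test_empty_basket_totals_zero():
    assert Basket().total == 0

def test_adding_items_updates_the_total():
    basket = Basket()
    basket.add("book", 10)
    basket.add("pen", 2)
    assert basket.total == 12

def test_negative_prices_are_rejected():
    with pytest.raises(ValueError):
        Basket().add("broken", -1)
```

Because checks like these run automatically, the output is a set of pass/fail results rather than a completion percentage, and the testers’ time is freed up for exploration.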

It’s obviously not a simple change (I know I’ve been there) but it is possible and small steps towards these sorts of approaches are entirely possible in almost any context. The real question comes down to how much you’re willing to experiment with testing, reporting and project planning.

No matter what your approach or your context it pays to be aware of the pitfalls of relying on test case completion metrics and to spot the wrong behaviour it drives. At least if you spot it, you may be able to make some changes and encourage the right behaviour by changing your process, or measuring something different.

Michael Bolton blogged about shipping projects and test cases this week also.

Moving to Weekly Releases – Webinar

On 25th March I’ll be doing a webinar for EuroSTAR.


The webinar is entitled “Moving to weekly releases” and instead of repeating the presentation I delivered at EuroSTAR last year I thought I would open this topic up early for questions – that way I can tailor the webinar to try and answer some typical questions around this topic.

So here’s the basic outline I’ll cover:

  • We took our deployment lifecycle from 8 months to 1 week in about 1 year.
  • I stopped using any sort of test completion metric as a marker for done/not done or coverage (pictured)
  • We made testing the centre of everything.
  • We adopted lots of automation.
  • We started using metrics to help us test.
  • We put out lots of fires that most people run away from. (We went around a few also, and bought some fireproof suits.)

It was a hard journey and many of us still wear the battle scars.

I’ll be walking through some of the reasons why this change was possible, some of the changes we made and some of the challenges we faced.

If you’d like me to focus on specific areas or have specific questions I can cover off then please do get in touch via Twitter @rob_lambert or email me : questions@thesocialtester.co.uk


Hackathon – How can testers take part?

I often get asked how Hackathons work for the testers in the team.

As it happens, not all of the test team can code, so it’s natural to wonder what they do during the Hackathon (formerly ShipIt days). If you’re interested in what a ShipIt day is then check out this post I wrote about ShipIt days for the NewVoiceMedia blog.

So what do testers do during a Hackathon?

Well, this week during the Hackathon we’ve got a mix of activities being done by our testers.

As you can see Dan (@thetestdoctor) and Jenny are doing some paired Exploratory Testing. Check out the Ministry of Testing signage, and yes, a Goonies mug.
Dan Billing and Jenny testing

They’re testing some new functionality being built by Lyndsay P (@Lyndsp).

We’ve got four of the team doing some of their on-going learning via Coursera. We have a new addition to our technical test team and he’s busy learning the ins and outs of our stack and our test process.

Some of our other team are testing project work still and not working on the Hackathon stuff. Others still are hacking around with their own pet projects or learning to code. There’s loads of stuff going on and loads of different levels of involvement in the Hackathon.

The nature of a Hackathon makes it harder for testers to join in, although we are exploring ways to make it more engaging for the testers.

Do you have any ideas on how testers can become more engaged in a Hackathon?

You have to believe in change for it to happen

After delivering my talk at EuroSTAR last year about weekly releases I got lots of positive responses, but also some intense negativity and skepticism about the subject.

It wasn’t just “wow – that must be so tricky” it was more along the lines of “you are full of ****” and “what a load of ************ [insert your own expletive]”.

The responses were mostly ones of sheer skepticism combined with aggression. A few people believed it wasn’t possible to release often and that appeared to stir some frustration.

Yet that’s the interesting thing. If you can never picture change happening, or you don’t believe it’s ever possible to move to faster, more frequent releases, then the chances are it won’t ever happen.

“It would never work here”

“We could never do that”

“We don’t have the right people”

Three years ago we were releasing yearly and the quality could have done with improving. We could have sat down and looked around and said it’s not possible. But we didn’t.

We believed there was a better way of doing things and we tried.

If you never believe that the development and testing at your company could be better then it simply won’t improve.

You (or someone) have to believe things can be different.

It’s my belief that testing standards and best practices are the result of this closed thinking; they assume things don’t change.

The reality is technology is changing all the time, the companies we work for are always changing and the markets we sell in to are always changing.

So couldn’t your approach to development change also? Couldn’t you look for new ways of doing things? Couldn’t you open your mind to a future that’s different?

The product you test can make or break you as a tester

No product domain is better than any other, but we each have preferences and favourite products to test. We each prefer to use, test and work with some types of products over others.

When I was testing products I didn’t enjoy I wanted to leave software testing. Period. I wanted out. Software testing sucked.

Sure, the environments/methodology these products were built in didn’t help, but ultimately I didn’t have an affinity to the product and this made me think that the trade of software testing was at fault.

I didn’t understand the product. Actually – let’s be honest, I didn’t *want* to understand it – it held no interest for me. I wasn’t interested. It didn’t make me feel curious about how it worked.

I actually thought I was rubbish because of this.

I realized over time, though, that it wasn’t me. I was OK. I was a naturally curious person, just not about some products. It was the product – it wasn’t me.

I’m not curious about things that don’t interest me. However, when I find a product I like and a product I understand I flourish. I become a good tester. I am curious. I am interested. I am engaged. I want the product to succeed. I become a product evangelist. I want to know how it works, why it works and how useful it can be.

I’ve met so many testers over the years that hate testing. Or at least they think they do.

When I ask these testers what their favorite product is, they always have an answer. When I ask how they use the product and how the product works, they can always answer me. When I ask what it must be like to test a product like this, they get wild-eyed and passionate about that testing job. And when they notice this enthusiasm in themselves they soon realize that it’s not testing they don’t enjoy – it’s the product that they are testing.

I don’t have an interest in testing transactional banking, insurance products or defense products, but some people will thrive testing these. I prefer cloud/hosted/web services that have communication or social interaction at their heart; some people would loathe working on those products.

So if you’re feeling down about testing and you’re finding you’ve lost your testing mojo it might just be related to the product you’re testing.

It might not be you. It might not be testing. It might just be the way you feel about the product you’re testing.

Shine a light

Most software testing, in my experience, is rushed and done under time pressure, typically driven by a desire to meet a metric or measure of questionable value. Yet to rush through the testing of a feature or product is often to miss the changing nature of that product (or your own changing nature).

If you are testing a capability of a product and look at it in one way (say with the mindset of trying to break it) you may find several issues. If you shine a light on the product in another way (say with the mindset of trying to use the product really quickly) you may spot other issues.

Also, if you test the product on one day you may miss something you may spot on another day. You change (mood, energy, focus) all the time, so expect this to affect your testing.

The product likely hasn’t changed, but the way you see it now has.

Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.

How much time you will need to re-shine your light in different ways will mostly be unknown, but try to avoid being so laser-focused on completing a test for coverage’s sake, or on reaching the end goal of X tests run per day, that you miss the serendipity that sometimes comes from simply stopping, observing and re-focusing your attention.

A New Year – 2014 Goals

I’m officially out of my self-imposed social media hibernation. I tend to creep away from my online presence and catch up with my family over December and January. It’s time to venture out and start interacting again.

I have a load of new and exciting goals planned for this year. I’ve spent the last week doing my planning and plotting for this year, as well as a review of last year’s goals.

I normally hint at a few of my goals via this blog and on Twitter, but this time around I’m planning on making my goals more visible. It might make me commit to them further 🙂

So here are a few things I’ll be doing this year that are relevant to those interested in software testing.

Goal 1 – Actually post stuff to my blog

Right now I have 233 blog posts in draft.

Some near completion, some with barely a few lines of ideas.

I suspect some of them will get deleted; they will be either too cutting edge, too ranty or too boring. So I guess I’ll have about 150 to post out on my blog by the time I’ve culled them (and allowing for new ideas to emerge).

There is a theme this year.

The theme will be “hiring testers into a rapid-release development team”. I will explore what it takes to find and hire good testers, but I will also explore some ideas around cutting down the silos between functional roles and creating a more holistic, fast-paced development team.

Goal 2 – Release “Idle Thoughts On Test Management”

My first book, Remaining Relevant and Employable, has done pretty well despite little promotion.

I actually got sidetracked writing Remaining Relevant when I should have been writing Idle Thoughts.

I’ve got the basic chapter headings and quite a lot of content, but I won’t be making it public for a few months yet. I will be using LeanPub again to publish this second book. Expect this book to be minimalist and cut down. I want it to be succinct and to the point.

Idle Thoughts is basically a collection of short stories and essays from my time as a Test and Development Manager. Fulfilling many roles through my career (technical author, tester, support engineer, manager, scrum master, agile coach/consultant, etc) has allowed me to blend all of this experience together. I hope to be able to offer some interesting and unique views about test management. Consider it part observation and part experience report.

You can see some of the research content I’m collecting over at my Idle Thoughts Postachio blog. The posts on that blog will arrive in flurries, as will chapters for this book on to LeanPub.

Here are the chapters I’m going to be writing about. I’d be keen to get some feedback as to what else you may want to read about. It’s not complete yet, so expect the following to change somewhat. (Bold and underlined are section headers in the book)

Communication

        • Purpose, Audience and Context – the basics of communication
        • Communicate 10 x more than you currently do
        • Time your communication right
        • Communication doesn’t happen through a process tool
        • Primary, secondary, or made up information source?
        • Active listening for test managers
        • It’s active content, not static documents
        • Don’t skip face-to-face communication
        • Private blogging for sharing of ideas
        • Don’t sit on information, it won’t make the team richer
        • How to run a good meeting
        • Broadcast important stuff, but only if you need to
        • Selling testing
        • Build a communication plan
        • DUJWC (don’t use jargon when communicating)

Productivity and Learning

        • Be Quick and Nimble
        • Environment efficiency
        • Create a practice plan
        • Follow your intuition
        • Busy does not mean productive
        • Where does your day go?
        • Don’t take on too much
        • Stop people burning out
        • Learners will inherit the world

Life

        • We can’t all do what other people can do
        • We are all great. But realise your limits.
        • No job will last forever. Projects are the future.
        • It’s not good. It’s not bad. It just is.
        • Empathy

Process

        • Lean Testing – A myth
        • What a lot of tests, but which one shall I run?
        • Don’t worry about the solution. Work out what the problem is first.
        • Draw a frame and place your testing inside it (but don’t be bound by it)
        • Don’t adopt every technique without pause for thought (sometimes just a few techniques are more valuable)
        • Copy what other people do, where relevant
        • Don’t master an approach or technique just to say you’ve mastered it.
        • The team are testing, but is the product getting better?
          • A difference between intent and outcome
        • You’ll never know whether a team will work until you put them together
        • Teams are not perfect.
        • Look for times when your testing doesn’t work
        • Do you have a vision and purpose, or are you just getting through the day
        • Make decisions. Decision makers are important.
        • With a wider awareness you will be surprised less often
        • Apply constraints – they breed creativity
        • Standards versus trial and error
        • Norms are for breaking. Sometimes.
        • Allow time for innovation
        • People will game the system (if they want to)
        • Leading edge metrics
        • Improving the process is one of your main goals
        • Always go and see for yourself. Or trust your proxy.
        • Test artifacts are an output, not a strategic direction or end goal
        • A test plan is not your testing. A test case is not your testing. The testing being done is your testing.
        • Fix the process first – then bring in technology
        • Don’t obsess over tools
        • Relationships are your key to success. So be friendly.
        • Values versus principles

What is the job of a test manager

        • To hire the right people
        • To empower people to achieve the business goals
        • To develop people to their potential
        • To make decisions
        • To encourage the right behaviour
        • To reduce costs in the right place, but not at the expense of delivery
        • To be communicated through
        • To encourage a sense of learning in the team
        • To help people through tough times

Note taking

        • The importance of good note taking
        • Quick capture
        • Information scraps
        • Types of notes
        • Note taking styles
          • Outlining
          • Mind mapping
        • Digital versus Analog
        • 60 Days proof
        • To Do lists
        • Kanban
          • Visualise your work
          • Work in progress
        • Examples of note taking and capture

Managing a test budget

        • Spend your employers money wisely
        • Is spending money going to solve your problem?
        • Lack of money often leads to innovation

Idle Thoughts

        • Don’t accept limitations
        • We are too young and we are too early in our careers to standardize us.
          • Don’t apply limitations to yourself, your team or even worse, the community.
        • Stop casting yourself as a victim
          • You are just as important as anyone else on the team. (i.e. There is nothing wrong with you)
        • Experience as much as possible
        • End goals are important, but so too is the serendipity and experience of the journey
          • Don’t always be laser like focused.
          • Take the time to look around
          • Make time for people
          • Allow conversations to meander
          • You cannot organise an accident
        • You will not please everyone
        • You don’t need permission to do Exploratory Testing
        • Communities are where the future lies – not rules and edicts
        • Be skeptical.
          • Is it always true? Is there ever a time when it is not true?
        • Make time for thinking.
        • Stop trying to measure the person.
        • Grow some thick skin. Very thick skin.

Goal 3 – Deliver an awesome presentation…somewhere.

As usual I am hoping to speak at a conference this year. Details to follow.

With the above writing plans I won’t be speaking or attending many other events. I am taking the entire NVM test team to TestBash, whoot!, but other than that I doubt I will be at many events this year.

Goal 4 – Continue to build an amazing development team at NVM

I’ve obviously got some great work goals to achieve. I will continue to improve the process, grow the team and deliver the best service we can for our customers. I won’t be sharing my work goals here though 🙂

Goal 5 – Start mentoring someone

I’ve been mentoring people on and off for many years, but I might take a step to make this a permanent goal, so that each year I can help mentor one person in their career.

 

I’ve also got some very personal goals which I won’t be airing here either 🙂

Exciting year ahead.

Do you plan goals relating to your career?

If so, did you want to share them in the comments section?

Creating a test lab

This week sees the development team moving into new offices. In these new offices is our newly created test lab – a room for testers to hang out and also a place for us to keep our supported devices.

There’s loads of work to do to get the test lab functioning and adding value for us.

It’s been led by Raji (Twitter – @peppytester) and Andrew (Twitter – @coyletester).

We’re in a good place with our test environments as they are hosted in the cloud, just like our great call centre product, so we don’t have to store or house physical servers here in the office test lab.

However, we need to make phone calls and connect via the web, hence the test lab – a place to do this, but also a place for the team to get together to do our regular regression testing.

We have the capacity to do our testing with these devices already, but they are localised around individual testers, with devices often locked away in a drawer somewhere. Not good for the collaboration and sharing we need to do now as we grow much, much bigger. The lab is a way of getting a centralised set of kit, a place to test and a place for us to come together to talk about testing, our test approaches and our supported devices.

Test Lab

Here are some “starter for ten” goals/requirements we’ve identified for the lab. Obviously the specific details are not listed here but they give you a flavor of what we’re aiming for.

  1. All supported devices and phone carriers must be available in the lab. (i.e. if we support it – we need to test it)
  2. All devices should be charged and ready to use. (i.e. there’s no point needing a device/phone and then having to wait to use it because it’s got a flat battery)
  3. All devices should have a shared folder or other sharing facility on them (an Evernote notebook is one example of how we can share screenshots, findings and notes).
  4. The devices must not be plugged in and charging 24/7 unless they are being used. (Although the devices need to be available, we don’t want to destroy the planet by charging them all day every day when they are not being used. We’ll experiment with timers to give them a burst of juice to keep them functioning and tweak it to get a decent balance.)
  5. At busy periods it must be possible to book out devices, but this booking system should be a last resort and should be as friction-free as possible – a casual approach to sharing the devices should be sought first before booking forms and other waste are introduced. (i.e. it takes time to fill out forms, book things and deal with the “paperwork”. What if you just want ten minutes of usage? It’s a massive overhead to book it out. Add this overhead if needed but let’s see how it goes first)
  6. All devices will be connected to the network, clearly labelled (resolution, network, IP address etc) and available in the right place (i.e. people put them back where they come from)
  7. The phones should be included in our generic “default” call plans meaning all testers know where to get extra devices and phones from (i.e. when needed people should be able to easily add these devices to their accounts and gain access to them)
  8. All devices should have quick, one step log in and access (i.e. unless security is compromised let’s make it as frictionless as possible to use these devices and phones)
  9. A monitor showing NewRelic (plus other test related data) should be available in the lab. (i.e. data informed testing and feedback from your testing are key to the right focus)
  10. All default call plans and test accounts should be well documented and easy to follow meaning new starters can rapidly learn how to get on clouds quickly. (i.e. make it quick to get testing in the lab)
  11. There should be a number of network ports and power sockets available to house groups of testers doing testing. (i.e. when a group of testers (what’s the collective term for that???) get together is there enough power and cabling?)
  12. All test related books should be relocated to the test lab (i.e. when we need inspiration it should be there)
  13. Elisabeth Hendrickson’s cheat sheet will be available – ideally blown up and made to look freaking awesome. (i.e. ideas for testing should look good as well as be functional)
  14. The test lab should remain looking neat, tidy and welcoming (i.e. clean, simple, tidy and functional environments help to clear the mind for focus on productive stuff)

And there we have it. A starter for ten. But something to head towards.

I’ll keep you posted, as I’m sure Raji and Andrew will on Twitter about how we get on.

What do you all have in your test labs?

 

You’re brand new

“I’m new to testing – where do I start?”

I get asked the above question A LOT. It’s a very common question for those who are brand new to testing, those who are shifting from another business function and those who are returning to testing after many years away.

I repeat roughly the same answers quite often so I’m writing a blog post to point people at.

Of course, a little self-promotion – if you do nothing else then buy my Remaining Relevant book – it is packed with ideas on how to learn, how to network and hints and tips on how to rock an interview 🙂 You could consider it a much longer form version of this blog post.

Join the Software Testing Club. Period.

Check out the AST training courses and learn as much as possible.

Follow the big bad list of test bloggers being curated by the Software Testing Club – there are a lot of them – mostly good – pick and choose carefully though.

Download and read my Blazingly Simple Guide To Web Testing – all of the hints and tips were created from bugs I’ve found in the past.

Join Twitter and follow the #softwaretesting and #testing hashtags – find interesting people and follow them.

As part of the above question I also often get asked what the day-to-day activities of a tester are.

It’s tricky to say what the day-to-day activities of a tester commonly are as the role is so incredibly varied. You might be following pre-defined scripts and checking that the software matches the test case.

You might be exploring the product to discover what it does. You might be analyzing specs, writing user stories, writing automated tests, performance testing, security testing, doing customer visits, studying usability and a whole host of other stuff. You might do some of these things during one working week at some companies, you might do nothing but following scripts at others.

The industry is so varied that I would suggest, if you can, that you take the time to carefully choose the testing role you want. I would always suggest seeking out companies that put exploration and learning above scripted testing, but not everyone has the luxury of holding out for such companies.

Some companies will insist on a certification. It’s your choice as to whether you want to get one. I’m not a fan – but I’m a realist – some companies require them – and if you need a job then go for it. But take the certification for what it is – a certification that you sat the course and got a favorable result. It is NOT a marker of excellence and shouldn’t be your single point of learning.

If you follow some of the above you’ll encounter people and communities that will help you find the resources you need, the people you need to know and hopefully the sources that can help you skill up in the right way. You might even land a job through your networks and community.

Your browser is used by less than 2% of our users

I’m a big fan of the mind mapping tool Mindmup and logged in today using the Opera browser.

Here’s what I saw:

Image of the Browser support message

 

This is an excellent approach to communicating about the limitations and restrictions around testing – you wouldn’t expect any less from Gojko (one of the guys behind mindmup).

It’s a great way of setting expectations but without limiting the choices made by the end users. I can still choose to continue using Opera, or I can switch to one of the other stated browsers. I have a choice – but I also know it might not perform as the developers expected.

For many companies it’s often tricky just saying “no” to supporting the mass of different browsers now available, so they try to test them all. Using web analytics it’s now possible for many web companies to work out which browsers their customers actually use (and how many people use each one), and then test against those.
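As a rough sketch of how that decision might be made, here’s an illustrative Python snippet that turns analytics counts into usage share and flags anything below a support threshold. The visit numbers and the 2% cut-off are invented for the example, not Mindmup’s real figures:

```python
# Illustrative only: decide which browsers to test and support based on usage share.
visits_by_browser = {
    "Chrome": 61200,
    "Firefox": 18400,
    "Safari": 12100,
    "Internet Explorer": 6800,
    "Opera": 1500,
}

SUPPORT_THRESHOLD = 0.02  # support (and test) anything above 2% of traffic

total_visits = sum(visits_by_browser.values())

for browser, visits in sorted(visits_by_browser.items(), key=lambda kv: -kv[1]):
    share = visits / total_visits
    status = "test and support" if share >= SUPPORT_THRESHOLD else "warn the user"
    print(f"{browser:18} {share:6.1%}  {status}")
```

Anything that falls below the threshold gets a message like the one above, rather than a promise of full support.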

Nice approach.

 

Getting Hired – At Conferences

One of the things that I have observed from a number of testing conferences is that none of them have any sustained focus on hiring or getting hired *.

There have been one or two sessions about the topic of hiring but nothing sustained.

The occasional tracks that I have seen have been mostly focused around the hiring strategies of big corporates where bums on seats is sometimes more important than cultural team fit.

Most testers don’t know how to get hired – I wrote a book to help bridge that gap. Those that do know how to get hired are truly in the minority and appear, at least on the surface, to be overall better testers. Mostly this is not true – they are good, but they are often no better at testing than others, it’s just they are much better at getting hired. Getting hired is a skill.

Hiring and getting hired is a vast topic and one which is fraught with contextual challenges, but I believe that a dedicated set of talks from hiring managers from a wide variety of contexts, and maybe some sessions and tutorials on writing CVs, interviewing etc would go down well at most testing conferences. It’s great being good at testing but how do you then go on and get hired…

There are supporting topics such as social profiles, writing clear CVs, networking, self education and interpersonal communication that might also make interesting tracks. Or maybe they wouldn’t. Maybe people go to testing conferences to learn about testing and not the other stuff that comes with our working world…

What are your thoughts?

* The conferences that I have been to