Getting Hired – At Conferences

One of the things that I have observed from a number of testing conferences is that none of them have any sustained focus on hiring or getting hired *.

There have been one or two sessions about the topic of hiring but nothing sustained.

The occasional tracks that I have seen have been mostly focused on the hiring strategies of big corporates, where getting bums on seats is sometimes more important than cultural team fit.

Most testers don’t know how to get hired – I wrote a book to help bridge that gap. Those who do know how to get hired are truly in the minority and appear, at least on the surface, to be better testers overall. Mostly this is not true – they are good, but they are often no better at testing than others; they are just much better at getting hired. Getting hired is a skill.

Hiring and getting hired is a vast topic and one which is fraught with contextual challenges, but I believe that a dedicated set of talks from hiring managers from a wide variety of contexts, and maybe some sessions and tutorials on writing CVs, interviewing and so on, would go down well at most testing conferences. It’s great being good at testing, but how do you then go on and get hired…

There are supporting topics such as social profiles, writing clear CVs, networking, self-education and interpersonal communication that might also make interesting tracks. Or maybe they wouldn’t. Maybe people go to testing conferences to learn about testing and not the other stuff that comes with our working world…

What are your thoughts?

* The conferences that I have been to

The rise of the technical tester, and other thoughts from the UKTMF

The UKTMF this week was very good. Lots of interesting discussions were had.

Being at the event triggered some thoughts about testing that I have been meaning to air for some time now.

Mastered Functional Testing

There were strong arguments by a few influential people that we, as testers, have mastered functional testing.

I don’t agree. I think we’ve got lots to learn.

I don’t believe we can ever master something as everything is forever changing; software changes, contexts change, we change.

There’s also no single source of right (despite what some certification boards may tell you), which means that we’ll never truly know whether we have ever mastered functional testing.

I think we have a long way to go as testers before we can say we’ve got close to mastering functional testing.

Testing is only done by testers

This is a common misconception we have when talking about testing; that testing is done just by testers.

We are quick to put the tester at the centre of the development lifecycle when in reality we are just part of a wider team. The team will typically do testing of many different types.

The discussion at UKTMF centred around technical testers and the assumption was that technical testing was done by technical testers. Not so for many companies. We need to separate out the role of tester and the act of testing. Developers test, customers test, product owners test, testers test. Not all testing needs to be done by testers. Therefore not all technical testing needs to be done by technical testers.

All products must have uber-quality

Many statements and discussions centred around product quality and how it must always be amazing. This is not true.

As much as we would like to think all companies need testers (is this us putting testers at the heart of the process?) and need to create high quality products, it’s simply not true.

There are many companies producing software with no “testers” (although they do testing) and there are many products on the market (doing well) that aren’t (or weren’t) great to use.

Products, like humans, go through life stages and quality isn’t always important at all life stages of a product. Sometimes market share or a proof of concept is more important than a quality product.

Early adopters of Twitter will know the “Fail Whale”. Years ago about 3 in 6 tweets I posted would result in a Fail Whale – yet Twitter itself is now thriving, and I’m still using it. I’m happy to live with poor quality (for a period of time) if the problem the product is solving is big enough.

We often lose sight of the context the product is used in when talking about testers and testing. We are not the centre of the Universe and we are not always required on all products.

Testers are not generic

Another point I observed was that many people spoke about testers as though they are resources. Just “things” that we can move around and assign to whatever we want and swap as we please.

It was refreshing to hear a few people telling stories about the best teams being made up of a mixed bag of people from many different backgrounds, but they (we) were in the minority. A good team, in my experience, has people with very different backgrounds and approaches. We are not carbon copies of each other.

Testing (Or more to the point, social science) is not technical

Many people seem to be quick to replace the word “technical” with “being able to write code”.

It’s almost as though “programmer” is a synonym of “technical”.

So specialists in exploratory testing, requirements analysis, negotiation, marketing, technical writing (there’s a clue in the title) and design are not technical… really?

This led to a great comment from Richard Neeve, who suggested that a technical tester (and testers in general) could be described as a scientific tester. I like the thinking behind this and I need to ponder it further.

The whole “technical tester” discussion is interesting as it often takes hours just to agree on a general definition of exactly what a technical tester is – something Stefan Zivanovic facilitated very well.

And finally I thought it prudent to mention the views I aired at the UKTMF on why “technical tester” is becoming increasingly common and prevalent in our industry.

I believe that the clued up testers are the ones who realise the future is bleak for those who simply point and click. A computer is replacing this type of testing.

The switched-on testers are learning how to code, specialising in areas like accessibility, training and coaching, UX, performance, security, big data and privacy, or they are expanding their remit into scrum master, support, management, operations and customer facing roles. They are evolving themselves to remain relevant to the marketplace and to make best use of their skills and abilities. Some of these people are differentiating themselves by labelling themselves as technical. I believe this is why we are seeing a rise of the technical tester.

And just as a side note, not all of those who call themselves a Technical Tester are all that technical (in either coding or other technical skills). As with all job labels there are those who warrant it, and those who are over-inflating themselves.

I think it’s important that testers diversify and improve their technical skills (and I don’t mean just coding skills) to remain relevant. After all, if you’re interviewing for a tester and they have excellent testing skills then you’ll likely hire them (assuming all other requirements and cultural fit are right). But what if a similar candidate came along who was also excellent at “testing” but was also skilled and talented in UX, ethnographic research, performance testing or security testing… which one would you want to hire now?

Remaining relevant in the marketplace isn’t about getting another certificate, it’s about evolving your own skills so you can add even more value than simply pointing and clicking. You need to become a technical tester (as defined above) – in fact, scrap that – you just need to become a good tester – because after all, good testers are indeed technical.

EuroSTAR 2013 – The Unofficial Guide

It’s here.

The Social Tester’s Unofficial Guide to EuroSTAR 2013.


I thought I’d get the guide out early this year. Hopefully I’ll see you all there.

Note: Since I completed this I’ve been informed that there is no gala dinner… I have yet to confirm or deny this. If there is, the section on dressing smart still stands; if there isn’t… well… we’ll find somewhere to go on that night 🙂

Click the image above or here for the PDF download.

A conference story

Across the world there were similar journeys taking place.

Delegates are boarding planes, trains and automobiles to attend a European conference in the wonderful city of Amsterdam. For some this journey is short. For others it’s epic.

Some delegates would be travelling on their own. Others would be travelling with people they know; colleagues, friends, peers or family.

For the first time conference attendee the journey can be daunting and nerves can start to form around what to expect.

Some journeys go better than others.

In England one conference delegate is flying from a small city airport in a tired looking Bombardier Q400.



The noise on-board is deafening, the coffee cold and the seats cramped. Despite these downsides the delegate can’t help but notice that the flight staff are welcoming, well trained and during the safety overview, incredibly well synchronised. He notices the menu in the seat-back and does some impromptu testing of the contents. The first obvious bug he spots is the use of the word “gourmet” to describe the microwaved cheese toastie.

Those who often attend testing conferences are somewhat obsessed. Testing isn’t just a career – it’s a calling (or a lucrative business). It’s something they’ve been doing their whole lives. Some might call this devotion to it strange, weird or sad. Others just see it as a way of life.

Over the Atlantic ocean is one of the conference speakers, asleep on a Virgin Atlantic jet. His journey started several hours ago. It won’t end for a few more hours yet. Tiredness is inevitable.

In Amsterdam there are already a large group of delegates descending on the city’s hotels, hostels, rentals and flats. Pockets of testers are occupying three or four of the main hotels. Many don’t know each other and are destined to spend the rest of their stay oblivious to the fact that the person next to them at dinner is a tester and attending the same conference.

All of the delegates share something in common: testing. They may approach the subject differently. They may oppose each other’s views. They may dislike each other’s personalities, but they all have a common thread to at least start a conversation. Yet many won’t. Many will spend the entire time alone. Some may enjoy this, others will inevitably feel isolated.

Some delegates arrive on the Monday, others won’t turn up until the start of the presentations on Tuesday. The Twitter stream starts to fill with people discussing meet-ups, comments on the tutorials already taking place and general banter about the event.

Chat sessions fire up, text messages are sent and tweets are broadcast to arrange meals, drinks and gatherings. Many of these people are meeting others in the flesh for the first time. Relationships on social channels have paved the way for a seamless ability to meet-up in person. The hard work is done. The relationships can flourish further in person, or dwindle away at the realisation that online personas don’t always match reality.

As the evening comes around small groups of Testers are travelling to bars, hotels and restaurants to catch up and relax before the conference starts.

In an Indonesian restaurant in the city centre, a group of testers are gathered around vast quantities of food and beer. Conversations are flowing about testing, life and work. People are getting to know new people or are refreshing relationships with people they met at other conferences. As more beer flows the conversations become more lucid and discussions about the state of testing inevitably crop up.

As this particular group discuss certification schemes, standardisation and best practices, in the context of them destroying the industry, they are oblivious to the fact that across town, in a posh hotel where Heineken is served in impossibly tall glasses, there is another group of testers talking about how context doesn’t matter and best practices are what the testing world needs.

Just down the road in a small bistro is a group of well funded testers talking about how maturity models are the future for their giant consulting business. They are busy putting the final touches to new methods of quantifying the true value of testing to an organisation.

Across Amsterdam that evening there are many tribes of testers discussing testing. Some with polar opposite views, some in perfect alignment and some who are too tired, or drunk, to talk about testing. In hotel lobbies and rooms across Amsterdam there are also testers with no-one to talk to.

As Tuesday afternoon spins around the conference centre starts to get busy. In the corner is a tester filming the queues forming using a time delay app on his iPad. Upstairs are testers sitting around talking about strategies for handling regression testing and exploratory testing. Meetings are happening and discussions are flowing.

Many testers are pacing around the venue embroiled in conversations with someone on the other end of a phone call. Are they conducting business? Speaking to relatives? Pretending to speak to someone to look important?

In the Test Lab the lab rats are getting ready for people to do some testing at the conference – a concept that is alien to some delegates.

In the expo centre are salespeople and demonstrators trying to draw attention to their products, services and goods. Some are doing better than others. Some are more involved than others.

For some of the vendor representatives it is their 20th conference this year. Some of the vendor reps haven’t been home for the last 8 months; their life is a constant cycle of conferences.

Throughout the event some presentations go to plan, some of the speakers don’t quite communicate their intended message well and some experience a few technical glitches.

Some talks are funny and insightful, others masked in terminology and concepts that aren’t appealing to everyone. Some are about systems, some about space rockets, some about video games and some about standards.

There’s a talk for everyone. Many delegates are complaining that there is too much to see and they are having to make tough decisions on which talk to get to. Some consider that struggle to decide the marker of an excellent conference.

This year sees a real sense of community and involvement. The community hub took a day to get going, but by the last day it was well attended, with delegates wanting to listen to lightning talks, chat to new people and contribute their views and ideas to the Testa Mappa.

It’s in the community hub that delegates get to talk to the speakers and other attendees in a welcoming and safe environment. Many delegates are changing their mind about the presentation after discussing the topic further with others. Many speakers are learning a lot about themselves, their presentations and their ability to communicate their core message.

Throughout the event there are many smaller and much more focussed meetings happening, some private and by invite only, some open to a wider audience. There is business happening, friendships being made, social connections starting and also ending.

As the conference comes to a close the delegates, speakers, organisers and vendors travel home, many leaving with new friends and connections. Many will return home to families, friends and colleagues more tired than they thought they would be. Conferences take their toll both physically and emotionally. The challenge for many will be filtering and putting into action many of the interesting ideas they took away from the conference.

The challenge for some will be working out how they can talk to more people next time and how they can ensure they are invited to the many social gatherings that are happening.

For many though the conference didn’t end when the doors closed and they travelled home; the conference continued on the social channels for many more months. And that is often the real marker of a successful conference.

(I wrote this short observational story at EuroSTAR 2012 last year. It was a great conference and this year’s looks like it will be another great conference too – see you there hopefully.)

Moving from 8 Month releases to weekly

The other week I facilitated a session at the UK Test Management Summit.

I presented the topic of moving to weekly releases from 8 month releases.

I talked about some of the challenges we faced, the support we had, the outcomes and the reasons for needing to make this change.

It actually turned into a question-and-answer session and, despite my efforts to facilitate a discussion, it continued down the route of questions and answers. It seems people were very interested in some of the technicalities of how we made this move with a product with a large code base of both new and legacy code (and that my facilitation skills need some fine tuning).

Here are some of the ideas.

We had a vision

Our vision was weekly releases. It was a vision that everyone in the team (the wider team of more than just development) knew about and was fundamentally working towards.

This vision was clear and tangible.

We could measure whether we achieved it or not and we could clearly articulate the reasons behind moving to weekly releases.

We knew where we were

We knew exactly where we were and we knew where we were going. We just had to identify and break down the obstacles and head towards our destination.

We had a mantra (or guiding principle)

The mantra was “if it hurts – keep doing it”. We knew that pain was inevitable but suffering was optional.

We could endure the pain and do nothing about it (or turn around) or we could endure the pain until we made it stop by moving past it. We knew the journey would be painful but we believed in the vision and kept going to overcome a number of giant hurdles.

Why would we do it?

We needed to release our product more frequently because we operate in a fast moving environment.

Our markets can shift quickly and we needed to remain responsive.

We also hated major releases. Major feature and product releases are typically painful, in a way that doesn’t lead to a better world for us or our customers. There are almost always issues or mismatched expectations with major releases, some bigger than others. So we decided to stop doing them.

The feedback loop between building a feature and the customer using it was measured in months, not days, meaning we had long gaps between coding and validation of our designs and implementations.

What hurdles did we face?

The major challenge when moving to more frequent releases (we didn’t move from 8 months to weekly overnight btw) was working out what needed to be built. This meant us re-organising to ensure we always had a good customer and business steer on what was important.

It took a few months to get the clarity but it’s been an exceptional help in being able to release our product to our customers.

We also had a challenge in adopting agile across all teams and ensuring we had a consistent approach to what we did. It wasn’t plain sailing but we pushed through and were able to run a fairly smooth agile operation. We’re probably more scrumban than scrum now but we’re still learning and still evolving and still working towards reducing waste.

We had a major challenge in releasing what we had built. We were a business based around large releases and it required strong relationships to form between Dev and Ops to ensure we could flow software out to live.

What enablers did we have?

We had a major architectural and service design that aided rapid deployments: our business offering of true cloud. This meant the system had just one multi-tenanted version. We had no bespoke versions of the product to support, which enabled us to offer a great service, but also a great mechanism to roll products out.

We owned all of our own code and the clouds we deploy to. This enabled us to make the changes we needed to without relying on third party suppliers. We could also roll software to our own clouds and architect these clouds to allow for web balancing and clever routing.
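Because we owned the clouds and the balancers, roll-outs could take one node out of rotation at a time. Here is a minimal sketch of that idea; the in-memory `Balancer`, node names and version strings are all hypothetical illustrations, not our actual deployment tooling:

```python
class Balancer:
    """A toy in-memory load-balancer pool, standing in for real web balancing."""
    def __init__(self, nodes):
        self.pool = list(nodes)

    def remove(self, node):
        self.pool.remove(node)

    def add(self, node):
        self.pool.append(node)


def rolling_deploy(nodes, new_version, balancer):
    """Upgrade one node at a time so the service stays up throughout."""
    for node in nodes:
        balancer.remove(node)          # drain: stop routing traffic to this node
        node["version"] = new_version  # upgrade while it is out of rotation
        balancer.add(node)             # return it to the pool before moving on


nodes = [{"name": "web1", "version": "1.0"}, {"name": "web2", "version": "1.0"}]
balancer = Balancer(nodes)
rolling_deploy(nodes, "1.1", balancer)
```

The point of the sketch is the ordering: only one node is ever out of the pool, so capacity (and the user experience) is preserved during the roll-out.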

We had a growing DevOps relationship, meaning we could consider these perspectives of the business together and prepare our plans in unison, allowing smoother roll outs and bringing a growing mix of skills and opinions into the designs.

What changes took place to testing?

One of my main drivers leading the testing was to ensure that everyone took the responsibility of testing seriously.

Everyone in the development team tests. We started to build frameworks and implementations that allowed Selenium and SpecFlow testing to be done during development. We encouraged pairing between devs and testers and we ensured that each team (typically 4/5 programmers and a tester) would work through the stories together. Testing is everyone’s responsibility.

Testing is done at all stages in the lifecycle. We do TDD, Acceptance Test Driven Development and lots of exploratory testing.
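As a toy illustration of the TDD loop mentioned above, here is a sketch in Python with plain asserts: the tests are written first and drive the implementation. The function and its behaviour are invented for the example, not taken from the product:

```python
# Red-green TDD in miniature. The "priority" rule below is a hypothetical
# example, not a real feature of the product discussed in the post.

def call_routing_priority(wait_seconds, is_vip):
    """Return a priority score: longer waits and VIP callers rank higher."""
    priority = wait_seconds
    if is_vip:
        priority += 100  # VIP bonus outweighs any realistic queue wait
    return priority


# These tests existed (and failed) before the implementation above:
def test_vip_outranks_recent_caller():
    assert call_routing_priority(5, True) > call_routing_priority(60, False)

def test_longer_wait_wins_among_equals():
    assert call_routing_priority(60, False) > call_routing_priority(5, False)


test_vip_outranks_recent_caller()
test_longer_wait_wins_among_equals()
```

In ATDD the same shape applies one level up: the acceptance criteria for a story become executable examples before the story is built.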

We do a couple of days of pre-production testing with the wider business to prove the features and catch issues. We also test our system in live using automation to ensure the user experience is as good as it can be. We started to publish these results to our website so our customers (and prospective customers) could see the state of our system and the experience they would be getting.
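The in-live automated checks could be sketched like this; the check names, the summary shape and the stubbed probes are assumptions for illustration (a real version would drive Selenium or API calls against the live system):

```python
# A sketch of testing-in-live: run automated checks against production and
# summarise the results into something publishable on a status page.

def run_live_checks(checks):
    """Run each named check and summarise the results for publishing."""
    results = {name: check() for name, check in checks.items()}
    overall = "up" if all(results.values()) else "degraded"
    return {"overall": overall, "checks": results}


# Stub probes standing in for real end-to-end checks (names hypothetical):
checks = {
    "login": lambda: True,
    "place_call": lambda: True,
    "dashboard": lambda: False,  # one failing probe drags the status down
}
summary = run_live_checks(checks)
```

Publishing `summary` externally is what turns the monitoring into the customer-facing transparency the post describes.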

We started to use techniques like KeyStoning to ensure bigger features could be worked on across deployments. This changed the approach to testing because testers have to adapt their mindsets from testing entire features to testing small incremental changes.
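Keystoning can be illustrated with a minimal sketch: the feature code ships dark across several weekly releases, and a final tiny change (the keystone, here a navigation entry) exposes the whole thing. All names are hypothetical:

```python
# A sketch of keystoning. The report code below could ship, fully tested,
# in earlier releases while remaining unreachable by users.

def new_report():
    """Shipped and tested across earlier weekly releases, but dark until linked."""
    return {"title": "New Report", "rows": []}


def build_navigation(keystone_placed):
    """The nav entry is the keystone: until it exists, the shipped report
    code is live in production but unreachable."""
    nav = ["home", "calls"]
    if keystone_placed:
        nav.append("new-report")  # the final, tiny, low-risk change
    return nav
```

This is what changes the testing mindset the post mentions: each release you test a small, inert increment plus the invariant that nothing is exposed early, rather than one big finished feature.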

Why we love it

Releasing often is demanding, but in a good way. The pressure is there to produce. The challenge we have is in balancing this pressure so as not to push too hard too often, but have enough pressure to deliver. We don’t want to burn out but we want to ship.

We exceed the expectations of our customers and we can deliver value quickly. In an industry that has releases measured in months (sometimes years) we’re bucking the trend.

As a development team we get to see our work in production. This gives us validation that we are building something that is being used. Ever worked on a project that never actually shipped? Me too. We now see none of that.


It’s been tough getting to where we are now but we’ve had amazing support from inside and outside of the business which has helped us to really push ahead and set new markers of excellence in our business domain. We’ve still got lots to get done and lots to learn but that’s why we come to work in the mornings.


These are just a few of the factors that have helped us to push forward. There are companies releasing more often, and some releasing less often to good effect. Each business has a release cadence that works for them and their customers.

Did I mention We’re Recruiting?


Side Notes:

I got asked the other day how I come up with ideas for talks/blogs, how I think through these ideas and how I go about preparing for talks. I’ll take this opportunity to add a short side note of how I do this. This approach may not work for you.

I firstly create a central topic idea in a mind map (I use XMind).

I then brainstorm ideas around the central topic. After the brainstorm I go through the map and re-arrange, delete, add and rename until I feel I have a story to tell.

Moving to weekly releases

I then start planning the order and structure of the story. Every story has a beginning, a middle and an end.

I start by writing the beginning and then the end. The middle is the detail of the presentation.


I then doodle, sketch and plot.



I then move to my presentation tool of choice. In this case it is PowerPoint – sometimes it is Prezi.

The presentation typically takes a long time to prep, even for a very short intro like this. This is because I don’t like including too much text in my slides and also because I think simple, but attractive slides can add some impact to the topic. So I spend some time making sure they are right. Saying that, no amount of gloss in the slides will help with a bad/poor/boring story.



Testers need to learn to code

Testers need to learn to code… and any number of other ways of paraphrasing Simon Stewart’s comments in his keynote at EuroSTAR 2012.

I’m not entirely sure on the exact phrase he used but I heard a number of renditions after.

They all alluded to the same outcome:

“Testers need to code or they will have no job”

Speaking with Simon briefly after the event it’s clear his message was somewhat taken out of context.

I get the impression he was talking about those testers who simply sit and tick boxes. Checkers, as I think they are becoming known.

Simon suggested that these people should learn to code otherwise they will be replaced by a machine, or by someone who can code. There were some who took great distaste to the general sentiment, some who were in total agreement, and of course probably some in between and some who didn’t care.

Simon himself was greatly pragmatic about it and suggested that learning to code will free people up to do good testing, a sentiment I can’t imagine many arguing with.

Those who know me have often commented that I often sit on the fence and try to see the positive in each side of an argument. It will therefore come as no surprise that I agree, and disagree about the message Simon was communicating.

I agree that Testers who do purely checking will (and should) be replaced by machines to perform the same action. Some of these people, (with the right peers and support, the right company and willing market conditions) could become great “Testers” whilst the machines automate the tedium. Some won’t and may find themselves out-skilled in the market.

But I don’t believe that all Testers have to learn to code. I know of a great many who don’t but are doing exceptional testing. Saying that, I think each team needs to have the ability to code. This could be a programmer who helps out, or a dedicated “technical tester” who can help to automate, dig deep and understand the underlying product.

I’m an advocate of encouraging Testers to learn to code, especially those new to the industry who are looking for early encouragement to tread down certain career paths. I’m also an advocate of using the wider team to solve team wide problems (and automating tests, reducing pointless activities and having technical assistance to testing are team wide problems).

When you hear a statement like “Testers need to learn to code” don’t always assume it tells the whole story. Don’t always assume it means learning enough code to build a product from scratch. Don’t always take these statements at face value. Sometimes the true message can become lost in the outrage in yours (or someone else’s) head, sometimes it might not have been communicated clearly in the first place and sometimes it might just be true; but too hard for people to accept.

  • Learn to understand code and your job as a Tester may become easier.
  • Learn to understand how the world around us works and your job as a Tester may become easier.
  • Learn to understand how you, your team and your company works and your job as a Tester may become easier.
  • Learn how to sell, convince, persuade and articulate problems to others and your job as a Tester may become easier.

There are many things which could help you become a good, reliable, future proof(ish) tester. You can’t learn all of them. You can’t do everything. Coding is just one of these elements just like understanding social sciences, usability, performance or critical thinking are some others.

Learning to code can be valuable, but you can also be a good tester and not code. It’s not as black and white as that.

Getting a grip on exploratory testing

Last week we had the pleasure of inviting James Lyndsay to our offices in rainy Basingstoke for a two day course on “getting a grip on Exploratory Testing”.

The whole test team were there for the two days and we worked through a number of elements of Exploratory Testing, right from time management and focus to techniques and approaches. James is incredibly knowledgeable and his experience was tapped by the team for the two days.

Each of the team took away a number of insights, but here are my own personal observations:

  • I focus very heavily on the user and their experience of using software
  • I also tend to visualise the problem and walk through it often randomly
  • I also focus heavily on any claims made in any documents, guidance or specifications.
  • The team all approach the same problem in subtly different ways. This is a good thing.
  • Knowing how long we have to test is crucial. It changes our approaches and styles.
  • My approach of using Mind Maps for planning and documenting is a style that works well for me, but not always for others.
  • I tend to use a different map for different problems. I use mind maps, tables, doodles and general free form notes. I also tend to scrap a map if it’s not working.
  • Our judgements on what is an issue and where to focus testing differ between people, even in the same business. What I see as an issue, others might not, and vice versa. This highlights the need to understand your target audience; something we’ve been working hard at here.
  • I need to work on my “straying” from charter. I often get too involved in side paths which are not on charter. This is not always a bad thing but I need to be mindful of it.
  • I often explore the application to drive out charters and ideas for further testing as a form of planning
  • We need to instigate more exploratory testing debriefs

The two day course was insightful, very well delivered and fun. There were lots of hands-on practicals and James tailored the content for our needs here at NewVoiceMedia. Overall it was a great two days and I know the team very much enjoyed it. We just need to put our learnings into practice, something I am already seeing the guys doing.

Here’s James’ website with course details.

Professional Skeptics, Dispeller of Illusions and Questioner

Bear with me as I clear a few posts out of draft. This is my last one for a few weeks. Promise.


This one came about as a response to James Bach’s excellent Open Lecture presentation. I took away a number of lessons from that lecture (including the title – Professional Skeptics, Dispeller of Illusions and Questioner).


Off the back of that I decided to post some questions about “Testing the Spec”, as an intriguing LinkedIn forum post got me thinking about why “Testing the Spec” is so common. “Testing the Spec” is where a Tester takes the spec and the system and then validates/verifies that the spec is correct. Yes, you read that correctly… that the spec is correct.

It got me thinking and asking a couple of questions:

  1. What benefits will you receive by testing the system against the spec?
  2. What don’t you know about the system? Will the spec help you?
  3. Do you have any other information sources or Oracles?
  4. Is the information sufficient? Or is it insufficient? Or redundant? Or contradictory?
  5. Would a system diagram suffice?
  6. At what point do you know you are complete? Once every page of the Spec has been “tested” – what about other parts of the system not covered by the spec?
  7. Have you seen a system like this before? Will it help you?
  8. Have you seen a slightly different system? Will it help you?
  9. Who has asked you to do this?
  10. What value do they see in you doing this?
  11. Does the spec accurately depict the system? (I guess this is what they were testing for….)
  12. Would it matter if the spec didn’t match the system? (which one would give you most value; the spec or the system?)
  13. Is it essential to bring the spec up-to-date to match the system?
  14. Will the spec tell you everything you need to know about the system under test?
  15. Would the spec tell you anything that the system wouldn’t?
  16. Do you know how out-of-date the spec is?
  17. Is the spec important as a communication medium? Or just something that gets produced?
  18. How would you communicate your findings? In Bug Reports? Or in a report?
  19. How would you know what was a bug and what wasn’t?
  20. Could the spec confuse you more than simply exploring the system?
  21. Are you using the spec as a crutch or a guide or an Oracle?
  22. How much of the unknown can you determine?
  23. Can you derive something useful from the information you have?
  24. Do you have enough information to model the system in your mind?
  25. Have you used all the information?
  26. Have you taken into account all essential notions in the problem?
  27. How can you visualise or report the results and progress?
  28. Can you see the result? How many different kinds of results can you see?
  29. How many different ways have you tried to solve the problem?
  30. What have others done in the past that might help?
  31. Where should you do your Testing?
  32. When should it be done and by whom?
  33. Who will be responsible for what?
  34. What milestones can best mark your progress?
  35. How will you know when you are successful?
  36. How will you know when you are done?

I’m not doubting that testing a spec against a system can be valuable. It *could* be the best thing you can do. But I would ask a lot of questions first. The system is always different to the spec, but in context, your spec could be the main reference point, so only you will know whether “Testing the Spec” is valuable in your context.


I rattled out this list using a combination of the Phoenix Checklist and notes taken while watching the very excellent video of James Bach doing an Open Lesson on Testing.


As with all things, I tend to sketch and draw, both as a record of my thoughts and as a way of distilling some ideas. I learn better from visuals, so these rubbish doodles aid my learning and increase the chance I will revisit these points. You might struggle to read some of the text in the image, but there are plenty of tools for opening images and zooming.


Note: The diagram is my take-aways from James’ lecture. There are many more lessons which I’ve taken away from James’ blogs and talks over the years. I *may* have interpreted some things differently from how you would, or even how James intended, but they are my take-aways. I share them here for completeness and urge you to watch the video.

In pursuit of agile coolness

I’ve been doing a series of talks recently about agile being a mindset and not a methodology. At the core of the presentation (and the end talking points) are three questions:

1. At what point are you “agile”?
2. Are too many teams craving structure?
3. Is agile a methodology?

I’ve also had some feedback that the graphs and cheesecake pie sum up the current state of agile testing.

So here they are, included in this post. Let’s include some perspective on them though.

The Graph

Agile is cool. No doubt. The graph shows how the more you use the word “agile”, the cooler you become. It starts to take a downward turn the more you use the word, though, as you become a sort of freakshow if the only thing you say is “agile”.

“You want to go out for lunch?” “agile” “hmm, ok, did you want me to grab you a sandwich?” “agile” “right. You fancy a beer later?” “agile” “Got anything planned for the weekend?” “agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile, agile” “right. Freak.”

The thing is though, you’d still be cool, but only on YouTube and daytime TV.

The Pie Charts

The pie charts show agile adoption and failure within the IT industry in 2009. As we can see, EVERYONE tried agile in 2009. And EVERYONE failed. At least that’s what it feels like.

The Agile Cheesecake Pie

Agile Pursuit is by far the most common behaviour I see. It’s based around the board game “Trivial Pursuit”.
Playing pieces used in Trivial Pursuit are round and divided into six sections. A small, plastic wedge can be placed into each of these sections to signify when a question from a certain category has been correctly answered.

Instead of questions, development teams use techniques or processes that, once mastered, result in a tiny piece of cheese, or pie, or cake – hence the name “cheesecake pie”.

Some teams are so determined to become agile that they set themselves a list of about 6–8 techniques they think they need to be doing. They then enter the lengthy, and often never-ending, game of agile pursuit, even if the process they started with was working fine.

They face problems, questions and queries on their way to filling their agile cheesecake pie. Only when the cheesecake pie is complete do they declare themselves agile. Ironically though, in the pursuit they lose sight of the reasons why they wanted to be agile and end up with so many techniques that they just can’t get the software out of the door. Either that or the game gets abandoned because everyone got bored…or left…or retired.

As a team relaxes about declaring themselves “agile”, they actually end up gravitating towards the complete cheesecake pie as they strive to offer rapid feedback and tweak their process (sounds painful).



After each of my talks there has been a resounding and common conclusion to the three questions I pose:

1. At what point are you “agile”?

You are agile when “you stop wondering whether you are agile and just get the job done”

2. Are too many teams craving structure?

Structure is a killer for most teams, especially in the testing world where we are used to formal documents, formal test cases and formal metrics. Making the shift is terrifying, and for some it’s a step too far. But for some organisations and teams that *can* be a good thing.

3. Is agile a methodology?

Agile is not a methodology. It’s a mindset. It takes a new outlook on software development. It’s an attitude. It’s about inspecting and adapting. It’s an approach.

Let me know what you think (and no doubt there will be some interesting feedback) as comments to the blog post.