Unicom – Next Generation Testing Conference Review #ngtc2010

A change of venue for the “Next Generation Testing Conference” for last week’s Unicom event, and a welcome change at that. Much easier to get to now, at the Grosvenor Hotel, Victoria, London. The room itself was quite long, with tables for delegates, a fairly small projector screen and a catering/display area at the back. This had a fairly bad effect on acoustics and lighting, but not so bad that it spoiled any enjoyment. The lighting did mean I had difficulty getting some good photos, though.

To start with, Niel Molataux was the chairperson, but this soon switched to Dorothy Graham. The change wasn’t made very clear, but it was probably a wise move.


Dot Graham opened proceedings with a good talk about Test Automation Objectives. She extolled the virtue of making sure we are clear about what we want to achieve with automation. Especially so when trying to sell it to management. This led to a series of contentious metrics and measures you could use to work out a Return On Investment which Julian Harty and John Stevenson questioned and challenged at various points.

To start with, though, Dorothy made it clear that we need to know why and when to automate. Here are some key points.

  • 75% of automation efforts fail.
  • Objectives for automation should not be the same as the objectives for testing
  • Automation takes longer. 10 times maybe.
  • Effectiveness is a characteristic of testing. Efficiency is a characteristic of automation.
  • We need to look at the objectives and decide whether they are valuable
  • Are the tests actually worth running at all?
  • Automation often requires more people.
  • If we begin to replace testers with automation then managers are lowering the testers to the level of machines.
  • Regression tests add confidence; they do not find many bugs.
  • Testers, tests, exploratory testing, testing new code – these are all the most effective at finding bugs
  • The worst time to automate is when the project is running late
  • Over time our knowledge and understanding changes. Go back and apply our new knowledge to our objectives
  • Often automation is used to support testers rather than actually automate the tests. Dorothy mentioned that Jonathon Kohl had written an article on this. I found a link which might be helpful, but I’m not sure it’s the one Dorothy was referring to.
  • Lisa Crispin suggests an automation refactoring sprint to focus on making the automation more effective.
  • We should measure the success of our automation and ask WHY are we automating?

I’m not a fan of measuring too much, mainly because I’ve found I spend too long measuring the activity rather than doing the activity. So in a sense I decrease my efficiency and effectiveness by trying too hard to measure it. But Dorothy gave a case for measuring automation using a simple calculation: work out how long it takes to run a test manually versus automated. Before you all get annoyed with how simple that is, Dot did explain some more examples and other measures to add to the mix.
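Dorothy’s simple calculation can be sketched as a break-even sum: the automation pays off once the time saved per run covers the cost of building it. The figures below are hypothetical, not hers, and use her “10 times longer” rule of thumb for the build cost.

```python
# Illustrative break-even calculation for test automation ROI.
# All numbers are hypothetical; plug in your own measurements.

def runs_to_break_even(manual_minutes, automated_minutes, build_minutes):
    """Return the number of runs after which automation pays off,
    or None if it never does (automated run costs more than manual)."""
    saving_per_run = manual_minutes - automated_minutes
    if saving_per_run <= 0:
        return None
    # Smallest whole number of runs where total saving covers build cost.
    return -(-build_minutes // saving_per_run)  # ceiling division

# A 30-minute manual test, 2 minutes automated, 300 minutes to automate
# (roughly the "10 times longer" rule of thumb):
print(runs_to_break_even(30, 2, 300))  # → 11
```

As Dot’s extra measures suggest, this is only a starting point: it ignores maintenance cost and the fact that the manual and automated runs don’t test the same things.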


Next up was Martin Gijsen, who talked about Domain Specific Languages. I’ve seen Martin before at SIGIST, and both times I’ve enjoyed the content but struggled with the delivery, mainly because he was very quiet. He’s naturally a fairly quiet and dry presenter, which meant the audience often started to get distracted. I actually tweeted midway that I wanted to see real working examples and not just screenshots. I was too early, though, because Martin included a few real FitNesse tests running against Amazon. Always nice to see examples running rather than code snippets. It’s a really great topic and Martin certainly knows his stuff.
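For anyone who hasn’t met the idea, the point of a testing DSL is that tests read in the vocabulary of the domain rather than of the tool. This is only a minimal sketch of that style, not Martin’s approach: the domain words and the stand-in shop backend are invented for illustration.

```python
# A minimal sketch of a fluent, domain-specific test vocabulary.
# The "shop" backend and its method names are invented for illustration.

class ShopTest:
    def __init__(self):
        # Stand-in for the real system under test.
        self.catalogue = {"widget": 9.99, "gadget": 14.50}
        self.last_result = None

    # --- the domain-specific vocabulary the tests are written in ---
    def search_for(self, item):
        self.last_result = self.catalogue.get(item)
        return self  # returning self lets tests chain like sentences

    def expect_price(self, price):
        assert self.last_result == price, f"expected {price}, got {self.last_result}"
        return self

# A test now reads almost like a sentence in the domain language:
ShopTest().search_for("widget").expect_price(9.99)
print("DSL test passed")
```

Tools like FitNesse take this further by letting non-programmers write the sentences in wiki tables, with fixtures mapping the words onto code.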

Julian Harty was up next, talking about mobile testing and how to use automation more effectively. Julian is a really talented speaker and his knowledge of the mobile web browser world is incredible. Julian suggested that testing on mobiles is very slow. Like wading through quicksand. It takes so long that many teams only test on a subset of available phones.

Julian’s suggestion for getting around this is to design with testability in mind from the start; the problems then evaporate.
Test Driven Development is also essential: doing the testing first focuses attention, and code that is easy for the programmer to test is easy for everyone to test.
The faster the feedback, the better your code.

Some handy hints when doing mobile testing:

  • Use SMS to send URLs to the phone, rather than trying to type them in to the phone.
  • Capture the user-agent value from the request headers to gather data. You can then hijack this value and pretend to be a different phone.
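The user-agent tip has two halves: capture the header server-side, then spoof it client-side to impersonate another phone. The sketch below shows both with a WSGI-style environ dict; the user-agent strings and function name are my own examples, not Julian’s.

```python
# Sketch of capturing the User-Agent header, then "hijacking" it.
# The UA strings and helper name are invented for illustration.

def log_user_agent(environ, seen):
    """WSGI-style capture: record the User-Agent from request headers."""
    ua = environ.get("HTTP_USER_AGENT", "unknown")
    seen.append(ua)
    return ua

# Simulated requests: a real phone, then a client spoofing a different one.
seen = []
log_user_agent({"HTTP_USER_AGENT": "Mozilla/5.0 (Linux; Android; Nexus One)"}, seen)
log_user_agent({"HTTP_USER_AGENT": "Mozilla/5.0 (iPhone; CPU iPhone OS)"}, seen)
print(seen)
```

On the client side, spoofing is just overriding the header, e.g. with the standard library: `urllib.request.Request(url, headers={"User-Agent": captured_ua})`.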

Julian suggested you should all check your pocket/wallet/purse when anyone talks to you about UI automation. Make sure you have the same amount of money in there after as you did at the start. UI automation tools are easy sells to the unsuspecting.


Lunch was good and a great opportunity to network. To be honest I was a little disappointed with the stands this time around. It felt like there should have been more. The two sponsors/stands who were there had good stands with little giveaways but I didn’t get the feeling they were interacting with the crowd as much as at previous events.

After lunch Clive King from Oracle talked about the need to automate and load/scalability test your applications, with some great case studies on how he has done this. Clive’s a good speaker, and I won’t attempt to repeat the examples he used here, as they were insightful and in depth.

Next up was one of the most contentious talks of the day for me. It drew the most comments from the audience and was in some respects a good example of false messages about agile. It was a great talk by Jenine Thorne, who is a very good presenter with some of the best-designed slides of the day, but a few people, mainly those who work on agile projects, took issue with a lot of the content. In one respect it was a real-life story of agile adoption at the Norwich and Peterborough Building Society: a complete warts-and-all description of a big-bang agile adoption, showing the trials and tribulations she and her teams encountered.

The main points raising questions were the following:

  • Jenine mentioned how they had consultants in to move them to agile. However, the consultants moved the ‘development’ team to agile but not the “test” team. The consultant then left. I raised the question to Jenine about whether or not she felt she had been mis-sold the consultancy.
    • She didn’t believe she had. In my mind though this is exactly what is troubling about the agile movement. There are too many consultants going in and making a Dev team agile, but no-one else. For me, I can’t really comprehend what that actually means. I can’t see how a Dev team can be agile without the test team.
    • It’s also not useful to think in terms of teams when you are in a more agile world. Agile, for me, doesn’t work unless the ENTIRE team (including PMs, BA and management etc) are all moving to agile at the same time and with the same motives.
  • Jenine described the process of delivery to test and release and it was clear that it was a series of mini waterfalls. Iterative development with a testing phase at the end. They didn’t release for 6 months.
    • For a few people I spoke to, not being able to deliver for 6 months meant this was, in essence, not agile.
  • It sounded like the automation process wasn’t in place which is crucial for agile success, especially when working on a large project.
  • There were some invalid descriptions of some very crucial concepts, for example, TDD was described as developers automating the boundary cases and tedious tests that the testers didn’t want to do.

A lot of the audience didn’t work in an agile environment and were there to source information and find out how it’s done. Messages about agile vary wildly, some incorrect, some confused, some right, some slightly right.

Unfortunately this happens a lot at conferences: experience reports show a negative side of agile, or a side that actually isn’t right. In this case many would say that it was not agile. With a false statement of what TDD is, no release for 6 months and little explanation of how regression and automation were done, it sounds like another example of mislabelling agile. It was, though, an experience report, regardless of what label was applied to it. They are by their very nature a personal experience. Moving to agile is hard. For sure. But moving to a mis-interpreted place called “agile” is almost impossible. No wonder so many experience reports tell a similar tale. Aim for the wrong goal, call it agile, report it as a disaster, but keep calling it agile and keep claiming you are nearly there.

The thing is though, Jenine’s talk was the highlight for me. Although it was contentious to those who work in an agile environment, it was also obviously an emotive topic. For me, there is no point going to a conference if you agree with everything that is being said.

Next up was Keith Braithwaite. Keith is a seasoned agile professional and he gave a superb talk on testing with checked examples. In one example he provided a FitNesse test, driven from an Excel spreadsheet of financial reference data. The interesting thing was that this checked example was given to development before the code was written. This is a great form of Test Driven Development, as the development team have the test data before the code. Perfect for making sure the test team won’t later report back cases and checks the developers had never thought of.
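To make the idea concrete, a checked example is just a table of inputs and expected outputs agreed before the code exists. The sketch below shows the shape of it with an inline table; Keith’s came from an Excel spreadsheet via FitNesse, and the interest calculation and figures here are invented for illustration.

```python
# Sketch of "checked examples": a table of inputs and expected outputs
# handed to developers before the code is written. The calculation and
# figures are invented for illustration.

def simple_interest(principal, rate, years):
    return round(principal * rate * years, 2)

# The checked examples: each row is (principal, rate, years, expected).
examples = [
    (1000.00, 0.05, 1, 50.00),
    (2500.00, 0.03, 2, 150.00),
    (0.00,    0.05, 1, 0.00),   # boundary case, agreed up front
]

for principal, rate, years, expected in examples:
    actual = simple_interest(principal, rate, years)
    assert actual == expected, f"{(principal, rate, years)}: got {actual}, want {expected}"
print(f"{len(examples)} checked examples passed")
```

The point is who writes the table and when: testers supply the rows, including the boundary cases, before development starts, so the examples drive the code rather than check it after the fact.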

Keith explained how he saw the tester’s role in agile development changing to be a provider of tests and information at the start, rather than actually doing the testing at the end: informing the devs of the test cases so they can be automated early. More test advising than test executing. It’s a sentiment I can truly get behind. Testing at the start is the fastest way of getting test feedback. So why shouldn’t devs and testers pair to write unit tests before the code? How great would that be for a tester: you could spend your time *really* testing the app when you get it, safe in the knowledge you’ve essentially already checked it. It makes finding bugs hard work. Exactly why we get paid. Right?

Keith said “Testers ticking boxes day after day is a waste of human existence”. As a developer himself, he asked, “What can we do to make developers understand more about testing?”

At the end of the day there were some lightning talks by Dot, Jenine and Gojko Adzic. What I found intriguing was how many people in the audience had no experience of agile yet were happy to chime in with comments like “pie in the sky” and “unrealistic”. I don’t buy that. I think what they should be saying is “my thinking, interpretation and understanding of agile is pie in the sky”. It’s not the concepts of agile that are wrong. If you’ve never tried it, you’ll never really know. “But it would never work here” is another common one. How do people know unless they try?

But for all the agile bashing that happens at any testing event, there are always some interesting and balanced views being presented too, suitable for any methodology. I had a really great day and unfortunately couldn’t make it back for day 2. I heard that was good too.

And by the way. If you are anywhere around Cambridge, looking for a job and want to work for Redgate then they are recruiting. Here’s a link to a YouTube video on why it’s awesome to work for Redgate.[youtube]