The BCS SIGIST (Special Interest Group In Software Testing) was the first conference event I ever attended. It was about 6 years ago and I remember being amazed that people actually got together to talk about testing.
In my opinion, though, the quality of the talks went downhill and I soon stopped going.
There were alternatives that I felt offered better value and a more consistent and relevant set of talks. Meetups were better attended and more sociable. The UKTMF had more interesting topics and more discussion. The London Tester Gathering had more beer. The SIGIST just felt old fashioned in a rapidly changing conference world.
Yesterday I went to the SIGIST event after a three year hiatus. And it was good.
It wasn’t amazing, but it was good.
The event is now held at The Barbican Centre which, for me personally, is a lot trickier to get to than their previous home at the RCOG centre. It’s a personal thing, but it was an opinion shared by a couple of other attendees.
The food, drink and rooms were good, although the rooms were not easy to find and some distance from the main networking area – cue people getting lost.
There was one vendor stand selling some sort of TMMI service so nothing to see there.
I reckon there were 50(ish) attendees – not a great turnout, and in a big(ish) lecture hall it felt woefully under-attended.
The annual AGM was also the opening part of the day. If you’ve never been to an AGM (at any organisation) then keep it that way. It was formal and boring, but essential for the BCS, I guess.
Declan O’Riordan’s interest is in security testing, and he talked about how a lack of assertiveness can lead to a lack of good security testing. He also talked about how a lack of assertiveness can manifest itself, such as in people being passive-aggressive. Declan covered some interesting behavioural insights whilst tying them to testing, and his really easy presentation manner added to the talk.
After a networking slot it was time for my morning workshop with Paul Gerrard. The main hall continued with track sessions.
Paul presented his ideas about a new model for testing. You can read more about the model here.
The session was good. Paul spoke about modeling our testing and presented his new model of testing. It’s worth reading and applying it to your own thinking to see if it works. Paul’s after feedback about it.
We use modeling as an aid for creating tests here at NVM but we don’t use it as a way to agree scope – something I’m sure we’ll start doing. We also don’t share our models across teams which is something I’d like us to try more of. So I took a lot away from Paul’s workshop.
After a decent lunch I went to the presentation tracks.
Up first was Russell Gallop who did a good presentation on Agile and CI for embedded software but it was clear he was at the wrong event. No one in the audience was working on embedded software and few of the audience were programmers.
It’s a shame when this happens, as speakers put a lot of time and effort into presentations only to end up presenting to the wrong audience.
Next up was David Orr who presented about “Testing Your Mind”.
I’ll be honest, I was expecting something different, but it was good. David spoke about his experience of working in three different, changing contexts and how each one challenged him. I had expected it to be about the way testing can become frustrating and stressful and how our minds cope with that, but it was more about how the testing changed to cater for changing processes and environments. Not a bad thing, just a mismatch in expectations.
Then came a discussion panel.
I must admit, although I have deep respect for all the people on the panel, I didn’t get much from this part of the session. The panelists are all great but they are all similar in the way they think about testing. The panelists were Tonnvane Wiswell, Ole Hansen, Mike Jarred and Declan O’Riordan.
They shared common values and ideas about the testing topics discussed. This resulted in four people all agreeing with each other and not being able to add much to what the previous panelist had said.
I want to see some Jeremy Kyle sort of panel, with chairs thrown and security getting involved. Only joking, but you get the point. A panel discussion should be just that – a discussion – and there wasn’t much of that. It wasn’t the fault of the panelists; maybe it just needed a divisive topic or more diversity in panel members.
There was also no audience participation in the discussion, which assumes that the audience (for whose benefit the panel exists) had nothing valuable to add.
I like panel sessions where a seat is available for audience members to chip in with opinions and observations – a Goldfish bowl approach.
It also felt like the initial topic was one possible solution to a bigger problem, rather than the underlying problem itself. Starting with a possible solution leads to discussion around that solution rather than the real problem.
“How do we get testers involved in earlier stages of projects?” was the opening gambit. A common question in the community. But what problem is that solving?
- Why do people feel the need to involve testers earlier in the project?
- What problem will that solution solve?
- What problems will it create?
- Will it work? Why might it not work?
- Are there other people we should involve earlier?
- What does early mean?
- What about those who work in a collaborative agile environment already?
- What benefits have been seen by early collaboration?
Like I said, the panelists were great but the panel session just didn’t seem to work. Not enough discussion and not enough rigorous debate for my liking.
Then Paul Gerrard closed with a presentation about the Internet Of Things. This was good. It’s a topic I’m super excited about. It’s an infrastructure and network of devices that I think the testing community is ill-prepared to test.
Paul talked about what the Internet Of Things is and what it means for society. He talked about the problems with it, the good it can bring and of course, the challenges we will face testing it.
How will we test it? Can we even test it? What happens when we miss something? What impact will it have on society? How can we recover from an epic fail? It’s a fascinating topic.
Good talk and lots to think about.
The SIGIST faces stiff competition in the testing community and I do wonder how long the event will remain viable to run.
Meetups are free, better attended and more social.
Other conferences are much better value for money, more modern and more engaged with the community.
Online courses and other resources are more convenient for learning.
And of course, social media has removed the need for many people to meet face to face.
Can the SIGIST position itself somewhere in the market and regain its audience?