If you go to these continuous delivery meet-ups there are developers there talking about testing.
Testers need to be there as well.
– Amy Phillips
Most software testing, in my experience, is rushed and done under time pressure, typically driven by a desire to meet a metric or measure of questionable value. Yet to rush through the testing of a feature or product is often to miss the changing nature of that product (or your own changing nature).
If you are testing a capability of a product and look at it in one way (say with the mindset of trying to break it) you may find several issues. If you shine a light on the product in another way (say with the mindset of trying to use the product really quickly) you may spot other issues.
Also, if you test the product on one day you may miss something you may spot on another day. You change (mood, energy, focus) all the time, so expect this to affect your testing.
The product likely hasn’t changed, but the way you see it now has.
Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.
How much time you will need to re-shine your light in different ways will mostly be unknown, but try to avoid being so laser focused on completing a test for coverage's sake, or reaching the end goal of X test runs per day, that you miss the serendipity that sometimes comes from simply stopping, observing and re-focusing your attention.
One of the things that I have observed from a number of testing conferences is that none of them have any sustained focus on hiring or getting hired *.
There have been one or two sessions about the topic of hiring but nothing sustained.
The occasional tracks that I have seen have been mostly focused around the hiring strategies of big corporates where bums on seats is sometimes more important than cultural team fit.
Most testers don’t know how to get hired – I wrote a book to help bridge that gap. Those that do know how to get hired are truly in the minority and appear, at least on the surface, to be overall better testers. Mostly this is not true – they are good, but they are often no better at testing than others, it’s just they are much better at getting hired. Getting hired is a skill.
Hiring and getting hired is a vast topic and one which is fraught with contextual challenges, but I believe that a dedicated set of talks from hiring managers from a wide variety of contexts, and maybe some sessions and tutorials on writing CVs, interviewing etc would go down well at most testing conferences. It’s great being good at testing but how do you then go on and get hired…
There are supporting topics such as social profiles, writing clear CVs, networking, self education and interpersonal communication that might also make interesting tracks. Or maybe they wouldn’t. Maybe people go to testing conferences to learn about testing and not the other stuff that comes with our working world…
What are your thoughts?
* The conferences that I have been to
A while ago I remember Phil Kirkham mentioning that he’d found a bug that “fell” outside of the Test Cases he’d been given and someone was arguing that it wasn’t a bug. I found it incredibly interesting that someone would dismiss a serious bug because it was not found as part of a test case.
The test case is an abstract, it’s something we create to guide us (mostly), yet too many people take the test case to be the testing. The testing is a human process. The test case is a guide, an artefact, a crutch, something we may *have* to create, a container of information, a checklist or any other supporting element to our testing. The test case itself…is not testing.
About a year or so ago I wrote a short eBook on the problems with Testing. I set the main Test Manager character in a company that valued process over actual results. That process-over-results mindset was similar to what I believe Phil saw.
This was originally going to be a decent sized blog post about process and test case management. Instead I thought I'd spend 5 minutes hacking together an Xtra Normal video to see if I could get the point across with animated characters instead (I also wanted an excuse to have a go at an Xtra Normal vid). (BTW – The Evil Tester did an Xtra Normal video here : http://www.eviltester.com/index.php/2010/03/03/hey-you-are-you-a-tester-the-m…
My video is by no means an award winning video, but it was actually quicker than writing out my thoughts. I think it conveys a similar meaning.
I’ve always wanted to be a film director too. I’ve got a lot of work to do to make that a paying reality 🙂
I've just finished reading Specification By Example. Sorry Gojko for taking so long!

Specification by Example is an interesting read. I was going to paraphrase what it's about but it's best to come straight from the source:
"Specification by Example is a set of process patterns that helps teams build the right software product. With Specification by Example, teams write just enough documentation to facilitate change effectively in short iterations or in flow-based development."
I really like the idea of Living Documentation. It's an interesting idea which I believe could solve a number of communication problems in many businesses. As Gojko says:
“Living documentation is a source of information about system functionality that’s as reliable as programming language code but much easier to access and understand.”
The book is incredibly easy to read and is split into many different sections, each one building on the idea of Living Documentation, and each one accompanied by a number of quotes from people implementing these ideas.

The book has a compelling introduction which sets the scene well before moving through the various stages of thinking about Specifying through examples:
- Key Benefits
- Key Process Patterns
- Living Documentation
- Initiating the Changes
- Deriving Scope From Goals
- Specifying Collaboratively
- Illustrating using examples
- Refining the specification
- Automating validation without changing specifications
- Validating Frequently
- Evolving a documentation system
- Case Studies
Each section builds on the idea of specifying using examples, but there were a couple of standout chapters for me, particularly Specifying Collaboratively and Validating Frequently. I enjoyed these sections because they hit a chord with my own thinking and some of the challenges I’ve seen in organising automation (and story chats).
It's also very interesting reading about the companies Gojko interviewed in the Case Studies section, especially when these companies have incredibly well-respected people offering these insights.

I took away a huge amount from this book. It was especially well timed as we embark on a more "specification by example" approach to cover some of our automation here. It's good to have some examples of approaches for legacy products to refer to when we try it.
Sometimes books only concentrate on the greenfield approach, but here Gojko has added lots of advice for people in differing contexts.

Although the book is really aimed at people looking to automate using examples, I still think it would be useful for others who have a general interest in automation. Some of the hints and tips I took from the book were more agile-process related too, so I think there could be something for everyone in it.
I’m loving this YouTube channel explaining cool and interesting scientific stuff : Minute Physics
I especially liked this video on how to weigh a million dollars with your mind. It struck a chord with me because it reminded me of how we sometimes come up with estimation for unknown future features or capabilities.
I'm a huge fan of keeping things simple, usable and accessible when it comes to developing stuff (if possible).

A cool source of ideas for building usable and accessible "forms" is the ELMER guidelines. These guidelines are aimed at those building public sector forms, but I think the guidelines are good for anyone having to build forms. Here's a little from the website:
“Simplification of public forms is important to improve communication between the users and the public sector. The proceeding transition to electronic reporting may be an important simplification measure for the respondents, but only if the Internet-based solutions are felt to be more user friendly than the old paper versions. By applying good pedagogical principles, electronic forms may also ensure a better understanding of the task, better control of the data before submission, and by that even better response quality and more efficient processing by the relevant authority.”
The guidelines aren't that tough to read and they make a lot of sense. The forms that we ended up with when building against these guidelines just flowed well and felt much better than the original prototypes we'd built.

Here are some of the "stand-out" ideas I took from the guidelines, which helped us build much better forms with greater usability and accessibility. I must also add that these excerpts below were taken from the guidelines about a year ago (I know…. my draft list of posts is too large). I've checked a few of them against the current guidelines and they are still the same, but some might have changed in the last year. Another document to re-read 🙂
"The page order in forms must be locked where: 1) the order is significant with regard to response interpretation and quality, or 2) the order will depend on responses given on previous pages."

"User-requested help must appear when the user clicks on a standardised help icon in connection with a question. The user can make the help text disappear from the screen by clicking again on the same symbol, and must be replaced by a different text if the user clicks on the icon connected to a different question"

"In forms where a significant number of questions are irrelevant for specific form filler groups, or where different form filler groups shall complete significantly different sets of questions, different tracks must be developed. Each individual user shall be directed to the relevant track. Several tracks may consist of identical pages/question sequences"

"Both labels and input fields for response dependent questions must be greyed on the form pages and only be opened for completion if previous answers indicate that they are relevant"

"Each individual page should be delimited with a view to avoid an unreasonably long download time. The download time is affected by such factors as graphics use, the amount and type of controls and the number of fields."

"If there are several interim sums which are not simultaneously visible to the user, a separate summations group should be created at the bottom of the page. If the interim sums are located on different pages, they should be transferred to a summary page"

"Fonts, font sizes, colours and other graphic elements, must be used consistently and uniformly in all forms issued by the same inquirer. The forms must differentiate clearly between various types of elements (headings, category headings, labels, error messages and warnings, etc.). As a general rule, sans serif fonts should be used and colours should provide good contrast."

"Links to information that is irrelevant to the form completion should be avoided."

"The readability of help texts should be increased through use of typographic means. Except for very brief phrases, the texts should be broken into a series of easily readable chunks with highlighted headings and keywords."

"In cases of incorrect completion of individual fields, an error message must appear automatically as soon as possible after the error has occurred. The text must appear in the information area and the relevant field must be clearly marked. Messages presented in a separate window (dialogue boxes or pop-ups) must not be used"
As you can see, there are some interesting ideas that might work for you.
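That last guideline, inline errors instead of pop-ups, translates naturally into code: validate each field as input arrives and return a message to render in the form's information area next to the marked field. Here's a minimal sketch of that principle; the field names and rules are hypothetical examples of mine, not taken from the guidelines:

```python
# A sketch of the inline-error guideline: errors are returned as text to
# render in the form's information area beside the marked field, never
# shown in a pop-up or dialogue box. Field names and rules are made up.
from typing import Dict, Optional


def validate_field(name, value):
    # type: (str, str) -> Optional[str]
    """Return an error message for the information area, or None if valid."""
    rules = {
        "postcode": lambda v: len(v.strip()) >= 5,
        "email": lambda v: "@" in v,
    }
    check = rules.get(name)
    if check is None or check(value):
        return None
    return "Please check the '%s' field." % name


def validate_form(fields):
    # type: (Dict[str, str]) -> Dict[str, str]
    """Map each invalid field to its inline message."""
    errors = {}
    for name, value in fields.items():
        message = validate_field(name, value)
        if message:
            errors[name] = message
    return errors
```

The key design point is that validation produces data (field → message) rather than side effects, so the UI layer is free to place each message inline next to its field, exactly as the guideline asks.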
I often get asked by people new to Software Testing what the best approach to learning more about Testing is.
Is it certification, books, blogs or other courses? These are the usual categories that get listed.

I very rarely hear people ask whether "practice" is a good approach to learning. I think this stems from many people thinking that "practice" and "work" are the same thing. That one cannot exist without the other. How can I get experience in Testing unless I have a job Testing? I think this view is out-dated. I practice my testing outside of work. I practice my testing inside of work. If you want to get good at something… practice.

Over the last year I've been working with a great organisation who are building wonderful technology, mainly aimed at people without access to the basic infrastructure that many people take for granted. It's got a great user base and is making a positive change to many people's lives. Over the last year I have been trying to organise some Testing for this organisation. The group of Testers I'd gathered are great; a mixture of domain experience, general testing experience and a few new to the scene. Last week it all came together and hopefully this year we'll start adding a load of value for our client. And all of this is free, outside of work and purely voluntary.

My stock response for the last few years to the question "How can I learn more about Testing?" has been "Volunteer your time".
There are so many organisations out there building software that *could* benefit from your enthusiasm and skills.
You'll potentially learn loads about testing, working with people, time management, commitments, decision making, technology, reporting/articulating your testing and how to work within constraints.

It's all great for your CV.
So how to do it?
- Find an organisation that you feel some synergy towards.
- Get in touch with them and volunteer your time and energy.
- Be sure to set expectations about levels of testing, time and experience you are offering.
- If you need a more experienced person to help you get it started then head to The Software Testing Club and ask for help, or drop me a line.
- Persevere – communication can often be tough, expectations may need aligning and relationships will need to be built.
- Commit and deliver on your promise.
- Ask for help in the community if you get stuck or you realise there is too much work for you.
- Document and audit your learning. (any good interviewer will ask you about your experience)
There are so many organisations, open source projects, charities and not for profits that would benefit from some volunteer help….so what are you waiting for?
Image courtesy of wblj – http://www.flickr.com/photos/imageclectic/
There are so many ways of working out what is changing in your product. As a Tester I look for this information from any source. It helps me filter my test ideas.
One of the tools we’ve been exploring recently here at NewVoiceMedia is a cool tool called Gource. One of our programmers Wyndham has been experimenting with it and been tempting me with the insights this tool could offer our Testers.
There simply isn’t time to test everything so any technique or information source that will help me filter and target my Testing is very welcome.
Gource is a tool that “visualises” SVN check-ins. We can visualise where the check-ins are happening, with what frequency and with what intensity. We can get access to check-in information anyway, but what this tool gives you is a timeline of change and that all important visualisation.
Here’s a cool vid of Gource in action for Flickr:
So we’ve been playing with this and immediately I can see areas of code check-ins not immediately obvious from story cards or other workflow tools.
We're going to be exploring this tool more over the next few months and seeing what insights it can give us.
I've also been having crazy ideas about how I can use the SVN check-in process and Gource for visualising where we are targeting our exploratory testing. We could see how much testing is going on around which component….. but I digress. I'll explore that and let you know.
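One thing that makes ideas like this feasible is that Gource also accepts a simple pipe-delimited custom log format (one `timestamp|user|type|path` line per change, where type is A, M or D), so you can feed it data from any source, not just version control. Here's a sketch of flattening commit-style records into that format; the commit records, author names and paths are made up for illustration:

```python
# A sketch of feeding Gource via its custom log format: one line per
# change, "timestamp|user|type|path", where type is A (added),
# M (modified) or D (deleted). The commit records below are invented;
# in practice they might come from "svn log --xml --verbose".

def to_gource_log(commits):
    """Flatten commit records into Gource custom-log lines."""
    lines = []
    for commit in commits:
        for change_type, path in commit["changes"]:
            lines.append("%d|%s|%s|%s" % (
                commit["timestamp"], commit["author"], change_type, path))
    return "\n".join(lines)

commits = [
    {"timestamp": 1325376000, "author": "wyndham",
     "changes": [("A", "src/dialler/retry.c"), ("M", "src/core/queue.c")]},
    {"timestamp": 1325379600, "author": "rob",
     "changes": [("M", "docs/test-notes.txt")]},
]

if __name__ == "__main__":
    print(to_gource_log(commits))
```

Piping output like this into `gource --log-format custom -` gives you the same animated view; swap the commit records for, say, exploratory testing session notes tagged by component and you could visualise test coverage the same way.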
Whilst on the train a few months back I spent some time observing how people were using technology.
Some were using the tech as I assume it was intended, some were “street hacking” the products, whilst others had adopted unique ways of utilising technology (and other devices) to fit the context they found themselves in.
I often wonder how effective testing can be (and overall design and development) when we have little insight or understanding of how our applications/systems are used in everyday life. We can make sure it's "functionally complete" or "performant" or "meeting requirements" but that doesn't mean we've helped to create a great product right for the audience it's aimed at.
I think Testers can help greatly in this respect, either by simply asking questions at all stages of the design and build process or by getting out and seeing end users in situ (if you can).

Many of us assume, predict or speculate about how our end users are using the software; it's often all we can do, but I wonder how much of this speculation is accurate. I often hear it said that Testers "pretend" to be the end user, which is fine if you know who your end user is and in what context they will use your product, but this approach should never be used as an absolute point of view.
No-one will ever be able to use your applications in exactly the same way as your real end users, but we can certainly work hard to get close to it.

Here are some of the things I observed:
- Several people were using multiple mobile phone handsets. Work and Play? Feature versus Feature? Or simply taking advantage of the best tariff?
- What about data on each phone? Did it need to be synched?
- One person was swapping sim-cards in and out of one handset to make use of the best tariffs.
- Two people were using a technique called “flashing” or “beeping” to communicate with someone else via mobile phone. (flashing is where you ring a phone and then hang up before they answer. There are all sorts of social norms growing around this practice – good article here: http://mobileactive.org/mobile-phones-beeping-call-me)
- One person was trying desperately to get a good photo out of the window on his mobile phone whilst the train was moving. I’m not sure he was happy with any of them. Misplaced expectations?
- One lady was using a USB expansion device blue-tacked to her laptop lid. I "assume" she was doing this to expand the capacity and/or maybe to reduce the overall width of the machine (the two USBs she was extending were bigger than the expansion USB itself). Street hack? Do we need to test any usages like this (i.e. overloading the original intention)?
- The person next to me complained that his phone was running out of power too quickly. He’d been playing Angry Birds for the entire journey whilst using the phone to play music.
- Expectations of battery life needing to be longer to support modern multi-usage?
- How did both apps perform?
- Which one used the most power?
- Should we be thinking about how much power our apps consume on mobile devices?
- Can we realistically even measure it?
- Should we rely on the underlying platform to manage such usages?
- Many of us live in an age of “smart phones”. How does this “one for all” device fit with other electronic devices like eBook readers, Tablets, Laptops etc
- Can we read something on a Kindle and start reading where we left off on an iPad at home or Laptop at work?
- Can we seamlessly share information between devices?
- How is our information distributed, stored, secured, managed, protected? Do we care?
- Another lady caused minor fits of giggles as she took out her "early designed" portable DVD player. It was pretty huge by modern standards. She proceeded to play episodes of Friends on it as she slept for the journey. Old tech and old usages are a major challenge for many Testers.
- At what point do we stop supporting old tech or old versions of our software?
- Do we need to test all versions?
- Are there techniques for testing many platforms/versions at once?
- At what point does this old tech stop being “acceptable” in society….which society?
- How do we know what versions our customers use?
- One man was reading his Kindle on the train. He had created a hook for his Kindle. The hook was made from a coat hanger. This hook was suspended around the ceiling hand rail so he could essentially suspend the Kindle at eye height. At the bottom of this bizarre Kindle concoction was a ribbon so he could stop the Kindle swinging around with one of his hands. Street Hack?
- Could we have ever envisaged it being used this way?
- Would it have changed the way we tested or designed or implemented?
- Whilst on the train it is inevitable that the phone signal will come and go. With bridges, buildings and shady coverage it would be ambitious to assume total connection for extended periods. Yet I observed a number of people “hating” (i.e. swearing and getting annoyed with) their devices when they lost connectivity.
- Two people physically threw their phones on the tray table when they lost a voice signal. Others uttered offensive and angry rants at their devices.
- Yet the problem existed outside of the device or software in use; it existed in the infrastructure. Are expectations outgrowing reality?
- The infrastructure no longer supporting the norm and expectations of device usage. This is an interesting challenge for those creating devices relying on technology infrastructure like broadband and mobile networking.
- What’s the minimum connectivity? What’s the risk of it going down? What happens if it does go down? Does it recover? Do we lose data? Does it matter?
- One person was drinking heavily whilst attempting to use a Smart Phone keyboard. I won’t repeat what he said, but he struggled to type his SMS. One interesting point he made was that he used to be able to type whilst drunk on his old phone. Loss of capability? Extreme contexts? Trustworthy observation?
- Two people were “chatting” to each other via their mobile phones (maybe Skype or other chat system). They were sat opposite each other.
- Are cultural changes in communication reflected in your products?
- I was sat in a quiet carriage. As usual there were a few complaints and confrontations about noise. One of which was about the noise being made by the keyboard of someone’s laptop.
- At what point do we pass social thresholds of acceptability? Do we need to do more Inclusive Design?
- Should we consider more extreme contexts when testing or write these off as edge cases?
- Could we ever imagine all of the contexts our products may be used in?
- I kept turning on my wireless hotspot on my phone to sync my field notes (Evernote) as the train stopped at each station. This raises questions of synchronisation, offline working issues, data storage on the cloud and a whole host of privacy issues. There are some interesting examples of how people are using tools like Evernote in the wild.
- Do you have stories of how people are using your applications?
- Do you actively seek stories from end users?
- Do you use this data to build user profiles and system expectations?
Does this really have anything to do with Testing?
There are a few people starting to talk about social sciences in Testing and I believe the application of many areas of social science are going to grow and diversify as Testers seek to find out more about people and technology and how our products can find the right audience(s).
Social sciences can give us a deep insight to ourselves, culture, mass media, communication, research, language and a whole lot more.
Observing people is just one element of research. Research will give you insights, clues and information about the things you research.
Observations and research will help you to make decisions on what to focus on, and what to overlook.
Testers are also natural skeptics; we should help to experiment, prototype and challenge simple assumptions and social categorisation (i.e. all young people use Twitter, wear hoodies and listen to hip-hop) *
Observing people in every-day life is often the biggest eye opener for any Tester wanting to learn more about people and tech. Why not have a go? Focus on the world around you. I bet you'll see stuff you never saw before. I bet you'll see people using tech in ways you'd never seen before. I bet you'll learn something new.
The challenge is how you can bring what you learn to your Testing. As usual, if you have a go, let me know what you think.

* This is a real generalisation I heard someone say at a technology, culture and local council meeting!
I presented at a conference last year where I talked about large feedback loops and how agile attempts to shorten these loops. Ideas such as Acceptance Test Driven Development, Test Driven Development and agile sprint durations are *some* reasons why agile achieves shorter feedback loops. (not exclusive to agile though).
I also suggested that long feedback loops make it acceptable to build an “us versus them” environment and to use static tools like Defect Tracking systems as a form of communication.
In environments where we see lots of "Us Versus Them" mentality between Programmers and Testers (and there are plenty of these environments), do we also see large feedback loops and/or systems in use that encourage slower feedback?
Or, in environments with large feedback loops, do we see more "Us Versus Them" mentality?
I suspect both are true and I wonder which comes first.
No matter which way the cause and effect runs, it's clear to me that we need to work hard to break down these walls by increasing the speed at which we provide feedback.
For some, this is easier said than done, but I firmly believe that short feedback loops can do wonders for your whole Development team. Thoughts?
In my experience there can often be, especially amongst testers, a desire to hear the bad news, the gossip or the failings. Go to any mainstream Testing conference and you'll hear stories of Testers gleeful at delays, failings and horror stories of late releases; the stereotypes of negative Testers did start somewhere and are very much alive in some industries.
I don't believe this is a good strategy at all, but I'll save that rant for another post. A lot of the negativity, delays and grief can often come from the way we report our testing. Often innocent reporting of findings can result in a witch hunt, panic and stress. I've seen whole management teams mobilised because of loose comments about quality, especially when a product is nearing its release date.
When we report our test findings we need to be careful not to sensationalise the information, or underplay the importance of the findings.
So in a sense, just because some people crave tales of disaster, rework, bad designs, mistakes and trouble ahead it’s not always constructive to give them this news. Now, don’t get me wrong. I’m not advocating that we don’t give the truth. Not at all. What I am saying is be careful how you deliver this news.
Be careful to keep your information factual. Try not to add a personal or political balance to it (unless you have an agenda) – this is a lot harder than you think.
Be careful of the language used. Emotive words will flavour the meaning.
Think about the information you are providing. Sit and read it and then write down 5 questions you could realistically expect to be asked about this information. Then answer these questions. Add this further information to your reporting. Then try to find 5 more questions. Not only will you prepare yourself for inevitable questions about your reporting, but you will also fine tune your information in the process. It may seem overkill but I have wasted countless hours in unnecessary meetings because someone over-sensationalised their findings.
Keep the data clean, minimalist, simple and to the point.
Think about what context you find your project in. Is it the last day? Is it the first day? Are you working overtime? Are you under lots of pressure? Is it going well? These factors may make a difference to how your information is interpreted.
Make your information gel nicely with the context. Or make it contradict for a reason.
But always be conscious of the effect your information *could* have. Even stopping to think about what you’re reporting is a very effective filter. And remember:
“What interests the project team, might not be in the interest of the project team”
Testing is dead, that's what they said in the news.

I disagree, but I think it's getting confused.
There has been a lot of talk on whether or not “testing” is dead.
At EuroSTAR last year it was an over-riding theme and it generated a lot of talk about what the future holds for Testers.

Trying to predict what Testing will look like in the future will always be limited by our inability to reach an agreed definition of what Testing currently looks like. I don't believe that trying to pigeonhole Testing is a good idea anyway (but that's a different post). So to talk about a future of Testing when we aren't even sure what the current view of Testing is seems to me to be a fruitless task. I know; I've tried in the past and failed, hence I now focus on what trends I see and what challenges we may face.

What becomes so painfully clear when widening your awareness to understand how Testing is done in other companies and domains is that Testing is incredibly diverse (and often surprising).
A future idea about Testing (or techniques for achieving good Testing) for one person could be an old fashioned approach for another person. Both may be valid for their contexts and both may or may not be future orientated; it depends where you personally stand.

For example, human tickbox testing (often called "checking") is very much alive and well in some industries and it shows no sign of going away (sadly). It's a very buoyant market and a huge number of "Checkers" are employed doing it. I can't see this market going away soon; so is Testing dead?
I don't believe Checking is Testing, but now we're arguing semantics when we talk about Testing is Dead (i.e. do we mean Checking is dead?). Exploratory Testing is the future for some people, but a tried and tested approach for others.

Testing is not dead. It's just changing… for some people, in some industries. Just like the world is changing… for some people, in some industries.
Good Testers will do good Testing no matter how their world changes. They may just approach it differently, with a different mindset and different set of tools and techniques and approaches.
Those that don't adapt will find their value diminishing, but that doesn't mean Testing is dead.

Unit testing is testing. Acceptance testing is testing. UX testing is testing. AB testing is testing. Testing in live is testing. Design reviewing is testing. A Story chat is a form of testing. Testing is changing (for some people). The people doing testing are changing. But Testing is still happening. And Testing will continue to happen.

Talking to people at EuroSTAR it became increasingly clear there were two very distinct camps of thinking about the death of Testing, with a number of blended ideas in-between. Camp 1 was people who were terrified of the future and what it might bring. Camp 2 was people who embraced the future and all of the change it could bring. I for one am firmly in Camp 2. I'm excited about the change and challenges and the technology we've yet to see.

Yet I know there are many who are scared. I think a lot of this fear comes from not knowing what the future may hold and not being able to visualise yourself working in these new environments. The biggest problem for most people around the future of testing is an inability to forecast themselves and their skills into a job they don't believe exists (or aren't willing to believe exists).
This comes out as resistance to change: Cloud will never come to their domain; Agile will never work where they work; sitting with Programmers will never work in their environments; virtualisation will never work because it’s BLAH BLAH BLAH.

But the world IS changing, and a key skill a good tester needs is the ability to understand how their world is changing, how their skills will be valued in this changing world and what they need to do to future-proof themselves. A good Tester will adapt.

At EuroSTAR a lady said that Cloud and Agile would never come to her industry because it’s impossible to achieve and the industry wouldn’t accept it. Her industry is Call Centre software. Well, guess what? I work in the Call Centre industry, our product is cloud based and we develop it with an Agile/Lean approach.

Testing is never as black and white as we may initially think. There is never a best and only way. Refusing the future will not work, and predicting the future is impossible. But being adaptable in your approach, your skills, your understanding and your learning will certainly help the future seem less scary. The only constant is change.
The future will happen.
Testing will still happen.
The only question is: “Will it be you doing this testing?”
More Testing Is Dead posts:
Matt Heusser – http://www.softwaretestpro.com/Item/5352
Google GTAC conference – http://www.youtube.com/watch?v=X1jWe5rOu3g
Arborosa on Testing is Dead – http://arborosa.org/2012/01/11/is-testing-dead/
It’s always good to test systems, but testing the live system… for a long time… at peak bus journey time… at a busy bus station? Panic ensued.
Yet in this instance, when people were desperate to see what time the buses were running, mobile technology became the hub of activity. Two others and I, armed with mobile internet, were tracking the bus times. As more people gathered to find out the times, this in turn attracted even more people.
A few thoughts from this:
At what point do systems we’ve traditionally relied on become obsolete, and what do we replace them with? (we can’t assume everyone will have mobile internet)
Why is testing of these types of systems done during peak time (assuming that the message is honest)?
At what point do we improve the service we offer so we no longer need to track bus journeys and estimate the time of arrival? (I’m thinking of the Japanese train system, which is legendary in its speed, accuracy and reliability – but I guess they still track train times?)
Why, when systems are taken down, are alternatives not put in place?
As testers how can we “test” in live whilst still maintaining a service?
And a million more questions that are flying through my head.
At EuroSTAR last week it was sad to see a “them versus us” culture still thriving in the software development community. I thought things were changing, especially with the onset of Agile heading mainstream, but it seems not.

I got embroiled in a conversation which stole an hour of my life. An hour in which I heard the virtues of “them versus us”. An hour in which this “Test Manager” extolled the positives of an “Independent” test team, who “distrusted” everyone and treated programmers with “contempt”.
It boosted Testers’ morale, apparently. It made the team function as it should: as a separate, impartial and hated department. A department that would ruin projects. But it was never the Test Manager’s (or the team’s) fault; it was the project team’s or management’s.

I got the following vibe:
The Testers were frightened of the Management.
The Management didn’t like the Programmers or the Project Team, though they could live with the Testers.
The Programmers were indifferent to the Project Team but were terrified of the Testers and hated the Management.
The Management were seriously affected by the Programmers’ terror of the Testers.
The Project Team were nervous of the possibility of a Management – Tester alliance, spurred on by the indifference of the Programmers, and they shared everybody else’s dislike of the Management.
Or something like that.
Releasing software seemed to be a constant struggle for this chap. Testing was always an afterthought.

This was a scarily common theme, and the blame was always put on other people. Is change that difficult?
Isn’t it better to try and change something (relationships, approach, team, people, environment, structure, etc.) than settle for mediocrity? What are your thoughts?