I hate it… but I must admit I’ve not tried it

It seems the more popular something becomes, the more people start to hate it, often without even trying it. Look at what’s happening with Agile in the testing community: a lot of negativity from people who’ve never tried it. The lashing out is usually based on false ideas of what it actually means to be a tester in an agile context. A good example is a classic and incredibly misguided BCS blog post here. The comments are invaluable… and funny.

I see the same thing in the social media arena too. Lots of people berating Twitter, blogging and Facebook as nothing more than a passing trend. Yet for many it’s bringing them closer to their real customers and end users. For some it’s building long-lasting friendships and relationships. Giant circles of contacts. Extending people’s reach, making the wider community available to us all.

The detractors seem to be those who haven’t tried it, or who dabbled and didn’t like what they saw. They see it from a distance and critique it, often armed with misplaced or over-hyped information, and often without understanding the role that contexts and environments play.

Look at how some people react when you mention acceptance test driven development or exploratory testing.

They offer the same look my careers advisor once gave me when I told her I wanted to be a Ventriloquist when I left college.


Some people go mad at how ridiculous these ideas are. “Where’s the quality?” they shout.

New communication techniques and social media channels give us a platform to challenge the one-sided stories we’ve been hearing for years and years. The empty arguments and global testing norms can be challenged, globally, locally, glocally. We can discuss these so-called Best Practices and expand our sources of information to get a better picture of what may or may not work for us. We no longer have to go to the mainstream for our information.


I heard someone at a Testing Conference criticising the use of Virtual Machines for Load Testing because they felt they weren’t reflective of real systems.

Yet they had spent thousands of pounds on expensive build environments, employed big teams to manage those environments, and still had to wait two days to rebuild the system after a failure.

The interesting part of this story was that the “real” system the presenter was Load Testing was indeed virtualised anyway. What? A live system being run on virtual machines??? Never.

This post is a bit of a rant, but really… if you must criticise, reject or belittle something, then at least criticise something you have tried.

Communication. Signs. Symbols.

I’ve studied communications for years now, but today was the first time I encountered the video from Archive.org that I’ve included in this post, which is a shame because it’s fantastic. At the same time, I’m glad I’ve stumbled across it now so that I can share it with those who are interested.

Communication underpins everything we do. In a sense, we cannot do anything without communicating something, so it’s important we understand the implications of this.

Those of you who follow this blog will know I’m interested in simplifying the discussions around communication by distilling the deeper-rooted theory into a simple three-part thought process: Purpose. Audience. Context. It’s a starting point for thinking about communication and how well we communicate, but it is by no means complete.

This video does an excellent job of explaining that deeper-rooted theory, particularly focussing on Shannon’s model of communication, but also touching on signs, symbols, culture and noise. It’s a great video and carries that typically haunting quality of 1950s informational films. I love the images, the editing techniques and the footage, and the soundtrack just tops it all off. But despite all of this retro greatness, there’s nothing more important in this video than the content.

It’s a long video at 20 minutes, but well worth sticking with. A couple of phrases stood out from a testing point of view:


English language is one half redundant

and

Computers are often referred to as brains… The greatest fallacy in the comparison is one of degree.
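
That “one half redundant” figure traces back to Shannon-style entropy estimates for English. As a rough, illustrative sketch (the function and the sample text below are my own, not from the video), a first-order letter-frequency estimate in Python looks something like this; models that account for longer contexts push the redundancy towards the “one half” figure and beyond:

import math
from collections import Counter

def estimated_redundancy(text):
    # First-order estimate: entropy of single-letter frequencies, in bits per letter.
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    max_entropy = math.log2(26)  # if all 26 letters were equally likely
    return 1 - (entropy / max_entropy)

sample = ("communication underpins everything we do in a sense we cannot "
          "do anything without communicating something")
print(f"Estimated redundancy: {estimated_redundancy(sample):.0%}")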


Enjoy:

[Embedded Archive.org video]

The Peltzman Effect

I find The Peltzman Effect incredibly interesting from a Testing point of view.


“The Peltzman effect is the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation” (Wikipedia)


I find it interesting because I wonder whether we see this effect (or some closely related one) when we build secure systems or create safe ways for people to interact with our applications. If we provide an application that is secure, and/or communicate outlandish safety and security claims, do our users exhibit less secure behaviour when using it?


I have a friend who interpreted “The most secure ISP provider on the market” to mean that he no longer needed Anti-Virus or a Firewall on his laptop. He had his credit card details stolen 2 weeks after signing up.


I know of someone else who fell for the same phishing attack 5 times. He put his trust in the new “Incredibly secure online banking portal” to protect him.


I know of systems that are incredibly secure yet send out all login details via just one mailing through the post. Get the mailing, get the credentials, get the money.


I know of people who keep their PINs in their wallets alongside their cards.


Some people use obvious passwords like 1234 or 0000 or Pa$$word – check out the iPhone password stats here.
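
To make that last point concrete, here’s a rough sketch (the blocklist and function name are mine, purely illustrative, and nothing to do with the iPhone stats) of the kind of obvious-credential check a tester might run when poking at a password or PIN policy:

# Illustrative blocklist only; a real check would use breach data and the actual policy rules.
COMMON_WEAK = {"0000", "1111", "1234", "123456", "password", "pa$$word", "qwerty", "letmein"}

def is_obviously_weak(secret):
    s = secret.strip().lower()
    if s in COMMON_WEAK:
        return True
    if s and len(set(s)) == 1:             # repeated single character, e.g. "9999"
        return True
    if s.isdigit() and s in "0123456789":  # ascending digit run, e.g. "345", "1234"
        return True
    return False

for candidate in ("0000", "Pa$$word", "1234", "tr0ub4dor&3"):
    print(candidate, "->", "weak" if is_obviously_weak(candidate) else "not obviously weak")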


And so as a tester I always take a step back when it comes to security and look at the wider picture.


Experience has shown me that it’s the human that’s typically the weak link in any security process or application usage. And I can’t help but wonder whether this same human, when bombarded by claims of high security and safety, becomes an even weaker link in the chain.


Maybe it’s the Peltzman Effect? Maybe it’s something else? Maybe it’s nothing at all?


But I reckon all testers would benefit from drawing or sketching out the whole process (including the human elements) of their system under test and asking themselves one question:


“How can the human parts of this process compromise the security?”


Some light reading: