“The Peltzman effect is the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation” (Wikipedia)
I find it interesting because I wonder whether we see this effect (or something closely related) when we build secure systems or create safe ways for people to interact with our applications. If we provide an application that is secure, or communicate outlandish safety and security claims about it, do our users behave less securely when using it?
I have a friend who interpreted “The most secure ISP on the market” to mean that he no longer needed anti-virus software or a firewall on his laptop. He had his credit card details stolen two weeks after signing up.
I know of someone else who fell for the same phishing attack five times. He had put his trust in the new “Incredibly secure online banking portal” to protect him.
I know of systems that are incredibly secure yet post out all the login details in a single mailing. Intercept the mailing, get the credentials, get the money.
I know of people who keep their PINs in their wallets alongside their cards.
Some people use obvious PINs and passwords like 1234, 0000 or Pa$$word – check out the iPhone passcode stats here. A simple blocklist check, sketched below, would catch the worst of these.
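Purely as an illustration, here is a minimal sketch in Python of the kind of check a sign-up form could run to reject the most common choices. The blocklist contents and function name are my own invention, not taken from any particular product:

```python
# Minimal sketch: reject secrets found on a blocklist of common choices.
# The entries below are illustrative, not an exhaustive list.
COMMON_CHOICES = {"1234", "0000", "1111", "password", "pa$$word", "qwerty"}

def is_obviously_weak(secret: str) -> bool:
    """Return True if the secret appears on the common-choices blocklist."""
    return secret.lower() in COMMON_CHOICES

# Quick demonstration of the check in action.
for candidate in ["1234", "Pa$$word", "correct horse battery staple"]:
    verdict = "rejected" if is_obviously_weak(candidate) else "accepted"
    print(f"{candidate!r}: {verdict}")
```

Real-world implementations compare against far larger lists drawn from breached credentials (NIST SP 800-63B recommends exactly this kind of check), but even a tiny blocklist stops the worst offenders.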
And so as a tester I always take a step back when it comes to security and look at the wider picture.
Experience has shown me that the human is typically the weakest link in any security process or application. And I can’t help but wonder whether this same human, when bombarded with claims of high security and safety, becomes an even weaker link in the chain.
Maybe it’s the Peltzman Effect? Maybe it’s something else? Maybe it’s nothing at all?
But I reckon all testers would benefit from sketching out the whole process (including the human elements) of the system under test and asking themselves one question:
“How can the human parts of this process compromise the security?”
Some light reading: