The Peltzman Effect

I find the Peltzman Effect incredibly interesting from a testing point of view.

“The Peltzman effect is the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation.” (Wikipedia)

I find it interesting because I wonder whether we see this (or some other closely related theory) when we build secure systems or create safe ways for people to interact with our applications. If we provide an application that is secure and/or communicate outlandish safety and security claims, do our users exhibit less secure behaviour when using it?

I have a friend who interpreted “The most secure ISP provider on the market” to mean that he no longer needed anti-virus software or a firewall on his laptop. He had his credit card details stolen two weeks after signing up.

I know of someone else who fell for the same phishing attack 5 times. He put his trust in the new “Incredibly secure online banking portal” to protect him.

I know of systems that are incredibly secure yet send out all login details via just one mailing through the post. Get the mailing, get the credentials, get the money.

I know of people who keep their PINs in their wallets with their cards.

Some people use obvious passwords like 1234, 0000 or Pa$$word – check out the iPhone password stats here.

And so as a tester I always take a step back when it comes to security and look at the wider picture.

Experience has shown me that it’s the human that’s typically the weak link in any security process or application usage. And I can’t help but wonder whether this same human, when bombarded by claims of high security and safety, becomes an even weaker link in the chain.

Maybe it’s the Peltzman Effect? Maybe it’s something else? Maybe it’s nothing at all?

But I reckon all testers would benefit from drawing or sketching out the whole process (including the human elements) of their system under test and asking themselves one question:

“How can the human parts of this process compromise the security?”
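For those who prefer a checklist alongside the drawing, here is a minimal sketch of how that question could be captured in code. It is purely illustrative Python: the flow, the step names and the risks are hypothetical, not taken from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in the end-to-end process, performed by a human or a system."""
    name: str
    actor: str                                  # "human" or "system"
    risks: list = field(default_factory=list)   # ways this step could compromise security

# Hypothetical sign-up and login flow, sketched end to end.
process = [
    Step("Post login credentials to customer", "human",
         ["single mailing intercepted in the post"]),
    Step("Customer stores PIN", "human",
         ["PIN kept in the wallet next to the card"]),
    Step("Customer logs in", "human",
         ["phishing page trusted because 'the portal is secure'"]),
    Step("Portal authenticates the request", "system"),
]

# Ask the one question of every human part of the process.
for step in process:
    if step.actor == "human":
        print(f"How can '{step.name}' compromise the security?")
        for risk in step.risks:
            print(f"  - known risk: {risk}")
```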

Some light reading:

11 Replies to “The Peltzman Effect”

  1. I think that Threat Profiling must be an invaluable tool for testers when it comes to security. Essentially, a threat profile relating to Agile is a user story written from the point of view of an adversary.

     Story #1: “As an adversary I can gain access to credit card details.”

     In the same way that Agile stories address the “goals” of a user, the threat profile addresses the goals of an attacker. This helps focus your attention on a particular area. I do believe that during the tests the focus of this goal should be iteratively changed: after some initial testing and feedback from the application, a particular area, variable or parameter may become more apparent as a back door or vulnerability.

     Modelling the “human parts” in this way may have equal value.

     Story #2: “As a user I am restricted from falling foul of the Peltzman effect” 🙂
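As a rough illustration of how an adversary story like Story #1 might be made concrete enough to run, here is a minimal sketch. Everything in it is hypothetical Python – the story text, the probe and the data it inspects are invented for illustration, not part of any real threat-modelling tool.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ThreatProfile:
    """An adversary story plus the checks a tester runs to see whether the goal is blocked."""
    story: str
    probes: List[Callable[[], bool]]   # each probe returns True if the adversary's goal is blocked

def card_details_are_masked() -> bool:
    # Placeholder probe: in a real test this might inspect an export file
    # or an API response for unmasked card numbers.
    exported_row = "name,card\nAlice,****-****-****-1234"
    return "****-****-****" in exported_row

profile = ThreatProfile(
    story="As an adversary I can gain access to credit card details",
    probes=[card_details_are_masked],
)

for probe in profile.probes:
    blocked = probe()
    print(f"{profile.story!r}: {'goal blocked' if blocked else 'goal ACHIEVED - investigate'}")
```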

  2. Hi Dan, thanks for the comment. Interesting thinking about the “reverse” story, i.e. not being able to do something. It’s an angle I hadn’t considered but could be very useful as a way of bringing the Threat Profiles into the story. Nice. We should try it 🙂 Rob..

  3. This reminds me of the Citroën “safest car in the world”. It’s never been in an accident. This is down to the fact that the body is made of glass, with sharp spikes in the cabin and a serious spike on the steering wheel. The idea is that the driver is so keenly aware of the dangers of driving that they do everything to avoid them.

     The problem with more secure software and ‘better’ software is that the risks are hidden and it is difficult to imagine them. There are several approaches I take.

     - The first is to identify what the problem is for the solutions, i.e. what were the risks we were attempting to mitigate – like adding air bags to prevent the driver hitting the steering wheel.
     - The second is to ask what the risks have been from the beginning, i.e. the current solution always builds on something that has come before. Go back to the glass car: what are the risks of driving that?
     - The third way is to use stories where you define personalities of the users and their traits, i.e. the person who thinks “secure” means they don’t have to do anything to protect themselves because they use this software, and therefore keeps the keys in the car.
     - Another good one is the news. It’s always full of the ‘silly’ things that people do, like switching on cruise control and expecting the vehicle to turn corners whilst they make a cup of coffee in the back.

     All of this contributes to the risk analysis: the likelihood of it happening. And don’t forget, “never overestimate the intelligence of the user”.

  4. Hi Rob, I think humans are very much the weakest link in security. Certainly, I think there is a psychological effect when people are told that something is secure and safe to use by someone in authority or a reputable organisation – a lot of people would believe this to be true and lower their own personal security as a result. Unfortunately, relating to this topic, many people can easily be conned by people simply impersonating someone else or claiming to be from another organisation. In terms of computer security, why spend a lot of time cracking or brute-forcing passwords and looking for weaknesses in applications when it’s easier to simply target humans? Kevin Mitnick was on the run from the FBI for years for doing this type of attack – he eventually got caught – but he wrote a very interesting book, The Art of Deception, which talks about this type of attack and also describes some techniques to defend against it.

  5. Hi Stephen, thanks for taking the time to comment. I like your suggestions a lot, and the way you make them human by putting them in stories with personas is a great way to make them feel more real. I remember the story about the guy who set the cruise control and then hopped in the back of his camper to make a brew. Classic. It’s like when you buy peanuts and it says on the pack that it may contain peanuts: at one extreme it is health and safety gone mad, but at the other end it is perfectly necessary. Thanks for commenting and sharing your ideas. Rob..

  6. Hi Martin, thanks for taking the time to comment. The Art of Deception – I’ll be sure to give this book a read. You should add it to the NewVoiceMedia reading list 🙂 Good point about it being easier to target the human than crack the software – I’d never framed it that way. Cheers, Rob..

  7. Hi Rob, really interesting idea! I think the times we make assumptions or rely on trust are a potential trap. You can see this in the Peltzman effect and also in ideas around risk compensation. I proposed the idea that the real issue with risk compensation is that we neglect or overlook the aspect of feedback in our assumptions and judgements (see the reference below), and that’s where we sometimes get caught out. Maybe that applies with the Peltzman effect also. So it’s the awareness of the problem, as you say, that is important. Once we’re aware of a potential trap in our thinking we can try and cope with it. Ref: http://testers-headache.blogspot.com/2010/12/risk-compensation-and-assumption

  8. Hi Simon, thanks for commenting. I like the idea of neglecting the feedback; I think at some level this is the Peltzman effect. Maybe the promise or trust of security makes us, at a subconscious level, ignore the signs and feedback about our actions. Maybe the promise of security or trust blinds us to the future thoughts of “what could happen”. Thanks for the feedback. 🙂 Rob..
