
Looking at the human factors in security breaches

  • November 1, 2017
  • 13 min read
Gary is a virtualisation, storage and Windows systems administrator who also occasionally ventures into Linux and networking and cloud areas. Container user, Windows tech, Veeam Vanguard, Spiceworks moderator. A very firm believer that the best way to solve a problem is to start with a hot cup of tea.

There have been a lot of high-profile security breaches this year. The highest profile has to be Equifax, a breach with the potential to run and run for some time to come. Deloitte was also breached, and alongside those large companies, which should have known better, there have been various others affecting services such as Disqus.

Of course, once it was made clear how each breach occurred, a lot was said about how bad it was and how it should never have happened. That is quite valid from a technical standpoint, but the reasons these security issues were allowed to exist go far beyond the technical and into the realm of human factors.

To be clear, I’m an IT pro and I love what I do. I’m not a psychologist, nor am I attempting to be one, but I do have an interest in how disasters unfold, not least aviation disasters. Whenever there is an aircraft accident, the investigators always look at the human factors alongside the technical and mechanical ones, and I think it’s time the IT industry started doing the same when reviewing IT disasters, including security breaches.

Root causes of data breaches

A company can have all the high-tech security in the world but that’s not going to stop a staff member with a deadline, especially if that staff member is technically literate and that deadline is being pushed by someone senior.

Let’s be clear here: these are not people who are deliberately opening a company up to attack. They are just trying to circumvent security policies because someone is breathing down their neck, and they see those very same policies as a hindrance to getting the job done. The pressure doesn’t even have to be forceful; it can be incredibly subtle. Phrases like “If we don’t get this out on time, the customer might leave us!” build a quiet pressure that indirectly encourages people to bypass security to help the company achieve a goal.

This is why I’d like to see companies review not only the technical issues around a security breach but also the cultural and psychological ones.

I believe that security incidents involving human factors can be broken down into several areas:

Corporate Politics AKA “This bug isn’t in my code” / “This issue isn’t for my team, I’ll leave it for them to fix”.
Quite likely not intentional, but people working for large companies (or small companies pretending to be large ones) will often divide teams up into specific functional areas. Touch anything outside your area and you can be in trouble. This fosters a “not my problem” attitude and forces people to turn a blind eye to security incidents for the absolutely insane reason that they could get into trouble for treading on another team’s toes.
This sort of company attitude is one of the worst. Security is about co-operation and communication; if teams are divided up to the point that interaction is discouraged, then bugs, including security-related ones, will fall through the cracks and eventually be exploited.

Artificially Tight Deadlines AKA “We’ll do security LATER”. Or “There isn’t time to test!”
Sadly, this is far too common a scenario: a project is needed and it’s needed NOW because marketing gave a go-live date without checking with any of the IT teams, before paying out a small fortune on press releases.
Often, the message from on high will be along the lines of “Security is important, everyone knows this, but it can come later, just before the product is due to ship. After all, it’ll just slot right in, and we can’t afford to let anything get in the way of development because of the publicly announced go-live date.”

Of course, by that point it’s too late, so security gets bumped to version 2, which may never happen, especially if release 1 wasn’t a big commercial success. Even if it does happen, there will probably be a list of bugs and feature improvements that take priority over security, or the developers will find that bolting security onto an existing product creates more issues because it was never designed from the ground up to be secure.

The upshot of all of this is that, over time, the software ages, becomes more vulnerable, and is never actually fixed.
A slightly different variant of the same problem occurs with “on-time bonuses”: bonuses for shipping a critical project on time and feature-complete.
Of course, when this sort of reward is introduced, the focus shifts to features because there is a clear reward for including them, and everything else, such as stability and security, becomes a very poor second to whatever carries the reward.

Denial AKA “We don’t need to patch, we have good firewalls” or “It doesn’t matter if our code has a security hole, the load balancer is there for that”.

Relying on a firewall, load balancer, IPS, or other security device is never a good strategy. Yes, they are all good devices and they do a great job, but like anything, they run software that could be vulnerable. All it takes is a security flaw like Heartbleed and the whole house of cards can come crashing down. Defence in depth is an old term but a good one, and it applies here: don’t assume the firewalls will prevent a breach.
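To make the defence in depth point concrete, here is a minimal, hypothetical sketch (mine, not taken from any of the breaches mentioned above). Even with a firewall or WAF in front, the application itself shouldn’t trust its input; the table, function, and data below are purely illustrative.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver treats `username` strictly as data, so
    # input such as "' OR '1'='1" cannot change the structure of the SQL,
    # regardless of whether a perimeter device spots the injection attempt.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(find_user(conn, "alice"))        # (1, 'alice') -> legitimate lookup works
    print(find_user(conn, "' OR '1'='1"))  # None -> injection attempt finds nothing
```

The point isn’t this specific code; it’s that the application layer assumes the perimeter can fail and defends itself anyway.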

“It’ll be fine” AKA “I’ll take responsibility for that”.

In the 1960s, Stanley Milgram performed a series of experiments to determine how far people would go when someone else takes responsibility for their actions. The answer was quite a shock: the experiments showed that people will go a lot further than first thought, as long as someone senior says “I’ll take responsibility.” The same is true of IT security, especially when it involves the development and deployment of a new product, perhaps one that was rushed through without much thought for security. I do have to wonder just how many breaches can be attributed to management saying “It’ll be fine, just do it” or some variant of that phrase. I suspect it factors into a great many of them.

IT security breach analysis is a huge area involving specialist skills, and it can be quite fascinating to see from the technical side what went wrong. But it really is time to start looking at the human side to see what allowed the technical side to fail, and the same technique could, and should, be used for outage investigations as well.
