Leveraging the Strongest Factor in Security (I)

In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarize the high-level ideas anyone who wants to be called a security engineer or architect should be familiar with. Dr. McGraw is, of course, that intelligent gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has influenced many a career choice in the security industry.

The first principle he explains is quite logical and intuitive: “Secure the weakest link.” It spans many disciplines, from project management to logistics, and is almost self-evident: there is no better way to dramatically improve something than to find its worst part and fix it. Pretty simple, right?

Most information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and never miss a chance to “blame the user” or to cite human stupidity as an infinite source of security problems. It may seem true because it is so easy to rationalize. Technology is something people have invented. Technology is something people do. Thus, if anything in technology goes wrong, the root cause is most probably a mistake or a poor decision made by someone.

Never attribute to malice that which is adequately explained by stupidity.

Hanlon’s razor

However, when you start challenging this idea and ask what they have actually attempted in order to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and, believe me, it’s a lot of fun.

The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. It spends the majority of its computing time and calorie budget maintaining the “situational awareness” that allows us to step on the brakes long before we could solve the system of equations describing the speeds and trajectories of our car and the one approaching from the side.

Our brain, if appropriately trained, can serve as an effective security countermeasure, one that could outrun any security monitoring tool in detection and response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to teach us to avoid open fire, run from a tiger, and not jump from trees. An even bigger problem is that we don’t seem to be starting.

So, what’s wrong with us? Why don’t we combine our collective knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? Frankly, I have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences.

However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post. To be continued…