In January 2013, Gary McGraw wrote an excellent piece on 13 secure design principles that summarize the high-level ideas any security engineer or architect should know to deserve the title. Dr. McGraw is, of course, that smart gentleman from Cigital who wrote the “Software Security” book, records the “Silver Bullet” podcast, and has played his role in many career choices in the security industry. The first principle he explains is quite logical and intuitive: “Secure the weakest link”. This principle spans many disciplines, such as project management and logistics, and is evident to many: there is hardly a better way to dramatically improve something than taking its worst part and fixing it. Pretty simple, right?

The vast majority of information security professionals agree that the human factor is the weakest element in any security system. Moreover, most of us promote this idea and don’t miss a chance to “blame the user” or point to human stupidity as an infinite source of security problems. However, when you start challenging this idea and ask what they have actually attempted to do to change the situation, the answers are few. Just try it yourself: every time you hear someone say “… you cannot fight phishing/social engineering/human error etc.”, kindly ask them: “And have you tried to?…” I do it all the time and, believe me, it’s a lot of fun.

The uncomfortable truth is that the human brain is very efficient at detecting and dealing with threats. It spends the majority of its computing time and calories maintaining the “situational awareness” that allows us to step on the brakes long before we could solve the system of equations describing the speeds and trajectories of our car and the one approaching from the side. Our brain, if appropriately trained, can serve as an effective security countermeasure that could outrun any security monitoring tool in detection or response. The problem is that we as an industry haven’t had as much time to train humanity to monitor for, detect, and respond to technology threats as nature had to teach us to avoid open fire, run from a tiger, and not jump from trees. And an even bigger problem is that we don’t even seem to be starting.

So, what’s wrong with us? Why don’t we combine the collective knowledge of human weakness in the face of cyber threats with the maxim of securing the weakest link? I frankly have no idea. Maybe it’s because the knowledge domains that deal with human “internals”, such as neuroscience, psychology, and behavioral economics, are very different from what security people are used to dealing with: networks, software, walls, and fences. I don’t know. However, I have tried (harder ©) to improve the way people who are not security experts deal with cyber threats. And you know what? It’s more fun than blaming the user. But I guess that’s enough for one post, to be continued…