"Uphill battle" is not an apt way of putting it. You know, you can usually see the top of a hill.
Maybe we're fighting the wrong battles altogether.
Consider the great pejorative of infosec, "security through obscurity."
Everyone knows that security through obscurity is not just bad, but downright irresponsible. The canonical illustration of this comes from cryptanalysis: rolling your own crypto scheme is almost always a bad idea, with a result that is considerably weaker than what you get with a publicly known, well-understood encryption scheme. Kerckhoffs's principle, a rule of thumb for cryptosystems saying that your system should be secure even if everything but the key is public knowledge, first appeared in print way back in 1883.
But lately, when designing security for really hard-to-secure things -- like everyday things used by millions of regular people -- I've found myself saying about our defensive measures, "Let's not talk about this." What on Earth am I doing?
Economic warfare is what I'm doing. But not in the way you're thinking. There's an old principle in security that I'm sure you know: if you make a system more expensive (in time or resources) to break into than the reward gained from doing so, you've built a damn fine defensive barrier. And it's obvious that secrets take effort to unearth, through reverse engineering or intelligence gathering, for instance.
The reason this doesn't usually apply to crypto is that cryptanalysis is historically a game played by nation states, where the reward gained from undermining a system is so great that it justifies an extraordinarily expensive intelligence effort.
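To make the arithmetic concrete, here's a toy sketch of that cost-versus-reward argument. Everything in it -- the function name, the parameters, the numbers -- is purely illustrative, not a model of any real attack; the point is just that obscurity shows up on the attacker's ledger as a discovery cost.

```python
# A toy sketch of the attacker-economics argument above. All names and
# numbers are illustrative, not anything measured or claimed in this post.

def attack_is_worth_it(expected_reward, cost_to_develop, cost_to_discover_defenses):
    """An attack only makes economic sense when the expected reward exceeds
    the attacker's total outlay. Obscurity and uncertainty show up as the
    discovery cost: reverse engineering, intelligence gathering, and
    re-testing against defenses the attacker can't simply download and probe."""
    total_cost = cost_to_develop + cost_to_discover_defenses
    return expected_reward > total_cost

# A fully documented, publicly testable defense: discovery is nearly free.
print(attack_is_worth_it(expected_reward=10_000,
                         cost_to_develop=4_000,
                         cost_to_discover_defenses=500))    # True -- the attack pays

# The same defense wrapped in uncertainty: discovery cost dominates.
print(attack_is_worth_it(expected_reward=10_000,
                         cost_to_develop=4_000,
                         cost_to_discover_defenses=9_000))  # False -- the attack doesn't pay
```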

The problem today is that practically everyone has special access to something -- bank accounts, email accounts, corporate assets -- and we need to secure those things, but they're not really crown jewels. And we can't treat everyone like castle guards. If a castle guard doesn't follow the right security procedure, you throw him in the dungeon to teach him a lesson. Those rules don't apply to everyday security. Put in the jargon of the net: consumer security education doesn't scale.
Ultimately, consumer security solutions that require training are not solutions. They're actually security problems. (Does anyone really think that we can train a majority of people to look for the "green bar" on SSL sites? Or that anyone on Earth is going to notice when their online banking login skips that step where they're supposed to look for their special photo? Seriously?)
Solving those training problems takes the defenders' money and time, so from an economic warfare standpoint, when you go down that road, the advantage actually shifts, by default...
to the attacker!
As we make our research findings known -- through this blog, papers, and conferences -- and as our R&D comes to market through product roll-outs and licensing to partners, we will be demonstrating that one of the ways we are raising the defensive bar is through uncertainty, the helpmeet of obscurity.
Today's most widely deployed defensive tools -- antivirus, anti-malware, DLP, and application-aware firewalls -- present no uncertainty to attackers. It's easy to test your malware against every antivirus package before you release. It's easy to tell when you've penetrated a firewall.
We think the next generation of defensive security will work by breaking the attack development cycle and breaking the connection between exploit and payment.