
This article was published on September 9, 2012

Why are companies so bad at responding to data breaches?

Story by Lauren Hockenson


Lauren is a reporter for The Next Web, based in San Francisco. She covers the key players that make the tech ecosystem what it is right now. She also has a folder full of dog GIFs and uses them liberally on Twitter at @lhockenson.

Looking back on the news of the last year, it’s hard not to feel uneasy about the enormous amounts of data lifted, altered and stolen by hackers. Every few weeks, it feels like another major company is compromised, whether it’s a cache of millions of Apple UDIDs, Dropbox email addresses, or LinkedIn passwords — a clear sign that no one is safe when hackers are out to do some damage. And the companies that fall victim to these breaches always appear the same way: hobbled, slowed down, and completely vulnerable for days and possibly weeks.

But perhaps the most disturbing thing about it is that hacking is not new. In fact, it’s not close to being new, even in the mainstream consciousness. Does anyone remember the movie Hackers, starring a young, punky Angelina Jolie? That came out 17 years ago. Companies know hackers exist, communities know hackers exist, and even Hollywood knows hackers exist. So why are major companies that deal in technology every day, companies that understand the risks and consequences of mishandling cybersecurity, so often so hobbled by breaches that they can’t muster the speed to resolve the issue in a timely and constructive manner? Apparently, it all boils down to a culture of seeing IT as an add-on rather than an integral part of company dynamics.

“Many companies don’t even have a Chief Security Officer — these are multi-billion dollar companies,” explains Phil Lieberman, founder and CEO of cybersecurity contractor Lieberman Software. “Even at the C-level, there’s no one responsible for the security or protection of the company.”

The anatomy of a poor reaction to cyberthreats and incidents begins with the nonchalant, detached way many upper-level executives treat security. Lieberman says that the C-suite views the implementation of stringent anti-hacking measures a lot like the way many regular people treat car insurance: why spend the money on a comprehensive plan against a threat that may never materialize? From a cost-risk analysis perspective, a procedural defense against cyberthreats doesn’t seem worth it.

“The simple reason for most of these breaches: companies see IT and security as an item that they need to and can reduce costs for,” Lieberman explains. “So, the cost of the breach becomes a cost of doing business.”

As a result, many companies invest in only preventative measures — like a comprehensive firewall — and ultimately end up bringing a knife to a gun fight. According to Foreground Security CIO Dave Amsler, this is because many companies do not understand hacking beyond the cost of its risk. That makes it difficult to convince them that they are underprepared for an attack.

“This is what we’ve been screaming and yelling about for years, saying that these threats exist, that these attacks are happening, that people are getting compromised and that people need to invest now in a system and be prepared,” Amsler explains. “The only difference now is that more people are paying attention to it.”

Even worse, Amsler says that many companies overestimate their own preparedness. By and large, companies are bound by regulatory requirements to maintain some semblance of cybersecurity. For example, businesses that handle financial information online (credit and debit cards) normally comply with the PCI Security Standards Council, and all publicly traded companies must adhere to the Sarbanes-Oxley Act, which requires routine security audits. But the crux of the issue is that not all regulations are the same. In addition, companies tend to do just the bare minimum, or aren’t held to their regulations as stringently as they should be.

“Regulations are often about a point in time, saying, ‘These are the regulations that you need in place now. Nevermind if you change them next week,'” Amsler explains. “No one has really thought about what to do if you see something going on.”

Unsurprisingly, a preventative-only approach badly underestimates the actual skills of the hacking community. According to an annual report from cybersecurity group Trustwave SpiderLabs, hacking methodology is moving away from traditional “smash and grab” techniques — bombarding a system and draining data en masse — toward a stealthier approach.

“The true threats don’t advertise when they compromise your data and steal intellectual property,” Amsler says. “Those are the ones that are even more damaging.”

In order to stay in a breached network long enough to steal intellectual property or reliably siphon money, the report indicates that hackers use webshells, which let them spend extended periods of time inside the compromised system. As a result, modern hacking — particularly when it targets encrypted data or financial accounts — doesn’t occur in a flamboyant or even immediately detectable way. A data breach is rarely visible to the human eye. And oftentimes, no one is even looking.

“There’s a very small number of companies that actually have the capability to detect a breach themselves,” says Colin Sheppard, director of incident response and education at Trustwave SpiderLabs. “I didn’t coin the phrase, but we call it the ‘Rotisserie Chicken Syndrome,’ it’s set it and forget it.”

The SpiderLabs report indicates that in 2011, self-detection of a security compromise dipped to a dismal 16%. And that’s just half of the problem: in the 84% of breaches ultimately detected by an external body (a regulatory agency, law enforcement or otherwise), attackers had an average of 173.5 days inside the victim’s systems before detection occurred, compared to 43 days for companies with the faculties to detect their own compromise. Any way you slice it, companies’ responses to data breaches are inhibited by the fact that many of them are still stumbling around in the dark with no idea what they’re looking for.

“The reality is, when we come in and investigate, clearly within system logs and in different types of activity logs, we’re able to piece the attack together,” Sheppard says. “This indicates that controls were in place but someone was asleep at the wheel.”
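Sheppard’s point — that the evidence of an attack usually sits in system and activity logs long before anyone notices — can be illustrated with a minimal sketch. The log lines and indicator patterns below are hypothetical examples invented for illustration; real detection pipelines use far more extensive, environment-specific rules.

```python
import re

# Hypothetical indicators of compromise. Real rule sets are much larger
# and tuned to the specific environment being monitored.
SUSPICIOUS_PATTERNS = [
    re.compile(r"POST /uploads/.*\.(?:php|jsp|aspx)"),  # possible webshell upload
    re.compile(r"authentication failure.*root"),        # brute-force attempt on root
    re.compile(r"UNION\s+SELECT", re.IGNORECASE),       # SQL-injection probe
]

def flag_suspicious(log_lines):
    """Return the subset of log lines that match any known indicator."""
    return [
        line for line in log_lines
        if any(pattern.search(line) for pattern in SUSPICIOUS_PATTERNS)
    ]

# Hypothetical sample log entries.
sample_log = [
    "GET /index.html 200",
    "POST /uploads/shell.php 200",
    "pam_unix: authentication failure; user=root",
]

for line in flag_suspicious(sample_log):
    print(line)
```

The broader lesson from the quote is not that such a filter is hard to write, but that someone has to actually run it and read the output — “controls were in place but someone was asleep at the wheel.”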

Another major factor in the less-than-ideal nature of breach reporting and resolution is simply a pervading feeling of embarrassment. Companies that choose not to establish a planned cybersecurity methodology risk ignoring problems out of shame or even fear of termination. And because there are no set standards for escalating an incident to an internal audit or executive review, many corporate underlings and middle managers are left making the decisions.

“Say the person in the finance department realizes that numbers don’t add up and the company is off by 1.5 million dollars,” explains Jeff VanSickel, senior consultant at SystemExperts. “First reaction is, ‘Oh my god, this could impact my job.’ And he doesn’t want to go to his manager with this problem, because what would his boss do? His boss is going to want to throw multiple people at it and try to resolve it before escalating it, too.”

That embarrassment can ultimately permeate the company’s relationship with its own consumers. Under federal policies, companies are legally obligated to notify the public within 30 days if identifiable information was compromised. VanSickel adds that companies are not only inclined to use up every single one of those 30 days to put themselves in the best light possible, but are also reluctant, at the C-level, to name the incident a data breach at all.

“You think that things are slow from the point where things are identified to the point of resolution? Think about all the time before the data breach is named,” VanSickel says. “That makes the impact much larger.”

That leads to the core problem of the speed and handling of data breaches: companies, even in 2012, are still wondering if they will be hacked and not bracing for when they will be hacked. And unfortunately, without that awareness, we consumers will be suffering at the hands of data breaches well into the future.

Image Credit: Surat Lozowick
