
Verizon 2008 Data Breach Investigations Report


Verizon published a report summarizing about 500 data breaches that is worth a read for anyone who is or pretends to be interested in IT security. (Download directly from Verizon)

Some interesting findings

As Verizon notes, the percentages are likely skewed. They are reporting on what they investigated, not on what happened. It's still more than worth the read.

External threats are far more frequent (73%) than internal. The old axiom that the biggest threat is from the inside seems to be archaic. Perhaps, as the report indicates, 'when mainframes ruled the computing world, internal threats were the predominant concern'. That makes sense. When mainframes ruled the world, inside threats were predominant because the mainframes were generally not attached to the outside world. We now have that thing we call the Internet.

Partner threats have greatly increased over time, probably because data exchanges between partners have migrated over time from EDI-like file transfers to inherently difficult to secure VPN-like network connections. And '...In a scenario witnessed repeatedly, a remote vendor’s credentials were compromised, allowing an external attacker to gain high levels of access to the victim’s systems...'. I can see this happening - or more accurately, I've seen this happen. The application vendor requires access to your systems for technical support, the vendor gets compromised, and so do all the customers. Because the vendor used the same credentials for all their customers. The '....partner’s lax security practices...undeniably allow such attacks to take place'. And '...many occasions, an account which was intended for use by vendors in order to remotely administer systems was compromised by an external entity...'

However, the shift toward external and partner threats is not the whole story. 'The median size (as measured in the number of compromised records) for an insider breach exceeded that of an outsider by more than 10 to one.' So as measured by the combination of size + frequency, inside threats are still as big a concern.

And of the internal breaches, half of them are from IT staff. That's a number I'm interested in. Tell me again why every IT staffer needs DBA privs or read-only access to the whole database? The combination of IT staff + ODBC + notebook keeps me awake at night.

On configuration management - For system managers, application managers and DBAs, it's worth knowing that errors of omission '...contribute to a huge number of data breaches. This often entailed standard security procedures or configurations that were believed to have been implemented but in actuality were not....'. So the standard practices, best practices, or whatever, were believed to be implemented, but were not. I'm pretty hung up on determining that a device or application is configured a certain way and knowing through some independent means that the config is really there. Audit scripts, config checkers, etc. Anything but humans. A Perl script will find a 'permit any' in a firewall config faster and more reliably than a human every time.
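The script point generalizes to any language; here's a minimal sketch of that kind of config audit in Python. The Cisco-style 'permit ... any' rule syntax is an assumption - adjust the pattern for whatever platform you actually run.

```python
import re
import sys

# Flag overly permissive rules in a Cisco-style ACL/firewall config.
# The rule syntax is a hypothetical example; vendors differ.
PERMISSIVE = re.compile(r'\bpermit\b.*\bany\b', re.IGNORECASE)

def audit_config(lines):
    """Return (line_number, text) for every rule containing 'permit ... any'."""
    findings = []
    for num, line in enumerate(lines, start=1):
        if PERMISSIVE.search(line):
            findings.append((num, line.strip()))
    return findings

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        for num, rule in audit_config(f):
            print(f'{num}: {rule}')
```

Run it nightly against config backups and diff the output; the point is an independent, mechanical check rather than trusting that the config is what everyone believes it is.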

The report indicates that the threat is moving up the application stack - confirming what current thinking seems to be.  '...attacks targeting applications, software, and services were by far the most common technique...' at about 40%. But OS/platform ranks closer than I would have guessed, at about a quarter of the attacks. There are far more people writing code for applications than platforms or operating systems. The target is much, much larger, and the developers and admins of the application space are far behind the OS vendors in their security/maturity progress.

On patching, '...breaches were caused by exploits of vulnerabilities patched within a month or less of the attack....'. Meaning that, as the report concludes, it is better to patch consistently than to patch quickly. That means that the odd-ball 'appliances' that your vendor won't let you patch, the server-in-the-cube, and the VM I left laying-around-just-in-case need to get patched too. That also indicates that vulnerability scanning, even with a primitive tool, would be valuable in finding the unmanaged, unpatched odds & ends servers.
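Even a crude sweep will turn up the server-in-the-cube. A minimal sketch, assuming a /24 network and a handful of common service ports (a real scanner like nmap does this far better; this just shows that a primitive inventory beats none):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Ports chosen as a plausible illustration: ssh, http, https, rdp.
PORTS = (22, 80, 443, 3389)

def probe(host, port, timeout=0.5):
    """Return True if host answers a TCP connect on port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(prefix='192.168.1'):
    """Yield (host, port) for every responding service on prefix.0/24."""
    with ThreadPoolExecutor(max_workers=64) as pool:
        jobs = {(f'{prefix}.{n}', p): pool.submit(probe, f'{prefix}.{n}', p)
                for n in range(1, 255) for p in PORTS}
        for (host, port), job in jobs.items():
            if job.result():
                yield host, port
```

Compare the sweep output against your asset list; anything answering that isn't on the list is exactly the unmanaged, unpatched box the report is warning about.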

Attack complexity is skewed toward the simple attack that '...automated tools and script kiddies...' could conduct, at about 50%. The conclusion is to '...implement security measures such that it costs the criminal more to compromise your organization than other available targets...'. So get the simple security controls in place and well managed first. Do that and you are ahead of your peers, and that's what matters.

And where is your data? You really should know, because 'two-thirds of the breaches in the study involved data that the organization did not know was present on the system...'.  Probably on a notebook in a coffee shop somewhere. Data containment, or what Verizon calls the 'transaction zone' sounds critical. Move the tool to the data, not the data to the tool. It's easier to secure the tool than the data.
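Finding out where the card data actually lives is scriptable too. A rough sketch that hunts text for card-number-shaped strings and filters false hits with the Luhn checksum; real discovery tools crawl filesystems and handle many formats, this only shows the core idea:

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r'\b(?:\d[ -]?){13,16}\b')

def luhn_ok(digits):
    """Luhn checksum: True for plausibly valid card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return candidate card numbers in text that pass the Luhn check."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r'\D', '', match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Point it at file shares, database exports, and those notebooks, and you'll learn whether you really knew where your data was.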

On detection - most incidents were reported by someone outside the organization (70%), and most events were detected long after they occurred (weeks or months). The state of event monitoring appears to be pathetic. Or worse. Verizon's advice: 'Rather than seeking information overload, organizations should strategically identify what systems should be monitored and what events are alertable.' My personal limit for looking at events is 10,000 per day. After that I get a headache. ;)
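In that spirit, a minimal triage sketch. The alertable patterns here are hypothetical placeholders; the point is deciding on a short list up front instead of drowning in volume:

```python
import re

# Hypothetical examples of alertable events; tune this list to the
# systems you actually monitor.
ALERTABLE = [
    re.compile(r'authentication failure', re.IGNORECASE),
    re.compile(r'new account created', re.IGNORECASE),
    re.compile(r'config(uration)? changed', re.IGNORECASE),
]

def triage(log_lines):
    """Return only the log lines worth a human's attention."""
    return [line for line in log_lines
            if any(p.search(line) for p in ALERTABLE)]
```

Ten thousand raw lines in, a handful of alertable ones out - that's the difference between monitoring and information overload.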

And on what you don't know (Quoted from page 24).

Nine out of 10 data breaches involved one of the following:

• A system unknown to the organization (or business group affected)
• A system storing data that the organization did not know existed on that system
• A system that had unknown network connections or accessibility
• A system that had unknown accounts or privileges

We refer to these recurring situations as “unknown unknowns” and they appear to be the Achilles heel in the data protection efforts of every organization.

I suppose it's tough to secure it if you don't know it's there.

Verizon's conclusions

For executive types, I suppose:

  • Ensure essential controls are met
  • Find, track, and assess data
  • Monitor event logs

Manager types can read the more detailed conclusions on pages 26-27.

My conclusions

  • Patch consistently, focusing on completeness and coverage rather than fast patch rollouts.
  • Implement enough security to discourage the attacker and direct her to simpler targets. You don't need to outrun the grizzly bear, you only need to outrun the other hikers in your group.
  • When designing security, think breadth first, perfection later. Less than perfect coverage of lots of security areas will have far more short-term value than perfection in a single area.
  • Devise automated audits for critical security controls.  
  • Move the tool to the data, not the data to the tool.
  • Be wary of your partners. Don't trust, but if you have to trust, you must verify.
  • Monitor critical events only, and don't get distracted by event log volume.
  • Layer your security, and keep a security layer close to the data.
  • Pay attention to the application, not just the operating system.

And most importantly - the closer the security layer is to the data, the closer you need to monitor the logs.


  1. Michael: The Verizon study spotlights an important topic for debate. Legally speaking, what is "reasonable security?" FTC punished TJX for not having it, but I argue FTC was wrong. Verizon says 9 of 10 data breaches could have been avoided if "reasonable security" were present. That implies 9 in 10 breach victims were in violation of law. The study's outlook is that the solution to identity theft is locking down corporate data. But a security consultant/solution provider like this Verizon unit naturally sets a high bar for what is reasonable. And when Verizon evaluates if reasonable security could have prevented a break-in, it does so with benefit of hindsight. Yet the study goes on to say that in modern systems knowing where all your data reside is "an extremely complex challenge." In other words, the sheer problem of locating data (so you can apply security) is very expensive, and mistakes by data-holders who act in good faith are easy. The reasonable measures expected by FTC and Verizon are extravagantly hard to implement in practice. Hence, the portion of incidents preventable by FTC/Verizon's reasonable procedures is much lower than 90%. We need to focus more attention on other solutions to identity theft. What do you think? --Ben

  2. Mr Wright:

    I disagree with your arguments. (On all the blogs on which you've pasted that paragraph)

    The solution to theft of corporate data is locking down data.

    Verizon's bar isn't very high. Any system admin, network admin or security admin with 5 years of experience should easily be able to meet it.

    Knowing where your data is located is a challenge. Life is full of challenges. Either rise to the challenges or find another job.

    Locating data is expensive, and so is maintaining the company fleet of business jets. Sell the jets, use the money to locate the data.

    The reasonable measures expected by Verizon and the FTC are not difficult. PCI isn't difficult. I created a security standard prior to PCI, around 2002 or so, that is roughly the same as the core of PCI. The concepts were not hard to grasp.

    Security isn't difficult. Time consuming, maybe. Difficult, no. I've done it. It's a pain, but it isn't quantum physics, brain surgery or rocket science. If you've been doing system admin or security for more than 5 years and you are not capable of implementing PCI or equivalent standards, then you need to find a new career.

    My understanding of the TJX incident is that they used easily crackable wireless for their POS terminals, that they had essentially no internal network segmentation, that they didn't monitor their network, and that as far as anyone outside TJX can tell, they took no reasonable security measures at all. In other words, they were either negligent or incompetent.

    The problems they had were obvious well before the 2005-2007 time frame of the hacks. The standards that I wrote in '02 address most of the issues, and they were obvious then.

    They can't even say that they didn't know, because there were similar, very high profile point of sale wireless sniffing incidents prior to the TJX incident. All they'd have had to do was read the news.

    The real TJX question is why they continue to exist at all. That level of gross incompetence should be rewarded by bankruptcy and criminal prosecution. Think Enron, not a trivial FTC fine.

    "The company collected too much personal information, kept it too long, and relied on weak encryption technology to protect it, putting the privacy of millions of its customers at risk," said Privacy Commissioner of Canada Jennifer Stoddart. (Information Week, 09/26/2007)

    Companies who do that should cease to exist. This is a free market, but unfortunately the brutal realities of capitalism sometimes fail to destroy the weak.

    And yes, my family did shop at the St Paul TJX store where the wireless sniffing allegedly took place, and yes, my credit cards were likely part of the breach.

    We need to focus on implementing the simple, basic security standards outlined in PCI and similar standards. Whining about FTC's slap on TJX's wrist is a waste of time and resources.

  3. Mr. Janke: I appreciate the public conversation. I did post my comments on several blogs because I wanted to provoke some intelligent debate, like this. I don't know everything, and I learn by talking to good people like you.

    According to the Wall Street Journal article May 4, 2007, the TJX hackers had to work long and hard to get the data. The breaking of wireless encryption was just a first step to get in the door. In other words, they had to execute a very professional attack.

    As I understand the story, the criminals did not simply steal your POS credit card data at your local store. They used your local store as a stepping stone toward the company's central IT infrastructure, where the data was compromised en masse.

    As I read the public statements about TJX, the company had a lot of security . . . many layers of security. It just did not have perfect security.

    My sense is that hard-working, professional criminals will always find chinks in a merchant's armor. (Do you not believe that to be the case?)

    So I am skeptical that the law should be bankrupting merchants. Why, as an alternative, can't we change the credit card system so that credit card data are not so valuable to criminals? I present more of the alternative argument at
    Thanks. --Ben

  4. OK Ben - I'll calm down. :)

    I might be wrong about TJX's security, but from what I can infer from the WSJ and other articles, TJX didn't do much to secure their edge networks. Strike one.

    The hackers then "digitally eavesdropped on employees logging into TJX's central database" and "stole one or more user names and passwords", implying that there was no password encryption on the unsecured networks out at the stores. Strike two.

    Then "they set up their own accounts in the TJX system", implying that the accounts that were intercepted out on the edge of their enterprise had sufficient privileges to create other accounts, and implying TJX doesn't have a system for reconciling newly created accounts with authorized persons. Strike three.
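    (That reconciliation is trivially scriptable. A minimal sketch, with hypothetical data sources; in practice you'd pull from /etc/passwd, Active Directory, an HR feed, and the like:)

```python
# Diff the accounts that exist on a system against the roster of people
# authorized to have them. Inputs are hypothetical illustrations.
def unauthorized_accounts(system_accounts, authorized):
    """Return accounts present on the system but absent from the roster."""
    return sorted(set(system_accounts) - set(authorized))
```

    Run that on a schedule and a freshly created attacker account shows up in the next report instead of two years later.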

    And they "were able to go into the TJX system remotely from any computer on the Internet", implying that there was no access control between the databases holding the card info and the internet. Strike four.

    And somewhere along the line they were able to intercept card data in real time, because TJX transmitted that data to banks 'without encryption.' Strike five.

    It sounds to me like there wasn't much between the unsecured edge, here in St Paul, and the pile of gold at the core.

    And hackers do work long & hard. They have been doing that for a couple decades. That's their job. And because that's what they do, the TJX's of the world need to build security that detects or defends long enough to discourage them, so they move along to easier targets. That's TJX's job.

    Having said that, I'd agree that no system is perfect, and determined hackers can probably break down any reasonably implementable security system. Where I'm coming from on this particular incident is that I don't think that TJX did a reasonable job of security in 2005. It looks to me like they had reasonable circa-2001 security in 2005, and for a company that has $15B in revenue, that isn't good enough. (Neither is the common 2008 practice of deploying new applications that are vulnerable to SQL injection, a common hacking technique that was widely analyzed and documented prior to 2005. For info - Google OWASP.)

    If I live in rural Wisconsin & I have nothing of any value, I probably can get by with $10 locks on my door and a faded 'beware of dog' sign. That would be reasonable security. If I move to the big city & fill my house with works of art, I probably need a monitored, professional security system, with motion detection, glass breakage detection, etc. That would be reasonable security.

    If you've got a hundred million credit card numbers stored, or whatever the number was, you are in the big city, and your security needs to reflect that reality.

    Also - Should credit card numbers somehow be made worthless so that they are not a hack target? I like that idea. That would be one way of removing the incentive for targeting merchants.

    I do think this is an interesting discussion.

  5. Michael: This topic is thoroughly engrossing to me. I ask what must the law require. Legally speaking, it is really interesting to say that as of 2005, security that was adequate a mere four years earlier is now illegal. (Note that in those 4 years there was no new legislation or regulation, and PCI was vague and just emerging.) Remember, the merchants did not invent the credit card system; the banks did. The merchants did not cause little credit card numbers to be valuable targets of hackers; the banks did. And the banks promoted wide implementation of the credit card system long before they invented the PCI. --Ben More argument:

  6. '...is really interesting to say that as of 2005, security that was adequate a mere four years earlier is now illegal....'

    If 'legal' is somehow defined as 'reasonable', then yes, because the definition of 'reasonable' in internet security constantly changes, the legal requirements would have changed in four years. (My opinion).

    In Internet-land, 'reasonable' can change overnight. If for example, 3DES or AES were cracked tonight, 'reasonable encryption' would have changed tonight.

    'Remember, the merchants did not invent the credit card system; the banks did.' But they signed the merchant agreements. They did that voluntarily. They could choose to not accept cards.

    'The merchants did not cause little credit card numbers to be valuable targets of hackers; the banks did.' But it is the merchants' poor security that feeds the cards to the market. That problem pre-dates networks. Two decades ago, merchants were throwing carbon copies of card imprints in the trash for the 'hackers' to recover from dumpsters. Today they are leaving hundreds of millions of card numbers lying about unencrypted. To me, the problem is the same; only the dates and technology have changed. Merchants leave credit cards lying around unsecured, and bad things happen.

    So tell me - if card issuers decided to implement a perfectly secure card system, and all it required was for every merchant in the world to throw their point of sale (POS) systems & card readers out the door and buy new ones, would the merchants happily do that, or would they resist the expense and continue using the old, insecure systems? I'll bet TJX, the company that was too cheap to upgrade WEP to WPA, would keep the old, insecure POS terminals. They kept WEP when WPA was available and cost effective; they'd keep the old terminals too.

    'And the banks promoted wide implementation of the credit card system long before they invented the PCI.' And the merchants bought into the promotions. They didn't have to. They could have stayed cash only.

  7. Nice banter, gents.

    For an explanation of what we mean by "reasonable security," visit the Verizon Business Security Blog post referenced below.

    What do we mean by "Reasonable Controls?"



