
Another Reason for Detailed Access Logs

Another poorly written application, another data leak. Not new, barely news.

This statement is interesting though:

“[company spokesperson] said it's unclear how many customers' information was viewed, but that letters were sent to 230,000 Californians out of an ‘abundance of caution.’”

Had there been sufficient logging built into the application, Anthem Blue Cross would have known the extent of the breach and (perhaps) could have avoided sending out all 230,000 breach notifications. That’s a view on logging that I’ve expressed to my co-workers many times. Logs can verify what didn’t happen as well as what did happen, and sometimes that’s exactly what you need.
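
To make that concrete, here's a minimal sketch of the kind of per-view audit event I have in mind: one structured entry per record viewed, with enough context to scope a breach later. It's Python with made-up field names (viewer, member_id and so on); nothing here is taken from Anthem's application.

# Minimal sketch: one audit event per record view, so an investigation can
# answer "which members did this session actually look at?" after the fact.
# Field names are illustrative only.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit.record_view")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())  # a real deployment ships this off-host

def log_record_view(viewer, source_ip, member_id):
    """Record that a specific user viewed a specific member record."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": "record_view",
        "viewer": viewer,
        "source_ip": source_ip,
        "member_id": member_id,
    }
    audit_log.info(json.dumps(event))

# Called from whatever request handler renders a member's details.
log_record_view("jdoe", "203.0.113.7", "member-0042")

With a trail like that, "how many customers' information was viewed" is a query, not a guess, and the notification list shrinks to the records that actually appear in it.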

There are a couple of other interesting things in the story:

“the confidential information was briefly accessed, primarily by attorneys seeking information for a class action lawsuit against the insurer.”

That’ll probably cost Anthem a bundle. Letting lawsuit-happy attorneys discover your incompetence isn’t going to be the cheapest way to detect bad applications.

And:

“a third party vendor validated that all security measures were in place, when in fact they were not.”

Perhaps the third party vendor isn’t competent either?

Via Palisade Systems

Comments

  1. Logging isn't sufficient: you have to set up logging in such a way that you can trust it after a breach. Once a host is broken into, you can't trust anything on it, or anything coming from it.

    So if a machine does log remotely, and was breached at 08:59:59, then any log entries from 09:00 onwards can't be trusted.

    Setting up a properly auditable infrastructure is no easy thing if you've got a lot of moving parts.

  2. Good point.

    The default for me is that the only logs that I consider usable for forensics are those that are logged off the server, in real time, to a remote server on a separately firewalled network segment.

    The text that apps store on the server in the 'log' directory isn't a 'log' in the context of forensics. (There's a sketch of what I mean by real-time remote logging after the comments.)

    --Mike

  3. It's a struggle to get anyone to define requirements for logging that would be useful in a forensic analysis. For years, all I got in response to requests for such requirements -- from very well-meaning but very busy people -- was an acknowledgment that it's a good question and a general statement about logging any change in authentication/authorization status and any "substantive" change to data. That has begun to change.

    Clearly there are some applications, though, or at least parts of some applications, where viewing certain data should also be logged. I seriously doubt that this requirement would be identified without a risk analysis / threat model.

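To make the remote-logging point from the comments concrete, here's a minimal sketch of an application handing its audit events to a remote collector as they happen, using Python's standard SysLogHandler. The hostname is a placeholder for a log host on a separately firewalled segment; this is an illustration, not a description of anyone's actual environment.

# Minimal sketch: send audit events off the application server in real time,
# so the copy used for forensics never lives on the host that might be compromised.
# "loghost.example.net" is a placeholder, not a real collector.
import logging
import logging.handlers
import socket

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)

# TCP keeps delivery connection-oriented; the default UDP transport can drop
# events silently, which defeats the purpose of an audit trail.
remote = logging.handlers.SysLogHandler(
    address=("loghost.example.net", 514),
    socktype=socket.SOCK_STREAM,
)
audit_log.addHandler(remote)

audit_log.info("record_view viewer=jdoe member_id=member-0042")

An rsyslog or syslog-ng forwarder on the box gets you the same property with no application changes; what matters is that each entry leaves the server before an intruder has a chance to edit it, and that the collector sits on its own firewalled segment.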

