Tipping Point Vulnerability Disclosures – IBM Incompetence?

Last August, Tipping Point decided to publicly disclose vulnerabilities six months after vendor notification. The six months is up.

Take a look at IBM’s vulnerability list and the actions taken to resolve the vulnerabilities. If you don’t feel like reading the whole list, the snip below pretty much sums it up:

[08/26/2008] ZDI reports vulnerability to IBM
[08/26/2008] IBM acknowledges receipt
[08/27/2008] IBM requests proof of concept
[08/27/2008] ZDI provides proof of concept .c files
[07/12/2010] IBM requests proof of concept again and inquires as to version affected
[07/13/2010] ZDI acknowledges request
[07/14/2010] ZDI re-sends proof of concept .c files
[07/14/2010] IBM inquires regarding version affected
[07/19/2010] IBM states they are unable to reproduce and asks how to compile the proof of concept
[07/19/2010] ZDI replies with instructions for compiling C and command line usage
[01/10/2011] IBM states they are unable to reproduce and requests proprietary crash dump logs

Tipping Point: Two Thumbs Up.

IBM: Two and a half years. Still no clue.

What IBM’s executive layer needs to know is that people like me read about the failure of their software development methodology in one division and assume that the incompetence spans their entire organization. That may not be fair – IBM is a big company and it’s highly likely that some/most software development groups within IBM are capable/competent. However – if one group is allowed to flounder/fail, then it’s clear to me that software quality is not receiving sufficient attention at a high enough level within IBM to ensure that all software development groups are capable/competent. If some/most software developed within IBM is of high quality, it’s because some/most software development groups believe in doing the right thing. It’s not because IBM believes in doing the right thing.

In other news, IBM’s local sales team is aggressively pushing for me to switch our entire Unix/Database infrastructure from Solaris/Oracle on SPARC to AIX/DB2 on Power.

Guess what my next e-mail to their sales team is going to reference?
