
When Software Vendors Make Security Assumptions

Bob recently ran into a situation where, in order to run a vendor-provided tool, he had to either modify his security practices or spend a lot of time working around the tool's poor design. The synopsis of his problem:
"Problem with that, though, is that it wants to log in as root. All the documentation says to have it log in as root. But on my hosts nobody logs in as root, unless there’s some big crisis happening."
This wasn't a crisis.
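For context, the "nobody logs in as root" baseline is usually enforced at the SSH daemon itself. A minimal sshd_config excerpt (the exact policy varies by shop, but this is the line the vendor's tool collides with):

```
# /etc/ssh/sshd_config (excerpt)
# Direct root logins are refused; administrators log in as
# themselves and escalate via sudo/su only when needed.
PermitRootLogin no
```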

This seems to be a common problem. We've run into a fair number of situations where a vendor assumed that remote root login was possible, that there were no firewalls anywhere, that all systems on the same platform shared the same credentials, and that unsafe practices were generally followed.

  • Really expensive enterprise backup software that assumed there were no firewalls anywhere. The vendor advised us that technical support couldn't help if the customer was firewalled. (This was a while ago, but the product still requires the world's ugliest firewall rules.)
  • An 'appliance' (really a Linux box) that couldn't be firewalled separately from its console (really a Windows server), because it used the brilliantly designed random port-hopping Java RMI protocol for its management interface.
  • A financial reporting tool that required that the group 'Everyone' have 'Full Control' over the MS SQL server data directories. No kidding - I have the vendor docs and the f*ugly audit finding.
  • A really expensive load testing product that assumes netstat, rsh, and other archaic, deprecated, unencrypted, and insecure tools and protocols are enabled and available across the network.
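If you need to show a vendor that those r-services genuinely are absent from your hosts, a quick audit sketch (assumes a Linux box with `ss` from iproute2; prints nothing when clean):

```shell
#!/bin/sh
# Report any legacy, unencrypted r-service ports that are listening:
# rexec/512, rlogin/513, rsh/514. Silent output means the host is clean.
check_rservices() {
  for port in 512 513 514; do
    if ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$port\$"; then
      echo "WARNING: legacy r-service port $port is listening"
    fi
  done
}
check_rservices
```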
Why are vendors so clueless?

Here are a couple of hypotheses.
  1. Bob and I are the only ones in the world with segmented networks, who have remote root login disabled and who have rational security practices. To the vendors, we are outliers who don't matter.
  2. The vendors’ developers, who insist that the only way that they can be productive is if they get a sandbox/dev environment where they are root and they don't have any security restrictions, actually get what they ask for. The code they write then works fine (in their unrestricted environment) so they ship it. Customers don't object, so the practice continues.
I suspect the latter.

(OK - maybe there are other possibilities, but they aren't as amusing as picking on developers...)
It doesn't have to be this way. A couple decades ago I installed a product that had specific, detailed instructions on the minimum file system permissions required for application functionality for each directory in the application tree, including things like write-only directories, directories with read, but not file scan privs, etc. (an early version of what's now called GroupWise)

Today? I still see vendors that assume they have 'Full Control', remote root, unrestricted networks, etc.

My solution?
  • Escalate the brain-deadness through the vendor's help desk, through tiers 1-2-3 to the duty manager, and don't let them close the ticket until you've annoyed them so badly that the product manager calls you and apologizes.
  • In meetings with the vendor's sales team, emphasize the problem. Make it clear that future purchasing decisions are affected by their poor design: 'You've got a great product, but unfortunately it's not deployable in our environment.' The sales channel likely has more influence over the product than the support channel.
  • Ask for a written statement of indemnification against future security incidents that are shown to have exploited the vendor's poor design. You obviously won't get it, but the vendor's product support will have to interface with their own legal department, which is painful enough that they'll likely not forget your problem at their next 'product roadmap' meeting.
  • Do it nicely, though. Things like "I've got an audit finding here that makes your product look really bad, and that's going to hurt us both..." are more effective than anything resembling hostility.
If enough customers make enough noise, will the vendors eventually get the message?

