
The crud moved up the stack

A long time ago (in internet time), in a galaxy really nearby, a few large software companies attached their buggy, unsecured operating systems to the Internet.

Havoc ensued.

The overall quality of software, as measured by MTTB (Mean Time to Badness), or 'if I connect to the internet, how long until bad things happen', was pathetic. Cow tipping was an amusing pastime; Land attacks, Ping of Death attacks, and Smurf attacks were a daily event. Toss a few malformed packets into a campus, watch the campus roll over and die. Build a worm that could hack a zillion web servers in a week, or sling out a UDP packet that could turn hundreds of thousands of database servers into corporate-network-wrecking zombies, affecting even the companies that wrote the software. Sysadmins made the problem worse by connecting any old crap to the internet without the slightest thought for securing it.

It was chaos.

But software vendors recognized the problem, or at least most of them did, and implemented the processes necessary to ensure that their software was written, tested, and deployed in a manner that largely solved the worst of the problems; sysadmins started hardening first and deploying second, and things generally settled down.

In other words, the worm and hack outbreaks of the early 00’s accelerated the development of reasonably secure operating systems with pretty decent default installations, and in particular changed the attitude toward secure software development at a few large software vendors. That change in attitude has, in my opinion, made a dramatic difference in the security and availability of operating systems (with Windows 2003/IIS6 as the best example).

Today?

The crud moved up the stack. We now have tens or hundreds of thousands of web sites vulnerable to XSS and mass SQL injection attacks, and poorly written applications that require DBO or DBA privs, full rights to file systems, and random firewall port openings. We have applications that are written and deployed with trivially exploitable vulnerabilities, applications with no concept of roles, rights, or privs, and application vendors who yank support from you if you attempt to add the most basic of security controls onto their crap.
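To make 'mass SQL injection' concrete: the whole class of bugs comes from pasting untrusted input into SQL text. Here's a minimal sketch using Python's built-in sqlite3 module; the table and the attack string are made up for illustration, but the broken pattern and the parameterized fix are the general case.

```python
import sqlite3

# Hypothetical single-table database, just to demonstrate the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenation lets the attacker's quotes rewrite the SQL,
# so the WHERE clause matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- the injected OR clause returned the whole table

# Safer: a parameterized query passes the input as data, not as SQL text,
# which is exactly the separation of code from data these apps lack.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- no row is literally named "nobody' OR '1'='1"
```

None of this is exotic; mainstream database drivers have supported placeholders for years.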

It's the same story, just moved up the stack.

Unfortunately, today’s problem is much harder to resolve. In the late 90’s and early 00’s, a relatively small number of developers at a few large vendors could largely solve the problem by changing the way they built software. Figure tens of thousands of developers or so, working for no more than a handful of software companies, reporting to what we presume to be authoritative managers, and able to be ‘forced’ to follow a methodology or standard: they could more or less solve the problem by doing the right thing when they wrote code.

Today, with web apps as the hack target, the community of developers who have to understand the problem and change the way they build systems is pretty much anyone who has ever slung up a simple web/database application. That population of developers numbers in the millions, not thousands; they are all over the map in terms of skill set; they are largely outside the bounds of large, structured software companies or anything resembling top-down authority; and they are extremely unlikely to be using any software development methodology whatsoever, much less a methodology that encompasses secure software development. (They'll be agile though.)

Educating them is a pretty tough problem. Changing the way they write code is even tougher.

My experience hosting applications developed by small companies and contractors is universally negative. They simply don’t have a clue how to handle the current round of SQL injection, XSS, etc.; they have no clue what file system or database rights and permissions their applications require; they have no concept of separation of code from data; they require a well-known SA password on the database and 'Everyone' 'Full Control' on the SQL database directory structure; and when asked about application security, they are as likely as not to respond with a brochure outlining how many locks there are on the doors to their hosting facility. Least bit? Not even close. They need all the bits. They aren't sure which ones they're actually using, and it'd take at least until lunch to figure it out.
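The XSS half of that story is the same data-as-code failure on the output side: any untrusted value written into HTML without encoding becomes markup the browser will execute. A minimal sketch, again in plain Python, with a hypothetical payload and page fragment:

```python
import html

# Hypothetical attacker-supplied comment; the payload runs in the
# browser of every visitor who views the page.
comment = '<script>steal(document.cookie)</script>'

# Vulnerable: the raw value is interpolated straight into the page,
# so the script tag ships to the browser as executable markup.
page_bad = "<p>Latest comment: " + comment + "</p>"

# Safer: encode on output so the browser renders text, not markup.
page_ok = "<p>Latest comment: " + html.escape(comment) + "</p>"

print(page_bad)  # <script>...</script> survives intact
print(page_ok)   # &lt;script&gt;...&lt;/script&gt; is displayed, not run
```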

It sounds to me like a much larger problem.

The chaos continues.

In both cases you’ve entrusted your bits to someone else, you’ve shared physical and logical resources with others, you’ve disassociated physical devices (circuits or servers) from logical devices (virtual circuits, virtual severs), and in exchange for what is hopefully better, faster, cheaper service, you give up visibility, manageability and control to a provider. There are differences though. In the case of networking, your cloud provider is only entrusted with your bits for the time it takes for those bits to cross the providers network, and the loss of a few bits is not catastrophic. For providers of higher layer services, the bits are entrusted to the provider for the life of the bits, and the loss of a few bits is a major problem. The…