
If it can browse the Internet, it cannot be secured

Tired of IE’s vulnerabilities?

You could switch to Firefox, but if you were honest, you'd have to admit that you still couldn't declare yourself secure. Or you could try Opera, but then you'd still have to manage critical patches, though perhaps less frequently. There is nothing about Chrome or Safari to indicate that using them will make you secure. They may have fewer vulnerabilities, or it may be that fewer of their vulnerabilities have been discovered and published. You may be more vulnerable or less vulnerable after switching browsers, but you will still be vulnerable. Throw in cross-platform vulnerabilities and the combined vulnerabilities of the various third-party browser add-ons, and the menu looks pretty bleak.

Frankly, as the threats from the Internet have evolved over the last decade or so, I've not seen a huge difference between the security profiles of the various browsers. Some have fewer vulnerabilities, some have more; some make it easier to select a somewhat more secure browsing mode, others are harder to configure reasonably securely. None, as far as I can tell, is bug-free, hardened, or easily configurable in a manner secure enough that ordinary users can fearlessly browse the Internet. There are differences between the browsers, and I have a strong preference for one of them, but fundamentally the choice is one of relative security, not absolute security. The most popular browser likely has the most problems, but it is also the biggest target. If a less-used browser that currently appears more secure ever becomes the most widely distributed browser, it's pretty safe to assume that it will be targeted, it will get hit, and the results will be more or less the same.

Even if you could build a perfectly secure browser, you still have the infamous simian keyboard-chair interface that will routinely click on the banner ad that installs malicious fake ‘security’ software or stumble upon widely distributed malicious content. I don’t think it is possible to secure that particular interface using current technology.

My conclusion is simple:
If it can browse the Internet, it cannot be secured.
Start with that premise, and the security model you begin to derive is significantly different from the one we have today.

Comments

  1. I agree with the idea that there's no secure browser, but it's sort of like the definition of a lie: a very poor substitute for the truth, but the only one discovered to date.

    If you've got to research things on the internet, you've got to use the best available, even if that one still sucks.

    Eventually it might be possible to eliminate all incoming threats through the web, but I doubt it, since the web was designed to present information, and even that can be a threat in the wrong hands.

  2. Agreed.

    What we presume when looking at our infrastructure is that the desktops we use to get work done on the Internet are trusted by the data center servers no more than a random airport kiosk, no matter what browser they run. No user desktops are trusted, and all trusted sysadmin work originates from a server that can't surf the Internet. (If it could surf the Internet, we couldn't trust it.)

    It's not an air gap, though. The connection between the internet-desktop and the data center is an SSH tunnel or something similar (but not a file share or anything like that); there's a rough sketch of the idea after the comments.

    Someday I'll write up a blog post on our model.

  3. That makes me ask: how do your desktops trade information with the servers?

    I could see a DMZ between your users and your servers for some things, but for others (fileservers? databases?) I don't see how you could get the information between the two.

    I'm very interested in learning more about the infrastructure, for sure!

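For the curious, here is a minimal sketch of the tunnel described in the second comment. The hostnames, ports, and the choice of a remote desktop session are all hypothetical, not the commenter's actual setup; the point is only that the untrusted internet-desktop gets one narrow, authenticated TCP path into the data center, never a file share or a routed connection:

    # On the untrusted internet-desktop: forward one local port through
    # the bastion to the admin server. -N opens the tunnel without
    # running a remote command; nothing else is routed or mounted.
    ssh -N -L 13389:admin-server.internal:3389 youruser@bastion.example.net

    # Then attach a remote desktop client to the forwarded port, so all
    # trusted sysadmin work actually executes on the admin server:
    mstsc /v:localhost:13389

If the desktop is ever compromised, the exposure is one forwarded port on one bastion, a far smaller surface than a trusted desktop with a browser on it.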

In both cases you’ve entrusted your bits to someone else, you’ve shared physical and logical resources with others, you’ve disassociated physical devices (circuits or servers) from logical devices (virtual circuits, virtual severs), and in exchange for what is hopefully better, faster, cheaper service, you give up visibility, manageability and control to a provider. There are differences though. In the case of networking, your cloud provider is only entrusted with your bits for the time it takes for those bits to cross the providers network, and the loss of a few bits is not catastrophic. For providers of higher layer services, the bits are entrusted to the provider for the life of the bits, and the loss of a few bits is a major problem. The…