The influence of Unix on Windows 2008

Sam Ramji has an excellent technet.com blog post on the influence of 'Open Source' on Windows 2008.

Technet.com also has an interesting interview with Andrew Mason on Windows Server Core that explains some of the fundamentals of Windows 2008.

System admin scripting is finally being elevated to a first-class tool for administering servers. This is a fundamental change in direction for Windows system admins, and for me it changes the equation when determining the best operating system for deploying an application.

I'm really looking forward to being able to write tools, scripts and utilities to manage Windows servers, instead of the incredibly error-prone human mouse-click and check-box methods that we currently use. Mouse-click management is the worst possible way to ensure that our servers are configured identically and securely.
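As a rough illustration of what script-driven management buys you, here is a minimal Python sketch that audits service state across a small fleet the same way every time, something a mouse and a check-box never guarantee. The server and service names are hypothetical placeholders, and it assumes a Windows admin workstation where sc.exe has rights to query the remote machines.

```python
#!/usr/bin/env python3
"""Minimal sketch of scripted configuration auditing, as an alternative
to clicking through GUIs on each server. Server and service names are
hypothetical placeholders, not from the original post."""

import subprocess

SERVERS = ["web01", "web02", "web03"]            # hypothetical fleet
DESIRED = {"W3SVC": "RUNNING", "Dnscache": "RUNNING"}

def service_state(server: str, service: str) -> str:
    """Query a remote service via sc.exe and return its reported state."""
    out = subprocess.run(
        ["sc", rf"\\{server}", "query", service],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if "STATE" in line:
            return line.split()[-1]   # e.g. "RUNNING" or "STOPPED"
    return "UNKNOWN"

for server in SERVERS:
    for service, wanted in DESIRED.items():
        actual = service_state(server, service)
        flag = "ok" if actual == wanted else "DRIFT"
        print(f"{server:8} {service:10} {actual:10} {flag}")
```

The point is less the specific check than the repeatability: the same script runs against three servers or three hundred, and any drift shows up in the output instead of hiding behind a checkbox somebody forgot to click.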

I'm also looking forward to seeing how close Windows Server Core comes to making it possible to build appliance-like, stripped-down systems on the Windows platform. I've always favored Unix-like OSes for building DNS, DHCP and similar lightweight, single-purpose servers. Right now, our standard Solaris build for a server that is fully functional as an Apache web server or DNS server is about a 500MB bootable image. That 500MB gets us a fully functional, fully manageable server, suitable for hosting many or most of our applications. If we need a server that has to run Java, the image size almost doubles, but that is still pretty lightweight as far as I am concerned. When we churn through the weekly Solaris security patch reports, we have the wonderful ability to draw a line through the vast majority of them with the simple notation: 'Not vulnerable, package not installed.'
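That triage step is easy to automate. Below is a minimal sketch of the 'package not installed' check, assuming a Solaris-style host where pkginfo lists installed packages; the advisory IDs and affected-package lists are hypothetical stand-ins for the weekly patch report.

```python
#!/usr/bin/env python3
"""Minimal sketch of 'package not installed' patch triage on a
Solaris-style host. Advisory IDs and package lists below are
hypothetical stand-ins for the weekly patch report."""

import subprocess

def installed_packages() -> set:
    """Collect installed package instance names from pkginfo output."""
    out = subprocess.run(["pkginfo"], capture_output=True, text=True).stdout
    pkgs = set()
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            pkgs.add(fields[1])   # second field is the package name
    return pkgs

# Hypothetical weekly report: advisory id -> packages it affects.
ADVISORIES = {
    "119213-01": ["SUNWapchr", "SUNWapchu"],   # Apache
    "118833-36": ["SUNWckr"],                  # kernel
    "120460-01": ["SUNWgnome-base"],           # not in a stripped build
}

installed = installed_packages()
for advisory, pkgs in ADVISORIES.items():
    if any(p in installed for p in pkgs):
        print(f"{advisory}: REVIEW - affected package installed")
    else:
        print(f"{advisory}: Not vulnerable, package not installed")
```

On a stripped-down build, most advisories land in the second branch, which is exactly why the minimal image makes the weekly report so quick to clear.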

Today, with a Windows 2003 web or application server, we start out with a 20GB boot disk with about 4+ GB in use, not including the swap file, just to get a bootable server. That bootable server is fully loaded, with far more functionality, features, and vulnerable software than we'll ever use. We spend a day per month analyzing the latest Patch Tuesday vulnerability list and figuring out whether we have mitigations in place for any of the vulnerabilities. It would sure be nice to check 3/4 of them off the list with a simple 'Not vulnerable, package not installed.'

It looks like Microsoft is starting to think differently about servers. I'm happy about that.
