In my experience, the ‘hardware is cheap, programmers are expensive’ mantra really only applies to small, lightly used systems, where the fully loaded hardware cost is actually trivial compared to the cost to put a programmer in a cube. Once the application is scaled to the point where adding a server or two no longer solves the scalability problem, or in cases where there are database, middleware or virtualization licenses involved, the cost to add hardware is not trivial. The relative importance of software optimization versus more hardware then shifts toward smart, optimized software, and the cheap hardware argument at Coding Horror quickly falls apart.
The comments at Coding Horror quickly descended into the all too common ‘if I had a better monitor, I could write better code’ nonsense, which of course misses the point of the post. Some of the commenters got it right though:
“The initial cost of hardware (servers) is not the only cost, and - yes hardware is cheap, but is a drop in the proverbial bucket compared to the total cost of ownership” – JakeBrake
“Throwing more hardware at problems does not make them go away. It may delay them, but as an application scales either…you may get a combinatoric issue pop up outstripping you ability to add hardware…[or]…you just shift the problem to systems administration who have to pay to maintain the hardware. Someone pays either way.” – PJL
“Throwing hardware at a software problem has its place in smaller, locally hosted data facilities. When you're running in a hardened facility the leasing of space, power, etc. begins to hurt. One could argue the amount of time and labor necessary to design and implement a new server, along w/ the hardware costs, space, power -- and don't forget disk if you're running on a SAN (fibre channel disk isn't cheap!) -- can easily negate the time of a programmer to fix bad code.” – Jonathan Brown

The above comments correctly emphasize that the purchase price of a server is only a fraction of its total cost. A fully loaded server cost must include space, power, cooling, a replacement server every 3-4 years, system management, security, hardware and software maintenance, and software licensing. And if the server needs a SAN attachment, the fiber channel port costs alone can equal the server hardware cost. Some estimates suggest that the loaded power, space and cooling cost of a server roughly equals the purchase price of the server itself.
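As a rough illustration of how those components stack up over a typical replacement cycle, here's a back-of-the-envelope sketch in Python. Every dollar figure below is an assumption chosen for illustration, not a measured cost; plug in your own facility's numbers:

```python
# Back-of-the-envelope fully loaded server cost over a 4-year life.
# All dollar figures are illustrative assumptions, not measured costs.

purchase            = 6_000  # server purchase price
san_attach          = 6_000  # fiber channel HBAs + switch ports, if SAN-attached
power_space_cooling = 6_000  # roughly the purchase price again, per the estimates above
maintenance         = 3_000  # hardware and software maintenance contracts
sysadmin_share      = 4_000  # slice of system management and security work

fully_loaded = (purchase + san_attach + power_space_cooling
                + maintenance + sysadmin_share)

print(f"4-year fully loaded cost: ${fully_loaded:,}")
print(f"Multiple of sticker price: {fully_loaded / purchase:.1f}x")
```

Even with conservative guesses, the fully loaded cost lands at several times the sticker price, which is why 'a server or two' is rarely the whole bill.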
Fortunately, the hardware-is-cheap argument was promptly rebutted in a well-written response by David Berk:
“To add linear capacity improvements the organization starts to pay exponential costs. Compare this with exponential capacity improvements with linear programming optimization costs.” – David Berk

In other words, time spent optimizing code pays cost-saving dividends continuously over the life of the application, with little or no additional ongoing cost. Money spent on hardware that merely compensates for poorly written code costs money every day, and as the application grows, that cost rises exponentially.
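Berk's asymmetry is easy to see with a toy model: suppose each successive doubling of capacity bought with hardware costs about twice the previous increment (bigger boxes, more licenses, more ports, more administration), while each round of optimization costs a roughly fixed chunk of programmer time yet also doubles effective capacity. The constants below are invented for illustration:

```python
# Toy model of Berk's linear-vs-exponential cost argument.
# The dollar constants are made up for illustration only.

def hardware_cost(doublings, first_increment=10_000):
    # Each successive doubling via hardware costs ~2x the previous one.
    return sum(first_increment * 2**i for i in range(doublings))

def optimization_cost(doublings, per_round=15_000):
    # Each optimization round costs a fixed block of programmer time,
    # but is assumed to double effective capacity.
    return per_round * doublings

for d in range(1, 6):
    print(f"{2**d:>3}x capacity: hardware ~${hardware_cost(d):>8,}  "
          f"vs optimization ~${optimization_cost(d):>8,}")
```

Hardware wins the first round or two; optimization wins every round after that, and the gap widens with each doubling.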
That’s basically where we are with a couple of our applications. They are at the size/scale where doubling the hardware and the associated maintenance, power, cooling and database licenses will cost more than a small team of developers, and because of the inherent scalability limits in the design of these applications, the large capital outlay will at best yield minor capacity/scalability/performance improvements.
Adding to David Berk’s response, I’d note that one should also consider greenhouse gases (a ton or two per server per year!) and database licensing costs (the list price for one CPU’s worth of Oracle Enterprise Edition plus Oracle RAC is close to a programmer’s salary).
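To put rough numbers on the licensing point, assuming list prices in the neighborhood of those published at the time (roughly $47,500 per processor for Oracle Enterprise Edition, $23,000 per processor for the RAC option, and annual support at about 22% of the license price, all of which should be checked against the current price list), the arithmetic looks like this:

```python
# Rough arithmetic behind the licensing claim. All prices are assumed
# list prices of the era; verify against Oracle's current price list.

ee_per_cpu   = 47_500  # Enterprise Edition, per processor (assumed)
rac_per_cpu  = 23_000  # RAC option, per processor (assumed)
support_rate = 0.22    # annual support as a fraction of license (assumed)

license_cost   = ee_per_cpu + rac_per_cpu
annual_support = license_cost * support_rate

print(f"One CPU of EE + RAC: ${license_cost:,} up front")
print(f"Plus annual support: ${annual_support:,.0f}/year")
# ~$70,500 up front plus ~$15,500/year, in the range of a loaded
# programmer salary, before any actual hardware is purchased.
```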
Another way of looking at this: well written, properly optimized software pays for itself in hardware, datacenter, cooling and system management costs across a broad range of scenarios, the exception being small, lightly used applications. For those – throw hardware at the problem and hope it goes away.