
Performance Benchmarks that Include Energy Efficiency Data

Signs of the times:

Energy Benchmarking: Rich Miller at Datacenter Knowledge is reporting that TPC will update their performance benchmarks to include energy efficiency data. Future benchmarks will measure performance, price and energy.

Actual datacenter energy costs (as opposed to power supply nameplate ratings) are hard to generalize. The numbers I can find are all over the map. Energy use depends on server load, server configuration, server efficiency, power distribution efficiency and cooling efficiency, none of which is easy to calculate and few of which are ever measured. As a rough estimate, it looks like for small servers the cost of power + cooling over 4 years approaches the purchase cost of the server hardware amortized over that period. Figuring energy use into the price/performance calculations for systems should skew future purchases toward efficiency.
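A back-of-the-envelope version of that estimate, as a sketch: every number below (server price, average draw, cooling overhead, electricity rate) is an assumption for illustration, not a measurement.

```python
# Rough comparison of 4-year power + cooling cost against server purchase price.
# All figures are assumptions -- substitute measured draw, your local electricity
# rate, and your actual cooling overhead.

server_price = 3000.0      # purchase price, USD (assumed)
avg_draw_watts = 300.0     # average draw under typical load, not nameplate (assumed)
cooling_overhead = 1.0     # extra watts of cooling energy per watt of IT load (assumed)
electricity_rate = 0.10    # USD per kWh (assumed)

hours_per_year = 24 * 365
total_watts = avg_draw_watts * (1 + cooling_overhead)
kwh_per_year = total_watts * hours_per_year / 1000.0
energy_cost_4yr = kwh_per_year * electricity_rate * 4

print(f"4-year power + cooling cost: ${energy_cost_4yr:,.0f}")
print(f"Server purchase price:       ${server_price:,.0f}")
```

With those assumed numbers the 4-year energy bill comes out a bit over $2,000 against a $3,000 server, which is the same ballpark as the rough estimate above.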

Power Calculators: HP has a rack power calculator tool that provides useful estimates of power use for a given HP server and rack configuration. APC and others provide similar tools. I’m sure they build the tools to help size per-rack UPS, power and cooling for custom rack configurations, but the tools can just as easily be used to estimate energy costs.
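A minimal stand-in for that kind of calculator, with made-up per-device wattages rather than anything pulled from the HP or APC tools:

```python
# Rough per-rack estimate in the spirit of the vendor calculators.
# Device counts and wattages are invented for illustration; the real tools
# use measured figures for specific configurations (CPUs, memory, disks, PSUs).

rack = {
    "1U dual-socket server": (20, 300),   # (count, average watts each)
    "2U quad-socket server": (4, 550),
    "top-of-rack switch":    (2, 150),
}

power_factor = 0.9        # assumed; used to size the UPS in VA
electricity_rate = 0.10   # USD per kWh, assumed

watts = sum(count * w for count, w in rack.values())
ups_va = watts / power_factor
annual_kwh = watts * 24 * 365 / 1000

print(f"Rack IT load:  {watts} W")
print(f"UPS sizing:    {ups_va:.0f} VA (before headroom)")
print(f"Annual energy: {annual_kwh:.0f} kWh (~${annual_kwh * electricity_rate:,.0f}/yr before cooling)")
```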

Don’t forget cooling: One thing I’ve noticed is that people tend to forget that for every watt of electricity their systems use, they’ll need to remove a watt of heat from the datacenter (or their house, if they have air conditioning), and removing that heat costs additional energy. For example, if I have a rack that uses 5000 watts, a cooling system that spends one watt of energy for every watt of heat it removes would use an additional 5000 watts to clear the heat from the room. But cooling systems aren’t that efficient in practice. Worst case, you might spend up to an additional 10,000 watts of energy to cool the 5000 watt server rack.
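Putting the 5000 watt example into numbers (the cooling factors here are illustrative, not measured):

```python
# Total power for a 5000 W rack at different cooling efficiencies.
# "Cooling factor" is the extra watts of cooling energy spent per watt of
# IT load; the scenario values are illustrative assumptions.

it_load = 5000  # watts drawn by the rack

scenarios = {
    "efficient cooling plant":    0.5,
    "one watt per watt removed":  1.0,
    "worst case described above": 2.0,
}

for name, factor in scenarios.items():
    cooling = it_load * factor
    print(f"{name:28s} cooling {cooling:6.0f} W   total {it_load + cooling:6.0f} W")
```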

Comments

  1. Slightly off-topic, but on cooling your datacenter:

    You could also use more innovative heating/cooling systems: use outside air in cold climates for cooling, tie into HVAC systems to reduce the demand on your heating infrastructure, or use geothermal systems where your cost is reduced to the pumps and air handlers, eliminating condensers and the like. In any case, the idea is either to use the heat you've generated in the datacenter to create efficiency or reduce demand elsewhere, or to use passive sources of heating/cooling to reduce the amount of mechanical equipment required.
