CMU-CS-09-113
Computer Science Department, School of Computer Science, Carnegie Mellon University
Optimal Power Allocation in Server Farms
Anshul Gandhi, Mor Harchol-Balter, Rahul Das*, Charles Lefurgy*
March 2009
Server farms today consume more than 1.5% of the total electricity in the U.S., at a cost of nearly $4.5 billion. Given the rising cost of energy, many industries are now seeking solutions for how to best make use of their available power. An important question that arises in this context is how to distribute available power among the servers in a server farm so as to get maximum performance. By giving more power to a server, one can get a higher server frequency (speed). Hence it is commonly believed that, for a given power budget, performance can be maximized by operating servers at their highest power levels. However, it is also conceivable that one might prefer to run servers at their lowest power levels, which allows more servers to be turned on for a given power budget. To fully understand the effect of power allocation on performance in a server farm with a fixed power budget, we introduce a queueing-theoretic model, which also allows us to predict the optimal power allocation in a variety of scenarios. Results are verified via extensive experiments on an IBM BladeCenter. We find that the optimal power allocation varies across scenarios. In particular, it is not always optimal to run servers at their maximum power levels; there are scenarios where it is optimal to run servers at their lowest power levels or at some intermediate power levels. Our analysis shows that the optimal power allocation is non-obvious and, in fact, depends on many factors, such as the power-to-frequency relationship in the processors, the arrival rate of jobs, the maximum server frequency, the lowest attainable server frequency, and the server farm configuration. Furthermore, our theoretical model allows us to explore more general settings than we can implement, including arbitrarily large server farms and different power-to-frequency curves. Importantly, we show that the optimal power allocation can significantly improve server farm performance, typically by a factor of 1.4, and by as much as a factor of 5 in some cases.

35 pages
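The trade-off described in the abstract (a few fast servers versus many slow ones under a fixed power budget) can be illustrated with a small queueing sketch. The sketch below is not the report's model: it assumes each server behaves as an M/M/1 queue, assumes a linear power-to-frequency curve above an idle baseline, and splits arriving jobs evenly across servers; all numeric constants are hypothetical and chosen purely for illustration.

import math

POWER_BUDGET = 1000.0        # total watts available to the farm (hypothetical)
B_MIN, B_MAX = 150.0, 250.0  # allowed per-server power range (hypothetical)
B_IDLE = 100.0               # power draw at zero frequency (hypothetical)
ALPHA = 0.01                 # GHz gained per watt above idle (hypothetical)
LAMBDA = 30.0                # total job arrival rate, jobs/sec (hypothetical)
WORK = 0.1                   # mean job size in GHz-seconds (hypothetical)

def mean_response_time(b):
    """Mean response time if every powered-on server runs at b watts."""
    k = int(POWER_BUDGET // b)       # number of servers affordable at power b
    if k == 0:
        return math.inf
    freq = ALPHA * (b - B_IDLE)      # assumed linear power-to-frequency curve
    mu = freq / WORK                 # service rate of one server (jobs/sec)
    lam = LAMBDA / k                 # arrival rate seen by one server
    if lam >= mu:                    # queue is unstable at this allocation
        return math.inf
    return 1.0 / (mu - lam)          # M/M/1 mean response time

# Sweep the per-server power in 0.1 W steps and keep the best allocation.
best = min(
    (b / 10 for b in range(int(B_MIN * 10), int(B_MAX * 10) + 1)),
    key=mean_response_time,
)
print(f"best per-server power: {best:.1f} W, "
      f"E[T] = {mean_response_time(best):.3f} s")

Under these particular constants the sweep favors running fewer servers at high power, but raising LAMBDA or flattening the power-to-frequency slope ALPHA shifts the winner toward many servers at lower power, mirroring the abstract's point that the optimal allocation depends on the arrival rate and the power-to-frequency relationship.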
*IBM T.J. Watson Research Center, Hawthorne, NY