Energy Consumption | Infrastructure

TL;DR (efficiently expressed, as befits the topic): Reducing infrastructure hardware through virtualization and picking energy-efficient replacements has lowered our energy consumption over the last six years — comfortably outpacing the annual rise in commercial energy costs, and giving us insight into the power of software and hardware innovation.

What this project is: my measurements are a ‘broad stroke’ of our potential savings. I want to see A) whether we saved money on electricity and lowered our environmental impact, and B) whether my operational planning is shrinking our footprint, putting less demand on internal resources ($$$$), and letting us operate more efficiently (this last part is what I really care about).

Infrastructure Energy Consumption Analysis

Well… I think the graph speaks for itself. We saved a lot of money on electricity. How did we do it? Keep reading.

What this analysis could have been: 

It could have been a complete analysis breaking down each system by CPU (E3, E5, AMD, etc.), RAM (8, 16, 32, 64, or 128 GB), storage (RAID configurations of SSDs and HDDs), and the exact amount of network data pushed per hour. Fortunately, I get paid more than $0.082 per hour; unfortunately, that means the time needed to perform a fully detailed analysis would dramatically reduce the cost savings. Then again, it could be tucked away as a hidden variable… jk.

Remember, efficiency.  “When products use more power to perform the same amount of work, they are by definition less efficient.”

Why it wasn’t that: 

Cost/benefit analysis — i.e., not worth the time for a similar outcome. This analysis is being done in hindsight with 20/20 vision. If I were making a decision about future infrastructure changes (colocation, hybrid public cloud, on-prem datacenter expansion) with vendor purchase agreements over $100,000 for a single refresh, then the cost/benefit might be large enough to justify measuring in detail. We don’t spend that kind of money on infrastructure.

What it is: 

I mean, knowing that the new hardware consumes less wattage than the old hardware, and ships with software that supports low idle draw during low-load periods, means the power consumption should be lower — in theory. But remember, all you know are the stats the vendor gives you. Those don’t include the environment variables: seasonal electricity price changes, shifts in system usage, random implementations and deprecations, and changes in system count. That sentence alone is exhausting and makes me want to switch to cloud computing! For instance, our year-over-year cost per device is on an upward trend, but our overall cost for the environment is on a downward trend. How is that?

[Chart: year-over-year electricity cost per device]
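That seeming contradiction is just arithmetic: consolidation raises the load (and cost) on each surviving device while shrinking the device count enough to pull the total down. A minimal sketch — every number below is invented for illustration, not one of our actual figures:

```python
# Hypothetical illustration: per-device cost rises, total cost falls.
# All inputs are invented placeholders, not real measurements.

def annual_cost(device_count, kwh_per_device, rate_per_kwh):
    """Total yearly electricity cost for a fleet of 'like' devices."""
    return device_count * kwh_per_device * rate_per_kwh

# Before virtualization: many lightly loaded servers.
before = annual_cost(device_count=24, kwh_per_device=2000, rate_per_kwh=0.10)

# After: far fewer servers, each working harder (more kWh apiece).
after = annual_cost(device_count=3, kwh_per_device=4500, rate_per_kwh=0.10)

print(f"per-device: ${2000 * 0.10:.2f} -> ${4500 * 0.10:.2f}")  # up
print(f"fleet total: ${before:.2f} -> ${after:.2f}")            # down
```

Per-device cost more than doubles while the fleet total drops — the same shape as the chart above.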

Originally, I wanted to see how much of a cost savings (or increase) my infrastructure decisions produced. It quickly became apparent that most of the savings did not come from picking super-efficient, ‘green’ hardware — though I did. The primary driver of the cost savings was actually a software technology: virtualization. Yes. It has been around for a LONG TIME.

The primary savings came during the partial and full virtualization phases (2016–2017), which reduced the onsite datacenter footprint from 24 servers down to 3 primary servers. Unfortunately, some of the technologies deployed required additional power: average server utilization went up, and PoE demand increased on all switches as more devices became powered over Ethernet.
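As a back-of-the-envelope check on that consolidation, the 24-to-3 reduction dwarfs the extra PoE load even under pessimistic assumptions. Every wattage below is an assumed placeholder for illustration, not our measured draw:

```python
# Rough consolidation estimate; every wattage here is an assumed placeholder.
HOURS_PER_YEAR = 24 * 365  # 8,760

old_servers, old_avg_watts = 24, 250   # assumed draw of the old boxes
new_servers, new_avg_watts = 3, 400    # assumed draw of denser, busier hosts
poe_added_watts = 300                  # assumed extra PoE draw across switches

old_kwh = old_servers * old_avg_watts * HOURS_PER_YEAR / 1000
new_kwh = (new_servers * new_avg_watts + poe_added_watts) * HOURS_PER_YEAR / 1000

print(f"old: {old_kwh:,.0f} kWh/yr  new: {new_kwh:,.0f} kWh/yr")
print(f"reduction: {1 - new_kwh / old_kwh:.0%}")
```

Even after charging the new environment for the added PoE, these assumed numbers show roughly a three-quarters cut in annual kWh — which is why the hardware stats alone understate the virtualization win.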

Either way, we dramatically reduced our energy consumption! Yay, us. Sorry, Entergy.

What I plan to do with this: 

Increase system efficiency, cost-effectively — I thought I made that clear.

The primary bottleneck limiting our system throughput is disk I/O speed. With further analysis, we will be able to determine whether SSDs can provide an operational cost savings — both through the direct cost of electricity and through infrastructure purchasing costs, by consolidating the hypervisors from three down to two — and compare those savings across different models. I’m also considering and testing the costs of a hybrid-cloud infrastructure, which adds systemic processes (lowering efficiency), complicates the design (lowering troubleshooting efficiency without proper training), and increases demand for professional development. All of these variables must be weighed before deciding on our next infrastructure initiative.

How I plan to measure: 

  • Electrical costs can continue to be measured annually as kWh per device at average load/usage, multiplied by the total number of ‘like’ devices in the network. 
  • Process costs can be measured by taking the average hourly salary and multiplying it by the support hours required to maintain, monitor, and support a hybrid cloud. 
  • In the same light, troubleshooting costs can be estimated from that salary average and the ticket completion times for systems/infrastructure tasks. 
  • And finally, PD (professional development) costs are explicit when using subscription plans, boot camps, and training materials. The hardest aspect to measure will be the personal, off-the-clock time staff dedicate to learning cloud computing maintenance and troubleshooting. 
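The measurable bullets above reduce to a pair of simple formulas. A sketch — the function names and every input figure are my own placeholders, not real payroll or usage data:

```python
# Placeholder formulas for the measurements above; all inputs are invented.

def electricity_cost(kwh_per_device, like_device_count, rate_per_kwh):
    """Annual kWh per device at average load, scaled across 'like' devices."""
    return kwh_per_device * like_device_count * rate_per_kwh

def labor_cost(avg_hourly_salary, hours):
    """Covers both support hours (maintain/monitor) and ticket completion time."""
    return avg_hourly_salary * hours

# Examples with invented figures:
print(electricity_cost(kwh_per_device=1500, like_device_count=10, rate_per_kwh=0.10))
print(labor_cost(avg_hourly_salary=40, hours=120))  # hybrid-cloud support hours
print(labor_cost(avg_hourly_salary=40, hours=15))   # troubleshooting tickets
```

The PD bullet stays a line item rather than a formula: subscriptions and boot camps have explicit prices, and the off-the-clock time has no clean denominator.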



Some research:

Amazing vendors: (PowerEdge is amazing!) (Built in power consumption metrics) (manual power consumption stats)

#show power inline

Module   Available   Used      Remaining
          (Watts)    (Watts)    (Watts)
------   ---------   -------   ---------
1          370.0       39         331


  • Not affiliated with anyone / anything in this post directly.
  • Excuse grammatical issues, I’m not a writer.
  • All analysis was inspired by others with a personal directive to save the earth and increase efficiency.
