
NY Times Investigation: Massive power waste at internet data centers (~90% waste)

Status
Not open for further replies.

jonnyp

Member
I don't think this has been posted...

The 90% number refers to the claim that the average internet company's data center server is only actually using 6-12% of its processor capacity at any one time, while drawing close to 100% of its electricity at all times. And there are huge numbers of data centers now, all over the country.

Here's hoping memristors become viable for mass production soon then.
 
D

Deleted member 1235

Unconfirmed Member
What he is saying is that cpu utilization is not the only thing that matters to power usage, and it's not a good measure of how much these servers are being used.

A basic file server can have really high utilization of bandwidth and file system and still be low utilization on CPU.

exactly.

There is tech in place to address these issues. Here's an excellent video of VMware's Distributed Power Management:

http://www.youtube.com/watch?v=7CbRS0GGuNc

For the lazy: it powers on additional hosts as needed and migrates running virtual machines over to them; when it gets quiet again, it consolidates them back onto fewer hosts. Watch the vid, it's pretty impressive tech.
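For a feel of the consolidation idea, here's a toy sketch. The function name, thresholds, and packing heuristic are all made up for illustration; the real DPM uses DRS metrics and vMotion, not anything this simple.

```python
# Toy sketch of DPM-style consolidation: pack VM loads onto as few
# hosts as possible so the rest can be powered off until demand rises.

def consolidate(vm_loads, host_capacity=1.0, target_util=0.8):
    """First-fit-decreasing packing of VM loads onto hosts.

    vm_loads: fraction of one host's capacity each VM needs (e.g. 0.15).
    Returns a list of hosts, each a list of the VM loads placed on it.
    """
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity * target_util:
                host.append(load)
                break
        else:
            hosts.append([load])  # "power on" a new host
    return hosts

# Twelve lightly loaded VMs that would otherwise idle twelve hosts...
packed = consolidate([0.10] * 12)
print(len(packed))  # 2 hosts are enough; the other ten can sleep
```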


The article is complete rubbish, and it kind of reminds me of the "Foxconn hysteria" that the NYT also got in on. Take an issue most people don't know and a surprising amount don't understand, throw in some yellow journalism, and wait for the accolades. You will always have idiots

This is nonsense. While some of the items they use to justify the 90% figure are a bit... silly, IT is a HUGE drain on resources, much more than the airline industry, for example.
 

Slavik81

Member
Power consumption is one of the primary costs of data centers. Each watt the servers draw requires additional watts on top of it just for cooling.
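To put numbers on the cooling overhead: the standard metric is Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. The sample figures below are illustrative assumptions, not measurements from any specific facility.

```python
# PUE = total facility power / IT equipment power, so the cooling and
# other overhead per IT watt is simply (PUE - 1).

def overhead_watts_per_it_watt(pue):
    return pue - 1.0

# Illustrative figures only:
for label, pue in [("older facility", 2.0),
                   ("typical estimate", 1.8),
                   ("best-in-class", 1.1)]:
    print(f"{label}: PUE {pue} -> "
          f"{overhead_watts_per_it_watt(pue):.1f} W overhead per IT watt")
```

So "several watts of cooling per watt of IT load" would imply a PUE well above anything commonly reported; around one extra watt per IT watt is closer to typical.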

I'd be very, very surprised if data center operators were not trying their hardest to reduce their power consumption. Every advertisement I read for server hardware talks about power usage.

I got the impression the author feels I should do something about the problem... But I have faith in the people who are working on it already.
 

GraveRobberX

Platinum Trophy: Learned to Shit While Upright Again.
I remember watching 60 minutes and they had a sit down with Zuckerberg + Facebook for the entire show

They showed a huge data center built somewhere close to the tundra in Alaska/Canada area
It could be Sweden lol, I'm hazy on the location

It was ridiculous to look @

Look @ this one they're building in North Carolina

[image: the data center under construction in North Carolina]
 

Kenka

Member
And the next bad thing:

DC-powered data centers can only cut consumption to about 85% of what the current AC ones use.
Sucks.
 

iamblades

Member
Wait, I'm confused as well. CPU usage does not scale linearly with the power consumption of the other components, so how are they arriving at these numbers?

The numbers are probably accurate, but they are misinterpreted by a reporter who doesn't know the details and is being influenced by an expert who has something to sell.

Just bad reporting in general.

Not only is CPU usage not the only factor in power consumption, CPUs aren't really 'on/off' devices. Low load CPUs take a lot less power than a CPU at high load, and more importantly when it comes to engineering data centers, produce less heat, which means they require less cooling. I don't know the engineering details of every data center of course, but I can see plenty of scenarios where it is actually more energy efficient to have multiple low utilization servers than one high utilization one.
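A simple affine power model makes the trade-off above concrete. The wattage figures are invented for illustration, but the shape (a large idle floor plus a load-proportional term) matches how servers of that era were commonly modeled; note that under this particular model, shedding idle power by consolidating usually wins.

```python
# Rough affine server power model: P = idle + (peak - idle) * utilization.
# The 120 W idle / 300 W peak numbers are illustrative assumptions.

def server_power(util, idle_w=120.0, peak_w=300.0):
    """Approximate power draw in watts at a given CPU utilization (0..1)."""
    return idle_w + (peak_w - idle_w) * util

# Same total work (1.2 "servers' worth" of load), two layouts:
ten_low = 10 * server_power(0.12)   # ten servers at 12% utilization
two_high = 2 * server_power(0.60)   # two servers at 60% utilization
print(ten_low, two_high)            # the idle floor makes ten boxes ~3x costlier
```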

People in this thread have already said the most important point though. Energy costs are the number one issue for people who run data centers, so they are already constantly looking for efficiency improvements that don't impact QoS.
 
Tech companies would love to spend less on data center site servicing. However, the actual needs of their application infrastructure and in-company SLA demands make things like aggressive power management and virtualization secondary efforts to the meat and potatoes of operations work: getting new or improved infrastructure up, running, and available for application teams to work with, at the uptime levels that are demanded.

I work with data center guys from the healthcare, finance, and retail industries quite a bit, and virtualization is always a "that would be nice" thing way behind high availability, near-transparent disaster recovery, capacity and network upgrades, and integrating new products and appliances into their environments. It doesn't help that capacity issues right now are best addressed by "throwing more hardware/CPU/memory/an entire additional infrastructure layer" at things.

edit: I do think that this kind of journalism is the best way to get the status quo changed. IT departments and tech companies aren't going to do it themselves.
 

Zaptruder

Banned
Tech companies would love to spend less on data center site servicing. However, the actual needs of their application infrastructure and in-company SLA demands make things like aggressive power management and virtualization secondary efforts to the meat and potatoes of operations work: getting new or improved infrastructure up, running, and available for application teams to work with, at the uptime levels that are demanded.

I work with data center guys from the healthcare, finance, and retail industries quite a bit, and virtualization is always a "that would be nice" thing way behind high availability, near-transparent disaster recovery, capacity and network upgrades, and integrating new products and appliances into their environments. It doesn't help that capacity issues right now are best addressed by "throwing more hardware/CPU/memory/an entire additional infrastructure layer" at things.

edit: I do think that this kind of journalism is the best way to get the status quo changed. IT departments and tech companies aren't going to do it themselves.

I'm unclear... can a lot of the applications and services in these data centers be offloaded to a cloud-based virtualization service? Could the new model viably be: let's not build our own, let's just rent computing capacity from a company that handles cloud virtualization services, like Amazon?
 

zou

Member
Something else the author conveniently forgets to mention (even though he actually talked about Amazon; yay, journalism): Amazon offers exactly what he complains is missing (no one sharing their underutilized servers). That's how AWS started out.
 

Prez

Member
Something I never got about data centers: all data on the internet is stored on hard drives, right? What happens if one of the drives breaks? Is all the data on that drive lost or is there a back-up? If there are back-ups, that means that everything on the internet is stored twice?
 

xero273

Member
Something I never got about data centers: all data on the internet is stored on hard drives, right? What happens if one of the drives breaks? Is all the data on that drive lost or is there a back-up? If there are back-ups, that means that everything on the internet is stored twice?

Not familiar with data centers but I would assume they use some form of RAID and backups

http://en.wikipedia.org/wiki/RAID
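The simplest RAID level to picture is RAID 1 (mirroring): every block is written to two drives, so losing either single drive loses no data. This toy class is purely illustrative, not how any real RAID controller is implemented.

```python
# Toy illustration of RAID 1 mirroring: writes go to both drives,
# reads succeed as long as at least one mirror survives.

class Raid1:
    def __init__(self):
        self.drives = [{}, {}]        # two mirrored drives: block -> data

    def write(self, block, data):
        for drive in self.drives:
            drive[block] = data       # every write lands on both drives

    def fail_drive(self, i):
        self.drives[i] = None         # simulate a dead drive

    def read(self, block):
        for drive in self.drives:
            if drive is not None and block in drive:
                return drive[block]
        raise IOError("data lost")    # only if every mirror is gone

r = Raid1()
r.write("home.html", b"<html>...</html>")
r.fail_drive(0)
print(r.read("home.html"))  # still readable from the surviving mirror
```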
 
I worked for a company that puts their datacenters near Alaska. They basically just have regular ol' fans that bring in the cold air from outside. They seemed to like it.
 

ronito

Member
Here's the thing: it's not gonna change.

Even with the panacea of the cloud it's not changing.

Let's say you're the IT director and you have to have a stable system up all the time that can deal with hockey sticks and mesas in utilization. You could spend more for an incredibly robust server and take a hit on the nose for spending too much, or you could try to get as close to the right size as you can and risk losing your job over it. You'll take the overkill every day.

Proponents of the cloud would have you believe that it expands and contracts as necessary, and easily too. It's true that you can spin up new nodes in minutes instead of days. But the fact is, in complicated systems you can't completely automate said spinning; you pre-spin some nodes and have others ready just in case. If you count that pre-spinning and such as part of the "power" used, you're still seeing a big discrepancy between utilization and availability. You've just moved the overkill somewhere else.
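The pre-spinning point can be sketched with a toy autoscaler. The node capacity, 30% headroom figure, and function names are all assumptions for illustration; any real autoscaling policy is more involved.

```python
import math

# Toy autoscaler that always provisions demand plus a safety margin,
# so measured utilization never reaches provisioned capacity.

def nodes_needed(demand, node_capacity=100.0, headroom=0.30):
    """Provision enough nodes for current demand plus a 30% margin."""
    return max(1, math.ceil(demand * (1 + headroom) / node_capacity))

def utilization(demand, node_capacity=100.0, headroom=0.30):
    n = nodes_needed(demand, node_capacity, headroom)
    return demand / (n * node_capacity)

# A "hockey stick": demand spikes, then settles back down.
for demand in [50, 250, 900, 250]:
    print(demand, nodes_needed(demand), f"{utilization(demand):.0%}")
```

Even at the peak, utilization stays well under 100% of what's provisioned; the overkill is still there, just managed by the cloud provider.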
 

Ganhyun

Member
That'd be a pain in the ass to change components.

I'm pretty sure it voids the warranty of the components the minute you submerge them. It would likely be a big company option only for a while. But I agree, it would be a pain to change them out.

http://www.datacenterknowledge.com/archives/2012/09/04/intel-explores-mineral-oil-cooling/

Seems like the next logical step. Intel has done testing with it, with results showing around a 90% reduction in cooling power.

I just posted about that earlier :) Nice to know I'm not the only one who heard about that.
 

XiaNaphryz

LATIN, MATRIPEDICABUS, DO YOU SPEAK IT
I think it's safe to say our datacenter is nearly always using as much of the available processors, memory, bandwidth, and storage disks as it can.
 

iamblades

Member
Something I never got about data centers: all data on the internet is stored on hard drives, right? What happens if one of the drives breaks? Is all the data on that drive lost or is there a back-up? If there are back-ups, that means that everything on the internet is stored twice?

Way more than twice.

If it's anything important, it'll be mirrored in RAID, probably with an entire redundant server holding another mirror of the whole RAID array at a different site, and all that data is regularly backed up in multiple locations as well. There are still places that do long-term tape backups of critical data, too.

Cloud hosting makes the whole thing a bit more amorphous because it can vary how many copies of a particular file there are, but generally there will be several. Also data that is cloud hosted should be backed up separately as well.
 

iamblades

Member

It's certainly an option. It has obvious maintenance issues of course, and it requires datacenters, racks, and server chassis built from the ground up to support that cooling method. Basically, it requires a pretty huge amount of custom engineering; you can't just build out a datacenter with off-the-shelf components that will work with oil cooling.

It's not really anything new though, people have been doing it for years. Oil cooled transformers and electric motors have been around for nearly as long as there has been electricity. Mostly the cooling savings haven't been worth the increased engineering and maintenance costs, but it's always a case by case basis.

I could see a solution like this being completely viable for a really dense high powered supercomputer or render farm with storage externally located via fibre channel. I don't know if your typical web server farm is dense enough or has high enough cpu utilization for oil cooling to make much of a difference though.
 

Ganhyun

Member
It's certainly an option. It has obvious maintenance issues of course, and it requires datacenters, racks, and server chassis built from the ground up to support that cooling method. Basically, it requires a pretty huge amount of custom engineering; you can't just build out a datacenter with off-the-shelf components that will work with oil cooling.

It's not really anything new though, people have been doing it for years. Oil cooled transformers and electric motors have been around for nearly as long as there has been electricity. Mostly the cooling savings haven't been worth the increased engineering and maintenance costs, but it's always a case by case basis.

I could see a solution like this being completely viable for a really dense high powered supercomputer or render farm with storage externally located via fibre channel. I don't know if your typical web server farm is dense enough or has high enough cpu utilization for oil cooling to make much of a difference though.

I agree that right now there isn't likely a cost-effective way for many companies to do this.
 

Zaptruder

Banned
This retort is reactionary, idiotic, and completely fails to substantively address the main point of the NYT piece. It's just "the internet is important" in long form.

It's not just that the internet is important... but that relative to its impact and efficacy on society, it's proportionately very cheap, even with the tremendous wastage.

Extending the idea into the future - imagine the power and materials savings when telecommuting (among other things) is the norm rather than the exception.

The main point, if you interpret the NYT article generously, is that the internet can afford to be much more efficient.

The main point, if you don't interpret it as charitably, is that the internet is a waste of power.
 