
Thursday April 7, 2011 2:18 pm

Facebook ‘Open Compute Project’ aims to change the server industry

Facebook began showing off its plans for a new data center and server design on Thursday, an initiative executives said will be called the "Open Compute Project."

Facebook is making the design documents and specifications public at OpenCompute.org. The company claims that the design of the new servers is 38 percent more power efficient than its older designs, and costs 24 percent less to make.

Graham Weston, the chairman of Rackspace, said that his company would use the new Open Compute servers in its own designs, and Zynga's chief technical officer said that his company would take a serious look at adding the new technology to its own cloud.

Industry executives said that the new server designs will have a positive impact not just on the IT industry, but also on emerging countries that may not have the R&D resources to design their own power-efficient servers and data centers; instead, they said, those countries can leverage the collective expertise. The cost savings that the new designs enable can then be passed along to the service companies that build their businesses on web hosting.

"This is how Facebook kicks Google's ass," said Robert Scoble, a blogger for Rackspace, one of the companies that will use the technology. The new data center does not use a "chiller," he said. Instead, it puts fine particles of water in the air and cools the servers through evaporative cooling.

What's the key to an effective server and data center design? Power. "It's easy to lose track of what power is about," said Mark Zuckerberg, Facebook's chief executive. The big problem with adding new features, he said, is building up the power to drive them.

"All this really ends up being is extra capacity, and that ends up being bottlenecked by the power," Zuckerberg said. There are two ways of designing servers, he said: design them yourselves, or to buy from a mass-market OEM or ODM. Facebook found that the latter way wasn't the best avenue.

"We're trying to foster an ecosystem... and we're not the only ones who need the hardware that we're building out, which should make it beter for all social applications to do what we're doing," Zuckerberg said.

Facebook used its data center in Prineville, Ore. as the showcase for its discussion. The company didn't say how many servers are contained within the building, although Richard Fichera, an analyst for Forrester, said that the data center measures 150,000 square feet, with a second, equally sized facility planned for next year.

But Facebook executives said that the facility itself does not use air conditioning - a key component of power costs. Instead, it uses outside air to cool the servers, supplemented by a ductless evaporative cooling system that chills them without dedicated air conditioning.
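Data center efficiency is commonly expressed as Power Usage Effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment, with 1.0 as the theoretical floor. A minimal sketch, using illustrative loads rather than Facebook's published figures, shows why dropping the chiller moves that ratio so dramatically:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Lower is better; the overhead is mostly cooling and power distribution.
# All figures below are illustrative assumptions, not Facebook's numbers.

def pue(it_kw: float, cooling_kw: float, distribution_kw: float) -> float:
    """Total facility power divided by the power delivered to IT gear."""
    return (it_kw + cooling_kw + distribution_kw) / it_kw

# Conventional data center with chiller-based air conditioning.
conventional = pue(it_kw=1000, cooling_kw=700, distribution_kw=200)

# Free-air plus evaporative design: fans and water misting draw far less.
evaporative = pue(it_kw=1000, cooling_kw=80, distribution_kw=60)

print(f"Conventional PUE: {conventional:.2f}")  # 1.90
print(f"Evaporative PUE:  {evaporative:.2f}")   # 1.14
```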

"[Open Compute] represents the biggest cost reduction in server infrastructure in a decade," Rackspace's Graham Weston said.

Inside, the servers - which Facebook designed in conjunction with Quanta - use a custom motherboard built around either an AMD or an Intel processor. Facebook also designed a custom power delivery system that eliminates several voltage step-down stages that can waste power: the data center wastes just 2 percent of its power, executives said.
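Conversion losses compound multiplicatively: the end-to-end efficiency of a power chain is the product of its stages' efficiencies. The sketch below uses assumed stage efficiencies, not Facebook's published figures, to show how removing stages pulls distribution loss from roughly 10 percent down toward the quoted 2 percent:

```python
# End-to-end efficiency of a power delivery chain is the product of the
# efficiencies of its conversion stages, so every stage removed helps.
# Stage efficiencies below are illustrative assumptions.
from functools import reduce
from operator import mul

def chain_efficiency(stages: list[float]) -> float:
    """Multiply per-stage efficiencies into an end-to-end figure."""
    return reduce(mul, stages, 1.0)

# Conventional chain: double-conversion UPS plus a PDU transformer.
conventional = chain_efficiency([0.92, 0.98])

# Simplified chain: a single step-down from the utility feed to the rack.
simplified = chain_efficiency([0.98])

print(f"Conventional: {1 - conventional:.1%} lost before the server PSU")  # ~9.8%
print(f"Simplified:   {1 - simplified:.1%} lost")                          # 2.0%
```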

The servers themselves are 1.5U high, half again as tall as a standard 1U rack server, Facebook executives said. That gives the racks more room for cooling; the taller chassis let the company use 60mm fans, which move more air with less power, they said. The racks are built on shelves, so they can be easily serviced.
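The fan choice follows the standard fan affinity laws: for a given fan, airflow scales roughly linearly with rotational speed while power draw scales with its cube, so any design that lets fans spin slower for the same airflow saves power disproportionately. A sketch with an assumed speed reduction, not Facebook's measured figures:

```python
# Fan affinity laws for a given fan: airflow ~ speed, power ~ speed^3.
# A taller chassis with larger fans can hit the target airflow at a lower
# speed, cutting power by the cube of the speed fraction.
# The 12 W baseline and 60% speed fraction are illustrative assumptions.

def fan_power(base_power_w: float, speed_fraction: float) -> float:
    """Power drawn when a fan runs at a fraction of its baseline speed."""
    return base_power_w * speed_fraction ** 3

full_speed = fan_power(12.0, 1.0)   # small fan flat out: 12.0 W
reduced    = fan_power(12.0, 0.6)   # same airflow at 60% speed: ~2.6 W

print(f"Full speed: {full_speed:.1f} W, reduced: {reduced:.1f} W")
print(f"Fan power saved: {1 - reduced / full_speed:.0%}")  # ~78%
```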

The power supplies are more than 93 percent efficient, almost unheard of in an industry where 90 percent efficiency is considered outstanding. For backup power, the servers use a modular 48V DC battery backup unit that supplies up to six servers through a DC-DC converter in each server. Each battery unit is connected to the network, so that Facebook's IT managers can monitor the health of the system.
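Three percentage points of power supply efficiency sounds marginal, but it compounds across a fleet. A back-of-envelope sketch; the fleet size, per-server load, and electricity price are assumptions, not Facebook's figures:

```python
# What 93% vs. 90% PSU efficiency is worth at fleet scale.
# Fleet size, per-server DC load, and electricity price are assumptions.

def wall_power_w(dc_load_w: float, psu_efficiency: float) -> float:
    """AC power drawn from the wall to deliver a given DC load."""
    return dc_load_w / psu_efficiency

servers = 50_000        # assumed fleet size
dc_load_w = 250.0       # assumed average DC load per server
usd_per_kwh = 0.07      # assumed industrial electricity price

baseline = wall_power_w(dc_load_w, 0.90) * servers
improved = wall_power_w(dc_load_w, 0.93) * servers
saved_kw = (baseline - improved) / 1000

print(f"Power saved: {saved_kw:.0f} kW")                             # ~448 kW
print(f"Annual savings: ${saved_kw * 24 * 365 * usd_per_kwh:,.0f}")  # ~$275,000
```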

Analysts applauded the move. "The world of hyper scale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represented critical competitive IP," Forrester's Fichera wrote in a blog post. "In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend by simultaneously disclosing a wealth of information about the design of its new data center in rural Oregon and contributing much of the IP involving racks, servers and power architecture to an open forum in the hopes of generating an ecosystem of suppliers to provide future equipment to themselves and other growing web companies."

Facebook's "microserver" plans
A Facebook executive recently offered another behind-the-scenes look at the technology the company uses, with its "Chinese foot-soldier" Web server strategy. One of the keys to the company's future strategy is microservers: tiny, low-power servers that Intel recently said it would address with dedicated processors.

Gio Coglitore, director of Facebook Labs, spoke at an Intel event in San Francisco recently, where Intel announced plans for a sub-10-watt Atom server processor in 2012.

Facebook is the fourth-largest site in the United States, with more than 153 million visitors by comScore's estimate, so how the company deploys its server architecture is obviously extremely important. Coglitore also said that Facebook believes in "testing in production," adding test machines to a live network.

"If you ever experience a glitch [while using the site], it might be Gio testing something out," Coglitore said.

Facebook has a rather substantial back-end database that favors a certain category of processors, and a front-facing infrastructure and memory cache that runs on a large number of Web servers, Coglitore said. It's in this front- or user-facing environment that Facebook will use the microservers, he said.
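The article doesn't detail that caching layer, but the standard pattern for a front-facing memory cache tier is a read-through lookup: check the cache first and fall back to the database only on a miss. A minimal sketch, with an in-process dict standing in for a distributed cache cluster; this is an illustration, not Facebook's code:

```python
# Read-through cache pattern: the front-facing tier shields the back-end
# database by serving repeat reads from memory. A plain dict stands in
# for a distributed cache here; db_lookup is a hypothetical stand-in too.

cache: dict[str, str] = {}

def db_lookup(key: str) -> str:
    """Stand-in for a query against the back-end database tier."""
    return f"row-for-{key}"

def get(key: str) -> str:
    """Serve from cache when possible; otherwise hit the database and fill."""
    if key in cache:
        return cache[key]       # cache hit: no database round trip
    value = db_lookup(key)      # cache miss: query the back-end
    cache[key] = value          # populate so the next read is cheap
    return value

print(get("user:42"))  # miss -> database
print(get("user:42"))  # hit  -> cache
```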

Debates about which types of servers to deploy - cheap, low-power "wimpy" nodes like those the Intel-coauthored "FAWN" paper suggests, or more expensive "brawny" nodes - have dominated the enterprise hardware space for years. Blade servers, which placed low-cost CPUs next to one another, were one early solution.
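The heart of that debate is throughput per watt under a fixed rack power budget: for highly parallel, I/O-bound work such as serving cached web requests, many slow nodes can beat a few fast ones. A sketch with invented throughput and power figures:

```python
# Wimpy vs. brawny in one calculation: requests per second per watt, and
# aggregate throughput inside a fixed rack power budget.
# All throughput and power figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    requests_per_sec: int
    watts: int

brawny = Node("dual-socket server", requests_per_sec=20_000, watts=300)
wimpy = Node("Atom-class microserver", requests_per_sec=2_500, watts=25)

budget_w = 15_000  # assumed rack power budget
for node in (brawny, wimpy):
    count = budget_w // node.watts
    total = count * node.requests_per_sec
    per_watt = node.requests_per_sec / node.watts
    print(f"{node.name}: {per_watt:.0f} req/s/W, "
          f"{count} nodes -> {total:,} req/s in the rack")
```

With these made-up numbers the microservers deliver 100 req/s per watt against the big server's 67, and half again as much aggregate throughput from the same rack; the real answer depends entirely on how parallel and how memory-bound the workload is.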

But if a front-end server dies at Facebook... well, so what, Coglitore seemed to say. "The microserver model is extremely attractive," he said. "I've said this before: it's foot soldiers, the Chinese army model. When you go into these battles, you like to have cannon fodder to some degree, an overwhelming force and the ability to lose large numbers of them and not affect the end-user experience. When you have a realized environment, you can do that. It's hard to do that with a virtualized environment."

To achieve the same level of redundancy in a virtualized environment, Facebook would have to deploy a standby node, which takes away the cost advantage, Coglitore said. "I'd have to keep multiple large-pipe pieces of hardware in my environment, where I'd prefer to keep little segments," with a load balancer directing traffic to the smaller computers, he said.
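The arithmetic behind that argument is straightforward: losing one node in a large load-balanced fleet costs a sliver of capacity, while an active/standby pair pays for an entire idle machine. A sketch with illustrative node counts:

```python
# Redundancy economics, roughly as Coglitore frames them.
# Node counts are illustrative assumptions.

def capacity_after_failure(total_nodes: int, failed: int) -> float:
    """Fraction of serving capacity left after some nodes drop out."""
    return (total_nodes - failed) / total_nodes

# 200 microservers behind a load balancer: one death costs 0.5% capacity.
print(f"Fleet after 1 failure: {capacity_after_failure(200, 1):.1%} capacity")

# Active/standby pair of brawny hosts: the spare idles in normal operation,
# so redundancy doubles the hardware bill for the same served load.
active, standby = 1, 1
print(f"Standby overhead: {standby / active:.0%} extra hardware")
```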

"The microserver allows us to target that particular workload and allows us to scale it like we haven't been able to do before," Coglitore added. Facebook has tested the microservers, Coglitore said, and will begin deploying them in 2011 or early 2012.

This article, written by Mark Hachman, originally appeared on PCMag.com and is republished on Gear Live with the permission of Ziff Davis, Inc.

Comments:

Great post, thanks. I'd like to have one of those in my place. Can anyone tell me if these will be available to individual users, not just companies?


Best regards,
Adrian from Handy Backup

It might just be me…
But could they not focus on getting their freaking page to work for developers?
Wonderful platform - if it did not change constantly.

Thanks for the post man 😊
