March 20, 2014

It's no secret that financial services organizations are at a crossroads when it comes to their datacenter strategies. Saddled with high-cost facilities across the globe, most institutions have started to consolidate datacenters to reduce costs and increase efficiency.

For instance, Fidelity Investments, the massive investment manager ($1.7 trillion AUM, 2013), realized that continuing to run dozens of datacenters across the country didn't make sense. According to Joe Higgins, vice president of engineering and corporate sustainability officer at Fidelity, the institution started looking at its datacenter strategy five years ago. To date, it has reduced its global footprint by 20 percent and is well on its way to reaching its goal of 35 percent in the next few years. To achieve the 20 percent reduction, Fidelity has consolidated its data facilities from 80+ "rooms" -- ranging from small data closets in offices up to medium and large facilities -- to approximately 20 locations today. "We want to get down to a handful" by the time the consolidation is complete, he adds. (Higgins will be speaking about Fidelity's datacenter strategy at Interop 2014.)

Consolidating datacenters to reduce costs, however, is only part of the story. New technologies and changing customer expectations also play a large role in Fidelity's decisions, Higgins says. In fact, Fidelity has flipped datacenter planning on its head, so to speak. Traditionally, most datacenter strategies were developed from an operational, capacity planning and real estate perspective. "We have taken an outside-in view, rather than the inside-out view," he says. "From the customer's perspective, it's about how they want more robust services at a competitive price. We have to deploy applications in a better way to meet customer expectations," which are always increasing, Higgins notes.

"Everyone is familiar with the growth in mobile and social data," he says. "Those trends, coupled with the rapid advances in technology and the unprecedented need to access data for transactions or analysis, are important to note." Users — both Fidelity's customers and its own employees — "want the ability to conduct transactions and business anytime and anywhere, but at the same time we need to deliver the data securely."

[To hear Joe Higgins discuss Fidelity's datacenter strategy, attend the Future of the Financial Services Data Center panel at Interop 2014 in Las Vegas, March 31-April 4.]

To meet these needs, Fidelity has thrown out the traditional enterprise datacenter strategy, which calls for multi-megawatt facilities, three- to five-year planning cycles, and 30-year lifespans for datacenters that will likely run at only partial capacity for years before being filled. Instead, Fidelity has developed a strategy that employs a prefabricated datacenter design, allowing the enterprise to add capacity in smaller increments in a shorter period of time. For instance, Fidelity can roll out new capacity in a new datacenter in nine to twelve months, while a traditional build-out may take two or three years from initial design to completion. Overall, Higgins says the datacenter strategy has changed and can now be summarized as "smaller, responsive and optimized." Moreover, Higgins says this mindset will reduce Fidelity's datacenter real estate and operational costs by 50 percent over the next 15 years. "These are significant benefits in the bigger picture; it's not about saving a few dollars per square foot or per watt of added capacity," he says.

To drastically cut the time it takes to get a new facility to market, Fidelity went to the drawing board ... literally. Fidelity's new approach to datacenter design relies on Centercore, Fidelity's own model: an off-site constructed datacenter that can utilize open source hardware components, which are gaining popularity with technologists. "All future DCs will be using Centercore," Higgins says. "It is flexible, and it can take different shapes and sizes. We have already implemented it" in two facilities. Fidelity used Centercore to add capacity to its existing facility in North Carolina, and Centercore is also a major component of the firm's new facility in the Midwest. (Take a look inside Fidelity's datacenter strategy: Centercore -- A New Flexible Datacenter Approach.)

The Centercore facility in North Carolina is integral to Fidelity's internal private cloud, known internally as Click To Compute. "We are predicting the majority of our IT footprint will move to [Click To Compute] over the next few years," Higgins says. The platform is very popular with developers, and it uses open source software and hardware. Fidelity is a founding member of the Open Compute Project, which is led by Facebook. "The internal private cloud will all be on Centercore."

Higgins contends that Centercore has advantages over other solutions on the market because it can use open source hardware and can be customized to any company's specific needs. "We looked at other solutions on the market, and we didn't see anything that met our needs," including the reliability and resiliency needed for a financial services datacenter, as well as the ability to roll out quickly and expand in the future. "When we deploy capacity now, we want to do it in small chunks, in the right place at the right time."

ABOUT THE AUTHOR
Greg MacSweeney is editorial director of InformationWeek Financial Services, whose brands include Wall Street & Technology, Bank Systems & Technology, Advanced Trading, and Insurance & Technology.