How to Prepare for the Data Center Facility of the Future

Transformational technologies such as virtualization, convergence and cloud computing are changing the way information technology (IT) operates.

Is it really possible to predict what the future data center may look like?

We sit down with Steve Harris, Forsythe's vice president of data center development, to find out.

FOCUS: How can IT better prepare for the data center of the future?

Steve: To understand the data center of tomorrow, it is important to understand the data center of today. IT should really know their data centers from the top down. They should be able to answer these types of questions:

  • What is my designed/realized tier level?
  • What are the electrical capacities and loads of my generator and uninterruptible power supply (UPS)?
  • Can I determine my per cabinet power profile?
  • Can I calculate the power usage effectiveness (PUE) for my data center?
  • Where is my growth one, three and five-plus years from now?
  • Is it in servers, storage and/or the network?
  • What is the value of the hardware and data located in the data center?
  • If there were a major issue or outage, how much would it cost, and how long would it take to replace a data center facility?
  • If the data center went down, what business functions would be impacted?
  • What would the impact look like after day one?
  • What would the impact look like after day three?
  • Am I aware of all or most of the risks facing my data center today?

By using metrics obtained from answering these types of questions, IT can better illustrate the business costs, risks and opportunities associated with their data center.
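One of the metrics above, PUE, is straightforward to compute: total facility power divided by IT equipment power. The sketch below is a minimal illustration; the function name and the meter readings are hypothetical, not figures from the interview.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT load.

    A PUE of 1.0 would mean every watt entering the facility reaches
    the IT equipment; real sites run above that because of cooling
    and power-conversion overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1,500 kW at the utility meter, 900 kW at the IT load.
print(round(pue(1500, 900), 2))  # 1.67
```

Tracking this ratio over time is one way to turn the checklist above into a trend the business can act on.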

FOCUS: You mentioned data center risks. Can you give us some examples?

Steve: These risks include operational concerns, single points of failure, capacity and load issues, an inability to support business growth, poor efficiency metrics, and uncertainty about the data center's value to the organization. When these risks are not fully understood, failures or outages in the data center become more likely.

FOCUS: What is a data center failure or outage signaling?

Steve: When a data center failure or outage occurs, it is signaling a likely problem in one of the following areas: the design, the age or health of the building systems infrastructure, or a capacity or load issue. When failures happen, it is important to address them immediately. If the cause is not immediately apparent, it is important to dig deeper to find the root cause and ensure the problem does not happen again. Forensic engineering may be required to trace a cascading building systems infrastructure problem.

FOCUS: What issues should data center managers make a priority in addressing?

Steve: There are a number of things that they should look for every year. For example, answering questions such as:

  • What does the space situation (horizontal and vertical) look like in my data center?
  • How has the power and cooling capacity or load changed?
  • Am I better off or worse off than a year ago?
  • Does my organization have a private and/or public cloud strategy?
  • How am I modernizing legacy applications?
  • Have virtualization, consolidation and optimization been accomplished?

Data center managers should address the most pressing issues first. If they are not sure what the most pressing issue is, they may want to get a vulnerability assessment. Most likely, there is one problem that stands out over the others, and that issue should be addressed in a timely manner. Remember, there is always going to be another problem around the corner.

"To understand the data center of tomorrow, it is important to understand the data center of today." — Steve Harris

FOCUS: When making big changes to the data center, how far in advance should IT professionals plan?

Steve: Two to three years is a good rule of thumb. This timeframe is due to two factors: paperwork (design, budgeting and internal approvals) and the actual project itself. In some cases, the paperwork side of the project can consume more time than the project implementation, as major data center projects can often see price tags in the millions to tens of millions of dollars. That is no small expense. As a result, most organizations should allow a two- to three-year window to accommodate the full project lifecycle.

FOCUS: Why does it seem like data center concerns don’t change?

Steve: The same issues keep popping up because many patches or quick fixes are designed to solve the problem of the day, not the problems of the future. Most facility issues are inter-related, making it difficult to fix only one area at a time without affecting the data center's operation. Consider this analogy: it is tough to change a plane's engine while it is flying at 30,000 feet. The same goes for a data center in use. Unless the data center has been designed to tier level three or higher, it is not likely that its major building systems can be repaired or upgraded while it is operating.

Many organizations focus primarily on refreshing technology, and IT refreshes are a good thing, unless the data center environment doesn't keep pace. If technology is refreshed every three years (on average), then after 10 years and three IT refresh cycles, the design intent of the data center may be significantly different from the demands of the equipment it is supposed to be supporting. This is how the same problems keep popping up.

FOCUS: How then should data center managers prepare for the future?

Steve: They should focus on short-term cooling, airflow and equipment placement to optimize data center space and costs. At the same time, they should develop a long-term data center design strategy that maximizes flexibility, scalability and efficiency.

FOCUS: You mentioned optimizing short-term costs. How can organizations realize these savings?

Steve: Did you know that removing one watt of power on the IT side of the house results in almost three watts of total power savings? This is called the cascade effect. For example, if data center managers are able to save a single watt of power by removing old technology or via a new server component, this change reduces demand on power conversions, switchgear, UPSs, cooling, etc. These cost savings are realized via consolidation, optimization and virtualization. So a single IT watt reduction can create almost three times the impact elsewhere, increasing overall data center performance while reducing costs.
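One way to sketch the cascade effect is to treat the site's PUE as the multiplier: each IT watt removed also removes its share of the upstream conversion, switchgear, UPS and cooling overhead. The PUE value of 2.8 below is an assumption chosen to match the "almost three watts" figure in the interview, not a number Steve cites.

```python
def cascade_savings_w(it_watts_removed: float, pue: float) -> float:
    """Approximate total facility savings from an IT-side power reduction.

    Each IT watt removed also eliminates the upstream power-conversion
    and cooling overhead that the site's PUE captures.
    """
    return it_watts_removed * pue

# Assumed PUE of 2.8 (an older, less efficient facility):
# removing 1 IT watt saves roughly 2.8 watts facility-wide.
print(cascade_savings_w(1.0, 2.8))  # 2.8
```

In a more efficient facility with a lower PUE, the same IT-side reduction would yield a smaller total saving, which is why the multiplier is "almost three" rather than a fixed constant.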

FOCUS: How important is floor space in the data center?

Steve: It is very important. To optimize, consolidate and virtualize the data center, there needs to be enough floor space. These types of projects typically increase the demand on floor space before reducing it. If a lot of new technologies are brought in with the intention of migrating old to new, it is going to add short-term demand for floor space as well as power and cooling. Of course, these demands will decrease as the old technologies are removed, but there will still be the need for some elbow room when starting these projects. Additionally, floor space will be needed to handle sudden influxes in business demand and growth that could occur from mergers and acquisitions. So floor space is critical.

FOCUS: If you could only provide one piece of advice about the data center, what would it be?

Steve: Understand what you don’t know about the data center. The more you don’t know, the more you introduce risk into the data center. Ask yourself: can my 20th-century data center handle 21st-century demands? Also, consider how your physical environment may be holding the business back and whether it is putting IT at risk.

For more tips on how to prepare for the data center facility of the future, get your copy of The Essential Guide to the Data Center Facility of the Future.
