
Ten data center trends driving change in 2015

There's technology hype and then there are true trends that affect data centers long-term. These 10 trends are sure to have an impact beyond 2015.


Data center technologies are emerging and evolving at an astounding pace. Just consider how a fledgling idea like virtualization became an infrastructure necessity in the span of only a few years, or how solid-state drives expanded into high-performance storage caches and virtual SAN deployments.

IT professionals need to pay attention to new developments and consider the impact those products or initiatives can have on the data center -- and the business. At Gartner's IT Operations Strategies and Solutions Summit 2015 this week, analyst David J. Cappuccio outlined 10 IT trends poised to affect data centers over the next year and beyond.


1. Non-stop demand

There are always new workloads, users and data, and the demand for IT resources is constantly increasing. Cappuccio points to an average annual growth rate (AAGR) of 10% in server workloads; 20% in power demands; 35% in network bandwidth; and an astonishing 50% AAGR in storage. IT leaders must follow utilization trends and perform careful capacity planning to ensure adequate resources are available to maintain service performance and user experience levels.
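
To see what those rates imply, it helps to run the compound-growth arithmetic. The short Python sketch below projects the growth multiple for each resource; the normalized starting capacity of 1.0 and the five-year horizon are illustrative assumptions, not figures from the presentation.

    # Compound-growth projection for Cappuccio's AAGR figures.
    # Starting capacity is normalized to 1.0 and the five-year
    # horizon is an illustrative assumption.
    AAGR = {"server workloads": 0.10, "power": 0.20,
            "network bandwidth": 0.35, "storage": 0.50}

    def project(current, rate, years):
        """Capacity needed after compounding annual growth."""
        return current * (1 + rate) ** years

    for resource, rate in AAGR.items():
        print(f"{resource}: {project(1.0, rate, 5):.1f}x in five years")

At those rates, storage demand grows roughly 7.6-fold in five years, while server workloads merely double every seven years or so -- which is why storage deserves the closest capacity-planning scrutiny.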


2. Treating business units as technology startups

The simple reality is that business units need a level of agility and responsiveness that is particularly challenging for IT departments struggling just to keep the lights on. Consequently, individual business units are spending from their own budgets to bring in mobile applications and cloud services -- and even their own devices.

In years past, this would have been called "shadow IT" and frowned upon. Business units will simply work around IT if that's what is required to address business problems. But IT still bears the responsibility to ensure technologies are integrated and managed properly. The goal for IT, Cappuccio said, is to get in front of these efforts and collaborate with business units right from the start to achieve a better business outcome.

For some businesses, the collaboration starts with better tools.

"We´re probably going with SharePoint and outsourcing it to a South African company called Openbox," said Chris DiGiacomo, vice president and director of operations at corporate financing firm W. P. Carey in New York. "The value is having the end users collaborate better not only with IT but within their own departments."

W. P. Carey also added four new IT positions -- business relationship managers -- to act as liaisons between IT and individual business units, DiGiacomo said.

"They´ll understand the business as much as the staff, and help incorporate processes and tools that can make these units more productive and work more efficiently," he said.


3. Internet of Things

Embedded, networked sensors that deliver an astounding volume of data to the business -- more commonly known as the Internet of Things (IoT) -- are proliferating. Gartner predicts the IoT will include more than 26 billion connected devices by 2020, so IT faces the daunting challenge of processing, storing, correlating and reporting an ever-growing volume of real-time data from a multitude of sensor sources.

The business, in turn, can use this data to make superior decisions in real time and see more strategic trends and opportunities over the longer term.
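
What that processing and correlating looks like in practice can be sketched in a few lines. The Python fragment below rolls raw readings into a windowed average and flags anomalies; the sensor names, window size and threshold are hypothetical.

    # Roll raw sensor readings into a windowed average and flag anomalies.
    # Sensor IDs, window size and threshold are hypothetical.
    from collections import defaultdict, deque

    WINDOW = 5         # readings per rolling window
    THRESHOLD = 30.0   # alert when the rolling average exceeds this

    windows = defaultdict(lambda: deque(maxlen=WINDOW))

    def ingest(sensor_id, value):
        """Store a reading; return an alert if the rolling average is high."""
        w = windows[sensor_id]
        w.append(value)
        avg = sum(w) / len(w)
        if len(w) == WINDOW and avg > THRESHOLD:
            return f"ALERT {sensor_id}: rolling average {avg:.1f}"
        return None

    for sensor, value in [("rack-12", 28.0), ("rack-12", 31.5), ("rack-12", 32.0),
                          ("rack-12", 33.1), ("rack-12", 34.2)]:
        alert = ingest(sensor, value)
        if alert:
            print(alert)

Multiply that by tens of thousands of sensors emitting readings every few seconds, and the scale of the storage and processing challenge becomes clear.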


4. Software-defined infrastructure

By now, IT professionals have heard a variety of "software-defined" terms, including software-defined storage, software-defined networking (SDN) and even the software-defined data center. It's a new way to automate, orchestrate and operate enterprise IT. Under ideal conditions, it can bring fast, flexible infrastructure reconfiguration from a single location while improving workload performance and network traffic behavior -- all running under open standards.

While specific software-defined elements like SDN are within reach now, others, such as software-defined data centers, will be difficult to achieve until tools can fully integrate on- and off-premises computing resources. In addition, the automation of software-defined infrastructures depends on logic and rule sets that must be reviewed and updated periodically. Otherwise, adopters risk improper or ineffective automation as computing needs change over time.

"If you automate, don´t forget about it," Cappuccio says.


5. Integrated systems evolution

The data center trend of integrated infrastructure, commonly known as converged infrastructure (CI), is hardly new, but CI has recently gained considerable momentum and is expected to gain even more traction in the years to come. CI's appeal comes from its system-level approach: servers, storage and networking components arrive pre-bundled and tightly integrated by the vendor. CI platforms are continually evolving to provide better performance, power efficiency and manageability.

But CI can be tricky for IT. Cappuccio explained that the expense means senior executives will be deeply involved in CI selection -- moving IT's traditional 'best product for a given job' emphasis to a vendor-relationship focus that resonates with C-level executives. The investment in CI platform evaluation is also difficult to repeat as infrastructure needs grow and change, so organizations may stick with existing vendors and experience vendor lock-in.


6. Disaggregated systems

Traditional data center hardware exists as complete subsystems. For example, a server contains a power supply, processors, memory and storage within the same box, interconnected through proprietary, short-distance electrical interfaces. If you need more processor cycles or memory, you'll probably buy more boxes -- duplicating other components you don't need.

The idea of disaggregated systems is to modularize computing building blocks, which can be racked as needs dictate and joined together through high-speed shared connections (such as silicon photonics). For example, if you need more computing cycles, you'd plug more processor modules into the rack.

Rack designs are also changing to provide direct current (DC) to computing devices, thereby reducing the number of power supplies (and possible points of failure) while improving energy efficiency with fewer AC-to-DC conversions. Open Compute servers can leverage rack-distributed DC now, and the trend will continue through disaggregation.


7. Proactive infrastructures

IT and business leaders increasingly rely on analytical tools to better understand the data center and its computing resources -- and then make better decisions about data center utilization and growth. This has been an ongoing process with platforms like data center infrastructure management (DCIM), but lately it has been accelerating, moving the organization from a reactive state to a proactive one. For example, today's tools are very good at helping administrators predict what will happen in the future. But eventually, these tools will evolve to prescribe the changes necessary to achieve desired outcomes.
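
The predictive step those tools perform can be illustrated with a simple trend fit. The Python sketch below fits a least-squares line to a utilization history and projects it forward; the monthly data points are invented for illustration.

    # Fit a least-squares trend to utilization history and project forward.
    # The monthly data points are illustrative.
    def fit_trend(history):
        """Return (slope, intercept) for a list of (month, percent) pairs."""
        n = len(history)
        mean_x = sum(x for x, _ in history) / n
        mean_y = sum(y for _, y in history) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
                 / sum((x - mean_x) ** 2 for x, _ in history))
        return slope, mean_y - slope * mean_x

    history = [(1, 58.0), (2, 61.5), (3, 64.0), (4, 68.2), (5, 71.0)]
    slope, intercept = fit_trend(history)
    print(f"Projected utilization in month 9: {slope * 9 + intercept:.0f}%")

The prescriptive step Cappuccio anticipates goes one further: instead of only reporting that utilization will hit 84% by month nine, the tool would recommend the capacity order or workload move needed to stay ahead of it.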


8. IT service continuity

Business continuity (BC) and disaster recovery (DR) have typically been approached as two separate and distinct functions -- usually related to two different sets of problems. But the two disciplines are now merging into a single integrated function Cappuccio terms "IT service continuity."

The underlying idea addresses the fundamental goal of both BC and DR: keeping essential services available to users. Service continuity relies on multiple sites and increasing intelligence that can forecast potential disruptions and outages, then move workloads dynamically to other sites. It's a data center strategy already embraced by large organizations such as trading firms, but it should find broader acceptance over the next year and beyond.
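
In simplified form, the service-continuity decision is a scoring problem: when a site's forecast health drops below an acceptable level, move its workloads to the healthiest alternative. The Python sketch below illustrates the shape of that logic; the site names, health scores and cutoff are hypothetical.

    # Move a workload off a site whose forecast health falls below a cutoff.
    # Site names, scores and the cutoff are hypothetical.
    SITES = {"nyc-1": 0.95, "chi-2": 0.88, "dal-3": 0.42}  # forecast health, 0-1
    CUTOFF = 0.60  # below this, a site is considered at risk

    def place(workload, current_site):
        """Return the site the workload should run on, relocating if at risk."""
        if SITES[current_site] >= CUTOFF:
            return current_site  # no disruption forecast; stay put
        # Relocate to the healthiest remaining site
        return max((s for s in SITES if s != current_site), key=SITES.get)

    print(place("trading-app", "dal-3"))  # -> 'nyc-1'

The hard parts in production are the forecast itself and moving state along with the workload, but the decision logic stays this simple in spirit.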


9. Bimodal IT

IT typically struggles with the dual challenges of keeping the shop open (mode 1) and exploring new technologies to enhance the business (mode 2). The two modes don't work well together because the traditional process- and procedure-driven efforts of production IT are easily disrupted by new technologies. These upstarts often carry significant risk for the business -- but taking that risk is critical, because business units will eventually adopt those new technologies anyway.

These two modes of IT can (and should) exist together, Cappuccio said. It's alright to preserve the processes, procedures, and compliance for mode 1 operations and still embrace the agility and experimentation of mode 2 activities. The goal is to run both efforts separately, but don't penalize IT folks for mistakes or failures in mode 2 endeavors.

It's also perfectly acceptable to change modes over time. For example, a new technology might come into the business through experimentation and evaluation, but as it finds acceptance and the business comes to rely on it, IT will adopt more processes and procedures to manage it. The use of public cloud is one common example of this type of transition within IT.


10. Scarcity of IT skills

Finally, Cappuccio cites a lack of IT pros with the skills needed to carry IT and the business forward. Factors such as increased IT complexity, greater support demands, shorter development times, shrinking budgets and growing end-user requirements are putting pressure on IT staff.


IT professionals need to do a better job of thinking outside their own silos of expertise and tying different skills together. Cross-training staff and encouraging growth is a good strategy for motivating and retaining IT professionals -- they will be more engaged and stay in their jobs longer.

