The future
of data centers will depend on cloud, hyper-converged infrastructure, and more
powerful components.
A data center is a physical facility that companies use to host their business-critical applications and information. As data centers evolve, it is important to think long term about how to maintain their reliability and security.
Data Center Infrastructure
Data centers are often talked about as if they were a single thing, but in reality they are composed of many technical elements: routers, switches, security appliances, storage systems, servers, application delivery controllers, and more. These are the components IT needs to store and manage the systems most vital to a company's ongoing operations. For this reason, the reliability, efficiency, security, and constant evolution of a data center are typically a top priority.
In addition to technical equipment, a data center also requires a significant amount of facility infrastructure to keep the hardware and software up and running. This includes power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, backup generators, and cabling to connect to outside network operators.
Data Center Architecture
Any business of significant size will likely have multiple data centers, possibly across multiple regions. This gives an organization flexibility in how it backs up its information and protects against natural and man-made disasters such as floods, storms, and terrorist threats. How the data center is architected can be one of the toughest decisions, because the options are almost limitless. Some of the main considerations are:
- Does the company require mirrored data centers?
- How much geographic diversity is required?
- What is the acceptable time to recover in the event of an outage?
- How much space is needed for expansion?
- Should the company use its own private data center or a colocation/managed service?
- What are the bandwidth and power requirements?
- Is there a preferred carrier?
- What kind of physical security is required?
The answers to these questions can help determine how many data centers to build and where. For example, a financial services firm in Manhattan likely requires continuous operations, since any outage could cost millions. The company might decide to build two nearby data centers, say in New Jersey and Connecticut, that are mirror sites of each other. An entire data center could then be shut down with no loss of operations, because the whole enterprise could run on just one of them.
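To make the mirrored-site idea concrete, here is a minimal Python sketch of how a health check might steer traffic between two mirrored sites. The hostnames and health endpoints are hypothetical, and a real deployment would rely on DNS failover or global load balancers rather than application code like this.

```python
import urllib.request

# Hypothetical health-check endpoints for two mirrored data centers.
SITES = [
    "https://dc-newjersey.example.com/health",
    "https://dc-connecticut.example.com/health",
]

def pick_active_site(sites):
    """Return the first site that answers its health check."""
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # site unreachable; fail over to the mirror
    raise RuntimeError("no data center is reachable")

print("routing traffic to:", pick_active_site(SITES))
```

Because either site can serve the full workload, losing one of them simply means every probe resolves to the survivor.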
A small professional services firm, however, may not need instant access to its information; it might keep a primary data center in its offices and back the information up each night to an alternate site across the country. In the event of an outage, it would start a process to retrieve the information, but it would not face the same urgency as a company that relies on real-time data for competitive advantage.
While data centers are often associated with large enterprises and web-scale cloud providers, in fact any business can have a data center. For some small and medium-sized businesses, the data center may simply be a room in the office.
Industry Standards for Data Centers
To help IT leaders understand what type of infrastructure to deploy, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standards for data centers in 2005, defining four distinct tiers with design and implementation guidelines. A Tier 1 data center is basically a modified server room, whereas a Tier 4 data center offers the highest levels of system reliability and security. A complete description of each tier can be found on the TIA-942.org website (http://www.tia-942.org/content/162/289/About_Data_Centers).
As with all technology, data centers are undergoing a significant transition, and tomorrow's data center will look very different from the one most organizations are familiar with today.
Enterprises are becoming increasingly dynamic and distributed, which means the technology that powers data centers needs to be agile and scalable. As server virtualization grows in popularity, the amount of traffic moving laterally across the data center (East-West) is coming to dwarf the traditional client-server traffic that moves in and out (North-South).
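As a rough illustration of the distinction, the Python sketch below classifies a flow by whether both endpoints sit inside the data center's address space. The internal subnet is an assumption chosen purely for the example.

```python
import ipaddress

# Assumed internal address space for the data center.
DATA_CENTER_NET = ipaddress.ip_network("10.0.0.0/8")

def classify_flow(src_ip: str, dst_ip: str) -> str:
    """East-west if both endpoints are inside the data center,
    north-south if the flow crosses its boundary."""
    src_in = ipaddress.ip_address(src_ip) in DATA_CENTER_NET
    dst_in = ipaddress.ip_address(dst_ip) in DATA_CENTER_NET
    return "east-west" if src_in and dst_in else "north-south"

print(classify_flow("10.1.2.3", "10.4.5.6"))     # east-west: server to server
print(classify_flow("10.1.2.3", "203.0.113.9"))  # north-south: client traffic
```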
This shift is wreaking havoc on data center managers as they try to meet the demands of this era of IT. But as the Bachman Turner Overdrive song goes, "B-b-b-baby, you ain't seen nothing yet."
These are the key technologies that will evolve data centers from the static, rigid environments that hold companies back into the fluid, agile facilities that can meet the demands of a digital business.
Clouds Expand Data Centers
Historically, companies could build their own data center or use a hosting provider or managed service partner. Those options shifted the ownership and economics of running a data center, but the long lead times needed to deploy and manage technology remained. The rise of Infrastructure as a Service (IaaS) from cloud providers such as Amazon Web Services and Microsoft Azure gives companies the option to provision a virtual data center in the cloud with just a few mouse clicks. ZK Research data shows that more than 80% of companies are planning hybrid environments, meaning a mix of private data centers and public clouds.
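To give a sense of how little friction IaaS provisioning involves, here is a minimal Python sketch using AWS's boto3 SDK to launch a single virtual server. The AMI ID and region are placeholders, and a real virtual data center would also define networks, storage, and security groups.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Placeholder region; substitute values appropriate for your account.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", response["Instances"][0]["InstanceId"])
```

A few lines like these replace what was once weeks of procurement and racking, which is exactly the economic shift the hybrid-cloud numbers reflect.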
Software-defined networking (SDN)
A digital business can only be as agile as its least agile component, and that is often the network. SDN can provide a level of dynamism never before possible.
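To give a flavor of that dynamism, the sketch below pushes a forwarding rule to an SDN controller over a REST API. The controller URL, endpoint path, and rule schema are all hypothetical; real controllers such as OpenDaylight or ONOS each define their own APIs.

```python
import json
import urllib.request

# Hypothetical controller endpoint and flow-rule schema.
CONTROLLER_URL = "http://sdn-controller.example.com:8181/flows"

rule = {
    "switch": "leaf-01",
    "match": {"dst_ip": "10.2.0.0/16"},
    "action": {"forward_to_port": 7},
    "priority": 100,
}

req = urllib.request.Request(
    CONTROLLER_URL,
    data=json.dumps(rule).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print("controller replied:", resp.status)
```

The point is that the network's behavior changes through a software call, not a truck roll or a box-by-box reconfiguration.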
Hyperconverged Infrastructure (HCI)
One of the operational challenges of data centers is assembling the right combination of servers, storage, and networking to support demanding applications. Then, once the infrastructure is deployed, IT operations must figure out how to scale it quickly without interrupting the application. HCI simplifies that by providing an easy-to-deploy appliance, based on commodity hardware, that scales by adding more nodes to the deployment. Early use cases for HCI revolved around desktop virtualization, but it has recently expanded to other business applications such as unified communications and databases.
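The scale-by-adding-nodes model can be sketched in a few lines of Python; the per-node capacity figures below are invented purely for illustration, and real HCI nodes vary by vendor.

```python
from dataclasses import dataclass

@dataclass
class Node:
    # Illustrative per-node capacities, not any vendor's real spec.
    cpu_cores: int = 32
    ram_gb: int = 512
    storage_tb: int = 20

class HciCluster:
    """A cluster whose capacity grows linearly as nodes are added."""
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)  # scaling out = appending a node

    def total_storage_tb(self) -> int:
        return sum(n.storage_tb for n in self.nodes)

cluster = HciCluster()
for _ in range(3):
    cluster.add_node(Node())
print(cluster.total_storage_tb(), "TB raw, before replication overhead")
```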
Containers
Application development is often slowed by the time it takes to provision the infrastructure it runs on, which can significantly hamper an organization's ability to move to a DevOps model. Containers are a way to virtualize an entire runtime environment, allowing developers to run an application and its dependencies in a self-contained unit. Containers are very lightweight and can be created and destroyed quickly, making them ideal for testing how applications behave under certain conditions.
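As a sketch of that create-and-destroy cycle, the Python below shells out to the Docker CLI to run a short-lived container that cleans itself up on exit. It assumes Docker is installed, and the python:3.12-slim image is just an illustrative choice.

```python
import subprocess

# Run a throwaway container; --rm destroys it as soon as the command exits.
result = subprocess.run(
    ["docker", "run", "--rm", "python:3.12-slim",
     "python", "-c", "print('hello from an ephemeral container')"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```

The whole lifecycle, from creation to teardown, takes seconds, which is what makes containers so well suited to repeatable test runs.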
Microsegmentation
Traditional data centers put all their security technology at the core, so as traffic moves north-south it passes through the security tools and the business is protected. The rise of east-west traffic means traffic bypasses firewalls, intrusion prevention systems, and other security systems, allowing malware to spread very quickly. Microsegmentation is a method of creating secure zones in a data center where resources can be isolated from one another, so that if a breach occurs the damage is minimized. Microsegmentation is typically done in software, which makes it very agile.
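A minimal sketch of the idea, assuming zones and an allowlist defined purely for illustration: traffic is permitted only if an explicit rule allows the source zone to reach the destination zone on a given port, so a compromised workload cannot roam freely east-west.

```python
# Illustrative zones and allowlist; real microsegmentation platforms
# express these policies in their own software-defined terms.
ALLOWED_FLOWS = {
    ("web", "app", 8080),  # web tier may call the app tier
    ("app", "db", 5432),   # app tier may query the database
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: anything not explicitly allowed is blocked."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(is_allowed("web", "app", 8080))  # True
print(is_allowed("web", "db", 5432))   # False: web breach can't reach db
```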
Non-Volatile Memory Express (NVMe)
Everything is faster in an increasingly digitized world, which means data needs to move faster. Traditional storage protocols such as Small Computer System Interface (SCSI) and Advanced Technology Attachment (ATA) have been around for decades and are reaching their limits. NVMe is a storage protocol designed to accelerate the transfer of information between systems and solid-state drives, greatly improving data transfer rates.
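One way to see what "faster data transfer" means in practice is to measure it. The Python sketch below times sequential reads from a file and reports throughput; point it at files on a SATA drive versus an NVMe drive (the path is a placeholder) and the difference shows up directly, though note that OS caching can skew a naive test like this.

```python
import time

def read_throughput_mb_s(path: str, chunk_mb: int = 4) -> float:
    """Sequentially read a file and return throughput in MB/s."""
    chunk = chunk_mb * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Placeholder path; compare a file on SATA storage vs. an NVMe drive.
print(f"{read_throughput_mb_s('/data/testfile.bin'):.0f} MB/s")
```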
GPU (Graphics Processing Unit) Computing
Central processing units (CPUs) have powered data center infrastructure for decades, but Moore's Law is running up against physical limits. New workloads such as analytics, machine learning, and IoT are also driving the need for a new computing model that exceeds what CPUs can do. GPUs, once used only for gaming, work fundamentally differently: they can process many threads in parallel, which makes them ideal for the data center of the not-too-distant future.
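The parallel style of computation GPUs favor can be hinted at even on a CPU: the NumPy sketch below replaces an element-by-element Python loop with a single whole-array operation, the same data-parallel pattern that GPU frameworks (CUDA, or GPU-backed libraries such as CuPy) spread across thousands of threads.

```python
import time
import numpy as np

x = np.random.rand(5_000_000)

# Serial style: one element at a time, like a single CPU thread.
start = time.perf_counter()
serial = [v * 2.0 + 1.0 for v in x]
t_serial = time.perf_counter() - start

# Data-parallel style: one operation over the whole array at once,
# the pattern GPUs accelerate across thousands of threads.
start = time.perf_counter()
parallel = x * 2.0 + 1.0
t_parallel = time.perf_counter() - start

print(f"loop: {t_serial:.2f}s, vectorized: {t_parallel:.3f}s")
```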
Data centers have always been critical to the success of businesses of almost all sizes, and that will not change. However, the number of ways to implement a data center and the technologies that enable it are undergoing a radical shift. To help build a roadmap to the data center of the future, remember that the world is becoming increasingly dynamic and distributed. The technologies that accelerate that shift are the ones that will be needed; those that do not will probably stick around for a while but will become less and less important.