Tuesday 22 October 2019

What Role Does the Network Play in the Data Center?


Data centers are generally where critical business applications reside and where critical business logic runs, for both internal and external consumers. Many levels of communication must happen both inside data centers and between them and the outside world. Ensuring that these communications are carried out reliably, efficiently, and securely is a fundamental role of the network that ties all of these components together.

Let's take a look at the simplified figure below. Each block depends on multiple other blocks, and these dependencies establish the workload patterns the network must carry. Together, the modules perform specific business functions, which can generate workloads such as:

  • Running complex, business-critical applications across multiple tiers and locations
  • Load sharing and application clustering across different geographies
  • Cloud computing: automation and orchestration workloads
  • Disaster recovery and business continuity (DR/BC): availability workloads
  • Data replication and backup workloads
  • Security and policy enforcement
  • Development and test workloads
  • Daily maintenance
  • Workload management

The one thing all of these functions have in common is the network and its ability to tie these components together!

[Figure: simplified block diagram of data center components and their dependencies]

It is now more essential than ever to have an intelligent, reliable, and functional network that provides next-generation innovations, so that companies can evolve from a traditional network to a "cloud-enabled" network. What is a "cloud-enabled" network? A network that is VM-aware; a network that can grow and shrink with consumer demand; a network that can recalculate routes dynamically during failures; a network that can guarantee different classes of service based on predefined parameters and postures; a network that ensures there are no blocking paths; a network that can track moving workloads and react accordingly (VM mobility); and we could go on and on. Simply put, networks are becoming programmable (via APIs) and flexible enough to accommodate the changing application paradigms required by the various cloud models.
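To make "programmable (via APIs)" concrete, here is a minimal sketch of what provisioning a VLAN through a network controller's REST API might look like. The endpoint URL, payload fields, and token are hypothetical placeholders, not any specific vendor's API:

```python
import json
import urllib.request

# Hypothetical controller endpoint and token -- illustrative only.
CONTROLLER = "https://sdn-controller.example.com/api/v1/vlans"
TOKEN = "replace-with-a-real-token"

def provision_vlan(vlan_id: int, name: str) -> int:
    """Ask the controller to create a VLAN; returns the HTTP status code."""
    payload = json.dumps({"id": vlan_id, "name": name}).encode()
    req = urllib.request.Request(
        CONTROLLER,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Grow the network for a new tenant without touching a switch CLI:
# provision_vlan(210, "tenant-blue")
```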

There are many network-based innovations that have been widely discussed at Cisco and in other forums, such as Virtual Port Channels (vPC), Overlay Transport Virtualization (OTV), Locator/ID Separation Protocol (LISP), FabricPath, Fibre Channel over Ethernet (FCoE), Virtual Security Gateway (VSG), etc. These innovations, combined with next-generation hardware/software such as the Cisco Nexus product line, help create a path toward a unified fabric, network, and compute approach to cloud computing. This is further proof that we are addressing business and technical challenges with smarter networking tools. I am not saying this level of network intelligence is required in every scenario, but driven by business and technology requirements, next-generation data center networks are making application-aware decisions they never had to make before!

For any given data center, its capabilities are finite. There is always an exhaustible resource to start with; normally it is the facilities: power, rack space, available ports, and so on. It could also be other physical assets within the data center, such as network, compute, or storage. Since we are talking about networks, let's accept that even network resources are finite from various perspectives, for example scale: the number of MAC addresses, VLANs, Layer 3 peers, performance, and oversubscription ratios, to name a few. I will cover some of these aspects of the network in a future post on data center consolidation and migration planning.
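As a worked example of one of these finite resources, the sketch below computes an oversubscription ratio for a hypothetical leaf switch; the port counts and speeds are made up for illustration:

```python
def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf switch: 48 x 10G server ports, 4 x 40G uplinks.
print(f"{oversubscription_ratio(48, 10, 4, 40):.1f}:1")  # 3.0:1
```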

For now, the next time someone claims that networks do not play an important role in cloud computing, you will have something to say about it!

This article was originally published on ------- Read More

NordVPN cuts ties with Data Center after Security Breach


NordVPN terminated its contract with a Finnish data center provider following a breach last year.

After the breach came to light in a nasty Twitter fight on October 20, the company has only now publicly admitted that it was the victim of an attack.

NordVPN boasted: "No hacker can steal your life online. (If you use VPN)."

In response, hacker group KekSec revealed that another group had broken into NordVPN and attached links as evidence.

NordVPN has since deleted its tweets, but the company says it had known about the hack for months without releasing the information.

A spokesperson said in a statement on the company's website: "We did not disclose the exploit immediately because we had to ensure that none of our infrastructure could be prone to similar problems.

"This could not be done quickly due to the large number of servers and the complexity of our infrastructure."

Blame Game

The data center provider that operates the allegedly compromised facility is Finland's Oy Creanova Hosting Solutions Ltd, reports Bloomberg.

The hackers got in through a poorly protected remote management system built into an unidentified server in Creanova's data center in Helsinki. The attack occurred in March, and an expired private key (TLS key) was taken.

Because the TLS key was stolen, there is a fear that it could be used to create spoofed NordVPN servers and collect personal information from incoming traffic.
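One standard client-side defense against spoofed servers is certificate pinning: refusing to talk to any server whose certificate does not match a known-good fingerprint. The sketch below illustrates the idea using only Python's standard library; the hostname and pinned value are placeholders, and this is not a description of NordVPN's actual mitigation:

```python
import hashlib
import socket
import ssl

# Placeholders -- a real client ships the known-good fingerprint.
HOST = "vpn-server.example.com"
PINNED_SHA256 = "0" * 64

def leaf_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's leaf certificate and return its SHA-256 hex digest."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

if leaf_cert_fingerprint(HOST) != PINNED_SHA256:
    raise SystemExit("Certificate mismatch: possible spoofed server")
```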

Nord blames Creanova for sloppy security and says it was unaware of accounts linked to the remote management system, but the data center provider says Nord is just trying to shift responsibility.

NordVPN claims that Creanova noticed the activity but did not inform Nord; instead, its own technicians discovered an open account and unauthorized use of its server "a few months ago," which led to a company-wide audit of its entire network of thousands of servers.

DCD contacted Creanova for further comment, but the company had not responded at the time of publication.

The company's Terms and Conditions page states: "Oy Crea Nova Hosting Solutions LTD cannot maintain the server if the client has exclusive administrator rights. Therefore, the client has sole responsibility for the content and security of the server. The client assumes the obligation to configure and maintain their servers so that the security, integrity, and availability of the networks, other servers, and the third-party software and data of Oy Crea Nova Hosting Solutions LTD are not endangered.

"It is your obligation to install security software, regularly obtain information about known security holes, and close known security holes. If Oy Crea Nova Hosting Solutions LTD provides security or maintenance programs, this will not relieve you of your obligation."

Lesson Learned?

NordVPN has 12 million users worldwide, but the company estimates that only 50 to 200 clients used the breached server.

The company says it is not underestimating the security threat.

In its statement, it said it had learned some hard lessons: "Although only 1 of the more than 3,000 servers we had at the time was affected, we are not trying to undermine the severity of the problem.

"We failed by hiring an unreliable server provider and should have done better to ensure the security of our clients.

"We are taking all the necessary measures to improve our security."

This includes a security audit and an independent external audit of its infrastructure next year.

This article was originally published on ------- Read More

Monday 21 October 2019

How are data centers connected to each other?


The purpose of modern data center networks is to accommodate multiple data center tenants with a variety of workloads. In such a network, the servers are the components that provide the requested services to users (and to the programs that work on their behalf).

The simplest network services may be responses to API function calls. Servers can also deliver applications to users/clients through web protocols, language platforms, or virtual machines that provide users with full desktops.

Inside Data Center Networking:

Today, few business workloads, and progressively fewer consumer and entertainment workloads, run on individual computers; hence the need for data center networks. Networks provide servers, clients, applications, and middleware with a common map with which to organize the execution of workloads, and with which to manage access to the data those workloads produce.

The coordinated work between servers and clients is the workflow, and it is this workflow that requires a network between the data center's resources. Data is exchanged between servers and clients, and in modern data centers there is no central supervisor of those exchanges.

A conventional data center network comprises:
  • servers that run workloads and respond to client requests;
  • switches that connect devices to one another;
  • routers that perform packet-forwarding functions;
  • controllers that manage the workflow between network devices;
  • gateways that serve as junctions between the data center network and the wider Internet;
  • clients that act as consumers of the information in data packets.

The resources in the network share a common mapping system based on network technologies or standards. For modern networks, this shared map is usually based on Internet Protocol (IP), Ethernet, and related technologies. Layer 3 IP addresses (IP routing) are designed to give intermediate forwarding agents in the network, called routers, clues about the general direction in which to move packets of data. Using the Transmission Control Protocol (TCP/IP), routers pass data packets to one another, hop by hop, in a literal relay effort.
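The "clues" a router acts on boil down to longest-prefix matching: of all the routes that contain the destination address, the most specific one wins. Here is a minimal sketch using Python's standard ipaddress module, with a made-up forwarding table:

```python
import ipaddress

# Made-up forwarding table: prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "core-1",
    ipaddress.ip_network("10.1.0.0/16"): "leaf-12",
    ipaddress.ip_network("0.0.0.0/0"): "gateway",  # default route
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific containing route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))   # leaf-12 (the /16 beats the /8)
print(next_hop("192.0.2.7"))  # gateway (only the default route matches)
```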

Another common data center technology is Ethernet, which connects devices using media access control (MAC) addresses. To overcome the limitations of these basic network technologies, many additional network protocols have been developed, including VXLAN and OpenFlow, some of which run as an "overlay" on top of the basic network infrastructure.
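To give a sense of how thin such an overlay is: a VXLAN packet is just the original Ethernet frame wrapped in UDP (port 4789) behind an 8-byte header that carries a 24-bit network identifier (VNI). A minimal sketch of building that header as laid out in RFC 7348:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

print(vxlan_header(5001).hex())  # 0800000000138900
```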

These components form the data center network infrastructure. As the infrastructure evolves, none of these component functions needs to be fulfilled by an independent physical device any longer: virtualization allows software to play the role of any or all of these components.



Software-Defined Data Center Networking

In a software-defined network (SDN), the dynamics of data center workflows change to accommodate variable workloads more effectively and efficiently. Specifically, the workflow is divided into two categories: the content of the documents or media used by clients (the data plane) and the instructions for how the network should accommodate this data (the control plane). In this way, an SDN controller can make radical adjustments to how the data plane is allocated, even while a workflow is running, without compromising the control plane and the connections that link the components of the network.
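A toy model of that split: the switch (data plane) forwards purely according to the rules it holds, while the controller (control plane) installs and rewrites those rules on the fly. This is an illustrative sketch, not any particular SDN product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    """Data plane: forwards packets purely according to installed rules."""
    flow_table: dict = field(default_factory=dict)  # destination -> output port

    def forward(self, dst: str) -> str:
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: decides paths and programs the switches."""
    def install_route(self, switch: Switch, dst: str, port: str) -> None:
        switch.flow_table[dst] = port  # reprogram the data plane mid-flight

ctrl, sw = Controller(), Switch()
print(sw.forward("10.1.2.3"))                 # punt-to-controller (no rule yet)
ctrl.install_route(sw, "10.1.2.3", "port-7")
print(sw.forward("10.1.2.3"))                 # port-7
```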

A data center today is less subject to physical and geographical restrictions than ever. Technically, a data center is the collection of components that share a common map of IP addresses with each other, and that can be (although not necessarily) linked by a common domain. To the extent that the bandwidth of the underlying infrastructure allows it, a single data center can cover the entire world.

In conventional use, however, companies and public institutions continue to perceive their data centers as the collection of servers operating on premises they own or rent. Yet even this interpretation is being worn away by new realities, the most prominent of which is the availability of cloud-based infrastructure and platforms made available to companies "as a service," sold by subscription or on a pay-per-use basis.

How the Cloud Remaps Data Centers:

The cloud has evolved to mean the use of network virtualization to separate physical processors from the services they provide. This may not sound much like the colloquial "the cloud," the term consumers use for the indeterminate storage space that holds their synchronized documents. However, cloud data centers as consumers perceive them were made possible by virtualization.

For example, multi-volume distributed file systems spanning a variety of domains are products of virtualized components that separate addressable files from physical file systems. In large data center networks, SDN controllers are responsible for managing these components; in smaller, though still fairly large, enterprise networks, virtual network overlays maintained by workload orchestrators enable file system clustering.

As data center networks become increasingly disaggregated, the notion of a "center" becomes almost entirely abstract. Rather than the place where assets are managed and operated, a data center network may now be nothing more concrete than a collection of information technology resources that are accessible to one another, resources a business owns, leases, or subscribes to.


This article was originally published on ------- Read More

Wednesday 9 October 2019

What are data centers? How they work and Their future in the cloud

       The future of data centers will depend on the cloud, hyper-converged infrastructure, and more powerful components.
       A data center is a physical facility that companies use to host their business-critical applications and information, so as data centers evolve, it is important to think long-term about how to maintain their reliability and security.

Data Center Infrastructure

      Data centers are often referred to as a single thing, but in reality they are composed of many technical elements: routers, switches, security devices, storage systems, servers, application delivery controllers, and more. These are the components IT needs to store and manage the most critical systems that are vital to a company's ongoing operations. For this reason, the reliability, efficiency, security, and constant evolution of a data center are typically a top priority.

     In addition to technical equipment, a data center also requires a significant amount of facility infrastructure to keep hardware and software up and running. These include power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, backup generators, and cables for connecting to outside network operators.

Data Center Architecture

      Any business of significant size will likely have multiple data centers, possibly across multiple regions. This gives an organization flexibility in how it backs up its information and protects against natural and man-made disasters such as floods, storms, and terrorist threats. How the data center is architected can be one of the toughest decisions because there are almost limitless options. Some of the main considerations are:
  • Does the company require mirrored data centers?
  • How much geographic diversity is required?
  • What is the time required to recover in the event of an outage?
  • How much space is needed for expansion?
  • Should you rent a private data center or use a co-location / managed service?
  • What are bandwidth and power requirements?
  • Is there a preferred carrier?
  • What kind of physical security is required?

      The answers to these questions can help determine how many data centers to build and where. For example, a financial services company in Manhattan likely requires continuous operations, as any outage can cost millions. The company will likely decide to build two nearby data centers, such as in New Jersey and Connecticut, that are mirror sites of each other. An entire data center could then be shut down without loss of operations, because the entire enterprise could run on just one of them.
However, a small professional services company may not need instant access to information and can have a primary data center in its offices and back up its information every night to an alternate site across the country. In the event of an outage, it would start a process to retrieve the information, but it would not have the same urgency as a company that relies on real-time data for competitive advantage.

      While data centers are often associated with companies and web-scale cloud providers, in fact any business can have a data center. For some small and medium businesses, the data center may be a room located in the office space.

Industry Standards for Data Centers

      To help IT leaders understand what type of infrastructure to deploy, in 2005 the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standards for data centers, which defined four distinct tiers with design and implementation guidelines. A Tier 1 data center is basically a modified server room, while a Tier 4 data center has the highest levels of system reliability and security. A complete description of each tier can be found here (http://www.tia-942.org/content/162/289/About_Data_Centers) on the TIA-942.org website.

      As with all technology, data centers are going through a significant transition, and tomorrow's data center will look significantly different from what most organizations are familiar with today.

      Enterprises are becoming increasingly dynamic and distributed, which means the technology that powers data centers needs to be agile and scalable. As server virtualization has grown in popularity, the amount of traffic moving laterally across the data center (East-West) has come to dwarf the traditional client-server traffic that moves in and out (North-South).

      This is wreaking havoc on data center managers as they try to meet the demands of this IT era. But as the Bachman-Turner Overdrive song says, "B-b-b-baby, you ain't seen nothin' yet."

       These are the key technologies that will evolve data centers from the static, rigid environments that hold companies back into the agile, fluid facilities that can meet the demands of a digital business.

Clouds Expand Data Centers:

       Historically, companies could build their own data center, use a hosting provider, or engage a managed service partner. This shifted the ownership and economics of running a data center, but the long lead times needed to deploy and manage technology remained. The rise of Infrastructure as a Service (IaaS) from cloud providers such as Amazon Web Services and Microsoft Azure gives companies the option to provision a virtual data center in the cloud with just a few mouse clicks. ZK Research data shows that more than 80% of companies are planning hybrid environments, meaning the use of both private data centers and public clouds.
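Those "few mouse clicks" can just as easily be a few lines of code. Here is a minimal sketch using AWS's boto3 SDK to launch a virtual server; the AMI ID and instance type are placeholders, and it assumes boto3 is installed and credentials are configured:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values -- substitute a real AMI ID and the size you need.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```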

Software-defined networking (SDN)

       A digital business can only be as agile as its least agile component, and that is often the network. SDN can provide a level of dynamism never before experienced. (Here is a deeper dive into SDN.)

Hyper converged infrastructure (HCI)

      One of the operational challenges of data centers is having to assemble the right combination of servers, storage, and networks to support demanding applications. Then, once the infrastructure is deployed, IT operations must figure out how to scale up quickly without interrupting the application. HCI simplifies that by providing an easy-to-deploy appliance, based on commodity hardware, that can scale out by adding more nodes to the deployment. Early use cases for HCI revolved around desktop virtualization, but it has recently expanded to other business applications, such as unified communications and databases.

Containers

      Application development is often slowed by the time it takes to provision the infrastructure it runs on. This can significantly hamper an organization's ability to move to a DevOps model. Containers are a method of virtualizing an entire runtime environment, allowing developers to run an application and its dependencies in a self-contained system. Containers are very lightweight and can be created and destroyed quickly, so they are ideal for testing how applications run under certain conditions.
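That create-and-destroy speed is easy to see from a script. The sketch below runs a throwaway container through the Docker CLI, assuming Docker is installed; the image and command are illustrative:

```python
import subprocess

# --rm destroys the container the moment its process exits.
result = subprocess.run(
    ["docker", "run", "--rm", "python:3.12-slim",
     "python", "-c", "print('app plus dependencies, run in isolation')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")
```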

Micro Segmentation

     Traditional data centers have all their security technology at the core, so as traffic moves north-south, it passes through the security tools and the business is protected. The rise of east-west traffic means traffic bypasses firewalls, intrusion prevention systems, and other security systems, allowing malware to spread very quickly. Microsegmentation is a method of creating secure zones in a data center where resources can be isolated from one another, so that if a breach occurs, the damage is minimized. Microsegmentation is typically done in software, which makes it very agile.
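Conceptually, microsegmentation is a default-deny policy evaluated between workload zones rather than only at the perimeter. A toy sketch with made-up zones and ports:

```python
# Only the pairs listed here may talk; everything else is denied.
ALLOWED = {
    ("web", "app", 8080),
    ("app", "db", 5432),
}

def permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny check between workload zones inside the data center."""
    return (src_zone, dst_zone, port) in ALLOWED

print(permitted("web", "app", 8080))  # True
print(permitted("web", "db", 5432))   # False: a compromised web tier
                                      # cannot reach the database directly
```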

Non-Volatile memory express (NVMe)

      Everything is faster in an increasingly digitized world, which means data needs to move faster. Traditional storage protocols, such as the Small Computer System Interface (SCSI) and Advanced Technology Attachment (ATA), have been around for decades and are reaching their limits. NVMe is a storage protocol designed to accelerate the transfer of information between systems and solid-state drives, greatly improving data transfer rates.
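Some back-of-the-envelope arithmetic shows why this matters. The throughput figures below are representative ballpark numbers, not benchmarks of any specific drive:

```python
# Rough sequential throughput, in MB/s (ballpark figures for illustration).
DRIVES = {
    "SATA SSD (AHCI)": 550,          # near the SATA III ceiling
    "NVMe SSD (PCIe 3.0 x4)": 3200,
}

payload_bytes = 100 * 2**30  # 100 GiB
for name, mb_per_s in DRIVES.items():
    seconds = payload_bytes / (mb_per_s * 10**6)
    print(f"{name}: about {seconds:.0f} s to move 100 GiB")
```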

GPU (Graphics Processing Units) Computing

       Central processing units (CPUs) have powered data center infrastructure for decades, but Moore's Law is running into physical limitations. In addition, new workloads such as analytics, machine learning, and IoT are driving the need for a new computing model that exceeds what CPUs can do. GPUs, once used only for games, work fundamentally differently, as they are able to process many threads in parallel, making them ideal for the data center of the not-too-distant future.

        Data centers have always been critical to the success of companies of almost all sizes, and that will not change. However, the number of ways to implement a data center, and the enabling technologies, are undergoing radical change. To help build a roadmap to the data center of the future, remember that the world is becoming increasingly dynamic and distributed. The technologies that accelerate that change are the ones that will be needed. Those that do not will likely stick around for a while, but they will become less and less important.

This article was originally published on.....Read More