
BITS Pilani WILP
Pilani Campus

CSIWZG522 Design and Operations of Data Center

CS 06
Books

Text Book(s)
T1: Building a Modern Data Center: Principles and Strategies of Design,
    by Scott D. Lowe, David M. Davis, James Green (Authors),
    Seth Knox (Editor), Stuart Miniman (Foreword)

Reference Book(s) & other resources
R1: Data Center for Beginners: A Beginner's Guide Towards Understanding
    Data Center Design, by B.A. Ayomaya (Author)



Slide references

Building a Modern Data Center: Principles and Strategies of Design,
by Scott D. Lowe, David M. Davis, James Green (Authors),
Seth Knox (Editor), Stuart Miniman (Foreword)


Lecture Plan
Contact Hours 11 & 12

Pre CH:     R.L-3.4 Deployment and consolidation of servers;
            R.L-3.5 Performance of servers
During CH:  CH3 (Reference: T1, Ch. 3): The Parallel Paths of SDS and
            Hyperconvergence; The Details of SDS; What Is Hyperconverged
            Infrastructure?; The Relationship Between SDS and HCI;
            The Role of Flash in Hyperconvergence; Where Are We Now?


The Parallel Paths of SDS and Hyperconvergence

• It was mentioned that software defined storage (SDS) is one of the primary
  enablers of hardware commoditization in the data center.
• By allowing commodity storage to be pooled across commodity servers while
  providing enterprise-class storage services, SDS also opens the door to a
  new data center architecture altogether.
• This data center philosophy, which was mentioned in Chapter 2 of T1, is
  called hyperconvergence.
• It’s the evolution of converged infrastructure, in which many disparate
  solutions are connected at the factory and sold as one package.


The Parallel Paths of SDS and Hyperconvergence

• In hyperconvergence, the services those disparate solutions provided
  actually become one solution.
• That one solution provides compute virtualization, networking, storage,
  data services, and so on.
• It’s really many different layers of the SDDC that make hyperconvergence
  possible.
• To really understand hyperconvergence, a deeper level of understanding of
  software defined storage is especially critical.
• Without SDS, the flexibility that makes hyperconvergence what it is would
  be impossible.


Traditional Storage (figure)

Software Defined Storage (figure)

Traditional Storage vs. Software Defined Storage (figure)


The Details of SDS

• Software defined storage is a really tricky subject to nail down and understand.
• This is because, similar to Cloud and DevOps, software defined is a philosophy
  and not any one specific thing.
• Thus, the categorization of what is and is not SDS can be a bit challenging.
• There are a few broad features that characterize SDS which should apply to
  any solution or technology purporting to be SDS:
   • It should provide abstraction from the underlying physical hardware.
   • It should apply services and protection to data based on policy.
   • It should be accessible and programmable via standard interfaces.
   • It should have the ability to scale as the business requires.


Abstraction

• First and foremost, SDS is an abstraction from the physical storage that
is being managed.
• It includes a type of storage virtualization akin to the way compute
virtualization makes virtual machines independent of the underlying
physical hardware.
• This is very important, because the strength of SDS is its flexibility.
• That flexibility is made possible by abstraction.
• The requirement to provide abstraction does not mean that SDS can’t be
a way of consuming storage from a more traditional, monolithic storage
array.
• SDS is commonly associated with hyperconvergence; however, that’s
only one of many ways that SDS can be leveraged.



Abstraction

• An SDS layer can provide the method for managing, automating, and
scaling an already specialized storage solution.
• That abstraction is typically found in one of two implementation types.
• The first type is a virtual appliance deployed in the infrastructure.
• This virtual appliance contains the software that provides and manages the
SDS platform and abstracts the storage behind it from the workloads in
front of it.
• The other method is kernel-level storage virtualization.
• Rather than running in a virtual machine, the software runs on the hypervisor
  itself to provide the storage features of the SDS platform (a minimal sketch
  of this abstraction layer follows below).
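As a minimal sketch of that abstraction layer, assume a hypothetical Python control plane with made-up names (PhysicalDisk, StorageAbstraction, provision_volume); this is not any vendor's API, only an illustration of workloads consuming pooled capacity while the physical devices stay hidden behind the layer.

```python
# Illustrative sketch only; class and method names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalDisk:
    node: str          # the commodity server hosting the disk
    capacity_gb: int

@dataclass
class StorageAbstraction:
    """Workloads talk to this layer; they never see the disks behind it.
    The same logic could be packaged as a virtual appliance running in a VM
    or as a kernel-level module in the hypervisor."""
    disks: List[PhysicalDisk] = field(default_factory=list)
    volumes: dict = field(default_factory=dict)   # volume name -> size in GB

    def add_disk(self, disk: PhysicalDisk) -> None:
        self.disks.append(disk)                   # hardware changes stay hidden

    def free_gb(self) -> int:
        total = sum(d.capacity_gb for d in self.disks)
        return total - sum(self.volumes.values())

    def provision_volume(self, name: str, size_gb: int) -> None:
        if size_gb > self.free_gb():
            raise ValueError("insufficient pooled capacity")
        self.volumes[name] = size_gb              # consumer sees a volume, not disks

sds = StorageAbstraction()
sds.add_disk(PhysicalDisk("node1", 1000))
sds.add_disk(PhysicalDisk("node2", 1000))
sds.provision_volume("vm-datastore-01", 500)
print(sds.free_gb())   # 1500
```

Whether this logic runs as a virtual appliance in a VM or inside the hypervisor kernel is a packaging decision; the interface the workloads consume is the same either way.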



Policy-Based

• The application of policy rather than specific settings reduces administrative
  burden, eliminates opportunity for administrator error, and introduces a
  method of ensuring consistency over time in the environment.
• In an SDS environment, policy may dictate any number of settings related to
  the storage devices themselves or how the workloads are placed, protected,
  or served.
• A practical example of policy-based management may be a policy that applies
  to a virtual machine (see the sketch below).
• The policy could mandate that the virtual machine data is striped across a
  specific number of disks or nodes.
• It could also say that the virtual machine is snapshotted every 6 hours and
  that snapshots are kept for 3 days onsite and are replicated offsite to keep
  for 7 days.
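Written as a policy object, that intent might look like the hedged Python sketch below; the field names (stripe_width, snapshot_interval_hours, and so on) are illustrative assumptions, not the syntax of any real SDS product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VmStoragePolicy:
    """Declarative intent for a class of virtual machines; the platform,
    not the administrator, is responsible for enforcing it."""
    stripe_width: int = 3              # stripe VM data across this many disks/nodes
    snapshot_interval_hours: int = 6   # snapshot every 6 hours
    local_retention_days: int = 3      # keep snapshots onsite for 3 days
    offsite_retention_days: int = 7    # replicate offsite and keep for 7 days

# Created once; this is the policy described in the bullets above.
gold_policy = VmStoragePolicy()
```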



Policy-Based

• It might say that the workload must reside on Tier 2 storage (the
qualifications for Tier 2 having been previously defined by the administrator).
• Imagine applying these specific settings to one virtual machine a single time.
• The task is not incredibly daunting, given the right software.
• However, imagine applying these same settings to 1,000 virtual machines in
an environment where six new virtual machines are provisioned each week.
• It’s only a matter of time before mistakes are made, and with each new
virtual machine an administrator will burn time setting it up.
• With policy-driven SDS, simply by having applied the policy (created once),
  the virtual machines will be treated exactly as desired, with accuracy and
  consistency over time; the short sketch below illustrates the idea.
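Continuing the hypothetical sketch from the previous slide, applying that single policy to a thousand existing virtual machines, and to every new one provisioned afterwards, is one assignment rather than a thousand manual configurations.

```python
# Hypothetical continuation of the previous slide's sketch; gold_policy stands in
# for the VmStoragePolicy defined there (any object carrying the same intent works).
gold_policy = {"stripe_width": 3, "snapshot_every_hours": 6,
               "retain_local_days": 3, "retain_offsite_days": 7}

inventory = [f"vm-{i:04d}" for i in range(1, 1001)]    # 1,000 existing VMs
assignments = {vm: gold_policy for vm in inventory}     # policy applied uniformly

def provision_vm(name: str) -> None:
    """Six new VMs a week simply inherit the policy; no per-VM setup, no drift."""
    assignments[name] = gold_policy

provision_vm("vm-1001")
print(len(assignments))   # 1001 VMs, all treated identically
```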



Programmability

• Management automation is the hallmark of the SDDC.
• For helpful and approachable automation to take place, the functions of a
  system must be accessible to third parties via the use of APIs.
• An API is a developer-friendly way of exposing resources in such a way that
  another program can query them (get data about a resource) or manipulate
  them (initiate actions or change properties); a hedged example follows below.
• Some examples of API implementations are SOAP, which is becoming less common,
  and REST, which is becoming more common.
• APIs are necessarily present in SDS because the SDDC as a whole uses some
  sort of orchestration engine to make all the pieces work together.
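For instance, a REST-style SDS API might be driven as below. The endpoint paths, fields, and token are invented purely for illustration and belong to no real product, but the pattern of GET to query state and POST to request an action is the general shape such APIs take.

```python
# Hypothetical REST calls against an imaginary SDS endpoint; URLs and JSON
# fields are illustrative assumptions, not a real product's API.
import requests

BASE = "https://sds.example.local/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credential

# Query a resource: how full is a storage pool?
pool = requests.get(f"{BASE}/pools/pool1", headers=HEADERS).json()
print(pool["capacity_gb"], pool["used_gb"])

# Manipulate a resource: ask the platform to snapshot a volume.
resp = requests.post(
    f"{BASE}/volumes/vm-datastore-01/snapshots",
    headers=HEADERS,
    json={"retention_days": 3},
)
print(resp.status_code)   # an orchestration engine would react to this result
```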



Programmability

• That orchestration engine needs a way to interface with each of the individual
  components, and APIs provide that integration point.
• The Programmable Data Center is a subset of the overall software defined data
  center vision, which aims to allow anything and everything to be accessible
  via API.


Scalability

• Finally, SDS is highly scalable in nature.
• This characteristic works in conjunction with the abstraction; in part, it is
  the abstraction that provides the scalability.
• By seamlessly allowing different physical hardware to be added and removed
  underneath the abstraction layer, changes to the scale of the system can be
  completed without the workloads ever being aware.
• This gives organizations leveraging SDS a distinct advantage over the prior
  method of scaling storage.
• Historically, storage was scaled by buying a bigger unit and painfully,
  methodically migrating data to it.
• SDS allows the addition of more storage, or even a shift to a new platform,
  to take place in a way that is totally transparent to the workload (see the
  sketch below).
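A rough sketch of scaling underneath the abstraction, again with hypothetical names: disks join and leave the pool, while the volume a workload uses never changes identity.

```python
# Hypothetical sketch: the pool grows and shrinks; the workload's volume does not.
pool = {
    "disks": {"node1-d0": 1000, "node2-d0": 1000},   # GB of raw capacity
    "volumes": {"vm-datastore-01": 500},             # what workloads actually see
}

def add_disk(disk_id: str, capacity_gb: int) -> None:
    pool["disks"][disk_id] = capacity_gb             # scale out: just add hardware

def retire_disk(disk_id: str) -> None:
    pool["disks"].pop(disk_id)                       # data would be rebalanced first

add_disk("node3-d0", 2000)           # a new node joins the pool
retire_disk("node1-d0")              # old hardware leaves; workloads are unaware
print(sum(pool["disks"].values()))   # 3000 GB raw; the volume is untouched
```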



What Is Hyperconverged Infrastructure?

• Hyperconvergence is an evolution in the data center that’s only just beginning
  to take hold.
• The past couple of years have seen hyperconverged solutions developing at an
  incredibly rapid pace and taking hold in data centers of all sizes.
• Hyperconvergence is a data center architecture, not any one specific product.
• At its core, hyperconvergence is a quest for simplicity and efficiency.
• Every vendor with a hyperconverged platform approaches this slightly
  differently, but the end goal is always the same: combine resources and
  platforms that are currently disparate, wrap a management layer around the
  resulting system, and make it simple.
• Simplicity is, perhaps, the most sought-after factor in systems going into
  data centers today.


Hyper-converged Infrastructure (figure)


What Is Hyperconverged Infrastructure?

• A common misconception is that hyperconvergence means “servers and storage
  in the same box.”
• Pooling locally attached storage is a good example of the power of SDS, which
  itself is a part of hyperconvergence, but it is not the whole picture.
• Hyperconverged infrastructure (HCI) aims to bring as many platforms as
  possible under one umbrella, and storage is just one of them.
• This generally includes compute, networking, storage, and management.
• Hyperconvergence encompasses a good portion of what makes up the SDDC.


Hyperconverged Infrastructure (figure)


One Platform, Many Services

• Convergence, which was discussed in Chapter 2, took many platforms and made
  them one combined solution.
• Hyperconvergence is a further iteration of this mindset, in which the
  manufacturer turns many platforms into one single platform.
• Owning the whole stack allows the hyperconvergence vendor to make components
  of the platform aware of each other and interoperable in a way that is just
  not possible when two different platforms are integrated.
• For instance, the workload optimization engine might be aware of network
  congestion; this allows more intelligent decisions to be made on behalf of
  the administrator.


One Platform, Many Services

• As IT organizations seek to turn over more control to automation by way of
  software, the ability to make intelligent decisions is critical, and tighter
  integration with other parts of the infrastructure makes this possible.
• What characterizes hyperconvergence is the building-block approach to scale
  (sketched below).
• Each of the infrastructure components and services that the hyperconverged
  platform offers is broken up and distributed into nodes or blocks, such that
  the entire infrastructure can be scaled simply by adding a node.
• Each node contains compute, storage, and networking: the essential physical
  components of the data center.
• From there, the hyperconvergence platform pools and abstracts all of those
  resources so that they can be manipulated from the management layer.
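As a rough illustration of that building-block model (the names and node sizes below are assumptions, not a real product's specification), every node added to the cluster grows compute, memory, and storage together in one predictable step.

```python
# Hypothetical sketch of scaling a hyperconverged cluster by adding nodes.
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    cores: int
    ram_gb: int
    storage_tb: float

@dataclass
class Cluster:
    nodes: List[Node]

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)   # one step: compute, memory, and storage all grow

    def totals(self) -> dict:
        return {
            "cores": sum(n.cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = Cluster(nodes=[Node(32, 512, 20.0) for _ in range(3)])
cluster.add_node(Node(32, 512, 20.0))   # scale by one building block
print(cluster.totals())                 # predictable, linear growth in every resource
```
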
Simplicity
• Makers of hyperconverged systems place extreme amounts of focus on making
the platform simple to manage.
• If managing compute, storage, and networking was complicated when they were
  separate, imagine trying to manage them at the same level of complexity when
  they’re all in one system.
• It would be a challenge to say the least.
• This is why the most effective hyperconvergence platforms take great care to
mask back-end complexity with a clean, intuitive user interface or management
plugin for the administrator.
• By nature, hyperconvergence is actually more complex than traditional
architecture in many ways.
• The key difference between the two is the care taken to ensure that the
administrator does not have to deal with that complexity.



Simplicity

• To that end, a task like adding physical resources to the infrastructure is
  generally as simple as sliding the node into place in the chassis and
  notifying the management system that it’s there.
• Discovery will commence, and intelligence built into the system will configure
  the node and integrate it with the existing environment.
• Because the whole platform is working in tandem, other things like protecting
  a workload are as simple as right-clicking and telling the management
  interface to protect it.
• The platform has the intelligence to go and make the necessary changes to
  carry out the request.


Evolution (figure)


Software or Hardware?

• Because hyperconvergence involves both software and the physical resources
  required to power the software, it’s often confusing to administrators who
  are learning about hyperconvergence.
• Is hyperconvergence a special piece of hardware, or is it software that makes
  all the pieces work together? The short answer is that it’s both.
• Depending on the hyperconvergence vendor, the platform may exist entirely in
  software and run on any sort of commodity hardware.
• Or the platform may use specialized hardware to provide the best reliability
  or performance.
• Neither is necessarily better; it’s just important to know the tradeoffs that
  come with each option.

Software or Hardware?
• If special hardware is included, it dramatically limits your choice with
  regard to what equipment can be used to run the platform.
• But it likely increases stability, performance, and capacity on a node (all
  else being equal).
• The opposite view is that leveraging a VSA (virtual storage appliance) and no
  custom hardware opens up the solution to a wide variety of hardware
  possibilities.
• While flexible, the downside of this approach is that it consumes resources
  from the hypervisor which would otherwise have served virtual machine
  workloads in a traditional design.
• This can add up to a considerable amount of overhead.
• Which direction ends up being the best choice depends on myriad variables and
  is unique to each environment.


The Relationship Between SDS and HCI

• It’s important to realize how much software defined storage (SDS) technology
  makes the concept of hyperconverged infrastructure (HCI) possible.
• If SDS didn’t exist to abstract the physical storage resource from the storage
consumer, the options left would be the architectures that have already been
shown to be broken.
• Namely, those architectures are silos of direct attached storage and shared
storage in a monolithic storage array.
• Pooled local storage has advantages over both of those designs, but would
not be possible without the help of SDS which performs the abstraction and
pooling.
• One of the main advantages of pooled local storage is a highlight of the
hyperconvergence model in general: the ability to scale the infrastructure with
building blocks that each deliver predictable capacity and performance.



The Relationship Between SDS and HCI

• Hyperconvergence has SDS to thank for the fact that as this infrastructure
grows over time, the storage provided to workloads is a single distributed
system (an aggregation of local storage) as opposed to an ever-growing stack
of storage silos.
• Most hyperconverged platforms offer the ability to apply data protection and
performance policies at a virtual machine granularity.
• This capability is also a function of the SDS component of the hyperconverged
system.
• Policy from the management engine interacts with the SDS interface to apply
specific changes to only the correct data.
• This granularity, again, would not be possible without software defined storage.



The Role of Flash in Hyperconvergence

• There are many things that go into making a hyperconverged model successful,
but one component that hyperconvergence absolutely could not be successful
without is flash storage.
• The performance capabilities of modern flash storage are the only reason it’s
possible to attain acceptable performance from a hyperconverged platform.
• In a legacy monolithic storage array, there was one way of achieving additional
performance for quite some time: add more disks.
• Each disk in a storage array can serve a certain amount of data at a time.
• This disk performance is measured in I/O Operations per Second (IOPS).
• In other words, IOPS measures how many individual I/O requests (reads or
  writes) the disk can complete in one second.
• Because spinning disks have ceased to increase in rotational speed, the
  fastest of them top out somewhere between 160 and 180 IOPS.



The Role of Flash in Hyperconvergence
• The implication, then, is that regardless of the storage capacity being used,
  if performance was depleted (meaning a workload needed more than 180 IOPS),
  then another disk was required to meet that need.
• In a massive monolithic array, this was no problem.
• Add another shelf of disk, and you’re on your way.
• In the land of hyperconvergence, however, this becomes a serious problem.
• You can’t just go on adding disks in perpetuity.
• A disk-focused 2U server using 2.5-inch disks can usually only fit 24 of them.
• So what happens if the workload requires more IOPS per node than 24 spinning
  disks are capable of providing?
• Flash storage is orders of magnitude faster than magnetic disk due to its
  solid-state (non-mechanical) nature.
• A single solid-state drive (SSD) could easily deliver the IOPS performance of
  all 24 spinning disks, as the rough arithmetic below shows.
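The back-of-the-envelope numbers: the 180 IOPS per spinning disk comes from the text above, while the 50,000 IOPS figure for a single SSD is an assumed ballpark for illustration, not a benchmark of any particular drive.

```python
# Back-of-the-envelope IOPS comparison for one 2U, 24-bay node.
SPINNING_DISK_IOPS = 180        # fastest spinning disks, per the text above
DISKS_PER_2U_NODE = 24          # typical 2.5-inch bays in a 2U server
SSD_IOPS = 50_000               # assumed ballpark for a single modern SSD

print(SPINNING_DISK_IOPS * DISKS_PER_2U_NODE)   # 4320 IOPS from all 24 spindles
print(SSD_IOPS)                                 # one SSD comfortably exceeds that
```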



The Role of Flash in Hyperconvergence

• Because of this dramatic performance benefit, flash storage is critical to
  hyperconvergence.
• Physical limitations would not allow for the creation of a high-performing
  hyperconverged system without the performance boost that flash can provide.
• Raw performance aside, SSDs can also provide a high-performing cache which
  can front-end a large amount of capacity.
• Using SSDs as cache allows hyperconverged platforms to get high performance
  and great capacity numbers at the same time.
• Using flash to provide caching for a group of higher-capacity, slower
  spinning disks is commonly referred to as a hybrid configuration.


The Role of Flash in Hyperconvergence (figure)


The Role of Flash in Hyperconvergence

• There are a number of different disk configurations that you might see used in
hyperconvergence (Figure 3-1):

• DRAM for metadata, SSD for cache

• SSD for metadata and cache, disk for capacity

• SSD for all tiers (“all flash”)

• Choosing a hyperconvergence platform that uses the right storage optimization
  method for a given workload has a big impact on the cost of achieving
  acceptable performance without overpaying (a rough selection sketch follows).
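One hedged way to picture that trade-off is as data plus a toy selection rule; the layouts restate Figure 3-1, while the IOPS threshold and relative cost labels are invented assumptions for illustration only.

```python
# Illustrative only: the Figure 3-1 layouts as data, with assumed relative costs.
LAYOUTS = {
    "dram-metadata-ssd-cache": {"metadata": "DRAM", "cache": "SSD", "relative_cost": "medium"},
    "ssd-metadata-and-cache":  {"metadata": "SSD", "cache": "SSD", "capacity": "HDD", "relative_cost": "low"},
    "all-flash":               {"metadata": "SSD", "cache": "SSD", "capacity": "SSD", "relative_cost": "high"},
}

def pick_layout(iops_needed: int, budget_sensitive: bool) -> str:
    """Toy rule of thumb with assumed thresholds: hot, latency-critical workloads
    justify all-flash; otherwise a hybrid layout keeps cost per TB down."""
    if iops_needed > 20_000 and not budget_sensitive:
        return "all-flash"
    return "ssd-metadata-and-cache"

print(pick_layout(iops_needed=50_000, budget_sensitive=False))  # all-flash
print(pick_layout(iops_needed=3_000, budget_sensitive=True))    # hybrid layout
```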



Summary (figures)


THANK YOU !!