Module 4

Virtualization allows multiple virtual machines to run on a single physical machine by mimicking hardware functions in software. This improves hardware efficiency by enabling greater sharing of resources. It also provides flexibility, as virtual machines can be easily moved, copied, and managed remotely. A company example shows how virtualization allows three different applications, each with unique requirements, to run simultaneously on one physical server rather than three. This saves costs compared to using individual dedicated servers for each application.


What is Virtualization?

Virtualization is technology that you can use to create virtual representations of servers,
storage, networks, and other physical machines. Virtual software mimics the functions of
physical hardware to run multiple virtual machines simultaneously on a single physical
machine. Businesses use virtualization to use their hardware resources efficiently and get
greater returns from their investment. It also powers cloud computing services that help
organizations manage infrastructure more efficiently.

Why is virtualization important?

By using virtualization, you can interact with any hardware resource with greater flexibility.
Physical servers consume electricity, take up physical space, and need maintenance. You are
often limited by physical proximity and network design if you want to access them.
Virtualization removes all these limitations by abstracting physical hardware functionality
into software. You can manage, maintain, and use your hardware infrastructure like an
application on the web.

Virtualization example

Consider a company that needs servers for three functions:

1. Store business email securely
2. Run a customer-facing application
3. Run internal business applications

Each of these functions has different configuration requirements:

· The email application requires more storage capacity and a Windows operating system.
· The customer-facing application requires a Linux operating system and high processing power to handle large volumes of website traffic.
· The internal business application requires iOS and more internal memory (RAM).

To meet these requirements, the company sets up three dedicated physical servers, one for each application. The company must make a high initial investment and perform ongoing maintenance and upgrades on each machine separately. The company also cannot optimize its computing capacity: it pays 100% of the servers’ maintenance costs but uses only a fraction of their storage and processing capacity.

Efficient hardware use

With virtualization, the company creates three digital servers, or virtual machines, on a
single physical server. It specifies the operating system requirements for the virtual
machines and can use them like the physical servers. However, the company now has less
hardware and fewer related expenses.

Infrastructure as a service

The company can go one step further and use a cloud instance or virtual machine from a
cloud computing provider such as AWS. AWS manages all the underlying hardware, and the
company can request server resources with varying configurations. All the applications run
on these virtual servers without the users noticing any difference. Server management also
becomes easier for the company’s IT team.
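
To make this concrete, the sketch below shows how such a cloud instance might be requested programmatically with the AWS boto3 SDK. It is only an illustration: the AMI ID, instance type, region, and tag values are placeholders, not details from the example above.

```python
# Illustrative sketch: requesting a virtual server from AWS EC2 with boto3.
# The AMI ID, instance type, region, and tag name are placeholders you would
# replace with real values for your own account.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI (e.g. a Linux image)
    InstanceType="t3.micro",           # requested CPU/RAM configuration
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "customer-facing-app"}],
    }],
)
print("Launched instance:", instances[0].id)
```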

Kernel-based Virtual Machine (KVM)

To properly understand Kernel-based Virtual Machine (KVM), you first need to understand
some basic concepts in virtualization. Virtualization is a process that allows a computer to
share its hardware resources with multiple digitally separated environments. Each
virtualized environment runs within its allocated resources, such as memory, processing
power, and storage. With virtualization, organizations can switch between different
operating systems on the same server without rebooting.

Virtual machines and hypervisors are two important concepts in virtualization.

Virtual machine

A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system and computing resources. The physical computer is called the host machine, and virtual machines are guest machines. Multiple virtual machines can run on a single physical machine. Virtual machines are abstracted from the computer hardware by a hypervisor.

Hypervisor

The hypervisor is a software component that manages multiple virtual machines in a computer. It ensures that each virtual machine gets the allocated resources and does not interfere with the operation of other virtual machines. There are two types of hypervisors.

Type 1 hypervisor
A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer’s hardware instead of on top of an operating system. Therefore, type 1 hypervisors have better performance and are commonly used by enterprise applications. KVM functions as a type 1 hypervisor, hosting multiple virtual machines on the Linux operating system.

Type 2 hypervisor

Also known as a hosted hypervisor, the type 2 hypervisor is installed on top of an operating system. Type 2 hypervisors are suitable for end-user computing.
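
As a concrete illustration of hosts, guests, and hypervisors, the hedged sketch below uses the libvirt Python bindings (libvirt-python) to ask a local KVM/QEMU host which guest virtual machines it currently manages. It assumes a Linux host with libvirt installed and running; it is not tied to any particular setup described here.

```python
# Minimal sketch of talking to a KVM/QEMU hypervisor through libvirt's Python
# bindings (package: libvirt-python). Assumes libvirtd is running locally and
# the user can access the qemu:///system URI.
import libvirt

conn = libvirt.open("qemu:///system")    # connect to the local KVM hypervisor
try:
    for dom in conn.listAllDomains():    # every guest the hypervisor manages
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```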

Role of Virtualization in Cloud Computing

Virtualization technology has played, and continues to play, an immense role in cloud computing: it provides the ability to share a single resource among multiple users so that available resources are used efficiently.

It has also revolutionized the way software development and operations work. Previously it was mandatory to use physical servers and bear all the expenses of hardware, maintenance, and more. With virtualization, it became much simpler to develop, test, and release software with little or no infrastructure overhead.

This encouraged many people to start working on their software ideas, since it eliminated the need for a huge initial capital investment; instead, people can start with very minimal investment and get their software out into the market.

Characteristics Of Virtualization

1. Managed Resources

The VMs and any other resources provisioned from the cloud are completely managed by the cloud provider. Apart from specifying their requirements, users don't have to maintain or worry about the underlying hardware and configuration of their resources. For example, the cloud provider handles work such as hardware procurement, backups, and monitoring.

2. Resource Allotment

Resource allotment is made simple with virtualization technology; the process is as easy as clicking a few buttons. Users can get their whole infrastructure ready within hours, and they can customize it later without any hassle.

3. Resource Isolation

Resource isolation is one of the most important characteristics of virtualization in the cloud. It enables applications to run in an environment that is completely dedicated to them. This also helps prevent data breaches and ensures efficient resource utilization.
4. Load balancing

The load balancer, as the name suggests, handles the request load of a server. Virtualization platforms make sure that incoming requests are distributed to the appropriate servers. This allows the servers to serve requests promptly instead of being overloaded by a huge volume of requests.

5. Portability

Virtualized resources are portable, meaning they can be copied and moved from one system to another with the same expected functionality. This allows users to create and reuse configurations instead of repeating the setup.

What are the benefits of virtualization?

Virtualization provides several benefits to any organization:

Efficient resource use

Virtualization improves the use of hardware resources in your data center. For example, instead of running one server on one computer system, you can create a virtual server pool on the same computer system and use servers from the pool, returning them as required. Having fewer underlying physical servers frees up space in your data center and saves money on electricity, generators, and cooling appliances.

Automated IT management

Now that physical computers are virtual, you can manage them by using software tools.
Administrators create deployment and configuration programs to define virtual machine
templates. You can duplicate your infrastructure repeatedly and consistently and avoid
error-prone manual configurations.

Faster disaster recovery

When events such as natural disasters or cyberattacks negatively affect business operations,
regaining access to IT infrastructure and replacing or fixing a physical server can take hours
or even days. By contrast, the process takes minutes with virtualized environments. This
prompt response significantly improves resiliency and facilitates business continuity so that
operations can continue as scheduled.

How does virtualization work?

Virtualization uses specialized software, called a hypervisor, to create several cloud instances or virtual machines on one physical computer.

Cloud instances or virtual machines

After you install virtualization software on your computer, you can create one or more
virtual machines. You can access the virtual machines in the same way that you access other
applications on your computer. Your computer is called the host, and the virtual machine is
called the guest. Several guests can run on the host. Each guest has its own operating
system, which can be the same or different from the host operating system.

From the user’s perspective, the virtual machine operates like a typical server. It has
settings, configurations, and installed applications. Computing resources, such as central
processing units (CPUs), Random Access Memory (RAM), and storage appear the same as on
a physical server. You can also configure and update the guest operating systems and their
applications as necessary without affecting the host operating system.

Levels of Virtualization Implementation

A traditional computer runs with a host operating system specially tailored for its hardware
architecture. After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host OS. This is often
done by adding additional software, called a virtualization layer. This virtualization layer is
known as a hypervisor or virtual machine monitor (VMM). The main function of this software layer is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs. This can be implemented at various operational levels. The virtualization software creates the abstraction of VMs by interposing
a virtualization layer at various levels of a computer system. Common virtualization layers
include the instruction set architecture (ISA) level, hardware level, operating system level,
library support level, and application level.
1. Instruction Set Architecture Level
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the
help of ISA emulation. With this approach, it is possible to run a large amount of legacy
binary code written for various processors on any given new hardware host machine.
Instruction set emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program
interprets the source instructions to target instructions one by one. One source instruction
may require tens or hundreds of native target instructions to perform its function.
Obviously, this process is relatively slow. For better performance, dynamic binary
translation is desired. This approach translates basic blocks of dynamic source instructions
to target instructions. The basic blocks can also be extended to program traces or super
blocks to increase translation efficiency. Instruction set emulation requires binary
translation and optimization. A virtual instruction set architecture (V-ISA) thus requires
adding a processor-specific software translation layer to the compiler.
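
To illustrate the interpretation approach, here is a deliberately tiny, hypothetical example: a made-up guest instruction set is executed one instruction at a time by a host-side interpreter. It is only a sketch of the idea, and it also shows why pure interpretation is slow compared to dynamic binary translation.

```python
# Toy illustration of emulation by code interpretation: a made-up three-register
# "source ISA" is interpreted instruction by instruction on the host. Real ISA
# emulators and dynamic binary translators are far more elaborate; this only
# shows why interpretation costs many host instructions per source instruction.
def interpret(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    for op, *args in program:
        if op == "li":          # load immediate: li rX, value
            regs[args[0]] = args[1]
        elif op == "add":       # add rX, rY, rZ  ->  rX = rY + rZ
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "print":
            print(regs[args[0]])
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

# "Guest" program written for the toy ISA, run unchanged on any host.
interpret([("li", "r0", 2), ("li", "r1", 3), ("add", "r2", "r0", "r1"), ("print", "r2")])
```
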
2. Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other hand,
the process manages the underlying hardware through virtualization. The idea is to
virtualize a computer’s resources, such as its processors, memory, and I/O devices. The
intention is to upgrade the hardware utilization rate by multiple users concurrently. The
idea was implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor
has been applied to virtualize x86-based machines to run Linux or other guest OS
applications.
3. Operating System Level
This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server, and the OS instances utilize the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, in consolidating server hardware by moving services on separate hosts into
containers or VMs on one server.
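
Containers are the most visible form of OS-level virtualization today. The sketch below assumes a local Docker daemon and the Docker SDK for Python; it starts an isolated container that still shares the host's kernel, which is the defining trait of this level.

```python
# Hedged sketch of OS-level virtualization in practice, using the Docker SDK
# for Python (package: docker). Assumes a local Docker daemon is running; the
# image name is just an example. The container gets an isolated view of the
# system while sharing the host kernel.
import docker

client = docker.from_env()                      # connect to the local Docker daemon
output = client.containers.run(
    "alpine:latest",                            # example base image
    command="uname -a",                         # the container sees the shared host kernel
    remove=True,                                # clean up the container when it exits
)
print(output.decode().strip())
```
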
4. Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls by the OS. Since most systems provide well-documented APIs, such an interface
becomes another candidate for virtualization. Virtualization with library interfaces is
possible by controlling the communication link between applications and the rest of a
system through API hooks. The software tool WINE has implemented this approach to
support Windows applications on top of UNIX hosts. Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware acceleration.
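
The hedged sketch below illustrates the idea of an API hook in miniature: a single Python library call is intercepted and wrapped before being passed to the real implementation. WINE and vCUDA apply the same principle to entire API surfaces (Win32 and CUDA respectively); this toy example only shows the mechanism.

```python
# Conceptual sketch of virtualization at the library interface: an API "hook"
# intercepts calls an application makes to a library and adapts or redirects
# them before delegating to the real implementation. Here a single function
# (time.time) is wrapped purely for illustration.
import time

real_time = time.time                     # keep a reference to the real API

def hooked_time():
    # The "virtualized" API: log, adjust, or redirect the call, then delegate.
    print("application asked for the current time")
    return real_time()

time.time = hooked_time                   # install the hook

print(time.time())                        # the application call now goes through the hook
```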

5. User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS,
an application often runs as a process. Therefore, application-level virtualization is also
known as process-level virtualization. The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the
operating system, and the layer exports an abstraction of a VM that can run programs
written and compiled to a particular abstract machine definition. Any program written in the
HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and Java
Virtual Machine (JVM) are two good examples of this class of VM.
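
Python's own interpreter is another HLL VM, which makes it easy to peek at such an abstract machine. The short example below (illustrative only) uses the standard dis module to show the portable bytecode that a source-level function is compiled to; that bytecode runs on the Python VM regardless of the underlying hardware or host OS.

```python
# The JVM and .NET CLR are cited above; Python's interpreter is another HLL VM.
# The dis module prints the bytecode a function is compiled to, i.e. the
# instructions of the abstract machine rather than of any physical processor.
import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints portable VM instructions (e.g. LOAD_FAST, a binary-add op)
```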

Types of virtualization
Server virtualization

Server virtualization is a process that partitions a physical server into multiple virtual
servers. It is an efficient and cost-effective way to use server resources and deploy IT
services in an organization. Without server virtualization, physical servers use only a small amount of their processing capacity, which leaves devices idle.

Storage virtualization

Storage virtualization combines the functions of physical storage devices such as network
attached storage (NAS) and storage area network (SAN). You can pool the storage hardware
in your data center, even if it is from different vendors or of different types. Storage
virtualization uses all your physical data storage and creates a large unit of virtual storage
that you can assign and control by using management software. IT administrators can
streamline storage activities, such as archiving, backup, and recovery, because they can
combine multiple network storage devices virtually into a single storage device.

Network virtualization

Any computer network has hardware elements such as switches, routers, and firewalls. An
organization with offices in multiple geographic locations can have several different network
technologies working together to create its enterprise network. Network virtualization is a
process that combines all of these network resources to centralize administrative tasks.
Administrators can adjust and control these elements virtually without touching the physical
components, which greatly simplifies network management.
The following are two approaches to network virtualization.

Software-defined networking

Software-defined networking (SDN) controls traffic routing by separating routing management from data forwarding in the physical environment. For example, you can program your system to prioritize your video call traffic over application traffic to ensure consistent call quality in all online meetings.

Network function virtualization

Network function virtualization technology combines the functions of network appliances, such as firewalls, load balancers, and traffic analyzers that work together, to improve network performance.

Data virtualization

Modern organizations collect data from several sources and store it in different formats.
They might also store data in different places, such as in a cloud infrastructure and an on-
premises data center. Data virtualization creates a software layer between this data and the
applications that need it. Data virtualization tools process an application’s data request and
return results in a suitable format. Thus, organizations use data virtualization solutions to
increase flexibility for data integration and support cross-functional data analysis.
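
The self-contained sketch below illustrates the idea: a thin software layer answers one query by pulling from two differently formatted sources and returning results in a single, uniform shape. All names, fields, and sources are invented for illustration; real data virtualization tools add query planning, caching, security, and much more.

```python
# Hedged, self-contained sketch of the data virtualization idea: one "virtual"
# result set is assembled from an in-memory SQLite table and a JSON-like
# document, so callers never deal with the source formats directly.
import sqlite3

def customers_from_sql(conn):
    rows = conn.execute("SELECT name, city FROM customers").fetchall()
    return [{"name": n, "city": c, "source": "warehouse"} for n, c in rows]

def customers_from_api(payload):
    return [{"name": d["fullName"], "city": d["location"], "source": "crm"} for d in payload]

# Set up the two "sources" so the sketch runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES ('Asha', 'Pune')")
api_payload = [{"fullName": "Ravi", "location": "Mysuru"}]

# The virtual layer: callers see one result set, not two source formats.
unified = customers_from_sql(conn) + customers_from_api(api_payload)
print(unified)
```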

Application virtualization

Application virtualization pulls out the functions of applications to run on operating systems
other than the operating systems for which they were designed. For example, users can run
a Microsoft Windows application on a Linux machine without changing the machine
configuration. To achieve application virtualization, follow these practices:
· Application streaming – Users stream the application from a remote server, so it runs only on the end user's device when needed.
· Server-based application virtualization – Users can access the remote application from their browser or client interface without installing it.
· Local application virtualization – The application code is shipped with its own environment to run on all operating systems without changes.

Desktop virtualization

Most organizations have nontechnical staff that use desktop operating systems to run
common business applications. For instance, you might have the following staff:

· A customer service team that requires a desktop computer with Windows 10 and customer-relationship management software
· A marketing team that requires Windows Vista for sales applications

You can use desktop virtualization to run these different desktop operating systems on
virtual machines, which your teams can access remotely. This type of virtualization makes
desktop management efficient and secure, saving money on desktop hardware. The
following are types of desktop virtualization.

Virtual desktop infrastructure

Virtual desktop infrastructure runs virtual desktops on a remote server. Your users can
access them by using client devices.

Local desktop virtualization

In local desktop virtualization, you run the hypervisor on a local computer and create a
virtual computer with a different operating system. You can switch between your local and
virtual environment in the same way you can switch between applications.

How is virtualization different from cloud computing?

Cloud computing is the on-demand delivery of computing resources over the internet with
pay-as-you-go pricing. Instead of buying, owning, and maintaining a physical data center,
you can access technology services, such as computing power, storage, and databases, as
you need them from a cloud provider.

Virtualization technology makes cloud computing possible. Cloud providers set up and
maintain their own data centers. They create different virtual environments that use the
underlying hardware resources. You can then program your system to access these cloud
resources by using APIs. Your infrastructure needs can be met as a fully managed service.

How is server virtualization different from containerization?

Containerization is a way to deploy application code to run on any physical or virtual environment without changes. Developers bundle application code with related libraries, configuration files, and other dependencies that the code needs to run. This single package of the software, called a container, can run independently on any platform. Containerization is a type of application virtualization.

You can think of server virtualization as building a road to connect two places. You have to
recreate an entire virtual environment and then run your application on it. By comparison,
containerization is like building a helicopter that can fly to either of those places. Your
application is inside a container and can run on all types of physical or virtual environments.

Advantages of Virtualization

Here are some of the benefits of virtualization in cloud computing, which illustrate its role in more detail.

1. Reduced Cost

Virtualization in the cloud provides an easy-to-use platform that enables users to provision resources with a few clicks and pay only for what they use.

Because of this, users don't have to set up their own physical servers and handle the maintenance, which would be expensive.

2. Increased Uptime

Virtualization in the cloud enables users to set up virtual resources in many locations around the world as backups.

This increases the uptime and availability of the resources, and users don't have to worry if one or more of their servers goes down, since backup resources are available.

3. Increased Security

Cloud providers take extra measures to secure every single resource they provide, and these security measures are implemented layer by layer, from the hardware level up to the software level.

The security measures include firewalls to defend against cyber and virus threats, end-to-end encryption, data backups, and more. Cloud providers also let users define some of their own security policies, such as adding members, whitelisting IP addresses, and more.

4. Flexible Provisioning

Resource provisioning in the cloud is as simple as clicking a few buttons. Users specify the type and capacity of the resources they need, and the requested resources are provided within a few minutes.
In addition, increasing or decreasing the capacity of existing resources isn't complicated: users just edit the capacity of the resources, and the updated requirements are fulfilled.

5. Easy setup and migration

Virtualization in the cloud ensures that the platform is easy to use for users setting up their infrastructure. In addition, cloud providers make sure to provide easy solutions for migrating resources from one service to another.

For instance, virtualization in the cloud allows users to create a virtual database and
helps them migrate their existing database to the managed database without any hassle.

Disadvantages of Virtualization

1. Data Privacy Concerns

Along with all the benefits of the cloud and virtualization, one of the important disadvantages is privacy: even though virtualization in the cloud helps users create any number of virtual resources, the data and all the activity are stored and managed by a third party.

This isn't a concern for every application, but for some it is. Cloud providers often address it with agreements, encryption, and many other measures.

2. Learning Curve

Mastering virtualization technology in the cloud has a somewhat steep learning curve and can take time and experience, since along with creating resources one must handle network configuration, policy definitions, IP whitelisting, application deployment, and so on.

3. Increased Risk of Over Billing

Overbilling is a common concern in the cloud; it usually depends on how the user provisions resources and uses them. Some users may forget to terminate their resources or may increase resource capacity, which increases the billing amount.

Cloud providers offer a few safeguards, such as billing alerts and budget management, but this remains a common issue, reportedly affecting as many as 8 out of 10 users.

4. Possibilities of Vendor Locking

Vendor lock-in is one of the ways cloud providers restrict users from moving off their platform. The whole platform is designed in such a way that all the services depend on one another, and because of this it is hard for users to move from one platform to another.
This is one of the biggest issues during downtime: if the cloud service goes down for some unexpected reason, then all the services the users depend on go down as well.

5. High Initial Investment

Clouds require a very high initial investment, but it is also true that they help reduce companies' costs over time.

6. Learning New Infrastructure

As companies shift from physical servers to the cloud, they require highly skilled staff who can work with the cloud easily; for this, you must hire new staff or provide training to current staff.

7. Risk of Data

Hosting data on third-party resources can put the data at risk, since it has a greater chance of being attacked by hackers.

Software Defined Networks

SDN is an approach to networking that uses software controllers that can be driven by
application programming interfaces (APIs) to communicate with hardware infrastructure to
direct network traffic. Using software, it creates and operates a series of virtual overlay
networks that work in conjunction with a physical underlay network. SDNs offer the
potential to deliver application environments as code and minimize the hands-on time
needed for managing the network.

Why use SDN?

Companies today are looking to SDN to bring the benefits of the cloud to network
deployment and management. With network virtualization, organizations can open the door
to greater efficiency through new tools and technology, such as Software-as-a-Service
(SaaS), Infrastructure-as-a-Service (IaaS) and other cloud computing services, as well as
integrate via APIs with their software-defined network.

SDN also increases visibility and flexibility. In a traditional environment, a router or switch—
whether in the cloud or physically in the data center—is only aware of the status of network
devices next to it. SDN centralizes this information so that organizations can view and
control the entire network and devices. Organizations can also segment different virtual
networks within a single physical network or connect different physical networks to create a
single virtual network, offering a high degree of flexibility.

Simply put, companies are using SDN because it’s a way to efficiently control traffic and
scale as needed.
How SDN works

To better understand how SDN works, it helps to define the basic components that create
the network ecosystem. The components used to build a software-defined network may or
may not be in the same physical area. These include:

· Applications – Tasked with relaying information about the network or requests for specific resource availability or allocation.

· SDN controllers – Handle communication with the apps to determine the destination of data packets. The controllers are the load balancers within SDN.

· Networking devices – Receive instructions from the controllers regarding how to route the packets.

· Open-source technologies – Programmable networking protocols, such as OpenFlow, direct traffic among network devices in an SDN network. The Open Networking Foundation (ONF) helped to standardize the OpenFlow protocol and other open source SDN technologies.

By combining these components, organizations get a simpler, centralized way to manage networks. SDN separates the routing and traffic-control functions, known as the control plane, from the packet-forwarding functions of the data plane, or underlying infrastructure. SDN then implements controllers, considered the brain of the SDN network, and layers them above the network hardware in the cloud or on-premises. This lets teams use policy-based management, a kind of automation, to manage network control directly.

SDN controllers tell switches where to send packets. In some cases, virtual switches that
have been embedded in software or the hardware will replace the physical switches. This
consolidates their functions into a single, intelligent switch that can check data packets and
their virtual machine destinations to ensure there are no issues before moving packets
along.
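
To ground this, here is a minimal, hedged sketch of an OpenFlow controller application written with the Ryu framework, one of several SDN controller platforms. It installs a single table-miss rule on each switch that connects, so unmatched packets are sent up to the controller for a decision, which is the control plane/data plane split described above. The class and behavior shown are illustrative, not a production design.

```python
# Minimal Ryu controller app (a sketch, not production code). Run with:
#   ryu-manager this_file.py
# It assumes OpenFlow 1.3 switches; the only behavior is installing a
# table-miss flow so switches send unmatched packets to the controller.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; send unmatched packets up to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))
```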

Virtualization and software-defined networking

The term “virtual network” is sometimes erroneously used synonymously with the
term SDN. These two concepts are distinctly different, but they do work well together.

Network virtualization segments one or many logical, or virtual, networks within a single physical network. It can also connect devices on different networks to create a single virtual network, often including virtual machines.

SDN works well with network virtualization and NFV. It complements them by refining the process of controlling data packet routing through a centralized server, improving visibility and control.

Types of SDN
There are four primary types of software-defined networking (SDN):

· Open SDN – Open protocols are used to control the virtual and physical devices
responsible for routing the data packets.

· API SDN – Through programming interfaces, often called southbound APIs, organizations control the flow of data to and from each device.

· Overlay Model SDN – It creates a virtual network above existing hardware, providing
tunnels containing channels to data centers. This model then allocates bandwidth in each
channel and assigns devices to each channel.

· Hybrid Model SDN – By combining SDN and traditional networking, the hybrid model
assigns the optimal protocol for each type of traffic. Hybrid SDN is often used as an
incremental approach to SDN.

Benefits of SDN

SDN architecture comes with many advantages, largely due to the centralization of network
control and management. Some of the benefits include:

· Ease of network control – Separating the control functions from the packet-forwarding data plane enables direct programming and simpler network control. This could include configuring network services in real time, such as Ethernet or firewalls, or quickly allocating virtual network resources to change the network infrastructure through one centralized location.

· Agility – Because SDN enables dynamic load balancing to manage the traffic flow as needs and usage fluctuate, it reduces latency, increasing the efficiency of the network.

· Flexibility – With a software-based control layer, network operators have more flexibility to control the network, change configuration settings, provision resources, and increase network capacity.

· Greater control over network security – SDN lets network administrators set policies
from one central location to determine access control and security measures across the
network by workload type or by network segments. You can also use micro-segmentation to
reduce complexity and establish consistency across any network architecture—
whether public cloud, private cloud, hybrid cloud or multicloud.

· Simplified network design and operation – Administrators can use a single protocol to
communicate with a wide range of hardware devices through a central controller. It also
offers more flexibility in choosing networking equipment, since organizations often prefer to
use open controllers rather than vendor-specific devices and protocols.

· Modernizing telecommunications – SDN technology combined with virtual machines and virtualization of networks lets service providers provide distinct network separation and control to customers. This helps service providers improve their scalability and provide bandwidth on demand to customers who need greater flexibility and have variable bandwidth usage.

The risks of software-defined networking

SDN solutions come with significant benefits but can pose a risk if not implemented
correctly. The controller is critical in maintaining a secure network. It is centralized and,
therefore, a potential single point of failure. This potential vulnerability can be mitigated by
implementing controller redundancy on the network with automatic failover. This may be
costly but is no different from creating redundancy in other areas of the network to ensure
business continuity.

SD-WAN advances cloud implementation

Service providers and organizations alike can benefit from a software-defined wide area
network, or SD-WAN. A traditional WAN (wide-area network) is used to connect users to
applications hosted on an organization’s servers in a data center. Typically, multiprotocol
label switching (MPLS) circuits were used to route traffic along the shortest path, ensuring
reliability.

As an alternative, an SD-WAN is programmatically configured and provides a centralized management function for any cloud, on-premises or hybrid network topology in a wide area network. SD-WAN can not only handle massive amounts of traffic but also multiple types of connectivity, including SDN, virtual private networks, MPLS and others.

Software Defined Storage:


As the generation of data increases exponentially, storing data and managing servers becomes more costly and complex. Several systems help organizations manage their data, and one that is more efficient and effective than traditional approaches is software-defined storage (SDS).

· Software-defined storage (SDS) is a way of virtually separating storage software from its hardware. The storage software is a layer between physical storage and data requests that lets you control storage requests, which means you can manipulate where and how data is stored.
· SDS is typically designed to run on basic server hardware with Intel x86 processors, which saves cost compared to traditional systems.

Why is SDS better than traditional systems?

SDS is better than traditional systems because it virtually separates, or abstracts, the storage software from its hardware. This enables you to expand your storage capacity as needed instead of adding new hardware. The SDS controller software offers connectivity, networking, and storage access facilities.

Key features of SDS

1. Automation: Reduces manual processes and operational costs through easy and efficient management.
2. Standard infrastructure: An API is required to manage and maintain the large pool of storage devices and software.
3. Scalability and transparency: The ability to expand storage capacity without reducing performance, and to monitor resources for availability and cost estimation.

Working of Software Defined Storage:

SDS abstracts the storage software from the hardware and groups all the storage software and resources virtually. This yields a large, unified pool of storage across separate hardware, so you can store data on any of that hardware. It gives you the flexibility to upgrade or downgrade hardware as needed.
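
The toy sketch below (purely illustrative, with no real storage backend) captures the pooling idea: several devices of different sizes are presented to callers as one virtual pool, and the software layer decides where each object actually lands.

```python
# Toy sketch of the pooling idea behind SDS (illustrative only, no real
# storage backend): separate "devices" are exposed to callers as one virtual
# pool, and the software layer chooses the placement of each object.
class StoragePool:
    def __init__(self, devices):
        # devices: dict of device name -> free capacity in GB (hypothetical)
        self.devices = dict(devices)
        self.placement = {}                        # object name -> device

    def total_free(self):
        return sum(self.devices.values())

    def store(self, name, size_gb):
        # Place the object on the device with the most free space.
        device = max(self.devices, key=self.devices.get)
        if self.devices[device] < size_gb:
            raise RuntimeError("pool is full")
        self.devices[device] -= size_gb
        self.placement[name] = device
        return device

pool = StoragePool({"nas-1": 500, "san-1": 2000, "server-disk": 250})
print(pool.total_free())                   # callers see one 2750 GB pool
print(pool.store("backup-2024.tar", 300))  # the layer picks the device (san-1)
```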

Benefits of SDS:

1. Flexibility: SDS infrastructure can be built on commodity servers with x86 processors, so you are not tied to buying the SDS controller software from the same company that supplied your hardware.
2. Scalability: Adding extra storage devices to the virtual pool, or adding CPUs and memory to increase performance and capacity, is easier.
3. Agility: SDS is capable of supporting traditional as well as new generation
applications simultaneously.
4. Automation: SDS is capable of adapting to data and performance requirements without any manual operations by an admin.
5. Cost efficiency: SDS uses the available resources efficiently and optimizes storage capacity, and its automation reduces operational costs by simplifying administration.
6. Virtualization and Interoperability: Virtual integration of various storage
resources and managing them as a single unit is made possible by SDS.
Integration of server hardware from different vendors is feasible because SDS acts as a mediator that unifies different storage services and virtually puts them in a single pool.
