Cloud Computing

1. Cloud service models:


In cloud computing, there are three primary service models: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

1. Infrastructure as a Service (IaaS):


- **Description**: IaaS provides virtualized computing resources over the internet.
Customers can rent virtual machines, storage, and networking from the cloud provider.
- **Advantages**:
- Scalability: Easily scale resources up or down based on demand.
- Cost-effective: Pay only for the resources used, reducing capital expenses.
- Flexibility: Customers have control over their infrastructure configuration.
- **Disadvantages**:
- Technical expertise required: Users need to manage and maintain the infrastructure.
- Security concerns: Customers are responsible for securing their data and applications.

2. Platform as a Service (PaaS):


- **Description**: PaaS offers a platform allowing customers to develop, run, and manage
applications without dealing with infrastructure.
- **Advantages**:
- Faster development: Developers can focus on coding without worrying about
infrastructure.
- Cost-effective: Reduces the need for hardware and software maintenance.
- Scalability: Easily scale applications without managing the underlying infrastructure.
- **Disadvantages**:
- Vendor lock-in: Customers may face challenges if they want to switch providers.
- Limited control: Less flexibility compared to IaaS as the platform is predefined.

3. Software as a Service (SaaS):


- **Description**: SaaS delivers software applications over the internet on a subscription
basis.
- **Advantages**:
- Accessibility: Applications can be accessed from any device with an internet connection.
- Automatic updates: Providers handle maintenance and updates.
- Cost-effective: Eliminates the need for individual software licenses.
- **Disadvantages**:
- Data security: Users entrust their data to the SaaS provider, raising security concerns.
- Customization limitations: Users may not be able to customize the software extensively.

Each service model has its own set of advantages and disadvantages, catering to different
needs and preferences of users depending on their requirements and technical expertise.
2. Cloud deployment models:

The main cloud deployment models, along with their advantages and disadvantages, are:

1. **Public Cloud**:
- **Description**: Public cloud services are provided by third-party cloud service providers
over the internet. These services are shared among multiple organizations.
- **Advantages**: Cost-effective, scalable, no maintenance required, and offers a
pay-as-you-go pricing model.
- **Disadvantages**: Security concerns due to shared resources, potential lack of
customization, and dependency on the provider's infrastructure and services.

2. **Private Cloud**:
- **Description**: Private cloud services are dedicated to a single organization and can be
hosted on-premises or by a third-party provider.
- **Advantages**: Enhanced security, more control over data and resources, customization
options, and compliance with specific regulations.
- **Disadvantages**: Higher costs, maintenance requirements, limited scalability compared
to public cloud, and potential initial setup complexities.

3. **Hybrid Cloud**:
- **Description**: Hybrid cloud combines public and private cloud services, allowing data
and applications to be shared between them.
- **Advantages**: Flexibility to choose where to store data based on sensitivity, scalability
options, cost-effectiveness, and ability to leverage the benefits of both public and private
clouds.
- **Disadvantages**: Complexity in managing multiple environments, potential security
risks due to data movement between clouds, and the need for robust integration and
orchestration.

4. **Multi-Cloud**:
- **Description**: Multi-cloud involves using services from multiple cloud providers to meet
specific business needs.
- **Advantages**: Avoids vendor lock-in, increased resilience against outages, optimization
of services based on specific requirements, and potential cost savings through competitive
pricing.
- **Disadvantages**: Management challenges in coordinating multiple providers, potential
interoperability issues, complexity in data and application management across different
clouds, and increased security concerns due to multiple environments.

Each cloud deployment model offers unique benefits and challenges, and organizations
need to carefully evaluate their requirements to choose the most suitable model or a
combination of models that best align with their goals and constraints.
3. Design challenges in cloud architecture:

Designing cloud architecture comes with several challenges that need to be addressed for a
successful implementation. Some of the key design challenges in cloud architecture include:

1. **Scalability**: One of the primary challenges is designing a system that can scale easily
to handle increasing workloads. This involves ensuring that the architecture can grow
seamlessly to accommodate more users, data, and transactions without compromising
performance.

2. **Reliability and Availability**: Maintaining high availability and reliability is crucial in cloud
architecture. Designing for redundancy, fault tolerance, and disaster recovery mechanisms is
essential to ensure continuous operation even in the face of failures.

3. **Security**: Security is a significant concern in cloud architecture. Designing robust security measures to protect data, applications, and infrastructure from unauthorized access, data breaches, and other cyber threats is critical.

4. **Performance Optimization**: Optimizing performance in a cloud environment involves designing systems that can deliver high performance while efficiently utilizing resources. This includes considerations such as network latency, data transfer speeds, and workload distribution.

5. **Cost Management**: Designing cost-effective cloud architecture involves optimizing resource allocation, monitoring usage, and implementing cost control measures. Balancing performance requirements with cost considerations is essential to avoid unnecessary expenses.

6. **Compliance and Governance**: Ensuring compliance with regulations and industry standards is a challenge in cloud architecture. Designing systems that meet legal requirements and adhere to governance policies while maintaining data integrity and privacy is crucial.

Addressing these design challenges requires careful planning, implementation of best practices, and continuous monitoring and optimization to ensure a robust and efficient cloud architecture.
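The scalability and cost-management challenges above are often addressed together with threshold-based autoscaling. A minimal sketch — the utilization thresholds and replica bounds are illustrative assumptions, not values from any real autoscaler:

```python
# Minimal threshold-based autoscaling rule: scale out when average CPU
# utilization is high, scale in when it is low, and keep the replica count
# within fixed bounds to cap cost. All numbers are illustrative.
def desired_replicas(current, avg_cpu, low=0.30, high=0.75, min_n=1, max_n=10):
    if avg_cpu > high:
        current += 1   # add capacity to protect performance
    elif avg_cpu < low:
        current -= 1   # shed capacity to save cost
    return max(min_n, min(max_n, current))

print(desired_replicas(3, 0.90))  # 4 (busy -> scale out)
print(desired_replicas(3, 0.10))  # 2 (idle -> scale in)
print(desired_replicas(1, 0.10))  # 1 (floor prevents scaling to zero)
```

Production autoscalers add smoothing and cooldown periods to avoid oscillation, but the underlying trade-off is exactly the performance-versus-cost balance described above.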
4. NIST cloud computing reference architecture:
In the NIST cloud computing reference architecture, let's break down the roles of cloud
consumer, provider, carrier, broker, and auditor:

1. **Cloud Consumer**:
- **Description**: The cloud consumer is an individual or organization that uses cloud
services provided by cloud providers. They can be end-users, developers, or IT departments
within an organization.

2. **Cloud Provider**:
- **Description**: The cloud provider is the entity that offers cloud services, such as
computing resources, storage, and applications, to cloud consumers. These providers
manage and maintain the infrastructure and services offered to consumers.

3. **Carrier**:
- **Description**: The carrier refers to the network service provider that offers connectivity
and networking services to facilitate the transfer of data between cloud consumers and
providers. Carriers ensure reliable and secure communication between different components
of the cloud architecture.

4. **Broker**:
   - **Description**: A cloud broker acts as an intermediary between cloud consumers and providers, helping consumers select the most suitable cloud services based on their requirements. Brokers may offer services such as cloud service aggregation, integration, customization, and management. NIST identifies three categories of broker services:

1. **Service Intermediation**:
- **Explanation**: Service intermediation involves adding value to cloud services by
enhancing or modifying them before they reach the cloud consumer. Intermediaries may
provide services such as data encryption, authentication, or transformation to meet specific
consumer requirements or enhance security and performance.

2. **Service Aggregation**:
- **Explanation**: Service aggregation combines multiple cloud services from different
providers into a single, unified service offering. This allows cloud consumers to access a
variety of services through a single interface, simplifying management and improving
efficiency.

3. **Service Arbitrage**:
- **Explanation**: Service arbitrage involves selecting and utilizing cloud services based on
factors such as cost, performance, and availability. Cloud brokers or consumers may engage
in service arbitrage to optimize their cloud service usage, leveraging different providers or
service models to achieve the best value for their specific needs.

These concepts of service intermediation, service aggregation, and service arbitrage contribute to the flexibility, customization, and efficiency of cloud service delivery within the NIST cloud computing reference architecture.
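Service arbitrage, in particular, is easy to sketch as code: pick the cheapest offer that still meets a quality requirement. The providers, prices, and availability figures below are invented for illustration:

```python
# Toy service-arbitrage decision: among candidate providers, pick the cheapest
# one that still meets the consumer's availability requirement.
# All provider figures are invented for illustration.
OFFERS = [
    {"provider": "A", "price_per_hour": 0.12, "availability": 0.999},
    {"provider": "B", "price_per_hour": 0.09, "availability": 0.99},
    {"provider": "C", "price_per_hour": 0.15, "availability": 0.9999},
]

def arbitrage(offers, min_availability):
    eligible = [o for o in offers if o["availability"] >= min_availability]
    if not eligible:
        return None  # no provider meets the requirement
    return min(eligible, key=lambda o: o["price_per_hour"])["provider"]

print(arbitrage(OFFERS, 0.999))  # A -- B is cheaper but not available enough
print(arbitrage(OFFERS, 0.99))   # B -- with a looser requirement, price wins
```

This is the "best value for specific needs" trade-off described above: the winning provider changes as the consumer's requirements change.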

5. **Auditor**:
- **Description**: The cloud auditor is responsible for assessing and ensuring compliance,
security, and performance of cloud services. They conduct audits to verify that cloud
providers adhere to industry standards, regulations, and service level agreements (SLAs).

Advantages and disadvantages of cloud computing:

Pros:
1. **Cost-Efficiency**: Cloud computing eliminates the need for upfront infrastructure
investment, reducing operational and maintenance costs.
2. **Scalability**: Easily scale resources up or down based on demand, allowing flexibility
and cost savings.
3. **Accessibility**: Access data and applications from anywhere with an internet connection,
promoting collaboration and remote work.
4. **Reliability**: Cloud providers offer high availability and reliability through redundant
systems and data backups.
5. **Automatic Updates**: Providers handle software updates and maintenance, ensuring
systems are up-to-date and secure.

Cons:
1. **Security Concerns**: Data security and privacy risks exist, especially with sensitive
information stored off-site.
2. **Internet Dependency**: Reliance on internet connectivity can hinder operations if there
are outages or slow connections.
3. **Limited Control**: Users have limited control over the infrastructure and services, relying
on the provider for maintenance and management.
4. **Compliance Challenges**: Meeting industry-specific regulations and compliance
requirements can be challenging in the cloud.
5. **Downtime**: Despite high availability, cloud services can experience downtime,
impacting business operations.

By understanding these pros and cons, organizations can make informed decisions on
adopting cloud computing to leverage its benefits while mitigating potential drawbacks.
Unit-2

1. Hypervisor:
In cloud computing, a hypervisor is a virtualization platform that allows multiple operating
systems to run on a host computer at the same time. The term usually refers to an
implementation using full virtualization.

A hypervisor is a software layer installed on the physical hardware that allows splitting the physical machine into many virtual machines. This allows multiple operating systems to run simultaneously on the same physical hardware.

The operating system installed on the virtual machine is called a guest OS, and is
sometimes also called an instance. The hardware the hypervisor runs on is called the host
machine.

Types of Hypervisor

TYPE-1 Hypervisor:

The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating
system. It has direct access to hardware resources. Examples of Type 1 hypervisors include
VMware ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: These hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). Security is also stronger, because there is no underlying host OS or third-party software layer for an attacker to compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to perform their operation, to manage the different VMs, and to control the host hardware resources.

TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a "Hosted Hypervisor". These hypervisors do not run directly on the underlying hardware; instead, they run as an application on a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks that operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints such as PCs. A Type-2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious source code, and newly developed applications).

Pros & Cons of Type-2 Hypervisor:

Pros: These hypervisors allow quick and easy access to a guest operating system alongside the running host machine. They usually come with additional useful features for guest machines; such tools enhance the coordination between the host machine and the guest machine.

Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type-1 hypervisors in performance. Potential security risks also exist: an attacker who exploits a weakness in the host operating system can gain access to every guest operating system running on it.
2. Full Virtualization:

● Full Virtualization was introduced by IBM in the year 1966.
● It was the first software solution for server virtualization and uses binary translation and direct execution techniques.
● In full virtualization, guest OS is completely isolated by the virtual
machine from the virtualization layer and hardware.
● Microsoft and Parallels systems are examples of full virtualization.
● Full virtualization uses a combination of direct execution and binary
translation. This allows direct execution of non-sensitive CPU
instructions, whereas sensitive CPU instructions are translated on the
fly.
● To improve performance, the hypervisor maintains a cache of the recently translated instructions.
● In the full virtualization technique, the hypervisor completely simulates
the underlying hardware.
● The main advantage of this technique is that it allows the running of the
unmodified OS.
● In full virtualization, the guest OS is completely unaware that it’s being
virtualized.
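The translate-and-cache behavior in the bullets above can be modeled with a few lines of Python. This is a toy model, not real binary translation; the instruction names and the "safe_" rewriting are invented for illustration:

```python
# Toy model of binary translation with a cache: "sensitive" instructions are
# rewritten before execution, and translations are memoized so each sensitive
# instruction is only translated once. Instruction names are invented.
SENSITIVE = {"cli", "hlt", "out"}  # privileged ops the guest must not run directly

translation_cache = {}
translations_done = 0

def execute(instr):
    """Run non-sensitive instructions directly; translate sensitive ones on the fly."""
    global translations_done
    if instr not in SENSITIVE:
        return f"direct:{instr}"            # direct execution, no overhead
    if instr not in translation_cache:      # cache miss: translate once
        translations_done += 1
        translation_cache[instr] = f"safe_{instr}"
    return f"translated:{translation_cache[instr]}"

for op in ["mov", "cli", "add", "cli", "hlt"]:
    execute(op)
print(translations_done)  # 2 -- 'cli' was translated once, then served from cache
```

The point the model makes is the same one the bullets make: most instructions run at native speed, and the cache keeps the translation cost of sensitive instructions a one-time expense.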
Para-Virtualization:

Paravirtualization is the category of CPU virtualization which uses hypercalls for operations to handle instructions at compile time. In paravirtualization, the guest OS is not completely isolated, but it is partially isolated by the virtual machine from the virtualization layer and hardware. VMware and Xen are some examples of paravirtualization.

In paravirtualization, the hypervisor doesn't simulate the underlying hardware. Instead, it provides hypercalls. The guest OS uses hypercalls to execute sensitive CPU instructions.

This technique is not as portable as full virtualization, as it requires modification of the guest OS. However, it provides better performance because the guest OS is aware that it's being virtualized.

Hypercalls are similar to kernel system calls. They allow the guest OS to
communicate with the hypervisor.

The open-source Xen project uses the paravirtualization technique.
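The hypercall relationship can be sketched as two cooperating classes: a guest that knows it is virtualized and a hypervisor that performs privileged operations on its behalf. The call names (`set_page_table`, `mask_interrupts`) are invented stand-ins, not real Xen hypercalls:

```python
# Toy hypercall interface: instead of executing a sensitive instruction, a
# paravirtualized guest asks the hypervisor to do it. Call names are invented.
class Hypervisor:
    def __init__(self):
        self.log = []

    def hypercall(self, name, *args):
        # The hypervisor validates the request, then performs the privileged
        # operation on the guest's behalf (here we only record it).
        if name not in {"set_page_table", "mask_interrupts"}:
            raise ValueError(f"unknown hypercall: {name}")
        self.log.append((name, args))
        return "ok"

class ParavirtGuest:
    def __init__(self, hv):
        self.hv = hv  # the guest is *aware* it is virtualized

    def update_memory_mapping(self, table_id):
        # A paravirtualized kernel calls the hypervisor directly, much like a
        # user program makes a system call into its kernel.
        return self.hv.hypercall("set_page_table", table_id)

hv = Hypervisor()
guest = ParavirtGuest(hv)
print(guest.update_memory_mapping(7))  # ok
```

This mirrors the kernel-system-call analogy above: the hypercall is an explicit, validated request across a privilege boundary rather than a trapped instruction.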


3. Hardware Virtualization:

Hardware virtualization is also popularly known as server virtualization because it consolidates multiple physical servers into virtual servers that run on a single physical machine. Each virtual server can host its own workload as if it were an independent machine, while the hypervisor takes responsibility for allocating the hardware resources among them.
Also known as native virtualization, in this technique the underlying hardware provides special CPU instructions to aid virtualization. This technique is also highly portable, as the hypervisor can run an unmodified guest OS, and it makes the hypervisor implementation less complex and more maintainable.

Intel's VT-x and AMD's AMD-V processors provide CPU virtualization instructions that software vendors use to implement hardware-assisted virtualization.

Types of Hardware Virtualization:

Below we have mentioned the three varieties of hardware virtualization. They are as follows:

● Full Virtualization
● Emulation Virtualization
● Para-Virtualization

Emulation Virtualization:

In this type of virtualization, the virtual machine simulates the hardware, thus becoming independent of it. The guest operating system is not required to perform any modifications.

CPU Virtualization:

In cloud computing, virtualization of the CPU involves creating virtual instances of a physical central processing unit (CPU) to run multiple virtual machines (VMs) on a single physical server. This is achieved by creating a layer of abstraction between the software and the hardware, which makes it possible for multiple operating systems to run simultaneously on a single physical CPU without interfering with each other.

This virtualization technology allows for better utilization of computing resources by enabling the sharing of a single physical CPU among multiple VMs. Each VM operates as if it has its own dedicated CPU, even though they are all sharing the underlying physical hardware.

At its core, CPU virtualization is about resource allocation. It allows for the
effective distribution of computational resources, such as processing power
and memory, among various virtual machines. This technology makes it
possible to run multiple applications and processes on the same hardware,
significantly improving efficiency and reducing costs.

The virtualization of the CPU works by using a hypervisor, which is a software layer that sits between the physical hardware and the virtual machines. The hypervisor allocates CPU resources to each virtual machine, allowing them to run independently as if they have their own dedicated CPU. This process enables multiple VMs to share the same physical CPU efficiently, maximizing resource utilization and enabling the seamless operation of various applications and workloads in the cloud.

How CPU Virtualization Works:

Step 1: Creating Virtual Machines (VMs)
Step 2: Allocating Resources
Step 3: Isolation and Independence
Step 4: Running Operating Systems and Apps
Step 5: Managing Workloads
Step 6: Efficient Use of Resources
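Steps 1, 2, and 6 can be sketched as a toy vCPU admission routine: VMs declare how many virtual CPUs they need, and the hypervisor admits them as long as the total stays within the physical cores times an overcommit ratio. The VM names, sizes, and the 2x ratio are illustrative assumptions:

```python
# Toy vCPU admission: pack VM requests onto a host's physical cores,
# overcommitting by a fixed ratio. All numbers are illustrative.
def admit_vms(requests, physical_cores, overcommit=2.0):
    """Admit VMs in order while total vCPUs fit within cores * overcommit."""
    capacity = physical_cores * overcommit
    admitted, used = [], 0
    for name, vcpus in requests:
        if used + vcpus <= capacity:
            admitted.append(name)
            used += vcpus   # vCPUs are time-shared on the physical cores
    return admitted

reqs = [("vm-a", 4), ("vm-b", 4), ("vm-c", 8), ("vm-d", 2)]
print(admit_vms(reqs, physical_cores=8))  # 2x overcommit -> capacity 16 vCPUs
```

Overcommitting works because VMs rarely use all their vCPUs at once; the hypervisor time-slices the physical cores among them, which is the "efficient use of resources" in Step 6.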

Memory virtualization:
Memory virtualization in cloud computing involves creating virtual instances of
physical memory resources to allocate and manage memory for multiple
virtual machines (VMs) running on a single physical server. Just like CPU
virtualization, memory virtualization uses a hypervisor to abstract and allocate
memory resources to different VMs.

This allows each VM to operate as if it has its own dedicated memory, even
though they are sharing the physical memory of the server. Memory
virtualization enhances resource utilization, scalability, and flexibility in cloud
environments by efficiently managing and optimizing memory allocation
across multiple VMs.

In cloud computing, memory virtualization involves the hypervisor, which acts as a mediator between the physical memory hardware and the virtual machines (VMs). The hypervisor abstracts the physical memory, creating virtual memory spaces for each VM. When a VM requests memory, the hypervisor allocates a portion of the physical memory to that VM. It manages memory access, ensuring that each VM operates independently without interfering with others.

The hypervisor uses techniques like memory overcommitment, where it can assign more memory to VMs than is physically available by transparently swapping memory to disk or using memory compression. This allows for better resource utilization. Additionally, memory ballooning can dynamically adjust memory allocation among VMs based on their current needs.
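Overcommitment and ballooning can be illustrated with a toy accounting model: the hypervisor promises VMs more memory than physically exists, then reclaims unused pages from an idle VM when an active one needs them. The VM names and sizes are invented:

```python
# Toy accounting for memory overcommitment and ballooning. Sizes in GB are
# illustrative; real hypervisors track memory at page granularity.
PHYSICAL_GB = 16

vms = {"web": 10, "batch": 10}   # promised memory: 20 GB > 16 GB physical
in_use = {"web": 9, "batch": 3}  # what each VM is actually touching

def balloon(donor, recipient, needed_gb):
    """Reclaim unused memory from `donor` and hand it to `recipient`."""
    spare = vms[donor] - in_use[donor]   # memory the donor holds but doesn't use
    grant = min(spare, needed_gb)
    vms[donor] -= grant                  # balloon driver inflates inside the donor
    vms[recipient] += grant
    return grant

print(balloon("batch", "web", 4))  # 4 GB reclaimed from the idle batch VM
print(vms)                         # {'web': 14, 'batch': 6}
```

The overcommit (20 GB promised against 16 GB physical) is safe only while actual usage stays below the physical limit; ballooning is how the hypervisor rebalances when one VM's demand grows.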

I/O Virtualization:
I/O virtualization in cloud computing refers to the process of abstracting and managing inputs and outputs between a guest system and a host system in a cloud environment. This is made possible by the hypervisor, which acts as an intermediary between the VMs and the physical devices.

The hypervisor manages the I/O requests from the VMs and directs them to
the appropriate device. This process is transparent to the VMs, which operate
as if they have their own dedicated I/O devices.

It is a critical component of cloud infrastructure, enabling efficient, flexible, and scalable data transmission between different system layers and hardware.
I/O virtualization in cloud computing involves abstracting and virtualizing
physical I/O devices to provide virtual machines (VMs) with independent and
efficient access to these devices. The main goal is to optimize I/O resource
utilization and enhance performance in cloud environments. I/O virtualization
is typically managed by the hypervisor, which acts as a middle layer between
the physical hardware and the VMs.

The hypervisor uses techniques like device emulation, passthrough, and direct
assignment to provide VMs with access to physical I/O devices. Device
emulation involves emulating virtual devices that communicate with the
physical hardware, enabling VMs to interact with them. Passthrough allows
VMs to access the physical devices directly without going through the
hypervisor, enhancing performance for I/O-intensive workloads. Direct
assignment assigns specific physical devices to individual VMs, providing
dedicated access and optimal performance.
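The three access modes above differ in whether a request passes through the hypervisor and whether a device is exclusively owned. A toy dispatcher makes the distinction concrete; the VM and device names are invented:

```python
# Toy dispatcher for the three I/O access modes. Device ownership and names
# are invented for illustration.
ASSIGNMENTS = {"vm1": "nic0"}  # vm1 has a physical NIC directly assigned to it

def handle_io(vm, device, mode):
    if mode == "emulation":
        # Hypervisor emulates a virtual device, then talks to real hardware.
        return f"hypervisor emulates {device} for {vm}"
    if mode == "passthrough":
        # VM talks to the device directly, bypassing the hypervisor's data path.
        return f"{vm} accesses {device} directly"
    if mode == "direct-assignment":
        # Device is dedicated to exactly one VM.
        if ASSIGNMENTS.get(vm) != device:
            raise PermissionError(f"{device} is not assigned to {vm}")
        return f"{vm} owns {device} exclusively"
    raise ValueError(f"unknown I/O mode: {mode}")

print(handle_io("vm2", "disk0", "emulation"))
print(handle_io("vm1", "nic0", "direct-assignment"))
```

Emulation maximizes flexibility (any VM, any device), while passthrough and direct assignment trade that flexibility for performance, matching the description above.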

By abstracting the I/O devices, virtualization helps to improve resource utilization and efficiency. It allows more VMs to be hosted on a single physical server, reducing hardware costs and power consumption. Additionally, it enables easy migration of VMs from one server to another, which can be a significant advantage in terms of load balancing and fault tolerance.

I/O virtualization offers several benefits that contribute to the efficiency and effectiveness of cloud computing; one of the main advantages is improved resource utilization.

4. Other types of virtualization:

1. Application Virtualization: Application virtualization separates applications from the underlying operating system, allowing them to run in isolated environments. This approach eliminates conflicts between applications and simplifies deployment. It enables running multiple versions of the same application on a single machine without interference.

2. Network Virtualization: Network virtualization abstracts network resources, creating virtual networks independent of the physical infrastructure. It enables the segmentation of networks, improves network scalability, and enhances security by isolating traffic. Network virtualization helps in optimizing network resource utilization and simplifying network management.

3. Desktop Virtualization: Desktop virtualization separates the desktop environment from the physical device, allowing users to access their desktops remotely from different devices. It centralizes desktop management, enhances security by keeping data in the data center, and provides flexibility in accessing desktop resources from anywhere.

4. Storage Virtualization: Storage virtualization aggregates physical storage resources into a single virtual storage pool, simplifying management and improving utilization. It enables features like data migration, replication, and thin provisioning. Storage virtualization enhances scalability, flexibility, and data protection in storage environments.

5. Server Virtualization: Server virtualization allows multiple virtual machines to run on a single physical server, optimizing resource utilization. It improves server efficiency, reduces hardware costs, and enhances scalability and flexibility in managing workloads. Server virtualization is a key technology in data centers for efficient resource allocation.

6. Data Virtualization in Cloud Computing: Data virtualization in cloud computing abstracts data from underlying data sources, providing a unified view of data across multiple sources. It integrates data from disparate sources in real time, enabling data access and analysis without physical data movement. Data virtualization simplifies data integration, enhances agility, and supports data-driven decision-making in cloud environments.
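Thin provisioning, mentioned under storage virtualization above, is a good candidate for a concrete sketch: volumes are promised capacity up front, but physical space is consumed only as data is actually written. The pool and volume sizes below are illustrative:

```python
# Toy thin-provisioned storage pool. Sizes in GB are illustrative; real
# systems allocate at block granularity and reclaim freed space.
class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.promised = {}  # volume -> advertised size
        self.written = {}   # volume -> space actually consumed

    def create_volume(self, name, size_gb):
        self.promised[name] = size_gb  # no physical space consumed yet
        self.written[name] = 0

    def write(self, name, gb):
        if self.written[name] + gb > self.promised[name]:
            raise RuntimeError(f"volume {name} is full")
        if self.used() + gb > self.physical_gb:
            raise RuntimeError("pool out of physical space")
        self.written[name] += gb

    def used(self):
        return sum(self.written.values())

pool = ThinPool(physical_gb=100)
pool.create_volume("db", 80)
pool.create_volume("logs", 80)  # 160 GB promised against 100 GB physical
pool.write("db", 30)
print(pool.used())              # 30 -- only written data consumes space
```

Like memory overcommitment, thin provisioning improves utilization by betting that volumes rarely fill their advertised size at the same time; the pool must be monitored so real usage never exceeds physical capacity.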
