SOFTWARE AS A SERVICE:
SaaS is also known as "On-Demand Software". It is a software distribution model in which
services are hosted by a cloud service provider. These services are available to end-users over
the internet, so end-users do not need to install any software on their devices to access
these services.
Platform as a Service:
Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You can purchase these applications from a cloud service provider on a pay-as-per-use basis and access them over an Internet connection. In PaaS, back-end scalability is managed by the cloud service provider, so end-users do not need to worry about managing the infrastructure.
PaaS includes infrastructure and platform to support the web application life cycle.
Infrastructure as a Service:
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud
computing platform. It allows customers to outsource their IT infrastructures such as servers,
networking, processing, storage, virtual machines, and other resources. Customers access
these resources over the Internet using a pay-as-per-use model. In traditional hosting services,
IT infrastructure was rented out for a specific period of time, with pre-determined hardware
configuration. The client paid for the configuration and time, regardless of the actual use. With
the help of the IaaS cloud computing platform layer, clients can dynamically scale the
configuration to meet changing requirements and are billed only for the services actually
used. The IaaS layer eliminates the need for every organization to maintain its own IT infrastructure. IaaS is offered in three models: public, private, and hybrid
cloud.
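To make the pay-as-per-use idea concrete, here is a minimal sketch of provisioning and releasing an IaaS virtual machine with AWS's boto3 SDK. The AMI ID, region, and instance type are placeholder assumptions for illustration, not part of the text above.

```python
# Minimal sketch: provisioning an IaaS virtual machine with AWS's boto3 SDK.
# The AMI ID, instance type, and region below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one on-demand instance; billing follows the pay-as-per-use model
# described above, i.e., you pay only while the instance is running.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image (template)
    InstanceType="t3.micro",          # hardware configuration, chosen per workload
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Scaling down when demand drops stops the billing for this resource.
ec2.terminate_instances(InstanceIds=[instance_id])
```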
Types of cloud deployment:
Public cloud:
Public Cloud provides a shared platform that is accessible to the general public through an
Internet connection.
The public cloud operates on a pay-as-per-use model and is administered by a third party, i.e., the cloud service provider.
In the Public cloud, the same storage is being used by multiple users at the same time.
Public cloud is owned, managed, and operated by businesses, universities, government
organizations, or a combination of them.
Amazon Elastic Compute Cloud (EC2), Microsoft Azure, IBM's Blue Cloud, Sun Cloud, and
Google Cloud are examples of the public cloud.
Private cloud:
Private cloud is also known as an internal cloud or corporate cloud.
Private cloud provides computing services to a private internal network (within the
organization) and selected users instead of the general public.
Private cloud provides a high level of security and privacy to data through firewalls and
internal hosting. It also ensures that operational and sensitive data are not accessible to third-
party providers.
HP Data Centers, Microsoft, Elastra private cloud, and Ubuntu are examples of private clouds.
Hybrid cloud:
Hybrid cloud is a combination of public and private clouds.
Hybrid cloud = public cloud + private cloud
The main aim of combining these clouds (public and private) is to create a unified, automated,
and well-managed computing environment.
In the Hybrid cloud, non-critical activities are performed by the public cloud and critical
activities are performed by the private cloud.
Mainly, a hybrid cloud is used in finance, healthcare, and universities.
The best hybrid cloud provider companies are Amazon, Microsoft, Google, Cisco, and NetApp.
Community cloud:
Community cloud is a cloud infrastructure that allows systems and services to be accessible to a group of several organizations so that they can share information. It is owned, managed, and
operated by one or more organizations in the community, a third party, or a combination of
them.
Cloud Computing Virtualization
Virtualization:
Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as computer hardware. It was initially developed during the mainframe era. It involves using specialized software to create a virtual or software-created version of a computing resource rather than the actual version of the same resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers is Virtualization. Virtualization allows sharing of a single
physical instance of a resource or an application among multiple customers and
organizations at one time.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhance development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use of the IT infrastructure on demand.
Enables running multiple operating systems.
Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest program in a completely
transparent manner opens new possibilities for delivering a secure, controlled execution
environment. All the operations of the guest programs are generally performed against the
virtual machine, which then translates and applies them to the host.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
Sharing: Virtualization allows the creation of a separate computing environment within the
same host.
Aggregation: It is possible to share physical resources among several guests, but virtualization also allows aggregation, the opposite process: a group of separate hosts can be tied together and represented as a single virtual host.
Types of Virtualization:
Application Virtualization: Application virtualization helps a user to have remote access to
an application from a server. The server stores all personal information and other characteristics of the application, but the application can still run on a local workstation through the internet.
Technologies that use application virtualization are hosted applications and packaged
applications.
Storage Virtualization: Storage virtualization is an array of servers that are managed by a
virtual storage system. The servers aren’t aware of exactly where their data is stored and
instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
Server Virtualization: This is a kind of virtualization in which the masking of server
resources takes place. Here, the central (physical) server is divided into multiple different virtual servers by changing the identity numbers and processors, so each system can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. This increases performance and reduces operating costs by deploying the main server's resources as sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, etc.
Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing to know the technical details of how the data is collected, stored, and formatted. The data is arranged logically so that interested people, stakeholders, and users can access its virtual view remotely through various cloud services. Many large companies provide data virtualization services, such as Oracle, IBM, AtScale, and CData.
Hyper-V vs. VMware:
Hyper-V: has extensive security protocols, such as Active Directory, that manage overall security concerns.
VMware: implements data encryption during storage and motion, but has a less extensive security suite compared to Hyper-V.
Interoperability:
It is defined as the capacity of at least two systems or applications to exchange data and make use of it. Cloud interoperability, in turn, is the capacity or extent to which one cloud service can connect with another by exchanging data according to an agreed method to obtain results.
The two crucial components in Cloud interoperability are usability and connectivity, which
are further divided into multiple layers.
1. Behaviour
2. Policy
3. Semantic
4. Syntactic
5. Transport
Portability:
Portability is the process of transferring data or an application from one framework to another while keeping it executable or usable. Portability can be separated into two types: cloud data portability and cloud application portability.
Cloud data portability –
It is the capability of moving information from one cloud service to another, and so on, without needing to re-enter the data.
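As an illustration of cloud data portability, the sketch below copies one object from AWS S3 to Google Cloud Storage using each provider's public SDK. The bucket names, object key, and pre-configured credentials are assumptions made for the example.

```python
# Sketch: cloud data portability, copying one object from AWS S3 to
# Google Cloud Storage without re-entering the data by hand.
# Bucket and object names are hypothetical; credentials for both SDKs
# are assumed to be configured in the environment.
import boto3
from google.cloud import storage

# Download from the source cloud (S3).
s3 = boto3.client("s3")
s3.download_file("source-bucket", "records.csv", "/tmp/records.csv")

# Upload to the destination cloud (GCS).
gcs = storage.Client()
bucket = gcs.bucket("destination-bucket")
bucket.blob("records.csv").upload_from_filename("/tmp/records.csv")
```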
Cloud application portability –
It is the capability of moving an application from one cloud service to another or between a
client’s environment and a cloud service.
Service Interoperability:
Refers to the ability of various cloud services to interact and complement each other in a
standardized way.
Security Interoperability:
Ensures that security mechanisms and protocols are interoperable across different cloud
providers.
Storage as a Service:
Storage as a service (STaaS) is a managed service in which the provider supplies the customer
with access to a data storage platform. The service can be delivered on premises from
infrastructure that is dedicated to a single customer, or it can be delivered from the public
cloud as a shared service that's purchased by subscription and is billed according to one or
more usage metrics.
STaaS customers access individual storage services through standard system interface
protocols or application program interfaces (APIs).
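For example, a customer might reach an object-storage service through its API like this. The sketch uses AWS S3 via boto3, with placeholder bucket and key names.

```python
# Sketch: accessing a storage service through its API, here AWS S3 via boto3.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

# Store an object; usage-based billing typically counts bytes stored
# and requests made.
s3.put_object(Bucket="example-bucket", Key="notes.txt", Body=b"hello STaaS")

# Retrieve it later from anywhere with credentials and connectivity.
obj = s3.get_object(Bucket="example-bucket", Key="notes.txt")
print(obj["Body"].read().decode())
```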
Storage as a service was originally seen as a cost-effective way for small and mid-size
businesses that lacked the technical personnel and capital budget to implement and maintain
their own storage infrastructure.
Advantages of STaaS
Key advantages to STaaS in the enterprise include the following:
Storage costs. Personnel, hardware and physical storage space expenses are reduced.
Disaster recovery. Having multiple copies of data stored in different locations can better
enable disaster recovery measures.
Scalability. With most public cloud services, users only pay for the resources that they use.
Syncing. Files can be automatically synced across multiple devices.
Security. Security can be both an advantage and a disadvantage, as security methods may
change per vendor. Data tends to be encrypted during transmission and while at rest.
Database as a Service (DBaaS):
Like SaaS, PaaS, and IaaS, DBaaS (also known as a Managed Database Service) is a cloud computing service. It allows users to access and use a cloud database system without purchasing it.
DBaaS and cloud databases come under Software as a Service (SaaS), and demand for them is growing fast.
In simple terms, Database as a Service (DBaaS) is self-service, on-demand database consumption coupled with automation of operations. Like other cloud computing services, DBaaS is based on a pay-per-use payment structure: you pay only for what you use. DBaaS provides the same functionality as standard traditional and relational database models, so organizations using DBaaS can avoid database configuration, management, upgrades, and security maintenance.
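As a sketch of the consumption model, an application simply connects to the provider-hosted endpoint and issues ordinary SQL; provisioning, patching, and backups are the provider's job. The example below assumes a managed PostgreSQL service and uses psycopg2 with placeholder host and credentials.

```python
# Sketch: consuming a managed database (DBaaS) from application code.
# Host, database name, and credentials are placeholders for a
# hypothetical provider-hosted PostgreSQL endpoint.
import psycopg2

conn = psycopg2.connect(
    host="mydb.example-provider.com",  # hypothetical managed endpoint
    dbname="appdb",
    user="app_user",
    password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```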
Key Characteristics of DBaaS:
A fully managed database service helps to set up, manage, and administer your database in the cloud, and also provides services for hardware provisioning and backup.
DBaaS makes databases easily available to database consumers from various backgrounds and levels of IT expertise.
Provides on-demand services.
Based on the resources available, it delivers a flexible database platform that tailors itself to the environment's current needs.
A team of experts at your disposal, continuously monitoring the databases.
Automates database administration and monitoring.
Leverages existing servers and storage.
Advantages of DBaaS:
The DBaaS provider is responsible for managing and maintaining the database hardware and software.
The hefty power bills for ventilation and cooling needed to keep the servers running are eliminated.
An organization that subscribes to DBaaS is free from hiring database developers or constructing a database system in-house.
By making use of the latest automation, simple scale-outs are possible at low cost and in less time.
The human resources needed to manage the upkeep of the system are eliminated.
Since DBaaS is hosted off-site, the organization is free from the hassles of power or network failure.
Process as a Service:
"Process as a Service" is a cloud computing model that provides a platform allowing customers to develop, run, and manage business processes without the complexity of building and maintaining the underlying infrastructure. It is an evolution of traditional business process outsourcing and aims to provide a more flexible and scalable solution.
Information as a Service:
"Information as a Service" is a concept that refers to providing access to specific information or data as a service over the internet. In this model, organizations can leverage external sources to obtain the information they need without having to maintain the data internally. Information as a Service is part of the broader trend of providing various IT resources and capabilities as services in the cloud.
Integration as a Service:
Integration as a Service is a cloud-based service model that provides capabilities for integrating different systems, applications, and data sources within an organization or between organizations. It enables seamless connectivity and data exchange, allowing disparate systems to communicate and share information effectively.
In cloud computing, integration as a service typically refers to a specific category of services provided by cloud service providers. These services focus on simplifying and accelerating the integration process by offering pre-built connectors, tools, and infrastructure components that facilitate data and application integration.
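A minimal sketch of the underlying pattern: extract records from one system's REST API and load them into another's. Both URLs are hypothetical; integration-as-a-service platforms wrap this pattern in managed, pre-built connectors.

```python
# Sketch: a tiny integration flow, pulling records from one system's REST API
# and pushing them into another's. Both endpoints are hypothetical.
import requests

# Extract from the source application.
resp = requests.get("https://crm.example.com/api/contacts", timeout=10)
resp.raise_for_status()
contacts = resp.json()

# Load into the target application.
for contact in contacts:
    requests.post(
        "https://billing.example.com/api/customers",
        json=contact,
        timeout=10,
    ).raise_for_status()
```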
Testing as a Service:
Companies use the outsourcing approach known as "Testing as a Service", in short "TaaS", to test their products prior to deployment. The application is tested to find flaws in simulated real-world environments. Testing solutions are provided by a third-party service provider with testing expertise rather than by the organization's internal employees.
TaaS has been shown to have substantial advantages over conventional testing environments. Its main advantage is that it is a highly scalable approach. Small businesses and corporations do not have to worry about finding empty space for servers or other infrastructure because it is a cloud-based delivery strategy.
Types of TaaS
• Cloud Testing as a Service: the TaaS provider checks all cloud services used by the organization.
• Functional Testing as a Service: functional TaaS may include UI/GUI testing, regression, integration, and automated User Acceptance Testing (UAT), although these are not necessarily part of functional testing.
• Load Testing as a Service: TaaS tests the application against its estimated usage volume (a sketch of the idea follows this list).
• Performance Testing as a Service: multiple users access the application at the same time; TaaS mimics a real-world user environment by creating virtual users and performing the load.
• Quality Assurance Testing as a Service: the vendor ensures the product meets the company's requirements.
• Security Testing as a Service: TaaS scans applications and websites for vulnerabilities to check for malware and virus attacks.
• Penetration Testing as a Service: the TaaS vendor tests the company's security posture against cyber threats by performing mock attacks.
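The sketch below illustrates the load/performance-testing idea from the list above: spawn concurrent virtual users against an application endpoint and measure latency. The URL and user counts are placeholders; a real TaaS platform runs this at far larger scale.

```python
# Sketch: simulate many concurrent virtual users against an endpoint and
# measure response latency. URL and counts are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://app.example.com/health"  # hypothetical endpoint under test

def virtual_user(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

# 50 concurrent "users" issuing 500 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(virtual_user, range(500)))

print(f"avg latency: {sum(latencies) / len(latencies):.3f}s")
```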
Scaling a Cloud Infrastructure - Capacity Planning:
Scaling a cloud infrastructure involves adjusting the capacity of your cloud resources to meet
the changing demands of your applications and services. Capacity planning is a critical aspect
of this process, as it helps you determine the right amount of resources needed to handle
current and future workloads efficiently. Here are key steps and considerations for capacity
planning when scaling a cloud infrastructure:
Analyze your application's usage patterns, such as peak times and periods of low activity.
Identify resource-intensive tasks and their impact on different components of your
infrastructure.
Use monitoring tools to collect data on various aspects of your infrastructure, including CPU
usage, memory utilization, storage, and network traffic.
Gather historical performance data to identify trends and patterns.
Establish key performance indicators (KPIs) that align with your application's goals and user
expectations.
Metrics may include response time, throughput, and error rates.
Choose scalable cloud services that can dynamically adjust resources based on demand.
Consider horizontal scaling (adding more instances) and vertical scaling (increasing the size of
existing instances) based on your application's requirements.
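A toy sketch of the horizontal-scaling decision that this capacity planning feeds into; the CPU thresholds are illustrative assumptions, not recommendations.

```python
# Toy sketch of a horizontal-scaling decision for capacity planning.
# Thresholds and the metric source are illustrative assumptions only.
def desired_instances(current: int, cpu_percent: float,
                      scale_up_at: float = 75.0,
                      scale_down_at: float = 25.0) -> int:
    """Add an instance under heavy load, remove one when load is light."""
    if cpu_percent > scale_up_at:
        return current + 1          # horizontal scale-out
    if cpu_percent < scale_down_at and current > 1:
        return current - 1          # scale-in, but keep at least one instance
    return current

# Example: average CPU at 82% across 3 instances -> plan for 4.
print(desired_instances(current=3, cpu_percent=82.0))  # 4
```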
Load Testing:
Conduct load testing to simulate various levels of user activity and assess the performance of
your scaled infrastructure.
Use testing environments to validate the effectiveness of your scaling strategies.
Capacity Reservations:
Consider reserving capacity for critical resources to ensure availability during peak times.
Leverage reserved instances for cost-effective capacity planning.
Keep abreast of new features and services offered by your cloud provider that may enhance
your capacity planning strategies.
Explore managed services that can simplify certain aspects of your infrastructure
management.
Diagonal Scaling
Diagonal scaling is a mixture of both horizontal and vertical scalability, where resources are added both vertically and horizontally. Combining the two gives you the most efficient form of infrastructure scaling: you first grow within your existing server until you hit its capacity, then clone that server as necessary and continue the process, allowing you to handle a large number of requests and a lot of traffic concurrently.
Cloud Platforms in Industry:
Amazon Web Services (AWS) –
AWS provides a wide range of cloud IaaS services, ranging from virtual compute, storage, and networking to complete computing stacks. AWS is well known for its on-demand storage and compute services, named Elastic Compute Cloud (EC2) and Simple Storage Service (S3). EC2 offers customizable virtual hardware to the end user, which can be utilized as the base infrastructure for deploying computing systems on the cloud. Users can choose from a large variety of virtual hardware configurations, including GPU and cluster instances. EC2 instances are deployed either through the AWS console, a full-featured web portal for accessing AWS services, or through the web services API, which is available for several programming languages. EC2 also offers the capability of saving a specific running instance as an image, thus allowing users to create their own templates for deploying systems. S3 stores these templates and delivers persistent storage on demand. S3 is organized into buckets, which contain objects stored in binary form that can be enriched with attributes. End users can store objects of any size, from basic files to full disk images, and retrieve them from anywhere. In addition to EC2 and S3, a wide range of services can be leveraged to build virtual computing systems, including networking support, caching systems, DNS, database support, and others.
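For instance, capturing a configured, running instance as a reusable template can be done through the EC2 API; the instance ID below is a placeholder.

```python
# Sketch: saving a running EC2 instance as a reusable image (template),
# as described above. The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")
result = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder running instance
    Name="my-deployment-template",
    Description="Template captured from a configured instance",
)
print("New image (AMI) id:", result["ImageId"])
```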
Cloud Hypervisor
The key enabling technology is the hypervisor. In its simplest form, a hypervisor is specialized firmware or software, or both, installed on a single hardware platform that allows you to host multiple virtual machines. This allows the physical hardware to be shared across multiple virtual machines. The computer on which the hypervisor runs one or more virtual machines is called the host machine.
Virtual machines are called guest machines. The hypervisor allows the physical host machine
to run various guest machines. It helps to get maximum benefit from computing resources
such as memory, network bandwidth and CPU cycles.
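As a sketch of how a host interacts with its hypervisor, the example below lists guest machines through the libvirt management API, assuming a local KVM/QEMU hypervisor with libvirt installed and running.

```python
# Sketch: talking to a hypervisor's management API from the host machine,
# using the libvirt Python bindings with a local KVM/QEMU hypervisor.
# Assumes libvirt is installed and its daemon is running.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():  # each domain is a guest machine
        state = "running" if dom.isActive() else "stopped"
        print(f"guest {dom.name()}: {state}")
finally:
    conn.close()
```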
Types of Hypervisors in Cloud Computing
There are two main types of hypervisors in cloud computing.
Type I Hypervisor
A Type I hypervisor operates directly on the host's hardware to monitor the hardware and guest virtual machines, and is referred to as bare metal. Typically, it does not require a host operating system to be installed first; it is installed directly on the hardware. This type of hypervisor is powerful and requires a lot of expertise to function well. In addition, Type I hypervisors are more complex and have specific hardware requirements to run adequately. Because of this, they are mostly chosen for IT operations and data center computing.
Examples of Type I hypervisors include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, and VMware's ESX/ESXi.
Type II Hypervisor
It is also called a hosted hypervisor because it is installed on an existing operating system, and it is not as capable of running complex virtual tasks. People use it for basic development, testing, and simulation.
If a security flaw is found inside the host OS, it can potentially compromise all running virtual
machines. This is why Type II hypervisors cannot be used for data center computing, and they
are designed for end-user systems where security is less of a concern. For example, developers
can use a Type II hypervisor to launch virtual machines to test software products prior to their
release.
Hardware Stack:
Servers:
The foundational hardware components in a private cloud are servers. These servers host
virtual machines (VMs) or containers that run the applications and services.
Storage:
Storage infrastructure includes devices such as hard disk drives (HDDs), solid-state drives
(SSDs), and network-attached storage (NAS). Storage is used for storing virtual machine
images, application data, and other relevant information.
Networking Equipment:
Networking hardware, including routers, switches, and firewalls, is crucial for connecting
servers and storage devices. It enables communication within the private cloud and may
include security measures to protect against unauthorized access.
Hypervisors:
Hypervisors, also known as Virtual Machine Monitors (VMMs), are essential for creating and
managing virtual machines on physical servers. They allow multiple VMs to run on a single
physical server, optimizing resource utilization.
Load Balancers:
Load balancers distribute network traffic evenly across multiple servers to ensure efficient
resource utilization and prevent any single server from becoming a bottleneck. This helps in
improving the performance and availability of applications.
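A toy sketch of the round-robin policy behind this even distribution; real load balancers layer health checks, weights, and session handling on top of this idea.

```python
# Toy sketch of the round-robin policy a load balancer uses to spread
# traffic evenly across servers. Addresses are example values.
from itertools import cycle

backends = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # example server pool

def pick_backend() -> str:
    """Return the next server in rotation for an incoming request."""
    return next(backends)

for _ in range(5):
    print(pick_backend())   # rotates through the pool in order
```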
Backup and Recovery Systems:
Private clouds need robust backup and recovery systems to safeguard data and applications
against loss or corruption. This may involve regular backups, snapshot technologies, and
disaster recovery solutions.
Monitoring Tools:
Hardware monitoring tools are used to track the health and performance of physical components. These tools provide insights into resource utilization, potential issues, and overall system health.
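As a small illustration, the kind of metrics such a tool collects on one host can be sampled with the psutil library (an assumption for the example; any monitoring agent exposes similar data).

```python
# Sketch: host-level data a hardware monitoring tool collects, shown with
# the psutil library as an illustrative choice.
import psutil

print("CPU utilization %:", psutil.cpu_percent(interval=1))
print("Memory used %:   ", psutil.virtual_memory().percent)
print("Disk used %:     ", psutil.disk_usage("/").percent)
```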
Software Stack:
Orchestration Tools:
Orchestration tools like OpenStack, Kubernetes, or Microsoft Azure Stack are used to
automate and manage the deployment, scaling, and maintenance of applications and
services. They help streamline complex workflows in the private cloud environment.
Operating Systems:
Virtual machines in the private cloud run on operating systems. These could be various flavors
of Linux (e.g., CentOS, Ubuntu) or Windows Server, depending on the application and
organizational preferences.
Cloud Management Platform:
A cloud management platform provides a unified interface for administrators to manage and monitor the private cloud environment. Examples include OpenStack Horizon, Microsoft System Center, and VMware vRealize Suite.
Security Software:
Security software, including firewalls, intrusion detection and prevention systems, antivirus
solutions, and encryption tools, is essential to protect the private cloud infrastructure and
data from security threats.
Networking Software:
Networking software, such as software-defined networking (SDN) controllers and virtual switches, manages connectivity between virtual machines and across the private cloud.
Application Middleware:
Middleware components, such as web servers, application servers, and messaging systems,
facilitate communication and integration between different software applications and services
within the private cloud.
Logging and Monitoring Tools:
Logging and monitoring tools (e.g., ELK Stack, Prometheus, Grafana) help administrators track performance metrics, troubleshoot issues, and maintain the health of the private cloud infrastructure.
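As a sketch of how an application feeds such a stack, the example below exposes a custom metric for Prometheus to scrape, using the official prometheus_client library; the metric name and port are illustrative.

```python
# Sketch: exposing a custom application metric for a monitoring stack such
# as Prometheus. Metric name and port are illustrative assumptions.
import random
import time

from prometheus_client import Gauge, start_http_server

inflight = Gauge("app_inflight_requests", "Requests currently being served")

start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
while True:
    inflight.set(random.randint(0, 20))  # stand-in for a real measurement
    time.sleep(5)
```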
Q. State and explain the different examples of Cloud-computing offerings, which include various
vendors available and their service types.
• Amazon Web Services (AWS): Offers IaaS services through Amazon Elastic
Compute Cloud (EC2) for virtual servers, Amazon Simple Storage Service (S3)
for storage, and Amazon Virtual Private Cloud (VPC) for networking.
• Microsoft Azure: Provides IaaS solutions like Azure Virtual Machines and
Azure Blob Storage.
• Google Cloud Platform (GCP): Offers IaaS components such as Compute
Engine (virtual machines) and Google Cloud Storage.
• PaaS provides a platform that allows customers to develop, run, and manage
applications without dealing with the complexity of infrastructure.
Examples and Vendors:
• Heroku: A cloud platform that enables developers to build, deploy, and scale
applications easily.
• Google App Engine: A fully managed platform for building and deploying
applications.
• Microsoft Azure App Service: Offers a platform for building, deploying, and
scaling web apps.
• SaaS delivers software applications over the internet, eliminating the need for
users to install, maintain, and update the software locally.
Xen Architecture:
1. Hypervisor (Xen):
• Xen is the core hypervisor that runs directly on the hardware and
manages the virtualization of resources.
• It is responsible for creating and managing virtual machines and
providing them with access to the physical hardware resources.
2. Domain 0 (Dom0):
• Dom0 is a privileged domain that runs a modified Linux kernel and
serves as the management domain for the Xen hypervisor.
• It has direct access to physical hardware and performs administrative
tasks, such as creating and configuring other VMs.
3. Domain U (DomU):
• DomU represents unprivileged domains that run guest operating
systems.
• Multiple DomU instances can run simultaneously, each with its own
guest OS.
Guest OS Management:
1. Paravirtualization:
• Xen initially introduced paravirtualization, which involves modifying the
guest operating system to be aware of the hypervisor.
• Modified or paravirtualized guest OSes, known as DomU in Xen, can
communicate with the Xen hypervisor directly, improving performance
by reducing the need for virtualization overhead.
2. Hardware-Assisted Virtualization (HVM):
• Xen also supports hardware-assisted virtualization, allowing it to run
unmodified guest operating systems.
• For HVM, Xen utilizes hardware virtualization extensions such as Intel
VT-x and AMD-V to improve the performance of virtualization.
3. Virtual Machine Configuration:
• Each VM is configured with a set of parameters, including the amount
of memory, virtual CPUs, disk storage, and network interfaces.
• Configuration files define these parameters, and administrators can modify them to adjust the resources allocated to each VM (see the sketch after this list).
4. Device Model (QEMU):
• Xen uses a device model, often based on QEMU (Quick Emulator), to
provide emulated devices for VMs.
• QEMU acts as a hardware emulator, allowing VMs to access virtualized
devices even if the underlying physical hardware does not support
direct passthrough.
5. Inter-Domain Communication:
• Xen facilitates communication between VMs and between VMs and
Dom0 using a mechanism known as XenStore. XenStore is a shared
repository for configuration and status information.
6. Xen Management Tools:
• Xen provides command-line tools and graphical interfaces for
managing VMs and the overall virtualized environment.
• Tools like xm (deprecated) and xl are used for VM lifecycle
management, including creation, suspension, migration, and deletion.
7. Live Migration:
• Xen supports live migration, allowing VMs to be moved from one
physical host to another without downtime. This is useful for load
balancing, resource optimization, and maintenance.
8. Resource Isolation:
• Xen provides resource isolation for VMs, ensuring that the performance
of one VM does not significantly impact the performance of others.
• This includes CPU and memory isolation, as well as network and disk
I/O isolation.
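A sketch of item 3 in practice: write a minimal guest configuration file and create the VM with the standard xl tool from Dom0. The guest name, sizes, paths, and bridge are placeholder values, and the script assumes it runs with root privileges in Dom0.

```python
# Sketch: writing a minimal Xen guest configuration and creating the VM
# with the standard `xl` tool. All values are placeholders; run in Dom0.
import subprocess

config = """\
name    = "demo-guest"
memory  = 1024                     # MB of RAM for the guest
vcpus   = 2                        # virtual CPUs
disk    = ['file:/var/lib/xen/demo.img,xvda,w']
vif     = ['bridge=xenbr0']        # one NIC attached to the host bridge
"""

with open("/etc/xen/demo-guest.cfg", "w") as f:
    f.write(config)

# Dom0 administrators use xl for VM lifecycle management
# (create, pause, migrate, destroy, ...).
subprocess.run(["xl", "create", "/etc/xen/demo-guest.cfg"], check=True)
```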
Aneka:
Aneka includes an extensible set of APIs, and these APIs support different cloud models like private, public, and hybrid clouds.
o Multiple Structures:
o Aneka is a software platform for developing cloud computing applications.
o In Aneka, cloud applications are executed.
o Aneka is a pure PaaS solution for cloud computing.
o Aneka is a cloud middleware product.
o Aneka can be deployed over a network of computers, a multicore server, a data center, a virtual cloud infrastructure, or a combination thereof.
Aneka's container provides three classes of services:
o Fabric Services
o Foundation Services
o Application Services
1. Fabric Services:
Fabric Services define the lowest level of the software stack that represents the multiple containers. They provide access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.
2. Foundation Services:
Foundation Services are the core services of the Aneka Cloud and define the infrastructure management features of the system. Foundation Services are concerned with the logical management of the distributed system built on top of the infrastructure and provide ancillary services for delivering applications.
3. Application Services:
Application services manage the execution of applications and constitute a layer that
varies according to the specific programming model used to develop distributed
applications on top of Aneka.
Aneka is also a runtime engine and platform for managing the deployment and execution of applications on private or public clouds.
One of the notable features of the Aneka PaaS is its support for provisioning private cloud resources, from desktops and clusters to virtual data centers, using VMware and Citrix XenServer, and public cloud resources such as Windows Azure, Amazon EC2, and the GoGrid cloud service.
Architecture of Aneka
Aneka is a platform and framework for developing distributed applications on the
Cloud. It harnesses the spare CPU cycles of desktop PCs on demand, in addition to a heterogeneous network of servers or data centers. Aneka provides a rich set of APIs for developers to
transparently exploit such resources and express the business logic of applications
using preferred programming abstractions.
System administrators can leverage a collection of tools to monitor and control the
deployed infrastructure. It can be a public cloud available to anyone via the Internet or
a private cloud formed by nodes with restricted access.
One of the key features of Aneka is its ability to provide a variety of ways to express
distributed applications by offering different programming models; Execution services
are mostly concerned with providing middleware with the implementation of these
models. Additional services, such as persistence and security, are transversal to the entire stack of services hosted by the container.
At the application level, a set of different components and tools are provided to simplify the development of applications, to port existing applications to the cloud, and to monitor and manage the Aneka Cloud.