
Define cloud computing.

What are the characteristics of cloud computing?


Cloud computing is defined as storing and accessing data and computing services over the internet; it does not store any data on your personal computer. It is the on-demand availability of computer services like servers, data storage, networking, databases, etc. The main purpose of cloud computing is to give many users access to data centers, and users can also access data from a remote server.
There are basically 5 essential characteristics of Cloud Computing.

1. On-demand self service


Cloud computing services do not require human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.

2. Broad network access:


Computing services are provided over standard networks and can be accessed through heterogeneous client devices.

3. Rapid elasticity:
Computing services should be able to scale out and in quickly, on an as-needed basis. Resources are provisioned whenever the user requires them and scaled back in as soon as the requirement is over.

4. Resource pooling:
The IT resources present (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in a location-independent manner. Multiple clients are served from the same physical resources.

5. Measured services:
Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for reasons such as monitoring, billing, and effective use of resources.
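The measured-service idea above can be sketched as a simple per-tenant usage meter. This is a toy illustration, not any provider's real billing API; the class name, the single metered unit, and the rate are all assumptions.

```python
# Minimal sketch of a "measured service": resource usage is tracked per
# tenant and converted into a bill. Names and the rate are illustrative.

class UsageMeter:
    def __init__(self, rate_per_unit):
        self.rate_per_unit = rate_per_unit   # e.g. a price per GB-hour (assumed)
        self.usage = {}                      # tenant -> units consumed so far

    def record(self, tenant, units):
        """Track resource consumption for one tenant."""
        self.usage[tenant] = self.usage.get(tenant, 0) + units

    def bill(self, tenant):
        """Both the user and the provider can see what has been used."""
        return self.usage.get(tenant, 0) * self.rate_per_unit

meter = UsageMeter(rate_per_unit=0.25)
meter.record("tenant-a", 100)   # 100 units consumed
meter.record("tenant-a", 20)
meter.record("tenant-b", 40)
print(meter.bill("tenant-a"))   # 30.0
print(meter.bill("tenant-b"))   # 10.0
```

A real cloud provider meters many more dimensions (CPU, storage, egress, API calls), but the principle is the same: usage is recorded per tenant and billing follows from the recorded usage.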

Advantages and disadvantages

Advantages:

1. Cost Savings
Cost saving is one of the biggest benefits of cloud computing. It helps you save substantial capital cost, since no physical hardware investment is needed. You also do not need trained personnel to maintain the hardware; buying and managing equipment is done by the cloud service provider.
2. Strategic edge
Cloud computing offers a competitive edge over your competitors: you can access the latest applications at any time without spending time and money on installation.
3. High Speed
Cloud computing allows you to deploy your service quickly, in a few clicks. This faster deployment lets you get the resources required for your system within minutes.
4. Back-up and restore data
Once data is stored in the cloud, it is easier to back up and restore it, which is otherwise a very time-consuming process on-premises.
5. Automatic Software Integration
In the cloud, software integration occurs automatically, so you don't need additional effort to customize and integrate your applications to your preferences.
Disadvantages

1. Performance Can Vary


When you work in a cloud environment, your application runs on a server that simultaneously provides resources to other businesses. Any greedy behavior by, or DDoS attack on, a co-tenant could affect the performance of your shared resources.
2. Technical Issues
Cloud technology is always prone to outages and other technical issues. Even the best cloud service providers may face this type of trouble despite maintaining high standards of maintenance.
3. Security Threat in the Cloud
Another drawback of cloud computing services is security risk. Before adopting cloud technology, you should be aware that you will be sharing all your company's sensitive information with a third-party cloud computing service provider, and hackers might access this information.
What is virtualization? What are the types of virtualization?
Virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
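The "logical name plus pointer" idea in this definition can be illustrated with a toy sketch. Every name here (the class, the volume names, the array locations) is hypothetical, made up purely for illustration:

```python
# Toy sketch of the virtualization definition above: customers see only a
# logical name; a mapping layer resolves it to a physical resource on
# demand. All names are illustrative, not a real storage API.

class VirtualStorage:
    def __init__(self):
        self._mapping = {}   # logical name -> physical location

    def attach(self, logical_name, physical_location):
        """Assign a logical name to a physical resource."""
        self._mapping[logical_name] = physical_location

    def resolve(self, logical_name):
        """Provide a 'pointer' to the physical resource when demanded."""
        return self._mapping[logical_name]

pool = VirtualStorage()
# Two customers share one physical array through separate logical names.
pool.attach("customer-a/vol0", "array-1:lun-17")
pool.attach("customer-b/vol0", "array-1:lun-18")
print(pool.resolve("customer-a/vol0"))   # array-1:lun-17
```

The indirection is the point: the physical resource can be moved or shared without the customers' logical names ever changing.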
Types of Virtualization:

1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization

• Application Virtualization:
o Application virtualization helps a user to have remote access of an application
from a server. The server stores all personal information and other characteristics
of the application but can still run on a local workstation through the internet.
o An example of this would be a user who needs to run two different versions of the same software. Technologies that use application virtualization are hosted applications and packaged applications.
• Network Virtualization:
o The ability to run multiple virtual networks, each with a separate control and data plane. They co-exist on top of one physical network and can be managed by individual parties that are potentially confidential to each other.
o Network virtualization provides a facility to create and provision virtual networks (logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security) within days or even weeks.
• Desktop virtualization:
o Desktop virtualization allows the user's OS to be stored remotely on a server in the data centre. It allows the user to access their desktop virtually, from any location, on a different machine.
o Users who want specific operating systems other than Windows Server will need
to have a virtual desktop. Main benefits of desktop virtualization are user
mobility, portability, easy management of software installation, updates, and
patches.
• Storage Virtualization:
o Storage virtualization is an array of servers managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository.
o Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.

Characteristics of virtualized environment

• Increased Security –


The ability to control the execution of a guest program in a completely transparent
manner opens new possibilities for delivering a secure, controlled execution
environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host
programs. A virtual machine manager can control and filter the activity of the guest
programs, thus preventing some harmful operations from being performed.
Resources exposed by the host can then be hidden or simply protected from the
guest. Increased security is a requirement when dealing with untrusted code.
• Managed Execution –
Virtualization gives fine-grained control over how guest programs are executed; in particular, sharing, aggregation, emulation, and isolation are the most relevant features.
• Sharing –
Virtualization allows the creation of a separate computing environment within the
same host. This basic feature is used to reduce the number of active servers and
limit power consumption.
• Aggregation –
It is possible to share physical resources among several guests, but virtualization
also allows aggregation, which is the opposite process. A group of separate hosts
can be tied together and represented to guests as a single virtual host. This
functionality is implemented with cluster management software, which harnesses
the physical resources of a homogeneous group of machines and represents them
as a single resource.

• Emulation –
Guest programs are executed within an environment that is controlled by the
virtualization layer, which ultimately is a program. Also, a completely different
environment with respect to the host can be emulated, thus allowing the execution
of guest programs requiring specific characteristics that are not present in the
physical host.
TAXONOMY OF VIRTUALIZATION:
Virtualization techniques can be categorized into different types based on the level of
abstraction and the components they virtualize. Here is a taxonomy of virtualization
techniques:

1. Hardware Virtualization:
• Full Virtualization (Type 1 Hypervisor): This involves running a
hypervisor directly on the hardware to create and manage virtual machines.
Guest operating systems run on top of the hypervisor without modification.
Examples include VMware ESXi and Microsoft Hyper-V.
• Para-virtualization (Type 1 Hypervisor): The guest operating systems are
aware of the virtualization layer, and their kernels are modified to interact
more efficiently with the hypervisor. This approach aims to reduce the
performance overhead associated with full virtualization. Xen is an example
of a para-virtualization hypervisor.
• Hardware-Assisted Virtualization (Type 1 Hypervisor): This type
leverages hardware extensions, such as Intel VT-x or AMD-V, to enhance
virtualization performance. It allows virtual machines to execute certain
instructions directly on the hardware, improving efficiency. Examples include
KVM (Kernel-based Virtual Machine) and Hyper-V with hardware-assisted
virtualization.
2. Operating System Virtualization (Type 2 Hypervisor):
• Full Virtualization (Type 2 Hypervisor): In this case, a hypervisor runs on
a host operating system, and virtual machines are created and managed
within the host OS. Examples include VMware Workstation and Oracle
VirtualBox.
• Para-virtualization (Type 2 Hypervisor): Similar to the Type 1 para-
virtualization, the guest operating systems are aware of the virtualization
layer, and their kernels are modified to interact more efficiently with the
hypervisor. However, in this case, the hypervisor runs on top of the host
operating system.
3. Application Virtualization:
• Application Layer Virtualization: This technique virtualizes individual
applications, separating them from the underlying operating system.
Applications run in isolated containers, allowing for better portability and
simplified deployment. Docker and Kubernetes are popular examples.
4. Network Virtualization:
• Network Function Virtualization (NFV): NFV involves decoupling network
functions from dedicated hardware and running them as software-based
instances on commodity hardware. This enhances flexibility and scalability in
network management.
• Software-Defined Networking (SDN): SDN separates the control plane from
the data plane in networking, providing a programmable and centralized
approach to network management. It allows for more efficient resource
utilization and dynamic network configuration.
5. Storage Virtualization:
• Storage Area Network (SAN) Virtualization: This involves abstracting and
pooling physical storage resources, allowing for centralized management and
better utilization of storage capacity.
• Network-Attached Storage (NAS) Virtualization: Similar to SAN
virtualization, NAS virtualization abstracts and pools storage resources but is
designed for network-attached storage environments.
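On Linux, the hardware extensions mentioned under hardware-assisted virtualization (Intel VT-x, AMD-V) show up as the `vmx` or `svm` flag in `/proc/cpuinfo`; KVM requires one of them. A small sketch that parses those flags — the `/proc/cpuinfo` path and flag names are standard on Linux, while the sample string is made up for illustration:

```python
# Detect the CPU flags that hardware-assisted hypervisors such as KVM rely
# on: "vmx" (Intel VT-x) or "svm" (AMD-V). On Linux these appear in the
# "flags" lines of /proc/cpuinfo.

def hw_virt_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or None based on cpuinfo flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

# Illustrative sample; on a real Linux host you would read the file:
#   with open("/proc/cpuinfo") as f: print(hw_virt_support(f.read()))
sample = "processor : 0\nflags\t: fpu mmx sse sse2 vmx aes\n"
print(hw_virt_support(sample))   # Intel VT-x
```

If neither flag is present, a Type 1 hypervisor must fall back on full virtualization (binary translation) or paravirtualization rather than hardware-assisted execution.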

PROS AND CONS OF VIRTUALIZATION:

Pros of Virtualization:

1. Resource Utilization: Virtualization allows efficient use of hardware resources by running multiple virtual machines (VMs) on a single physical server.
2. Cost Savings: By consolidating servers, organizations can reduce hardware costs,
save on power and cooling expenses, and achieve overall cost savings.
3. Flexibility and Scalability: Virtualization provides flexibility in resource
allocation, enabling dynamic scaling to meet changing workloads and demands.
4. Isolation and Security: Virtual machines are isolated from each other, enhancing
security. Features like snapshots and rollbacks contribute to a more secure and
recoverable environment.
5. Testing and Development: Virtualization facilitates easy testing and
development, allowing for the creation of isolated environments for experimentation
and software development.

Cons of Virtualization:

1. Resource Overhead: Virtualization introduces some performance overhead due to the additional layer of abstraction, which can impact the performance of applications.
2. Complexity: Managing a virtualized environment can be complex, requiring
specialized skills and tools. Compatibility issues may arise with certain applications.
3. Single Point of Failure: The hypervisor becomes a single point of failure. If it
fails, all virtual machines on that host may be affected, necessitating high-
availability configurations.
4. Licensing Costs: Some virtualization platforms may involve licensing fees,
potentially offsetting initial hardware savings.
5. Security Concerns: Hypervisor vulnerabilities could expose all virtual machines
on a host to risks. Inter-VM threats are possible, though the isolation measures
generally prevent such issues.

Paravirtualization:

Paravirtualization is a virtualization technique where the guest operating system is modified to be aware of the virtualization layer (hypervisor). Unlike full virtualization, where the guest OS is unaware that it is running in a virtual environment, paravirtualization involves making modifications to the guest OS kernel.
UNIT 2:

Cloud Computing Architecture

As we know, cloud computing technology is used by both small and large organizations to store information in the cloud and access it from anywhere at any time using an internet connection. Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. It is divided into the following two parts – 1) Front End 2) Back End.

The below diagram shows the architecture of cloud computing -

Front End

The front end is used by the client. It contains the client-side interfaces and applications required to access cloud computing platforms. The front end includes web browsers (Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.

Back End

The back end is used by the service provider. It manages all the resources required to provide cloud computing services. It includes a huge amount of data storage, security mechanisms, virtual machines, deployment models, servers, traffic control mechanisms, etc.

Components of Cloud Computing Architecture

There are the following components of cloud computing architecture -

1. Client Infrastructure: Client infrastructure is a front-end component. It provides a GUI (Graphical User Interface) to interact with the cloud.

2. Application: The application may be any software or platform that a client wants to
access.

3. Service: A cloud service manages which type of service you access according to the client's requirement. Cloud computing offers three types of services: IaaS, PaaS, and SaaS.
4. Runtime Cloud: Runtime Cloud provides the execution and runtime environment to
the virtual machines.

5. Storage: Storage is one of the most important components of cloud computing. It provides a huge amount of storage capacity in the cloud to store and manage data.

6. Infrastructure: It provides services on the host level, application level, and network
level. Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that are
needed to support the cloud computing model.

7. Management: Management is used to manage components such as application, service, runtime cloud, storage, infrastructure, and other security issues in the back end, and to establish coordination between them.

8. Security: Security is an in-built back-end component of cloud computing. It implements a security mechanism in the back end.

9. Internet: The internet is the medium through which the front end and back end interact and communicate with each other.

CLOUD SERVICES / REFERENCE MODEL:

IAAS:

IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing platform. It allows customers to outsource their IT infrastructure, such as servers, networking, processing, storage, virtual machines, and other resources. Customers access these resources over the internet using a pay-as-per-use model. In traditional hosting services, IT infrastructure was rented out for a specific period of time, with a pre-determined hardware configuration; the client paid for the configuration and time regardless of actual use. With the IaaS cloud computing platform layer, clients can dynamically scale the configuration to meet changing requirements and are billed only for the services actually used. The IaaS layer eliminates the need for every organization to maintain its own IT infrastructure. IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies that the infrastructure resides on the customer premises. In the case of public cloud, it is located at the cloud vendor's data center, and the hybrid cloud is a combination of the two, in which the customer selects the best of both public and private cloud.
Characteristics:

✓ Resources are available as a service
✓ Services are highly scalable
✓ Dynamic and flexible
✓ GUI and API-based access
✓ Automated administrative tasks
PAAS:

Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You can purchase these applications from a cloud service provider on a pay-as-per-use basis and access them over the internet. In PaaS, back-end scalability is managed by the cloud service provider, so end-users do not need to worry about managing the infrastructure. PaaS includes infrastructure (servers, storage, and networking) and platform (middleware, development tools, database management systems, business intelligence, and more) to support the web application life cycle. Examples: Google App Engine, Force.com, Joyent, Azure. Platform as a Service is a strategy that offers a high level of abstraction to make a cloud readily programmable, in addition to infrastructure-oriented clouds that offer basic compute and storage capabilities. Developers can construct and deploy apps on a cloud platform without necessarily needing to know how many processors or how much memory their applications will use. Google App Engine, for instance, is a PaaS offering that provides a scalable environment for creating and hosting web applications.

Characteristics:

✓ Managed from a central location
✓ Hosted on a remote server
✓ Accessible over the internet
✓ Users are not responsible for hardware and software updates; updates are applied automatically
✓ The services are purchased on a pay-as-per-use basis

SAAS:

SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These services are available to end-users over the internet, so end-users do not need to install any software on their devices to access them. Software as a Service (SaaS) is a form of application delivery that relieves users of the burden of software maintenance while making development and testing easier for service providers. Applications are located at the top layer of the cloud delivery model; end customers access the services this tier offers via web portals. Because online software services provide the same functionality as locally installed programs, consumers are rapidly switching to them. Today, ILMS and other application software can be accessed via the web as a service. In terms of data access, collaboration, editing, storage, and document sharing, SaaS is unquestionably a crucial service. Email in a web browser is the most well-known and widely used example of SaaS, but SaaS applications are becoming more cooperative and advanced.
Characteristics:

✓ Managed from a central location
✓ Hosted on a remote server
✓ Accessible over the internet
✓ Users are not responsible for hardware and software updates; updates are applied automatically
✓ The services are purchased on a pay-as-per-use basis

| IaaS | PaaS | SaaS |
| --- | --- | --- |
| It provides a virtual data center to store information and create platforms for app development, testing, and deployment. | It provides virtual platforms and tools to create, test, and deploy apps. | It provides web software and apps to complete business tasks. |
| It provides access to resources such as virtual machines, virtual storage, etc. | It provides runtime environments and deployment tools for applications. | It provides software as a service to the end-users. |
| It is used by network architects. | It is used by developers. | It is used by end users. |
| IaaS provides only Infrastructure. | PaaS provides Infrastructure + Platform. | SaaS provides Infrastructure + Platform + Software. |

TYPES OF CLOUD:

Public cloud: Public clouds are managed by third parties that provide cloud services over the internet to the public; these services are available under pay-as-you-go billing models. They offer solutions for minimizing IT infrastructure costs and are a good option for handling peak loads on the local infrastructure. Public clouds are the go-to option for small enterprises, which can start their businesses without large upfront investments by relying completely on public infrastructure for their IT needs. The fundamental characteristic of public clouds is multitenancy: a public cloud is meant to serve multiple users, not a single customer, so each user requires a virtual computing environment that is separated, and most likely isolated, from other users.

Private cloud: Private clouds are distributed systems that work on private infrastructure and provide users with dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may use other schemes that track usage of the cloud and bill the different departments or sections of an enterprise proportionally. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private Cloud, Microsoft, etc.

Hybrid cloud: A hybrid cloud is an IT environment comprising several environments connected by LANs, WANs, VPNs, or APIs to form a single, unified environment. Hybrid clouds let you link many machines and combine IT assets. Finance, healthcare, and higher education are three industries that make heavy use of hybrid clouds. However, when apps can move in and out of many distinct yet connected environments, every IT system turns into a hybrid cloud. These environments must, at the very least, be derived from centralized IT resources that can scale as needed, and a platform for integrated management and orchestration must be used to manage them as a single environment.

| Parameter | Public Cloud | Private Cloud | Hybrid Cloud | Community Cloud | Multi-Cloud |
| --- | --- | --- | --- | --- | --- |
| Host | Service provider (third party) | Enterprise | Enterprise (third party) | Community (third party) | Multiple cloud providers |
| Users | General public | Selected users | Selected users | Community members | Multiple organizations |
| Access | Internet | Internet, VPN | Internet, VPN | Internet, VPN | Internet, VPN |
| Owner | Service provider | Enterprise | Enterprise | Community | Multiple organizations |
| Cost | Pay-per-usage | Infrastructure investment | Mixed (variable) | Shared cost among members | Variable depending on usage |
| Security | Provider's responsibility | Enhanced control | Varied (depends on setup) | Varied (depends on setup) | Varied (depends on setup) |
| Scalability | Highly scalable | Scalable within resources | Scalable within resources | Scalable within resources | Scalable within resources |
| Customization | Limited control | High control | Varied (depends on setup) | Varied (depends on setup) | Varied (depends on setup) |
| Resource Sharing | Not shared | Not shared | Varied (depends on setup) | Shared among community | Shared among providers |

Open challenges

1. Data security and privacy: Data security is a major concern when switching to cloud
computing. User or organizational data stored in the cloud is critical and private. Even if
the cloud service provider assures data integrity, it is your responsibility to carry out user
authentication and authorization, identity management, data encryption, and access
control. Security issues on the cloud include identity theft, data breaches, malware
infections, and a lot more which eventually decrease the trust amongst the users of your
applications.

2. Cost management: Even though almost all cloud service providers have a "Pay As You Go" model, which reduces the overall cost of the resources being used, there are times when an enterprise using cloud computing incurs huge costs. When resources are under-optimized (say, servers are not used to their full potential), hidden costs add up. Degraded application performance, sudden spikes, or overages in usage add to the overall cost, and unused resources are one of the other main reasons why costs go up.

3. Multi-cloud environment: Due to an increase in the options available to companies, enterprises not only use a single cloud but depend on multiple cloud service providers. Most of these companies use hybrid cloud tactics, and close to 84% are dependent on multiple clouds. This often ends up being difficult for the infrastructure team to manage; the process frequently becomes highly complex for the IT team due to the differences between cloud providers.

4. Performance challenges: Performance is an important factor when considering cloud-based solutions. If the performance of the cloud is not satisfactory, it can drive away users and decrease profits. Even a little latency while loading an app or a web page can result in a huge drop in the percentage of users. This latency can be a product of inefficient load balancing, which means the server cannot efficiently split the incoming traffic so as to provide the best user experience.
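The load-balancing point above can be made concrete with a minimal round-robin balancer sketch. The server names are hypothetical, and real balancers weigh health checks and load rather than rotating blindly:

```python
# Minimal round-robin load balancer: incoming requests are split evenly
# across servers so no single server becomes the latency bottleneck.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._next = cycle(servers)   # endless, even rotation over servers

    def route(self, request):
        """Pick the next server in rotation for this request."""
        return next(self._next)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
print([lb.route(f"req-{i}") for i in range(6)])
# ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

When the split is uneven (or a slow server keeps receiving its full share), requests queue behind it and the user-visible latency described above appears.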

5. Interoperability and flexibility: When an organization uses a specific cloud service provider and wants to switch to another cloud-based solution, it often turns out to be a tedious procedure, since applications written for one cloud and its application stack must be re-written for the other cloud. There is a lack of flexibility in switching from one cloud to another due to the complexities involved. Handling data movement, and setting up security and networking from scratch, also add to the issues encountered when changing cloud solutions, thereby reducing flexibility.

6. High dependence on network: Since cloud computing deals with provisioning resources in real time, it involves enormous amounts of data transfer to and from the servers, which is only made possible by the availability of a high-speed network. Because these data and resources are exchanged over the network, this can prove highly vulnerable in cases of limited bandwidth or sudden outages. Even when enterprises can cut their hardware costs, they need to ensure high internet bandwidth and zero network outages, or else it can result in potential business loss.

7. Lack of knowledge and expertise: Due to its complex nature and the high demand for research, working with the cloud often ends up being a highly tedious task requiring immense knowledge and wide expertise. Although there are many professionals in the field, they need to update themselves constantly. Cloud computing is a highly paid field due to the extensive gap between demand and supply: there are many vacancies but few talented cloud engineers, developers, and professionals. Therefore, there is a need for upskilling so these professionals can actively understand, manage, and develop cloud-based applications with minimum issues and maximum reliability.

ACTORS IN CLOUD COMPUTING:

1) Cloud Consumer: A person or organisation that maintains a business relationship with, and uses services from, cloud providers.
2) Cloud Provider: A person, organisation, or entity responsible for
making a service available to interested parties.
3) Cloud Auditor: A party that can conduct independent assessment of
cloud services, information system operations, performance and
security of the cloud implementation.
4) Cloud Carrier: An intermediary that provides connectivity and
transport of cloud services from cloud providers to cloud consumers.
5) Cloud Broker: An entity that manages the use, performance and
delivery of cloud services, and negotiates relationships between cloud
providers and cloud consumers.
Interoperability: Interoperability is defined as the capacity of at least two systems or applications to exchange data and utilize it. Cloud interoperability, in turn, is the capacity or extent to which one cloud service can connect with another by exchanging data, according to an agreed method, to obtain results.
The two crucial components in Cloud interoperability are usability and connectivity, which
are further divided into multiple layers.
• Behaviour
• Policy
• Semantic
• Syntactic
• Transport
Portability:
Portability is the process of transferring data or an application from one framework to another while keeping it executable or usable. Portability can be separated into two types: cloud data portability and cloud application portability.
Cloud data portability –
It is the capability of moving information from one cloud service to another and so on
without expecting to re-enter the data.
Cloud application portability –
It is the capability of moving an application from one cloud service to another or between
a client’s environment and a cloud service.
Categories of Cloud Computing Interoperability and portability :
The Cloud portability and interoperability can be divided into –
Data Portability –
Data portability, also termed cloud portability, refers to the transfer of data from one source or service to another, i.e. from one application to another application or from one cloud service to another cloud service, with the aim of providing a better service to the customer without affecting its usability. Moreover, it makes the cloud migration process easier.
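In practice, data portability hinges on exporting records to a provider-neutral format and importing them elsewhere without re-entering anything. The sketch below uses invented service schemas and field names purely for illustration:

```python
import json

# Hypothetical source service: holds user records in its own internal shape.
source_records = [
    {"id": 1, "name": "Alice", "mail": "alice@example.com"},
    {"id": 2, "name": "Bob", "mail": "bob@example.com"},
]

def export_to_neutral(records):
    """Export records to a provider-neutral JSON document."""
    return json.dumps({"version": 1, "users": records})

def import_from_neutral(document):
    """Import the neutral document into a second service's schema
    (it renames 'mail' to 'email') without re-entering any data."""
    payload = json.loads(document)
    return [{"id": u["id"], "name": u["name"], "email": u["mail"]}
            for u in payload["users"]]

migrated = import_from_neutral(export_to_neutral(source_records))
```

The neutral document is what makes migration repeatable: any service that understands the format can consume it, regardless of how either side stores data internally.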
Application Portability –
It enables re-use of application components across different cloud PaaS services. If the components are tied to a specific cloud service provider, application portability can be a difficult task for the enterprise; but if the components are not platform-specific, porting to another platform is easy and effortless.
Platform Portability –
There are two types of platform portability: platform source portability and machine image portability. With platform source portability, for example, a UNIX OS written mostly in C can be ported by re-compiling it on different hardware and re-writing the hardware-dependent sections that are not coded in C. Machine image portability instead bundles the application together with its platform and ports the resulting bundle, which requires a standard program representation.
Application Interoperability –
It is the interoperability between deployed components of an application deployed in a
system. Generally, applications that are built on the basis of design principles show better
interoperability than those which are not.
Platform Interoperability –
It is the interoperability between deployed components of platforms deployed in a system.
It is an important aspect, as application interoperability can’t be achieved without platform
interoperability.
Management Interoperability –
Here, the Cloud services like SaaS, PaaS or IaaS and applications related to self-service
are assessed. It would be pre-dominant as Cloud services are allowing enterprises to work-
in-house and eradicate dependency from third parties.
Publication and Acquisition Interoperability –
Generally, it is the interoperability between various platforms like PaaS services and the
online marketplace.
Unit 3:
Xaas:
XaaS is an acronym that stands for "Anything as a Service". It refers to the delivery of
various services, such as software, infrastructure, and platform services, over the internet,
on a subscription basis. The idea behind XaaS is to allow organizations to access the
services they need, when they need them, without having to make a large upfront
investment in hardware, software, or IT infrastructure.
Storage as a service:
✓ Storage as a Service (STaaS, so abbreviated to distinguish it from Software as a Service) is a cloud business model in which a company leases or rents its storage infrastructure to another company or to individuals to store data.
✓ Small companies and individuals often find this a convenient way to manage backups, saving on personnel, hardware, and physical space.
✓ As an alternative to storing magnetic tapes offsite in a vault, IT administrators meet their storage and backup needs through Service Level Agreements (SLAs) with an STaaS provider, usually on a cost-per-gigabyte-stored and cost-per-data-transferred basis. The client transfers the data meant for storage to the service provider on a set schedule over the provider's wide area network or over the Internet.
✓ The storage provider supplies the client with the software required to access the stored data. Clients use the software to perform standard storage tasks, including data transfers and data backups. Corrupted or lost company data can easily be restored.
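The cost-per-gigabyte-stored plus cost-per-data-transferred billing model can be sketched as a simple function. The rates below are invented for illustration and are not any provider's actual pricing:

```python
def monthly_storage_bill(gb_stored, gb_transferred,
                         rate_stored=0.02, rate_transfer=0.09):
    """Compute a monthly bill under a cost-per-GB-stored plus
    cost-per-GB-transferred model. Rates are illustrative only."""
    return round(gb_stored * rate_stored + gb_transferred * rate_transfer, 2)

# Example month: 500 GB kept in storage, 100 GB moved over the network.
bill = monthly_storage_bill(500, 100)
```

Because both terms are metered, a client that keeps data at rest but rarely transfers it pays mostly the storage term, which is why archive tiers price storage low and transfers high.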
Process as a service:
✓ Business Process as a Service (BPaaS) is a cloud-based delivery model in which a service provider manages the operations and processes of a business organization on behalf of the client.
✓ This includes activities such as HR management, financial management, customer service, and other business operations.
✓ The service provider uses its technology and expertise to automate, streamline, and manage these processes, freeing up the client's time and resources to focus on other areas of the business.
✓ The BPaaS model allows businesses to outsource specific business processes, reducing the cost and effort associated with managing them in-house and improving overall efficiency and productivity.
Database as a service:
✓ Database as a Service (DBaaS) is a cloud business model in which a company leases
or rents Database services to another company or individuals to store their data.
✓ Database-as-a-Service (DBaaS) is the fastest growing cloud service.
✓ The term "Database-as-a-Service" (DBaaS) refers to software that enables users to provision, manage, consume, configure, and operate database software through a common set of abstractions (primitives), without having to know or care about the exact implementation of those abstractions for the specific database software.
✓ Database-as-a-Service (DBaaS) is a cloud computing service model in which a third-
party provider hosts and manages the infrastructure and maintenance of a
customer's database. This eliminates the need for the customer to manage their own
hardware and software, freeing up resources and reducing costs.
✓ The provider manages the database servers, backup and recovery, and security,
allowing customers to access and use their data through APIs or web interfaces.
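The "common set of abstractions" idea can be sketched as a tiny provision/execute/deprovision interface. Here sqlite3 stands in for the managed engine, and the class and method names are invented purely for illustration:

```python
import sqlite3

class TinyDBaaS:
    """Sketch of the DBaaS primitives: provision, operate, and consume a
    database through one abstraction that hides the engine behind it."""

    def __init__(self):
        self._instances = {}

    def provision(self, name):
        # A real DBaaS would allocate servers and storage; this uses
        # an in-memory SQLite database as a stand-in.
        self._instances[name] = sqlite3.connect(":memory:")
        return name

    def execute(self, name, sql, params=()):
        conn = self._instances[name]
        cur = conn.execute(sql, params)
        conn.commit()
        return cur.fetchall()

    def deprovision(self, name):
        self._instances.pop(name).close()

svc = TinyDBaaS()
svc.provision("crm")
svc.execute("crm", "CREATE TABLE leads (id INTEGER, name TEXT)")
svc.execute("crm", "INSERT INTO leads VALUES (?, ?)", (1, "Acme"))
rows = svc.execute("crm", "SELECT name FROM leads")
svc.deprovision("crm")
```

The client code above never touches server configuration, files, or backups; it only sees the primitives, which is exactly the separation DBaaS sells.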
Information as a service:
✓ Information as a Service (INaaS, so abbreviated to distinguish it from Infrastructure as a Service) is a model in which information or data is provided as a service over the internet. The service gives access to a wide range of information such as news, weather, financial data, and market trends, among others. INaaS offers a centralized platform for users to access and manage the information they need, without the need for software or hardware installations.
✓ INaaS is designed to be flexible, scalable, and cost-effective, making it a popular choice for organizations of all sizes. The data is stored on cloud-based servers and accessed through a secure web portal. The service provider is responsible for managing and maintaining the infrastructure, security, and performance of the platform.
✓ With INaaS, organizations can reduce the costs associated with data management and gain access to the latest information in real time. This can help them make more informed decisions and stay ahead of their competitors. The services can be customized to the specific needs of the organization, and the data can be accessed from anywhere, at any time, through any device with internet access.
Integration as a service
✓ Integration as a Service (iPaaS) in the cloud refers to a cloud-based platform
for integrating different applications, services, and data sources. It provides
a centralized solution for connecting, integrating, and managing various
systems, which helps organizations automate business processes and
streamline data exchange across the enterprise. iPaaS enables users to build,
deploy, and manage integrations without the need for extensive technical
expertise or on-premise infrastructure.
✓ Some common use cases for iPaaS in the cloud include data migration, data
integration, system integration, application integration, and business process
automation. iPaaS solutions typically provide a range of connectivity options,
including APIs, pre-built connectors, and custom code options. Additionally,
many iPaaS solutions offer monitoring, management, and security features
that help organizations manage and secure their integration environments.
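A pre-built connector in an iPaaS boils down to a declarative field mapping plus a transformation step that moves records between systems. The sketch below uses invented CRM and ERP field names:

```python
# Hypothetical "pre-built connector": a declarative field mapping between
# a CRM record and an ERP record, the kind of glue an iPaaS automates.
CRM_TO_ERP = {"full_name": "customer_name", "mail": "contact_email"}

def run_connector(record, mapping):
    """Translate one system's record into another system's schema."""
    return {dst: record[src] for src, dst in mapping.items()}

erp_record = run_connector(
    {"full_name": "Ada Lovelace", "mail": "ada@example.com"},
    CRM_TO_ERP,
)
```

Because the mapping is data rather than code, an iPaaS can ship a catalog of such connectors and let users wire systems together without writing integration logic by hand.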
Testing as a service
✓ Testing as a Service is an outsourcing model, in which testing activities are
outsourced to a third party that specializes in simulating real world testing
environments as per client requirements. It is also abbreviated as TaaS.
Types of TaaS
Functional Testing.
Performance Testing.
Security Testing.
✓ Functional Testing as a Service: TaaS functional testing may include UI/GUI testing, regression, integration, and automated User Acceptance Testing (UAT), though these are not necessarily all part of functional testing.
✓ Performance Testing as a Service: Multiple users access the application at the same time. TaaS mimics a real-world user environment by creating virtual users and performing load and stress tests.
✓ Security Testing as a Service: TaaS scans applications and websites for any vulnerability.
Features of TaaS:
✓ Self-service portal for running functional and load tests against an application.
✓ Test library with full security controls that stores all the test assets available to end users.
✓ Sharing of cloud hardware to maximize hardware utilization.
✓ On-demand availability of complete test labs, including the ability to deploy complex multi-tier applications, test scripts, and test tools.
✓ Monitoring of the application under test to detect bottlenecks and solve problems.
✓ Metering capabilities that allow tracking of, and charging for, the services used by each customer.
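The virtual-user idea behind performance testing as a service can be sketched with threads standing in for concurrent users. The target function, user count, and request count are all invented for illustration:

```python
import threading

def simulate_load(target, n_users, requests_per_user):
    """Spawn 'virtual users' as threads, each issuing requests against the
    system under test, the way a TaaS performance test mimics real traffic."""
    results = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(requests_per_user):
            ok = target()          # one simulated request
            with lock:             # guard the shared result list
                results.append(ok)

    threads = [threading.Thread(target=virtual_user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results), len(results)

# Stand-in for an application endpoint that always succeeds.
passed, total = simulate_load(lambda: True, n_users=5, requests_per_user=10)
```

A real TaaS platform does the same thing at much larger scale, distributing the virtual users across cloud machines and recording latency as well as pass/fail counts.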
Scaling of Cloud:
Cloud scalability in cloud computing refers to increasing or decreasing IT resources as needed to meet changing demand. Scalability is one of the hallmarks of the cloud and the primary driver of its explosive popularity with businesses. Data storage capacity, processing power, and networking can all be increased using existing cloud computing infrastructure, and scaling can be done quickly and easily, usually without any disruption or downtime. Third-party cloud providers already have the entire infrastructure in place; in the past, when scaling up with on-premises physical infrastructure, the process could take weeks or months and require exorbitant expense. This is one of the most popular and beneficial features of cloud computing: businesses can scale up or down to meet demand depending on the season, projects, growth, and so on. By implementing cloud scalability, you enable your resources to grow as your traffic or organization grows, and vice versa. If your business needs more data storage capacity or processing power, you want a system that scales easily and quickly; cloud computing solutions can do just that, which is why the market has grown so much. Using existing cloud infrastructure, third-party cloud vendors can scale with minimal disruption.
Types of scaling:
o Vertical scalability (scale-up)
o Horizontal scalability
o Diagonal scalability
Vertical Scaling

To understand vertical scaling, imagine a 20-story hotel. There are innumerable rooms
inside this hotel from where the guests keep coming and going. Often there are spaces
available, as not all rooms are filled at once. People can move easily as there is space for
them. As long as the capacity of this hotel is not exceeded, no problem. This is vertical
scaling. With computing, you can add or subtract resources, including memory or storage,
within the server, as long as the resources do not exceed the capacity of the machine.
Although it has its limitations, it is a way to improve your server and avoid latency and
extra management. Like in the hotel example, resources can come and go easily and
quickly, as long as there is room for them.
Horizontal Scaling

Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars travel
smoothly in each direction without major traffic problems. But then the area around the
highway develops - new buildings are built, and traffic increases. Very soon, this two-lane
highway is filled with cars, and accidents become common. Two lanes are no longer
enough. To avoid these issues, more lanes are added, and an overpass is constructed.
Although it takes a long time, it solves the problem. Horizontal scaling refers to adding
more servers to your network, rather than simply adding resources like with vertical
scaling. This method tends to take more time and is more complex, but it allows you to
connect servers together, handle traffic efficiently and execute concurrent workloads.
Diagonal Scaling

Diagonal scaling is a mixture of horizontal and vertical scalability, in which resources are added both vertically and horizontally, allowing the most efficient infrastructure scaling. When you combine vertical and horizontal scaling, you simply grow within your existing server until you hit its capacity. Then you clone that server as necessary and continue the process, allowing you to handle a large number of requests and a lot of traffic concurrently.
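The grow-up-then-clone behaviour of diagonal scaling can be captured in a toy decision function. The capacity units and the vertical limit are invented for illustration:

```python
def diagonal_scale(demand, per_server_capacity, max_vertical=4):
    """Diagonal scaling sketch: grow one server vertically (up to
    max_vertical capacity units) until it hits its ceiling, then clone
    fully scaled-up servers horizontally to cover remaining demand."""
    unit = per_server_capacity
    if demand <= max_vertical * unit:
        # Vertical only: one server, sized just large enough for demand.
        size = -(-demand // unit)  # ceiling division
        return {"servers": 1, "units_per_server": size}
    # Horizontal: clone servers that are already at their vertical limit.
    servers = -(-demand // (max_vertical * unit))
    return {"servers": servers, "units_per_server": max_vertical}

small = diagonal_scale(demand=3, per_server_capacity=1)   # fits one server
large = diagonal_scale(demand=10, per_server_capacity=1)  # needs clones
```

Real autoscalers add hysteresis and cooldown periods so they do not flap between sizes, but the ordering (vertical first, horizontal once the ceiling is hit) is the essence of the diagonal approach.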
Unit 4

Aneka is a cloud computing middleware framework developed by Manjrasoft,


which is designed to simplify the development and execution of parallel and
distributed applications. It provides a platform for the deployment and
management of applications on cloud infrastructure. Here is an overview of the
key components and architecture of the Aneka framework:

1. Application Model:
• Components: Applications in Aneka are composed of tasks, which are the
basic units of computation. Tasks are submitted to the Aneka framework for
execution.
2. Task Execution Environment:
• Components: Aneka provides a runtime environment for executing tasks,
including libraries, APIs, and execution policies.
• Responsibilities: Manages the execution of tasks, ensuring parallelism and
distributed computing.
3. Resource Manager:
• Components: Aneka includes a Resource Manager responsible for
managing and allocating resources across the cloud infrastructure.
• Responsibilities: Monitors the availability and performance of resources,
dynamically allocating resources based on application requirements.
4. Communication Middleware:
• Components: A communication middleware facilitates communication
between tasks and resources in a distributed environment.
• Responsibilities: Ensures efficient and reliable communication among
tasks and between different nodes in the Aneka framework.
5. Task Scheduling and Load Balancing:
• Components: Aneka incorporates algorithms for task scheduling and load
balancing.
• Responsibilities: Optimizes the distribution of tasks across available
resources to maximize efficiency and minimize execution time.
6. Service-Oriented Architecture (SOA):
• Components: Aneka is designed with a service-oriented architecture,
allowing developers to expose their applications as services.
• Responsibilities: Enables the development of scalable and modular
applications using a service-oriented approach.
7. Elasticity and Auto-Scaling:
• Components: Aneka supports elasticity and auto-scaling features.
• Responsibilities: Allows the Aneka framework to dynamically scale
resources up or down based on demand, optimizing resource utilization.
8. Security:
• Components: Security features are integrated to ensure the protection of
data and resources.
• Responsibilities: Manages access control, authentication, and encryption
to secure the execution environment.
9. Cloud Integration:
• Components: Aneka can integrate with various cloud providers and
platforms.
• Responsibilities: Facilitates the deployment of applications on public,
private, or hybrid cloud environments.
10. Developer Tools:
• Components: Aneka provides tools and APIs for developers to build, deploy,
and manage applications.
• Responsibilities: Aids developers in creating parallel and distributed
applications efficiently.
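The task scheduling and load balancing responsibility in point 5 can be sketched with a simple least-loaded policy. This is illustrative only, not Aneka's actual algorithm, and all names are invented:

```python
def schedule(tasks, workers):
    """Assign each (task, cost) pair to the currently least-loaded worker,
    a basic load-balancing heuristic for distributing work across nodes."""
    load = {w: 0 for w in workers}
    placement = {}
    for task, cost in tasks:
        worker = min(load, key=load.get)  # pick the least-loaded worker
        placement[task] = worker
        load[worker] += cost
    return placement, load

placement, load = schedule(
    tasks=[("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1)],
    workers=["w1", "w2"],
)
```

Even this greedy heuristic balances the example workload evenly across the two workers; production schedulers extend the idea with task priorities, reservations, and resource heterogeneity.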
The services of Aneka are classified into three major categories:

o Fabric Services
o Foundation Services
o Application Services

1. Fabric Services:

Fabric Services define the lowest level of the software stack that represents multiple containers. They provide access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.

2. Foundation Services:

Foundation Services are the core services of the Aneka Cloud and define the infrastructure management features of the system. They are concerned with the logical management of the distributed system built on top of the infrastructure and provide ancillary services for delivering applications.

3. Application Services:

Application services manage the execution of applications and constitute a layer that
varies according to the specific programming model used to develop distributed
applications on top of Aneka.
Logical organization of aneka:
• The logical organization of Aneka Clouds can be very diverse, since it strongly
depends on the configuration selected for each of the container instances belonging
to the Cloud. The most common scenario is to use a master-worker configuration
with separate nodes for storage.
• The master node features all the services that are most likely to be present in one
single copy and that provide the intelligence of the Aneka Cloud.
• A common configuration of the master node is as follows:
o Index Service (master copy)
o Heartbeat Service
o Logging Service
o Reservation Service
o Resource Provisioning Service
o Accounting Service
o Reporting and Monitoring Service
• The master node also provides connection to an RDBMS facility where the state
of several services is maintained. For the same reason, all the scheduling services
are maintained in the master node. They share the application store that is
normally persisted on the RDBMS in order to provide a fault-tolerant
infrastructure.
• The worker nodes constitute the workforce of the Aneka Cloud and are generally
configured for the execution of applications.
• Storage nodes are optimized to provide storage support to applications. They
feature, among the mandatory and usual services, the presence of the Storage
Service. The number of storage nodes strictly depends on the predicted workload
and storage consumption of applications. Storage nodes mostly reside on machines
that have considerable disk space to accommodate a large quantity of files.

Infrastructure Organization:
Infrastructure organization refers to the structured management and
arrangement of physical and virtual components that make up the underlying
foundation of an IT environment. This organization is crucial for ensuring
that IT resources are efficiently utilized, maintained, and optimized to
support the overall goals and operations of an organization
Private cloud deployment mode: A private deployment mode is mostly constituted by local physical resources and infrastructure management software providing access to a local pool of nodes, which might be virtualized. In this scenario Aneka Clouds are created by harnessing a heterogeneous pool of resources such as desktop machines, clusters, or workstations. These resources can be partitioned into different groups, and Aneka can be configured to leverage them according to application needs. Moreover, leveraging the Resource Provisioning Service, it is possible to integrate virtual nodes provisioned from a local resource pool managed by systems such as XenServer, Eucalyptus, and OpenStack.
Public cloud deployment mode: Public Cloud deployment mode features
the installation of Aneka master and worker nodes over a completely
virtualized infrastructure that is hosted on the infrastructure of one or more
resource providers such as Amazon EC2 or GoGrid. In this case it is possible
to have a static deployment where the nodes are provisioned beforehand and
used as though they were real machines. This deployment merely replicates
a classic Aneka installation on a physical infrastructure without any dynamic
provisioning capability. More interesting is the use of the elastic features of
IaaS providers and the creation of a Cloud that is completely dynamic.
Hybrid cloud deployment mode: The hybrid deployment model constitutes
the most common deployment of Aneka. In many cases, there is an existing
computing infrastructure that can be leveraged to address the computing
needs of applications. This infrastructure will constitute the static
deployment of Aneka that can be elastically scaled on demand when
additional resources are required. This scenario constitutes the most
complete deployment for Aneka that is able to leverage all the capabilities of
the framework: • Dynamic Resource Provisioning • Resource Reservation •
Workload Partitioning Accounting. Monitoring, and Reporting
AWS : Amazon Web Services (AWS) is a cloud computing platform that provides a
wide range of services to help organizations and businesses build, deploy, and
manage applications and services in the cloud. It is one of the leading cloud
computing platforms and is used by many organizations and businesses worldwide.
With AWS, you only pay for the services you use, and you can easily scale up or
down as your needs change. Additionally, AWS provides a highly secure and
reliable infrastructure, with multiple layers of security and compliance built in.
AWS offers a variety of services, including compute, storage, databases,
networking, security, analytics, machine learning, mobile, and application services.
Amazon Web Services offers a wide range of different business purpose global
cloud-based products. The products include storage, databases, analytics,
networking, mobile, development tools, and enterprise applications, with a pay-as-you-go pricing model.
AWS Compute Services: Here are the cloud compute services offered by Amazon:
1. EC2 (Elastic Compute Cloud)- EC2 is a virtual machine in the cloud over which you have OS-level control. You can run this cloud server whenever you want.
2. LightSail- This cloud computing tool automatically deploys and manages the
computer, storage, and networking capabilities required to run your applications.
3. AWS Lambda- This AWS service allows you to run functions in the cloud. The tool is a big cost saver, as you pay only when your functions execute.
Storage services:
1. Amazon Glacier- It is an extremely low-cost storage service. It offers secure and
fast storage for data archiving and backup.
2. Amazon Elastic Block Store (EBS)- It provides block-level storage to use with
Amazon EC2 instances. Amazon Elastic Block Store volumes are network-attached
and remain independent from the life of an instance.
3. AWS Storage Gateway- This AWS service connects on-premises software applications with cloud-based storage. It offers secure integration between the company's on-premises storage and AWS's storage infrastructure.
Database Services:
1. Amazon RDS- This database AWS service makes it easy to set up, operate, and scale a relational database in the cloud.
2. Amazon DynamoDB- It is a fast, fully managed NoSQL database service. It is a simple service which allows cost-effective storage and retrieval of data, and it can serve any level of request traffic.
Advantages of AWS:
1. Cost Savings: Pay-as-you-go pricing model, allowing you to only pay for the
resources you use, rather than investing in expensive hardware.
2. Scalability: The ability to quickly and easily scale up or down as your computing
needs change.
3. Reliability: Built on a global infrastructure with multiple data centers and
redundant systems, ensuring high availability and reliability.
4. Flexibility: A wide range of services, making it easy to integrate with existing
systems and workflows.
Google App engine:


Google App Engine (GAE) is a platform for building scalable web applications and
mobile backends. It is a fully managed platform, which means that Google takes
care of the infrastructure and handles tasks such as resource allocation, load
balancing, and automatic scaling. This allows developers to focus on writing code,
without having to worry about the underlying infrastructure.
With GAE, developers can build and deploy applications using a variety of
programming languages, including Python, Java, PHP, and Go. It also includes a
set of APIs and services for common tasks, such as data storage, user
authentication, and task scheduling. GAE integrates with other Google Cloud
Platform services, providing a unified solution for building, deploying, and
managing applications.
Features of Google App Engine:
1. Collection of Development Languages and Tools: The App Engine supports numerous programming languages for developers and offers the flexibility to import libraries and frameworks through Docker containers. You can develop and test an app locally using the SDK tools before deploying it. Every language has its own SDK and runtime. Some of the languages offered include Python, PHP, .NET, Java, Ruby, C#, Go, and Node.js.
2. Fully Managed: Google allows you to add your web application code to the platform while managing the infrastructure for you. The engine ensures that your web apps are secure and running, and protects them from malware and threats by enabling its firewall.
3. Pay-as-you-Go: The app engine works on a pay-as-you-go model, i.e., you only pay for what you use. The app engine automatically scales up resources when application traffic picks up, and vice versa.
4. Traffic Splitting: The app engine automatically routes incoming traffic to different versions of an app as part of A/B testing. You can plan consecutive increments based on the app's best-performing version.
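Weighted traffic splitting is often done by hashing a stable identifier so a given user consistently lands on the same version. A minimal sketch of that idea follows; the version names and weights are illustrative, not App Engine's actual mechanism:

```python
import hashlib

def route_version(user_id, split=(("v1", 90), ("v2", 10))):
    """Deterministically route a user to an app version: hash the user id
    into a 0-99 bucket, then walk the cumulative weights. With the split
    above, roughly 90% of users see v1 and 10% see v2."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for version, weight in split:
        cumulative += weight
        if bucket < cumulative:
            return version
    return version  # fallback if weights sum to less than 100

# The same user always lands on the same version across requests:
v_first = route_version("user-42")
v_again = route_version("user-42")
```

Hash-based routing matters for A/B testing because a user who flips between versions mid-session would contaminate the comparison between them.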
Microsoft Azure: It is a comprehensive cloud computing platform that
provides a variety of services, allowing users to build, deploy, and manage
applications and services through a global network of data centers. Here are
some core concepts and components of Microsoft Azure:
1. Azure Resource Manager (ARM):


• Definition: Azure Resource Manager is the deployment and
management service for Azure. It provides a consistent management
layer that enables you to create, update, and delete resources in your
Azure subscription.
2. Azure Subscription:
• Definition: An Azure subscription is a logical container used to
provision and manage Azure resources. It is linked to an Azure
account and billing information.
3. Azure Services:
• Definition: Azure provides a wide range of services, including
computing, storage, databases, networking, AI, and more. These
services are categorized into different Azure products.
4. Azure Virtual Machines (VMs):
• Definition: Azure VMs are scalable computing resources that run
Windows or Linux virtual machines in the cloud.
5. Azure Blob Storage:
• Definition: Azure Blob Storage is a scalable object storage solution for
the cloud. It is used to store and retrieve large amounts of
unstructured data.
6. Azure SQL Database:
• Definition: Azure SQL Database is a fully managed relational
database service in the cloud.
7. Azure Virtual Network:
• Definition: Azure Virtual Network allows you to create private,
isolated networks in the Azure cloud.
8. Azure Active Directory (AAD):
• Definition: Azure Active Directory is Microsoft's cloud-based identity
and access management service.
9. Azure App Service:
• Definition: Azure App Service is a fully managed platform for building,
deploying, and scaling web apps.
10. Azure Marketplace:
• Definition: The Azure Marketplace is an online store that offers a wide
range of applications and services from Microsoft and third-party
vendors.
CRM and ERP in Cloud Computing –
• What is CRM?
CRM stands for Customer Relationship Management; it is software hosted in the cloud so that users can access the information over the internet. CRM software provides a high level of security and scalability to its users and can easily be used on mobile phones to access the data.
Nowadays, many business vendors and service providers use CRM software to manage resources so that users can access them via the internet. Moving business computation from the desktop to the cloud is proving a beneficial step in both IT and non-IT fields. Some of the major CRM vendors include Oracle Siebel, Mothernode CRM, Microsoft Dynamics CRM, Infor CRM, SAGE CRM, and NetSuite CRM.

What is ERP?
ERP is an abbreviation for Enterprise Resource Planning; like CRM, it is software hosted on cloud servers that helps enterprises manage and manipulate their business data as per their needs and user requirements. ERP software follows a pay-per-use payment model: at the end of the month, the enterprise pays an amount according to the cloud resources it has utilized. Various ERP vendors are available, such as Oracle, SAP, Epicor, SAGE, Microsoft Dynamics, Lawson Software, and many more.
