
Unit 5

SOA and cloud


Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and integration. It defines a way to make software components
reusable through interfaces.
Service-oriented architecture (SOA) is a method of software development that uses
software components called services to create business applications. Each service
provides a business capability, and services can also communicate with each other
across platforms and languages. Developers use SOA to reuse services in different
systems or combine several independent services to perform complex tasks.
For example, multiple business processes in an organization require the user
authentication functionality. Instead of rewriting the authentication code for all business
processes, you can create a single authentication service and reuse it for all
applications. Similarly, almost all systems across a healthcare organization, such as
patient management systems and electronic health record (EHR) systems, need to
register patients. These systems can call a single, common service to perform the
patient registration task.
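The idea of a single reusable authentication service can be sketched in a few lines of Python. This is an illustrative sketch only; the class and function names are hypothetical, not part of any real system:

```python
# Hypothetical sketch: one authentication service reused by several
# business applications instead of duplicating the logic in each.

class AuthService:
    """Shared service exposing one business capability: authentication."""
    def __init__(self):
        self._users = {"alice": "s3cret"}  # illustrative user store

    def authenticate(self, username, password):
        return self._users.get(username) == password

# Two independent "applications" reuse the same service interface.
def patient_portal_login(auth, user, pw):
    return "welcome" if auth.authenticate(user, pw) else "denied"

def billing_system_login(auth, user, pw):
    return "welcome" if auth.authenticate(user, pw) else "denied"

auth = AuthService()
print(patient_portal_login(auth, "alice", "s3cret"))  # welcome
print(billing_system_login(auth, "alice", "wrong"))   # denied
```

Both applications depend only on the service's interface (`authenticate`), so the authentication logic can change without touching either consumer.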

What are the benefits of service-oriented architecture?


Service-oriented architecture (SOA) has several benefits over the traditional monolithic
architectures in which all processes run as a single unit. Some major benefits of SOA
include the following:
Faster time to market
Developers reuse services across different business processes to save time and costs.
They can assemble applications much faster with SOA than by writing code and
performing integrations from scratch.
Efficient maintenance
It’s easier to create, update, and debug small services than large code blocks in
monolithic applications. Modifying any service in SOA does not impact the overall
functionality of the business process.
Greater adaptability
SOA is more adaptable to advances in technology. You can modernize your
applications efficiently and cost effectively. For example, healthcare organizations can
use the functionality of older electronic health record systems in newer cloud-based
applications.

● SOA allows users to combine a large number of facilities from existing
services to form applications.
● SOA encompasses a set of design principles that structure system
development and provide means for integrating components into a coherent
and decentralized system.
● SOA-based computing packages functionalities into a set of interoperable
services, which can be integrated into different software systems belonging to
separate business domains.
There are two major roles within Service-oriented Architecture:
1. Service provider: The service provider is the maintainer of the service and
the organization that makes available one or more services for others to use.
To advertise services, the provider can publish them in a registry, together
with a service contract that specifies the nature of the service, how to use it,
the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in
the registry and develop the required client components to bind and use the
service.

Services might aggregate information and data retrieved from other services or
create workflows of services to satisfy the request of a given service consumer.
This practice is known as service orchestration. Another important interaction
pattern is service choreography, which is the coordinated interaction of services
without a single point of control.
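Service orchestration can be sketched as a central coordinator that calls independent services and combines their results. The services and field names below are hypothetical stubs for illustration:

```python
# Hypothetical sketch of service orchestration: a central coordinator
# invokes independent services and aggregates their results into one
# workflow that satisfies a single consumer request.

def inventory_service(item):
    return {"item": item, "in_stock": True}   # stubbed service

def pricing_service(item):
    return {"item": item, "price": 19.99}     # stubbed service

def order_orchestrator(item):
    # Single point of control: the orchestrator decides the call order
    # and combines the responses from the individual services.
    stock = inventory_service(item)
    price = pricing_service(item)
    return {"item": item,
            "available": stock["in_stock"],
            "total": price["price"]}

print(order_orchestrator("modem"))
```

In choreography, by contrast, there would be no `order_orchestrator`: each service would react to messages from its peers without any central controller.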
Components of SOA:

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service
description documents.
2. Loose coupling: Services are designed as self-contained components and
maintain relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and
description documents. They hide their logic, which is encapsulated within
their implementation.
4. Reusability: Designed as components, services can be reused more
effectively, thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a
service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that
constitute supplemental metadata through which they can be effectively
discovered. Service discovery provides an effective means for utilizing third-
party resources.
7. Composability: Using services as building blocks, sophisticated and complex
operations can be implemented. Service orchestration and choreography
provide a solid support for composing services and achieving business goals.
Advantages of SOA:
● Service reusability: In SOA, applications are made from existing services.
Thus, services can be reused to make many applications.
● Easy maintenance: As services are independent of each other they can be
updated and modified easily without affecting other services.
● Platform independent: SOA allows making a complex application by
combining services picked from different sources, independent of the
platform.
● Availability: SOA services are easily available to anyone on request.
● Reliability: SOA applications are more reliable because it is easier to debug
small services than huge codebases.
● Scalability: Services can run on different servers within an environment, which
increases scalability.
Disadvantages of SOA:
● High overhead: Input parameters are validated every time services interact,
which decreases performance by increasing load and response time.
● High investment: A huge initial investment is required for SOA.

● Complex service management: When services interact, they exchange
messages to complete tasks. The number of messages may run into the millions,
and handling such a large volume of messages becomes cumbersome.

OLAP (Online Analytical Processing) Servers

Online Analytical Processing (OLAP) servers are based on the multidimensional
data model. They allow managers and analysts to gain insight into information
through fast, consistent, and interactive access.

Types of OLAP Servers


We have four types of OLAP servers:

● Relational OLAP (ROLAP)


● Multidimensional OLAP (MOLAP)
● Hybrid OLAP (HOLAP)
● Specialized SQL Servers
Relational OLAP
ROLAP servers are placed between the relational back-end server and client
front-end tools. To store and manage warehouse data, ROLAP uses a relational
or extended-relational DBMS.
ROLAP includes the following:

● Implementation of aggregation navigation logic.


● Optimization for each DBMS back end.
● Additional tools and services.

Multidimensional OLAP
MOLAP uses array-based multidimensional storage engines for multidimensional views
of data. With multidimensional data stores, the storage utilization may be low if the data
set is sparse. Therefore, many OLAP servers use two levels of data storage
representation to handle dense and sparse data sets.

Hybrid OLAP
Hybrid OLAP is a combination of both ROLAP and MOLAP. It offers the higher
scalability of ROLAP and the faster computation of MOLAP. HOLAP servers allow
the storage of large volumes of detailed data. The aggregations are stored
separately in the MOLAP store.

Specialized SQL Servers


Specialized SQL servers provide advanced query language and query processing
support for SQL queries over star and snowflake schemas in a read-only environment.

OLAP Operations
Since OLAP servers are based on a multidimensional view of data, we will discuss
OLAP operations in multidimensional data.
Here is the list of OLAP operations:

● Roll-up
● Drill-down
● Slice and dice
● Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in either of the following ways:

● By climbing up a concept hierarchy for a dimension


● By dimension reduction

The following diagram illustrates how roll-up works.

● Roll-up is performed by climbing up a concept hierarchy for the dimension
location.
● Initially the concept hierarchy was "street < city < province < country".
● On rolling up, the data is aggregated by ascending the location hierarchy from the
level of city to the level of country.
● The data is grouped into countries rather than cities.
● When roll-up is performed, one or more dimensions from the data cube are
removed.
Drill-down
Drill-down is the reverse operation of roll-up. It is performed in either of the following ways:

● By stepping down a concept hierarchy for a dimension


● By introducing a new dimension.

The following diagram illustrates how drill-down works

● Drill-down is performed by stepping down a concept hierarchy for the dimension
time.
● Initially the concept hierarchy was "day < month < quarter < year."

● On drilling down, the time dimension is descended from the level of quarter to the
level of month.
● When drill-down is performed, one or more dimensions from the data cube are
added.
● It navigates the data from less detailed data to highly detailed data.
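Roll-up can be sketched on a toy cube stored as (city, quarter, sales) tuples, with a hypothetical city-to-country concept hierarchy; drill-down would simply keep the finer city level instead:

```python
# Hypothetical sketch of roll-up: sales facts are aggregated up the
# location hierarchy (city -> country). Drill-down is the reverse:
# descending to the finer city level again.
from collections import defaultdict

CITY_TO_COUNTRY = {"Toronto": "Canada", "Vancouver": "Canada",
                   "Chicago": "USA"}  # illustrative concept hierarchy

facts = [  # (city, quarter, sales)
    ("Toronto", "Q1", 100), ("Vancouver", "Q1", 150), ("Chicago", "Q1", 200),
]

def roll_up(rows):
    """Aggregate the location dimension from city level to country level."""
    totals = defaultdict(int)
    for city, quarter, sales in rows:
        totals[(CITY_TO_COUNTRY[city], quarter)] += sales
    return dict(totals)

print(roll_up(facts))  # {('Canada', 'Q1'): 250, ('USA', 'Q1'): 200}
```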
Slice
The slice operation selects one particular dimension from a given cube and provides a
new sub-cube. Consider the following diagram that shows how slice works.

● Here slice is performed for the dimension "time" using the criterion time = "Q1".
● It forms a new sub-cube by fixing a single value on that dimension.
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube.
Consider the following diagram that shows the dice operation.
The dice operation on the cube based on the following selection criteria involves three
dimensions.

● (location = "Toronto" or "Vancouver")
● (time = "Q1" or "Q2")
● (item = "Mobile" or "Modem")
Pivot
The pivot operation is also known as rotation. It rotates the data axes in view in order to
provide an alternative presentation of data. Consider the following diagram that shows
the pivot operation.
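The three remaining operations can be sketched on a cube stored as a list of (location, time, item, sales) facts. The data values are hypothetical, chosen to mirror the selection criteria above:

```python
# Hypothetical sketch of slice, dice, and pivot over a toy cube.
facts = [
    ("Toronto", "Q1", "Mobile", 100), ("Toronto", "Q2", "Modem", 50),
    ("Vancouver", "Q1", "Modem", 80), ("Chicago", "Q3", "Mobile", 60),
]

# Slice: fix a single value on one dimension (time = "Q1").
slice_q1 = [f for f in facts if f[1] == "Q1"]

# Dice: select values on several dimensions at once.
dice = [f for f in facts
        if f[0] in ("Toronto", "Vancouver")
        and f[1] in ("Q1", "Q2")
        and f[2] in ("Mobile", "Modem")]

# Pivot: rotate the axes of a 2-D view (rows become columns).
view = [["", "Q1", "Q2"], ["Mobile", 100, 0], ["Modem", 80, 50]]
pivoted = [list(col) for col in zip(*view)]  # transpose rows/columns

print(len(slice_q1), len(dice))  # 2 3
```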
ISV (Independent Software Vendor)-

A software producer that is not owned or controlled by a hardware
manufacturer; a company whose primary function is to distribute software.
Hardware manufacturers that distribute software (such as IBM and Unisys)
are not ISVs, nor are users (such as banks) that may also sell software
products.
ISVs typically offer products that the primary vendor (i.e., IBM) does not offer,
allowing clients of that vendor to round out their software needs. ISVs create
price competition and also increase the pace of technology innovation in their
markets.
An ISV may also incorporate software from a software platform provider into its offering
by embedding database technology from Microsoft or Oracle, for example.
ISVs have increasingly targeted the cloud as a vehicle for delivering software by offering
products on a software as a service (SaaS) basis. In this delivery method, an ISV may
sell its software through a public cloud or cloud marketplace. Examples include Amazon
Web Services (AWS), Microsoft Azure and Salesforce AppExchange.
Additionally, an independent software maker provides software in the form of virtual
appliances that run on virtual machines (VMs).

QOS issues in cloud


Currently, cloud computing systems are used by companies because of the features
cloud applications provide. We need to ensure the quality of the services delivered
to a company to achieve a high standard of work. Here we discuss quality and its
factors, quality of service (QoS), and the techniques used to provide QoS for cloud
applications.
Quality and affecting factors: Quality is defined as the degree to which a set of
inherent characteristics meets requirements. A characteristic is defined by ISO as a
well-known feature, meaning other features are not included in the definition of
quality. Inherent characteristics are a necessary part of the system and cannot be
separated from it. By this definition, quality is always tied to requirements, which
implies that requirements must exist: no requirements, no quality [1]. Many factors
affect the quality of a system or application:
● Flexibility: the ability of the software to change its functionality without
breaking the system.
● Maintainability and readability: similar to flexibility, but focused on
modifications for error correction.
● Performance and efficiency: performance concerns the response time of the
software.
● Scalability: a scalable system responds to user actions in an acceptable amount
of time as load grows.
● Availability and robustness: robust software should remain available even in a
failure state.
● Usability and accessibility: the user interface is the visible part of the software
to the user, so it must be easy to use.
● Platform compatibility: quality software should run on as many platforms
(operating systems and internet browsers) as possible, so that it covers more
users.
● Security: an important factor in specifying the quality of software. A security
policy should be implemented and applied correctly, leaving no entry gaps:
authentication and authorization techniques, data encryption with strong
algorithms, and network attack protection.

QUALITY OF SERVICE (QOS):


The number of internet users is increasing day by day, and network requirements
increase accordingly to achieve good performance. Many online services need very
large bandwidth and strong network performance. Network performance is the
element that concerns both users and service providers, so internet service
providers must bring in new technologies to provide the best services before
competitors overtake them. Quality of Service refers to the ability of networks to
attain maximum bandwidth and handle other network elements like latency, error
rate, and uptime. Quality of Service also includes managing other network
resources by assigning priorities to specific types of data (audio, video, and files).
A basic implementation of QoS needs three major components:
a. QoS within a single network element.
b. QoS policy and management functions to control end-to-end traffic across the
network.
c. Identification techniques for coordinating QoS end-to-end between network
elements.

Techniques to Provide QoS of Cloud Application:


As explained above, implementing QoS in cloud computing applications is a
challenge. Several techniques are used to provide quality of service to cloud
applications; scheduling, admission control, and dynamic resource provisioning are
among them.
1. Scheduling: Cloud service scheduling is categorized into user level and system
level. User-level scheduling deals with problems arising in service provision
between the service provider and the customer. Market-based and auction-based
schedulers are suited to regulating the supply and demand of cloud resources;
market-based resource allocation is effective in cloud environments where
resources are delivered to users as a service. System-level scheduling handles
resource management in the datacenter: a datacenter contains many physical
machines, millions of requests arrive from the users' side, and scheduling these
requests onto the physical machines is done in the datacenter. This scheduling
affects the performance of the datacenter. Service provisioning in cloud systems is
based on the Service Level Agreement (SLA), the contract between service
provider and customer that states the terms of the agreement, including the
non-functional requirements expressed as QoS.
2. Admission Control: The main purpose of admission control is to guarantee
strong performance. At admission time, the Infrastructure Provider (IP) must
consider, alongside the fundamental computational and networking requirements,
the extra requirements that may need to be added at runtime so the service stays
elastic. In many cases these elastic requirements can be very large compared with
the base requirements. For example, if many users with highly divergent workloads
use a cloud application, the number of virtual machines required may grow at
runtime to many times the number of basic ones. The elastic requirements
therefore play an important role in the total requirements and hence in the cost of
hosting the service [10].
3. Resource provisioning: Dynamic resource provisioning is the process of
assigning available resources to the cloud application. Services suffer if resource
allocation is not managed properly; resource provisioning addresses this by
allowing service providers to manage the resources of individual modules. A
Resource Allocation Strategy (RAS) integrates the service provider's activities to
allocate scarce resources within the limits of the cloud environment so that the
needs of the cloud application are met. It requires the demand and type of
resources for each application to complete the user's task; the order and allocation
time of resources are inputs to an optimal RAS.
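The interplay of admission control and resource provisioning described above can be sketched as a toy first-fit allocator. All names, capacities, and the first-fit policy are hypothetical, chosen only to illustrate the concept:

```python
# Hypothetical sketch combining admission control with simple dynamic
# resource provisioning: a request is admitted only if some host has
# enough free capacity, otherwise it is rejected up front.

class Host:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.used = name, capacity, 0

    def free(self):
        return self.capacity - self.used

def admit_and_allocate(hosts, request_cpus):
    """Admission control: reject requests no host can satisfy."""
    # Greedy first-fit placement over the datacenter's hosts.
    for host in hosts:
        if host.free() >= request_cpus:
            host.used += request_cpus
            return host.name          # admitted and placed
    return None                       # rejected at admission time

hosts = [Host("h1", 4), Host("h2", 8)]
print(admit_and_allocate(hosts, 6))  # h2
print(admit_and_allocate(hosts, 4))  # h1
print(admit_and_allocate(hosts, 8))  # None (rejected)
```

A real scheduler would also weigh SLA terms and elastic headroom, not just current free capacity.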

Mobile Cloud Computing

Mobile cloud computing (MCC) is the method of using cloud technology to deliver
mobile apps. Complex mobile apps today perform tasks such as authentication,
location-aware functions, and providing targeted content and communication for end
users. Hence, they require extensive computational resources such as data storage
capacity, memory, and processing power. Mobile cloud computing takes the pressure
off mobile devices by harnessing the power of cloud infrastructure. Developers build
and update rich mobile apps using cloud services and then deploy them for remote
access from any device. These cloud-based mobile apps use cloud technology to store
and process data so that the app is usable on all types of old and new mobile devices.
In this technology, data processing and data storage happen outside of mobile
devices. Mobile cloud computing applications leverage this IT architecture to
generate the following advantages:
1. Extended battery life.
2. Improvement in data storage capacity and processing power.
3. Improved synchronization of data due to the "store in one place, access from
anywhere" platform model.
4. Improved reliability and scalability.
5. Ease of integration.

Why is mobile cloud computing important?


Modern customers expect the convenience of accessing a company's website and
applications remotely from anywhere and at any time. Organizations use mobile cloud
computing applications to meet this expectation efficiently and cost-effectively. They run
complex workloads on cloud resources so that users are not limited by their device
capacity or operating system. Advantages of using mobile cloud computing include the
following:
Wider reach
Mobile application developers can reach a large market because MCC is platform
independent. Cloud-based mobile apps are serverless and run on any device and
operating system. Developers can maintain them centrally and publish updates across
all platforms with minimal effort.

Real-time analytics
Cloud apps store data centrally on the same cloud infrastructure. The backend cloud
services can integrate multiple data points quickly, and communicate with several other
applications to provide accurate real-time analytics. Users can securely collect and
integrate data from various sources. Internet of Things (IoT) also enables cloud
connected, real-time experiences and communications in mobile apps.
Improved user experience
As long as they have a strong internet connection, mobile cloud application users can
enjoy a seamless application experience across platforms and devices such as
desktops, mobiles, and tablets. They can access rich computational resources not
present on their device. If the device is lost or stolen, their data remains backed up to
cloud data storage, and they can recover it quickly.
Cost efficiency
Cloud providers offer a pay-as-you-go model so that you pay only for the cloud-based
resources that you actually use. This makes it less costly than purchasing and
maintaining your on-premises servers. Additionally, if the cloud apps are for internal
use, your organization can permit employees to install the mobile apps on their own
devices. They do not have to purchase specific device configurations for all employees.

Characteristics Of Mobile Cloud Computing Application

1. Cloud infrastructure: Cloud infrastructure is a specific form of information
architecture that is used to store data.
2. Data cache: In this, the data can be locally cached.
3. User Accommodation: Scope of accommodating different user requirements
in cloud app development is available in mobile Cloud Computing.
4. Easy Access: It is easily accessed from desktop or mobile devices alike.
5. Cloud apps provide access to a whole new range of services.
Today smartphones are employed with rich cloud services by integrating applications
that consume web services. These web services are deployed in cloud.
There are several smartphone operating systems available, such as Google's Android,
Apple's iOS, RIM BlackBerry, Symbian, and Windows Mobile Phone. Each of these
platforms supports third-party applications that are deployed in the cloud.
Architecture
MCC includes the following types of cloud resources:

● Distant mobile cloud
● Distant immobile cloud
● Proximate mobile computing entities
● Proximate immobile computing entities
● Hybrid

The following diagram shows the framework for mobile cloud computing architecture:

Mobile Cloud Computing Applications

There are two types of applications of mobile cloud computing (MCC) that are
almost similar. These are as follows:
1. Mobile Cloud application: A model where processing is done in the cloud,
storage is also in the cloud, and the presentation platform is the mobile
device. This requires a reliable internet connection and a cell phone able to
run a browser. It enables the smartphone to be used with cloud technology.
2. Mobile Web Services: Mobile devices consuming web services generate more
network traffic, which can lead to challenges for web services such as a
mismatch with the resolution and detail of desktop computers. To use any web
service, the device needs to know about that service and how it can be accessed,
so that the mobile device can transmit specific information about the condition of
the device and the user. Enabling mobile web services involves the following:
1. Enabling web-service systems with web services.
2. Enabling in-built external services.
3. Enabling the REST protocol.
4. Enabling XML-RPC protocols.
5. Enabling the capability to authenticate user roles.

Benefits of Mobile Cloud Computing

1. Mobile cloud computing saves businesses money.
2. Portability makes users' work easy and efficient.
3. Cloud consumers can explore more features on their mobile phones.
4. Developers reach greater markets through mobile cloud web services.
5. More network providers can join this field.

Challenges of Mobile Cloud Computing

1. Low bandwidth: This is one of the big issues in mobile cloud computing.
Mobile clouds use radio waves, which are limited compared with wired
networks, and the available spectrum is shared among many mobile devices.
Access speed can therefore be around three times slower than on a wired
network.
2. Security and privacy: It is harder to identify and manage threats on mobile
devices than on desktop devices, because on a wireless network there is a
greater chance of information going missing from the network.
3. Service availability: Users often face complaints such as network breakdown,
traffic congestion, and lack of coverage. Sometimes customers get a
low-strength signal, which affects the access speed and storage facility.
4. Alteration of networks: Mobile cloud computing is used on different
operating-system-driven platforms such as Apple iOS, Android, and Windows
Phone, so it has to be compatible with different platforms. The performance of
different mobile platform networks is managed by the IRNA (Intelligent Radio
Network Access) technique.
5. Limited energy source: Mobile devices are less powerful and consume more
energy. Mobile cloud computing increases the battery usage of mobile devices,
which becomes an important issue; devices need long-life batteries to access
applications and perform other operations. When the size of the offloaded code
is small, offloading can consume more energy than local processing.

Sky computing
We’re about to transition from the cloud computing era to the sky computing era. As the name
suggests, sky computing is a layer above cloud platforms — and its goal is to enable
interoperability between clouds. If you think that sounds like the current industry
buzzword, multicloud, you’re on the right track.
In 2021, there isn’t one single underlying cloud platform with a set of open
standards that anyone can use. Instead, cloud computing has evolved into a series
of proprietary platforms that are largely incompatible with each other: Amazon
Web Services (AWS), Microsoft Azure, Google Cloud, and others. The new paper
by Stoica and Shenker lays out a vision for “a more commoditized version of cloud
computing, which we call the Sky computing.”
In essence, sky computing is about enabling multicloud application
development: "To fulfil the vision of utility computing, applications should be
able to run on any cloud provider (i.e., write-once, run-anywhere)."
Sky computing is made up of three layers: compatibility, intercloud, and
peering.
The compatibility layer will enable an application developer to easily pick up and
move their app from (for example) AWS to Google Cloud. Where multicloud
comes in is with the intercloud layer, as it will allow applications to run across
multiple cloud providers — depending on user needs. Here’s how Stoica explained
it:

“The intercloud layer is going one level up [from the compatibility layer]. Ideally,
with the intercloud layer you specify the preferences for your job — say I want to
minimize costs, or minimize time, or I need to process this data locally — and the
intercloud layer will decide where to run your job to satisfy these preferences.”

Regarding the data locality example, there may be reasons, geopolitical or
otherwise, why an application must use a specific geographic location. Consider
an application that wants to process some data that must not leave a country’s
boundaries and that there is only an AWS cloud data center in that country. In this
case, the intercloud layer would automatically route that application to AWS’s data
center. But all other applications might use different cloud platforms, depending on
the intercloud rules the application developer defines. (The user wouldn’t know
which cloud platform they’re on, by the way; this is all at the application
deployment level.)
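The intercloud layer's placement decision can be sketched as a small preference-based selector. The provider table, region codes, and cost figures below are entirely hypothetical:

```python
# Hypothetical sketch of an intercloud placement decision: satisfy hard
# constraints first (data locality), then optimize the soft preference
# (e.g., minimize cost).

providers = [
    {"name": "aws",   "region": "CA", "cost": 1.2, "speed": 3},
    {"name": "gcp",   "region": "US", "cost": 1.0, "speed": 4},
    {"name": "azure", "region": "US", "cost": 1.1, "speed": 5},
]

def place_job(required_region=None, minimize="cost"):
    # Hard constraint: data must not leave the required region.
    candidates = [p for p in providers
                  if required_region is None or p["region"] == required_region]
    if not candidates:
        return None
    # Soft preference: pick the cheapest (or the fastest) candidate.
    if minimize == "cost":
        return min(candidates, key=lambda p: p["cost"])["name"]
    return max(candidates, key=lambda p: p["speed"])["name"]

print(place_job(required_region="CA"))  # aws (only in-country option)
print(place_job(minimize="cost"))       # gcp (cheapest overall)
```

The application developer states preferences; which cloud actually runs the job is decided by the layer, invisibly to the end user.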

Cloud Computing Platforms and Technologies

Cloud computing applications are developed by leveraging platforms and
frameworks. Various types of services are provided, from bare-metal
infrastructure to customizable applications serving specific purposes.
Amazon Web Services (AWS) –
AWS provides a wide range of cloud IaaS services, from virtual compute,
storage, and networking to complete computing stacks. AWS is well known for
its on-demand storage and compute services, named Elastic Compute Cloud
(EC2) and Simple Storage Service (S3). EC2 offers customizable virtual
hardware to the end user, which can be utilized as the base infrastructure for
deploying computing systems in the cloud. It is possible to choose from a large
variety of virtual hardware configurations, including GPU and cluster
instances. EC2 instances are deployed either through the AWS console, a
wide-ranging web portal for accessing AWS services, or through the web
services API available for several programming languages. EC2 also offers the
capability of saving a specific running instance as an image, allowing users to
create their own templates for deploying systems. S3 stores these templates
and delivers persistent storage on demand. S3 is organized into buckets, which
contain objects stored in binary form that can be enriched with attributes. End
users can store objects of any size, from basic files to full disk images, and
retrieve them from anywhere. In addition to EC2 and S3, a wide range of
services can be leveraged to build virtual computing systems, including
networking support, caching systems, DNS, database support, and others.
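The bucket/object model described above can be illustrated with a small in-memory sketch. This simulates the concept only; it is not the real S3 API, and all class and key names are hypothetical:

```python
# Hypothetical sketch of S3's storage model: named buckets holding
# binary objects of arbitrary size, each annotated with attributes
# (metadata) and retrievable by key.

class SimpleObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        self.buckets[name] = {}

    def put_object(self, bucket, key, data: bytes, **attributes):
        # Objects are stored in binary form and enriched with attributes.
        self.buckets[bucket][key] = {"data": data, "attrs": attributes}

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

store = SimpleObjectStore()
store.create_bucket("machine-images")
store.put_object("machine-images", "web-server.img",
                 b"\x00\x01", format="raw", owner="alice")
obj = store.get_object("machine-images", "web-server.img")
print(obj["attrs"]["owner"])  # alice
```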
Google AppEngine –
Google AppEngine is a scalable runtime environment mostly dedicated to
executing web applications. These applications benefit from Google's large
computing infrastructure to scale dynamically with demand. AppEngine offers
both a secure execution environment and a collection of services that simplify
the development of scalable, high-performance web applications. These
services include in-memory caching, a scalable data store, job queues,
messaging, and cron tasks. Developers and engineers can build and test
applications on their own systems using the AppEngine SDK, which replicates
the production runtime environment and helps test and profile applications. On
completion of development, developers can easily move their applications to
AppEngine, set quotas to contain the costs generated, and make them available
to the world. Currently, the supported programming languages are Python,
Java, and Go.
Microsoft Azure –
Microsoft Azure is a cloud operating system and a platform on which users can
develop applications in the cloud. Generally, it provides a scalable runtime
environment for web applications and distributed applications. Applications in
Azure are organized around the concept of roles, which identify a distribution
unit for applications and express the application's logic. Azure provides a set of
additional services that complement application execution, such as support for
storage, networking, caching, content delivery, and others.
Hadoop –
Apache Hadoop is an open-source framework that is appropriate for processing
large data sets on commodity hardware. Hadoop is an implementation of
MapReduce, an application programming model developed by Google. This model
provides two fundamental operations for data processing: map and reduce.
Yahoo! is the sponsor of the Apache Hadoop project, and has put considerable
effort into transforming the project into an enterprise-ready cloud computing
platform for data processing. Hadoop is an integral part of the Yahoo! cloud
infrastructure and supports many of the corporation's business processes.
Currently, Yahoo! manages the world's largest Hadoop cluster, which is also
made available to academic institutions.
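The two fundamental operations can be seen in the classic word-count example. Hadoop would distribute the map and reduce phases across a cluster and shuffle intermediate pairs between nodes; this single-process sketch only illustrates the programming model.

```python
from collections import defaultdict

# Word count expressed with MapReduce's two operations.

def map_phase(document):
    # map: emit a (word, 1) pair for every word in the input.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # reduce: sum the emitted values for each distinct key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = map_phase("the quick fox the fox")
print(reduce_phase(pairs))  # {'the': 2, 'quick': 1, 'fox': 2}
```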

Eucalyptus

Eucalyptus is paid and open-source computer software for building Amazon Web
Services (AWS)-compatible private and hybrid cloud computing environments, originally
developed by the company Eucalyptus Systems. Eucalyptus is an acronym for Elastic
Utility Computing Architecture for Linking Your Programs To Useful Systems. Eucalyptus
enables pooling compute, storage, and network resources that can be dynamically
scaled up or down as application workloads change. Eucalyptus was acquired
by Hewlett-Packard and then maintained by DXC Technology. After DXC stopped
developing the product in late 2017, AppScale Systems forked the code and started
supporting Eucalyptus customers.

Eucalyptus in cloud computing pools together existing virtualised infrastructure to
create cloud resources for storage as a service, network as a service and
infrastructure as a service.

Eucalyptus architecture
Eucalyptus CLIs can manage both Amazon Web Services and the user's own private
instances. Users can easily migrate instances from Eucalyptus to Amazon Elastic
Compute Cloud. Network, storage, and compute are managed by the virtualisation
layer, and instances are isolated through hardware virtualisation. The following
terminology is used by the Eucalyptus architecture in cloud computing.
1. Images: Any software application, configuration, module software or framework
software packaged and deployed in the Eucalyptus cloud is known as a Eucalyptus
Machine Image.
2. Instances: When we run an image and use it, it becomes an instance.
3. Networking: The Eucalyptus network is partitioned into three modes: Static mode,
System mode, and Managed mode.
4. Access control: It is used to restrict what users are allowed to do.
5. Eucalyptus elastic block storage: It provides block-level storage volumes to attach
to an instance.
6. Auto-scaling and load balancing: It is used to create or destroy instances or
services based on requirements.
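The auto-scaling idea in point 6 amounts to a simple feedback rule: add an instance when the group is overloaded, remove one when it is mostly idle. The sketch below illustrates that decision; the thresholds and bounds are illustrative assumptions, not Eucalyptus defaults.

```python
# Toy auto-scaling decision: create or destroy instances based on load.
# Thresholds, minimum and maximum are made-up illustrative values.

def desired_instances(current, cpu_utilisation,
                      scale_up_at=0.8, scale_down_at=0.2,
                      minimum=1, maximum=10):
    """Return the new instance count for the observed CPU utilisation."""
    if cpu_utilisation > scale_up_at and current < maximum:
        return current + 1      # scale up under heavy load
    if cpu_utilisation < scale_down_at and current > minimum:
        return current - 1      # scale down when mostly idle
    return current              # otherwise leave the group unchanged

print(desired_instances(2, 0.9))  # 3
print(desired_instances(2, 0.1))  # 1
```

A real controller would also smooth the utilisation signal over time to avoid flapping between scale-up and scale-down decisions.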

An Introduction to OpenNebula
OpenNebula is a simple, feature-rich and flexible solution for the management of
virtualised data centres. It enables private, public and hybrid clouds. Here are a few
facts about this solution.
OpenNebula is an open source cloud middleware solution that manages heterogeneous
distributed data centre infrastructures. It is designed to be a simple but feature-rich,
production-ready, customisable solution to build and manage enterprise clouds—simple
to install, update and operate by the administrators; and simple to use by end users.
OpenNebula combines existing virtualisation technologies with advanced features for
multi-tenancy, automated provisioning and elasticity. A built-in virtual network manager
maps virtual networks to physical networks. Distributions such as Ubuntu and Red Hat
Enterprise Linux have already integrated OpenNebula. As you’ll learn in this article, you
can set up OpenNebula by installing a few packages and performing some cursory
configurations. OpenNebula supports Xen, KVM and VMware hypervisors.
The OpenNebula deployment model

An OpenNebula deployment is modelled after the classic cluster architecture.


Figure 1 shows the layout of the OpenNebula deployment model.

Master node: A single gateway or front-end machine, sometimes also called


the master node, is responsible for queuing, scheduling and submitting jobs to
the machines in the cluster. It runs several other OpenNebula services
mentioned below:

● Provides an interface to the user to submit virtual machines and monitor


their status.
● Manages and monitors all virtual machines running on different nodes in the
cluster.

● It hosts the virtual machine repository and also runs a transfer service to
manage the transfer of virtual machine images to the concerned worker
nodes.

● Provides an easy-to-use mechanism to set up virtual networks in the cloud.

● Finally, the front-end allows you to add new machines to your cluster.

Worker node: The other machines in the cluster, known as ‘worker nodes’,
provide raw computing power for processing the jobs submitted to the cluster.
The worker nodes in an OpenNebula cluster are machines that deploy a
virtualisation hypervisor, such as VMware, Xen or KVM.
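To make the front-end's role concrete, a user describes a virtual machine in a small template and submits it from the master node. The fragment below is a hedged sketch in OpenNebula's template syntax; the image and network names are placeholders that would have to exist in the actual installation.

```
# Hypothetical OpenNebula VM template; names are placeholders.
NAME   = "web-server"
CPU    = 1
MEMORY = 1024                        # in MB
DISK   = [ IMAGE = "ubuntu-base" ]   # image from the VM repository
NIC    = [ NETWORK = "private-net" ] # virtual network to attach to
```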

Apache VCL

VCL stands for Virtual Computing Lab. It is a free and open-source cloud computing
platform whose primary goal is delivering dedicated, custom compute environments to
users.
The compute environments can range from something as simple as a virtual machine
running productivity software to a cluster of powerful physical servers running complex
HPC simulations.
VCL supports provisioning several different types of compute resources including
physical bare-metal machines, virtual machines hosted on several different hypervisors,
and traditional computing lab computers you would normally find on a university
campus.
The user interface consists of a self-service web portal. Using the portal, users select
from a list of customized environments and make reservations.
