
III BCA

Cloud Computing

Unit 3: Cloud Application Programming and the Aneka Platform


Aneka cloud application platform
Aneka is Manjrasoft’s solution for developing, deploying, and managing
Cloud applications.
• It consists of a scalable Cloud middleware that can be deployed on top of
heterogeneous computing resources.
• It offers an extensible collection of services coordinating the execution of
applications, helping administrators to monitor the status of the Cloud, and
providing integration with existing Cloud technologies.
• One of the key advantages of Aneka is its extensible set of APIs associated
with different programming models, such as Task, Thread, and MapReduce, used for
developing distributed applications, integrating new capabilities into the Cloud, and
supporting different types of Cloud deployment models: public, private, and hybrid (a
minimal illustration of the Task style follows).
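Aneka's actual programming-model APIs are exposed through its .NET SDK; the short Python sketch below only illustrates the Task-model style of expressing an application as a bag of independent units of work. The names (TaskApplication, add_task, submit, render_frame) are hypothetical stand-ins, and a local process pool stands in for the Aneka scheduler.

```python
# Illustrative sketch only: Aneka's real Task model is exposed through .NET APIs;
# the class and method names below are hypothetical stand-ins used to show the style.
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_id: int) -> str:
    """A unit of work: in the Task model every task is an independent job."""
    return f"frame-{frame_id} rendered"

class TaskApplication:
    """Collects independent tasks and runs them on whatever workers are available."""
    def __init__(self, workers: int = 4):
        self.tasks = []
        self.workers = workers

    def add_task(self, func, *args):
        self.tasks.append((func, args))

    def submit(self):
        # Locally we emulate the middleware with a process pool; on an Aneka
        # Cloud the scheduler would map each task onto a container instead.
        with ProcessPoolExecutor(max_workers=self.workers) as pool:
            futures = [pool.submit(f, *a) for f, a in self.tasks]
            return [f.result() for f in futures]

if __name__ == "__main__":
    app = TaskApplication()
    for i in range(8):
        app.add_task(render_frame, i)
    print(app.submit())
```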

Aneka Framework
Aneka is a platform and a framework for developing distributed applications on the Cloud. It
harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers or
datacenters on demand.
• Aneka provides developers with a rich set of APIs for transparently exploiting such
resources and expressing the business logic of applications by using the preferred
programming abstractions. This can be a public cloud available to anyone through the
Internet, or a private cloud constituted by a set of nodes with restricted access.
• The Aneka-based computing cloud is a collection of physical and virtualized resources
connected through a network, which can be either the Internet or a private intranet. Each of
these resources hosts an instance of the Aneka Container representing the runtime
environment where the distributed applications are executed.
• The container provides the basic management features of the single node and leverages all
the other operations on the services that it is hosting. The services are broken up into fabric,
foundation, and execution services.
• Fabric services directly interact with the node through the Platform Abstraction Layer
(PAL) and perform hardware profiling and dynamic resource provisioning.
• Foundation services identify the core system of the Aneka middleware, providing a set of
basic features to enable Aneka containers to perform specialized and specific sets of tasks.
• Execution services directly deal with the scheduling and execution of applications in the
Cloud.

Aneka implements a Service-Oriented Architecture (SOA), and services are the fundamental
components of an Aneka Cloud.
Within a Cloud environment, there are different aspects involved in providing a scalable and
elastic infrastructure and a distributed runtime for applications. These involve the following:
(a) Elasticity and Scaling: With its dynamic provisioning service, Aneka supports
dynamic up-sizing and down-sizing of the infrastructure available for applications.
(b) Runtime Management: The runtime machinery is responsible for keeping the
infrastructure up and running, and serves as a hosting environment for services.

It is primarily represented by the container and a collection of services managing service
membership and lookup, infrastructure maintenance, and profiling.
(c) Resource Management: Aneka is an elastic infrastructure where resources are added
and removed dynamically, according to the application needs and user requirements.
(d) Application Management: A specific subset of services is devoted to managing
applications; these services include scheduling, execution, monitoring, and storage
management.
(e) User Management: Aneka is a multi-tenant distributed environment where multiple
applications, potentially belonging to different users, are executed. The framework
provides an extensible user system where it is possible to define users, groups, and
permissions.
(f) QoS/SLA Management and Billing: Within a Cloud environment, application
execution is metered and billed. Aneka provides services that meter the resource
usage of each application and bill the owning user accordingly (a toy example of
this metering-and-billing idea follows).

Anatomy of the Aneka Container


The Aneka container constitutes the building block of Aneka Clouds and represents the runtime
machinery available to services and applications.
All operations performed within Aneka are carried out by the services managed by the
container.
Platform Abstraction Layer (PAL): The PAL is responsible for detecting the supported
hosting environment and providing the corresponding implementation for interacting with it to
support the activity of the container.
The services installed in the Aneka container can be classified into three major categories:
• Fabric services
• Foundation services
• Application services

Fabric Services
Fabric services define the lowest level of the software stack representing the Aneka
Container. They provide access to the resource provisioning subsystem.
1. Profiling and Monitoring
This infrastructure is composed of the Reporting Service and the Monitoring Service.
▪ The Reporting Service manages the store for monitored data and makes it
accessible to other services or external applications for analysis purposes.
▪ The Monitoring Service acts as a gateway to the Reporting Service and forwards all
the monitored data that has been collected on the node to it.
Several built-in services provide information through this channel:
▪ The Membership Catalogue tracks the performance information of nodes.

▪ The Execution Service monitors several time intervals for the execution of jobs.
▪ The Scheduling Service tracks the state transitions of jobs.
▪ The Storage Service monitors and makes available the information about data transfer,
such as upload and download times, file names, and sizes.
▪ The Resource Provisioning Service tracks the provisioning and lifetime information of
virtual nodes.
All this information can be stored in an RDBMS or a flat file, and it can be further
analyzed by specific applications.
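The following sketch (not Aneka code) mimics the flow just described: a per-node Monitoring Service forwards collected metrics to a Reporting Service, which persists them to a flat file for later analysis. Class and method names are invented for illustration.

```python
# Illustrative sketch of the Monitoring -> Reporting flow; interfaces are assumed.
import json, time

class ReportingService:
    """Owns the store for monitored data (here: a flat JSON-lines file)."""
    def __init__(self, path="monitoring.log"):
        self.path = path

    def store(self, record: dict):
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

class MonitoringService:
    """Gateway on each node: collects metrics and forwards them to reporting."""
    def __init__(self, node_id: str, reporting: ReportingService):
        self.node_id = node_id
        self.reporting = reporting

    def push(self, source: str, metrics: dict):
        self.reporting.store({
            "node": self.node_id,
            "source": source,          # e.g. Membership Catalogue, Storage Service
            "timestamp": time.time(),
            "metrics": metrics,
        })

reporting = ReportingService()
monitor = MonitoringService("worker-01", reporting)
monitor.push("Storage Service", {"upload_ms": 120, "file": "input.dat", "bytes": 10_485_760})
```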

2. Resource Management
Resource management is another fundamental feature of Aneka Clouds.
• It comprises several tasks: resource membership, resource reservation, and resource
provisioning.
• Aneka provides a collection of services that are in charge of managing resources.
These are: the Index Service (or Membership Catalogue), the Reservation Service, and
the Resource Provisioning Service.
The Membership Catalogue is the fundamental component for resource management
since it keeps track of the basic node information for all the nodes that are connected or
disconnected.
Dynamic resource provisioning automatically allocates and adjusts computing resources
based on the current demand (a toy policy is sketched below).
The Resource Provisioning Service includes all the operations that are needed for
provisioning virtual instances. It is a feature designed to support the QoS-requirements-
driven execution of applications.
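A toy policy, under assumed thresholds, makes the demand-driven adjustment concrete: given the number of queued jobs and a target load per node, decide how many nodes to acquire or release. This is only a sketch of the idea, not the logic of the Resource Provisioning Service.

```python
# Toy provisioning policy: thresholds and the "jobs per node" target are assumptions.
def provisioning_decision(queued_jobs: int, active_nodes: int,
                          target_jobs_per_node: int = 5) -> int:
    """Return how many nodes to add (positive) or release (negative)."""
    desired = max(1, -(-queued_jobs // target_jobs_per_node))  # ceiling division
    return desired - active_nodes

print(provisioning_decision(queued_jobs=42, active_nodes=4))  # +5 -> grow to 9 nodes
print(provisioning_decision(queued_jobs=3, active_nodes=4))   # -3 -> shrink to 1 node
```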

Foundation Services
Fabric services are fundamental services of the Aneka Cloud, and define the basic
infrastructure management features of the system.
Foundation services are related to the logical management of the distributed system
built on top of the infrastructure, and provide supporting services for the execution of
distributed applications.
These services cover:
• Storage management for applications
• Accounting, billing, and resource pricing
• Resource reservation
1. Storage management:
Any infrastructure supporting the execution of distributed applications needs
to provide facilities for file/data transfer management and persistent storage.
Aneka offers two different facilities for storage management:

• a centralized file storage, which is mostly used for the execution of compute-
intensive applications, and
• a distributed file system, which is more suitable for the execution of data-
intensive applications.
2. Accounting, Billing, and Resource Pricing
Accounting keeps track of the status of applications in the Aneka Cloud. The
collected information provides a detailed breakdown of the usage of the
distributed infrastructure.
Aneka is a multi-tenant Cloud programming platform where the execution of
applications can involve provisioning additional resources from commercial
IaaS providers.
3. Resource Reservation
Resource reservation supports the execution of distributed applications and allows
for reserving resources for exclusive use by specific applications.
• Resource reservation is built out of two different kinds of services: the Reservation
Service and the Allocation Service.
• The Reservation Service will return a reservation identifier as proof of the
resource booking (a minimal sketch of this idea follows).
• Resource reservation is fundamental for ensuring the Quality of Service that
is negotiated for applications.
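The sketch below captures the reservation workflow in miniature: a client books a number of nodes for a time window and receives a reservation identifier as proof of the booking. The in-memory bookkeeping and method names are assumptions made for the example.

```python
# Sketch of the reservation idea; the data model is invented for illustration.
import uuid
from datetime import datetime, timedelta

class ReservationService:
    def __init__(self):
        self.reservations = {}

    def reserve(self, nodes: int, start: datetime, duration: timedelta) -> str:
        reservation_id = str(uuid.uuid4())        # proof of the resource booking
        self.reservations[reservation_id] = {
            "nodes": nodes, "start": start, "end": start + duration,
        }
        return reservation_id

    def is_active(self, reservation_id: str, now: datetime) -> bool:
        r = self.reservations.get(reservation_id)
        return bool(r) and r["start"] <= now < r["end"]

svc = ReservationService()
rid = svc.reserve(nodes=8, start=datetime(2024, 1, 1, 9), duration=timedelta(hours=2))
print(rid, svc.is_active(rid, datetime(2024, 1, 1, 10)))  # <uuid> True
```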

Application Services
Application services manage the execution of applications and constitute a layer that
differentiates according to the specific programming model used for developing
distributed applications on top of Aneka.
It is possible to identify two major types of activities that are common across all the
supported models: scheduling and execution.

1. Scheduling
Scheduling services are in charge of planning the execution of distributed applications
on top of Aneka, and governing the allocation of jobs composing an application to
nodes.
Common tasks that are performed by the scheduling component are the following (a toy
scheduler illustrating the first two is sketched after this list):
• Job-to-node mapping
• Rescheduling of failed jobs
• Job status monitoring
• Application status monitoring
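The referenced sketch pairs a round-robin job-to-node mapping with a simple retry of failed jobs. The policy and interfaces are invented for illustration; Aneka's scheduling services are model-specific and far richer than this.

```python
# Toy scheduler: round-robin job-to-node mapping plus rescheduling of failed jobs.
from collections import deque

def schedule(jobs, nodes, run_job):
    """Map each job onto a node; failed jobs go back to the queue and are retried."""
    queue, assignments = deque(jobs), []
    ring = deque(nodes)
    while queue:
        job = queue.popleft()
        node = ring[0]; ring.rotate(-1)        # round-robin job-to-node mapping
        if run_job(node, job):
            assignments.append((job, node))    # job completed on this node
        else:
            queue.append(job)                  # rescheduling of the failed job
    return assignments

flaky = {"job-2"}
def run_job(node, job):
    if job in flaky:
        flaky.discard(job)                     # fail once, succeed on retry
        return False
    return True

print(schedule(["job-1", "job-2", "job-3"], ["node-A", "node-B"], run_job))
```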

2. Execution

Execution services control the execution of single jobs that compose applications. They
are in charge of setting up the runtime environment hosting the execution of jobs.
Some common operations apply across the whole range of supported models (a sketch of
this per-job lifecycle follows the list):
• Unpacking the jobs received from the scheduler
• Retrieval of input files required for the job execution
• Submission of output files at the end of execution
• Execution failure management (i.e., capturing sufficient contextual information
useful to identify the nature of the failure)
• Performance monitoring
• Packing jobs and sending them back to the scheduler
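The referenced sketch walks one job through an assumed version of this lifecycle: stage inputs, execute, capture failure context on error, and return a packed result to the scheduler. The job format and helper names are illustrative only, not Aneka's execution service API.

```python
# Sketch of a per-job lifecycle; the job dictionary format is an assumption.
import subprocess, traceback

def execute_job(job: dict) -> dict:
    """job = {'id': ..., 'command': [...], 'inputs': [...]} (assumed format)."""
    result = {"id": job["id"], "status": "unknown"}
    try:
        staged = list(job.get("inputs", []))          # stand-in for staging input files
        completed = subprocess.run(job["command"],    # actual execution of the job
                                   capture_output=True, text=True, timeout=60)
        result.update(status="completed" if completed.returncode == 0 else "failed",
                      output=completed.stdout, inputs=staged)
    except Exception:
        # capture sufficient contextual information to identify the failure
        result.update(status="failed", error=traceback.format_exc())
    return result                                      # packed and sent back to the scheduler

print(execute_job({"id": "job-1", "command": ["echo", "hello"], "inputs": []}))
```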
Application services constitute the runtime support of the programming models in the
Aneka Cloud. Currently, there are several supported models:
(a) Task Model
(b) Thread Model
(c) MapReduce Model
(d) Parameter Sweep Model (a tiny sketch of this model follows)
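Of these, the Parameter Sweep Model is easy to illustrate: the same task template is expanded over the cartesian product of parameter ranges, yielding one independent job per combination. The sketch below is a generic illustration, not Aneka's API.

```python
# Generic parameter sweep: expand a command template over parameter ranges.
from itertools import product

def sweep(template: str, **ranges):
    """Yield one concrete command per combination of parameter values."""
    keys = list(ranges)
    for values in product(*(ranges[k] for k in keys)):
        yield template.format(**dict(zip(keys, values)))

for cmd in sweep("simulate --pressure {p} --temp {t}", p=[1.0, 2.0], t=[300, 350]):
    print(cmd)   # four independent tasks, one per parameter combination
```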

Building Aneka clouds


Aneka is primarily a platform for developing distributed applications for clouds. As a
software platform it requires infrastructure on which to be deployed; this infrastructure needs
to be managed.
• Infrastructure management tools are specifically designed for this task, and building
clouds is one of the primary tasks of administrators. Aneka supports various
deployment models for public, private, and hybrid clouds.
Infrastructure organization
Figure 5.3 provides an overview of Aneka Clouds from an infrastructure point of view. The
scenario is a reference model for all the different deployments Aneka supports.
• A central role is played by the Administrative Console, which performs all the required
management operations.
• A fundamental element for Aneka Cloud deployment is constituted by repositories.
• A repository provides storage for all the libraries required to lay out and install the basic
Aneka platform.
• Repositories can make libraries available through a variety of communication channels,
such as HTTP, FTP, common file sharing, and so on.
• The Management Console can manage multiple repositories and select the one that best
suits the specific deployment.
• The infrastructure is deployed by harnessing a collection of nodes and installing on
them the Aneka node manager, also called the Aneka daemon.

• The daemon constitutes the remote management service used to deploy and control
container instances.
• The collection of resulting containers identifies the Aneka Cloud.

From an infrastructure point of view, the management of physical or virtual nodes is
performed uniformly as long as it is possible to have an Internet connection and remote
administrative access to the node.
• It is also possible to simply install the container or install the Aneka daemon, and the
selection of the proper solution mostly depends on the lifetime of virtual resources.
Logical organization
The logical organization of Aneka Clouds can be very diverse, since it strongly depends on
the configuration selected for each of the container instances belonging to the Cloud.

• The most common scenario is to use a master-worker configuration with separate nodes
for storage, as shown in the figure below.

❖ The master node features all the services that are most likely to be present in one
single copy and that provide the intelligence of the Aneka Cloud.
What specifically characterizes a node as a master node is the presence of the Index
Service (or Membership Catalogue) configured in master mode.
A common configuration of the master node is as follows:
• Index Service (master copy)
• Heartbeat Service
• Logging Service
• Reservation Service
• Resource Provisioning Service
• Accounting Service
• Reporting and Monitoring Service
• Scheduling Services for the supported programming models
o The master node also provides connection to an RDBMS facility where the state
of several services is maintained. For the same reason, all the scheduling
services are maintained in the master node.
❖ The worker nodes constitute the workforce of the Aneka Cloud and are generally
configured for the execution of applications.
A very common configuration is the following:
• Index Service
• Heartbeat Service
• Logging Service
• Allocation Service
• Monitoring Service
• Execution Services for the supported programming models
❖ Storage nodes mostly reside on machines that have considerable disk space to
accommodate a large quantity of files.
The common configuration of a storage node is the following:
• Index Service
• Heartbeat Service
• Logging Service
• Monitoring Service
• Storage Service
• All nodes are registered with the master node and transparently refer to any failover
partner in the case of a high-availability configuration.
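For quick reference, the three node configurations above can be summarized as plain data, as in the sketch below. Real containers are configured through Aneka's own configuration files; this structure is only a reading aid.

```python
# Reading aid: the master/worker/storage service lists above as a plain dictionary.
NODE_PROFILES = {
    "master": [
        "Index Service (master copy)", "Heartbeat Service", "Logging Service",
        "Reservation Service", "Resource Provisioning Service", "Accounting Service",
        "Reporting and Monitoring Service", "Scheduling Services",
    ],
    "worker": [
        "Index Service", "Heartbeat Service", "Logging Service",
        "Allocation Service", "Monitoring Service", "Execution Services",
    ],
    "storage": [
        "Index Service", "Heartbeat Service", "Logging Service",
        "Monitoring Service", "Storage Service",
    ],
}

shared = set.intersection(*(set(v) for v in NODE_PROFILES.values()))
print(sorted(shared))  # service names shared verbatim by all three profiles
```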

Private cloud deployment mode


A private deployment mode is mostly constituted by local physical resources and
infrastructure management software providing access to a local pool of nodes, which
might be virtualized.
• In this scenario Aneka Clouds are created by harnessing a heterogeneous pool of
resources such as desktop machines, clusters, or workstations.
• These resources can be partitioned into different groups, and Aneka can be
configured to leverage these resources according to application needs.
• Moreover, through the Resource Provisioning Service, it is possible to integrate
virtual nodes provisioned from a local resource pool managed by systems such as
XenServer, Eucalyptus, and OpenStack.
• The figure below shows a common deployment for a private Aneka Cloud. This
deployment is acceptable for a scenario in which the workload of the system is
predictable and a local virtual machine manager can easily address excess capacity
demand.

• Most of the Aneka nodes are constituted of physical nodes with a long lifetime and a
static configuration and generally do not need to be reconfigured often.
• For example, desktop machines that are used during the day for office automation can
be exploited outside the standard working hours to execute distributed applications.

Public cloud deployment mode


Public Cloud deployment mode features the installation of Aneka master and worker
nodes over a completely virtualized infrastructure that is hosted on the infrastructure of
one or more resource providers such as Amazon EC2 or GoGrid.
• In this case it is possible to have a static deployment where the nodes are
provisioned beforehand and used as though they were real machines.
• This deployment merely replicates a classic Aneka installation on a physical
infrastructure without any dynamic provisioning capability.
• More interesting is the use of the elastic features of IaaS providers and the creation
of a Cloud that is completely dynamic.

• The figure below provides an overview of this scenario.

The deployment is generally contained within the infrastructure boundaries of a single
IaaS provider.
• In this scenario it is possible to deploy an Aneka Cloud composed of only one node
and to completely leverage dynamic provisioning to elastically scale the
infrastructure on demand.
• A fundamental role is played by the Resource Provisioning Service, which can be
configured with different images and templates to instantiate.
• Accounting and Reporting Services provide details about resource utilization by
users and applications and are fundamental in a multitenant Cloud where users are
billed according to their consumption of Cloud capabilities.
• Dynamic instances provisioned on demand will mostly be configured as worker
nodes (a small sketch of the image/template idea follows this list).
• Another example is the Storage Service. In multitenant Clouds, multiple
applications can leverage the support for storage, and this can put the available
storage under pressure.
• Dynamic provisioning can easily solve this issue, as it does for increasing the
computing capability of an Aneka Cloud.
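To make the image/template idea concrete, the sketch below launches worker instances from a named template through a stand-in provider interface. The provider class and template fields are hypothetical; a real deployment would go through the Resource Provisioning Service and the IaaS provider's actual SDK.

```python
# Sketch of template-driven provisioning; the provider interface is a fake stand-in.
from dataclasses import dataclass

@dataclass
class Template:
    image_id: str          # e.g. a prebuilt image with the Aneka container installed
    instance_type: str
    role: str = "worker"   # dynamic instances are mostly configured as workers

class FakeIaaSProvider:
    def launch(self, template: Template, count: int):
        return [f"{template.role}-{template.image_id}-{i}" for i in range(count)]

templates = {"small": Template("ami-worker", "t3.small"),
             "large": Template("ami-worker", "m5.xlarge")}

provider = FakeIaaSProvider()
new_nodes = provider.launch(templates["small"], count=3)
print(new_nodes)  # in a real deployment these instances would register with the master node
```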

Hybrid cloud deployment mode


The hybrid deployment model constitutes the most common deployment of Aneka. In
many cases, there is an existing computing infrastructure that can be leveraged to
address the computing needs of applications.
• This infrastructure will constitute the static deployment of Aneka that can be
elastically scaled on demand when additional resources are required.

• An overview of this deployment is presented in the figure below.

• This scenario constitutes the most complete deployment for Aneka that is able to
leverage all the capabilities of the framework:
• Dynamic Resource Provisioning
• Resource Reservation
• Workload Partitioning

• Accounting, Monitoring, and Reporting


• In a hybrid scenario, heterogeneous resources can be used for different purposes. As
in the case of a private cloud deployment, desktop machines can be reserved for low-
priority workloads outside the common working hours, in addition to the nodes that are
constantly connected to the Aneka Cloud.
• Unlike the Aneka public cloud deployment, in this case it makes more sense to
support a variety of resource providers for provisioning virtual resources. Since
part of the infrastructure is local, a cost for data transfer to the external IaaS
infrastructure cannot be avoided.
• The Resource Provisioning Service simplifies the development of custom policies
that can better serve the needs of a specific hybrid deployment (a toy policy is
sketched below).
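As an illustration of such a custom policy, the sketch below satisfies demand from the local pool first and bursts to the external IaaS provider only for the remainder, reflecting the data-transfer cost noted above. The function and thresholds are assumptions made for the example.

```python
# Toy hybrid provisioning policy: prefer local idle nodes, then burst to external IaaS.
def hybrid_plan(nodes_needed: int, local_idle: int, external_limit: int) -> dict:
    from_local = min(nodes_needed, local_idle)
    from_external = min(nodes_needed - from_local, external_limit)
    return {"local": from_local,
            "external": from_external,
            "unmet": nodes_needed - from_local - from_external}

print(hybrid_plan(nodes_needed=12, local_idle=8, external_limit=10))
# {'local': 8, 'external': 4, 'unmet': 0}
```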
