CC Notes
Computing paradigms refer to fundamental models or approaches used in the field of computer science
and information technology to solve computational problems and process data. These paradigms
encompass various principles, methodologies, and technologies that guide how computing tasks are
conceptualized and executed. Different computing paradigms are suited to different types of problems
and application domains. Some common computing paradigms include:
1. Distributed Computing:
Distributed computing is defined as a type of computing where multiple computer systems work on
a single problem. Here all the computer systems are linked together and the problem is divided into
sub-problems where each part is solved by different computer systems.
The goal of distributed computing is to increase the performance and efficiency of the system and
ensure fault tolerance.
In a distributed system, each processor has its own local memory, and all the processors communicate with each other over a network.
2. Parallel Computing:
Parallel computing is defined as a type of computing where multiple computer systems are used
simultaneously. Here a problem is broken into sub-problems and then further broken down into
instructions. These instructions from each sub-problem are executed concurrently on different
processors.
A parallel computing system consists of multiple processors that communicate with each other and perform multiple tasks simultaneously over a shared memory.
The goal of parallel computing is to save time and provide concurrency.
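To make the idea concrete, here is a minimal sketch in Python (not tied to any particular system above) that splits a summation problem into sub-problems and executes them concurrently on several processor cores; the problem size and worker count are arbitrary example values.

# Minimal sketch: split one problem (summing a large range) into sub-problems
# and run them concurrently on multiple processors with Python's multiprocessing.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))           # solve one sub-problem

if __name__ == "__main__":
    n = 10_000_000
    workers = 4                         # number of processors to use (arbitrary)
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)     # make sure the last chunk reaches n

    with Pool(processes=workers) as pool:
        results = pool.map(partial_sum, chunks)   # sub-problems execute in parallel
    print("total =", sum(results))      # combine the partial results

Here each worker process plays the role of a separate processor; on real parallel hardware the same decomposition is spread across physical cores or CPUs.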
3. Cluster Computing:
A cluster is a group of independent computers that work together to perform the tasks given.
Cluster computing is defined as a type of computing that consists of two or more independent
computers, referred to as nodes, that work together to execute tasks as a single machine.
The goal of cluster computing is to increase the performance, scalability and simplicity of the system.
All the nodes, irrespective of whether they are parent or child nodes, act as a single entity to perform the tasks.
4. Grid Computing:
Grid computing is defined as a type of computing that constitutes a network of computers working together to perform tasks that may be difficult for a single machine to handle. All the computers on that network work under the same umbrella and are collectively termed a virtual supercomputer.
The tasks they work on either require high computing power or involve large data sets.
All communication between the computer systems in grid computing is done on the “data grid”.
The goal of grid computing is to solve high-computation problems in less time and improve productivity.
5. Utility Computing:
Utility computing is defined as the type of computing where the service provider provides the needed resources and services to the customer and charges them depending on their usage of these resources, as per requirement and demand, rather than at a fixed rate.
Utility computing involves the renting of resources such as hardware, software, etc. depending on the
demand and the requirement.
The goal of utility computing is to increase the usage of resources and be more cost-efficient.
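The pay-per-use idea can be illustrated with a small sketch; the resource names and rates below are made-up values for illustration, not any provider's actual pricing.

# Sketch of a pay-per-use (utility) billing calculation.
# Rates and usage figures are hypothetical, for illustration only.
RATES = {
    "vm_hours":     0.05,   # price per VM-hour
    "storage_gb":   0.02,   # price per GB-month of storage
    "bandwidth_gb": 0.08,   # price per GB transferred out
}

usage = {"vm_hours": 720, "storage_gb": 100, "bandwidth_gb": 50}

bill = sum(RATES[item] * amount for item, amount in usage.items())
print("charge for this billing period: $%.2f" % bill)   # pay only for what was used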
6. Edge Computing:
Edge computing is defined as the type of computing that is focused on decreasing the long distance
communication between the client and the server. This is done by running fewer processes in the cloud
and moving these processes onto a user’s computer, IoT device or edge device/server.
The goal of edge computing is to bring computation to the network’s edge, which reduces the distance between client and server and results in better, closer interaction.
7. Fog Computing:
Fog computing is defined as the type of computing that acts as a computational layer between the cloud and the data-producing devices. It is also called “fogging”.
This structure enables users to place resources, data, and applications at locations closer to one another.
The goal of fog computing is to improve the overall network efficiency and performance.
8. Cloud Computing:
Cloud is defined as the usage of someone else’s servers to host, process, or store data.
Cloud computing is defined as the delivery of on-demand computing services over the internet on a pay-as-you-go basis. It is widely distributed, network-based, and used for storage.
The types of cloud are public, private, hybrid, and community, and some cloud providers are Google Cloud, AWS, Microsoft Azure, and IBM Cloud.
Platform as a Service
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees users from having to install in-house hardware and software to develop or run a new application. Thus, the development and deployment of the application take place independently of the hardware.
The consumer does not manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but has control over the deployed applications and possibly
configuration settings for the application-hosting environment. To make it simple, take the example of an annual day function: you have two options, either to create a venue or to rent one, but the function remains the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other IT services,
which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating the
expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web application
lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the overall
development of the application can be more effective.
5. The various companies providing Platform as a Service are Amazon Web Services Elastic Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.
Disadvantages of PaaS:
1. Limited control over infrastructure: PaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that users have
less control over the environment and may not be able to make certain customizations.
2. Dependence on the provider: Users are dependent on the PaaS provider for the availability,
scalability, and reliability of the platform, which can be a risk if the provider experiences outages
or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate certain types of workloads
or applications, which can limit the value of the solution for certain organizations.
Infrastructure as a Service
Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on an outsourced basis to support various operations. Typically, IaaS provides infrastructure such as networking equipment, devices, databases, and web servers as an outsourced service to enterprises.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-use basis, typically by the hour, week, or month. Some providers also charge customers based on the amount of virtual machine space they use.
It provides the underlying operating systems, security, networking, and servers for developing applications and services, and for deploying development tools, databases, etc.
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers pay on a per-use basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than traditional web
hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing software.
4. Maintenance: There is no need to manage the underlying data center or the introduction of new
releases of the development or underlying software. This is all handled by the IaaS Cloud
Provider.
5. The various companies providing Infrastructure as a Service are Amazon Web Services, Bluestack, IBM, OpenStack, Rackspace, and VMware.
Disadvantages of IaaS:
1. Limited control over infrastructure: IaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that users have
less control over the environment and may not be able to make certain customizations.
2. Security concerns: Users are responsible for securing their own data and applications, which
can be a significant undertaking.
3. Limited access: Cloud computing may not be accessible in certain regions and countries due to
legal policies.
Anything as a Service
It is also known as Everything as a Service. Most cloud service providers nowadays offer Anything as a Service, which is a compilation of all the above services along with some additional services.
Advantages of XaaS:
1. Scalability: XaaS solutions can be easily scaled up or down to meet the changing needs of an
organization.
2. Flexibility: XaaS solutions can be used to provide a wide range of services, such as storage,
databases, networking, and software, which can be customized to meet the specific needs of an
organization.
3. Cost-effectiveness: XaaS solutions can be more cost-effective than traditional on-premises solutions, as organizations only pay for the services they use.
Disadvantages of XaaS:
1. Dependence on the provider: Users are dependent on the XaaS provider for the availability,
scalability, and reliability of the service, which can be a risk if the provider experiences outages
or other issues.
2. Limited flexibility: XaaS solutions may not be able to accommodate certain types of workloads
or applications, which can limit the value of the solution for certain organizations.
3. Limited integration: XaaS solutions may not be able to integrate with existing systems and data
sources, which can limit the value of the solution for certain organizations.
Comparison of the different service models (IaaS vs. PaaS vs. SaaS):

Basis Of | IaaS | PaaS | SaaS
Stands for | Infrastructure as a Service. | Platform as a Service. | Software as a Service.
Model | It is a service model that provides virtualized computing resources over the internet. | It is a cloud computing model that delivers tools that are used for the development of applications. | It is a service model in cloud computing that hosts software to make it available to clients.
Technical understanding | It requires technical knowledge. | Some knowledge is required for the basic setup. | No requirement about technicalities; the provider company handles everything.
Popularity | It is popular among developers and researchers. | It is popular among developers who focus on the development of apps and scripts. | It is popular among consumers and companies, such as file sharing, email, and networking.
Outsourced cloud services | Salesforce | Force.com, Gigaspaces | AWS, Terremark
User Controls | Operating System, Runtime, Middleware, and Application data | Data of the application | Nothing
Others | It is highly scalable and flexible. | It is highly scalable to suit the different businesses. | It is highly scalable to suit small, mid and enterprise-level businesses.
1. Public Cloud
Public clouds are available to the general public, and data are created and stored on third-party servers.
Server infrastructure belongs to service providers that manage it and administer pooled resources, which is why there is no need for user companies to buy and maintain their own hardware. Provider companies offer resources as a service, either free of charge or on a pay-per-use basis, via the Internet. Users can scale resources as required.
The public cloud deployment model is the first choice for businesses with low privacy concerns. When
it comes to popular public cloud deployment models, examples are Amazon Elastic Compute Cloud
(Amazon EC2 — the top service provider according to ZDNet), Microsoft Azure, Google App Engine,
IBM Cloud, Salesforce Heroku and others.
The Advantages of a Public Cloud
Hassle-free infrastructure management. Having a third party running your cloud infrastructure is
convenient: you do not need to develop and maintain your software because the service provider does
it for you. In addition, the infrastructure setup and use are uncomplicated.
High scalability. You can easily extend the cloud’s capacity as your company requirements increase.
Reduced costs. You pay only for the service you use, so there’s no need to invest in hardware or
software.
24/7 uptime. The extensive network of your provider’s servers ensures your infrastructure is
constantly available and has improved operation time.
The Disadvantages of a Public Cloud
Compromised reliability. That same server network is also meant to ensure against failure, but often enough, public clouds experience outages and malfunctions, as in the case of the 2016 Salesforce CRM disruption that caused a storage collapse.
Data security and privacy issues give rise to concern. Although access to data is easy, a public
deployment model deprives users of knowing where their information is kept and who has access to
it.
The lack of a bespoke service. Service providers have only standardized service options, which is
why they often fail to satisfy more complex requirements.
2. Private Cloud
There is little to no difference between a public and a private model from the technical point of view, as
their architectures are very similar. However, as opposed to a public cloud that is available to the general
public, only one specific company owns a private cloud. That is why it is also called
an internal or corporate model.
The server can be hosted externally or on the premises of the owner company. Regardless of their
physical location, these infrastructures are maintained on a designated private network and use software
and hardware that are intended for use only by the owner company.
A clearly defined scope of people has access to the information kept in a private repository, which prevents the general public from using it. In light of numerous breaches in recent years, a growing number of large corporations have decided on a closed private cloud model, as this minimizes data security issues.
Compared to the public model, the private cloud provides wider opportunities for customizing the
infrastructure to the company’s requirements. A private model is especially suitable for companies that
seek to safeguard their mission-critical operations or for businesses with constantly changing
requirements.
Multiple public cloud service providers, including Amazon, IBM, Cisco, Dell and Red Hat, also provide
private solutions.
Comparison of cloud deployment models:

Criterion | Public | Private | Community | Hybrid
Data security and privacy | Low | High | Comparatively high | High
Data control | Little to none | High | Comparatively high | Comparatively high
Infrastructure Layer
1. It is a layer of virtualization where physical resources are divided into a collection of
virtual resources using virtualization technologies like Xen, KVM, and VMware.
2. This layer serves as the Central Hub of the Cloud Environment, where resources are
constantly added utilizing a variety of virtualization techniques.
3. It is the base upon which the platform layer is created, constructed using virtualized network, storage, and computing resources, and it gives users the flexibility they want.
4. Automated resource provisioning is made possible by virtualization, which also improves
infrastructure management.
5. The infrastructure layer sometimes referred to as the virtualization layer, partitions the
physical resources using virtualization technologies like Xen, KVM, Hyper-V, and
VMware to create a pool of compute and storage resources.
6. The infrastructure layer is crucial to cloud computing since virtualization technologies are
the only ones that can provide many vital capabilities, like dynamic resource assignment.
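As an illustration of how resources at this layer can be inspected programmatically, the sketch below uses the libvirt Python bindings to list the virtual machines on a KVM/QEMU host; it assumes the libvirt-python package and a locally running hypervisor, which are not part of these notes.

# Sketch: enumerate virtual machines on a KVM/QEMU host via libvirt.
# Assumes libvirt-python is installed and a local hypervisor is running.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")    # read-only connection to the hypervisor
try:
    for dom in conn.listAllDomains():            # every defined virtual machine
        state, _ = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
finally:
    conn.close()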
Datacenter Layer
In a cloud environment, this layer is responsible for Managing Physical Resources such as
servers, switches, routers, power supplies, and cooling systems.
Providing end users with services requires all resources to be available and managed in
data centers.
Physical servers connect through high-speed devices such as routers and switches to the
data center.
In software application designs, the division of business logic from the persistent data it
manipulates is well-established. This is due to the fact that the same data cannot be
incorporated into a single application because it can be used in numerous ways to support
numerous use cases. The requirement for this data to become a service has arisen with the
introduction of microservices.
A single database used by many microservices creates a very close coupling. As a result, it
is hard to deploy new or emerging services separately if such services need database
modifications that may have an impact on other services. A data layer containing many
databases, each serving a single microservice or perhaps a few closely related
microservices, is needed to break complex service interdependencies.
Virtualization
Virtualization is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
The Multitenant architecture offers virtual isolation among the multiple tenants. Hence, the
organizations can use and customize their application as though they each have their instances running.
Features of Virtualization:
Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
Sharing: Virtualization allows the creation of a separate computing environment within the
same host.
Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process.
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on a
server in the data center. It allows the user to access their desktop virtually, from any location by a
different machine. Users who want specific operating systems other than Windows Server will need to
have a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and
easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by a virtual storage system. The servers aren’t aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. Here, the central server (physical server) is divided into multiple different virtual servers by changing the identity number and processors, so each system can run its operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by deploying main server resources into sub-server resources. It is beneficial in virtual migration, reducing energy consumption, reducing infrastructural costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without needing to know the technical details of how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many big companies provide data virtualization services, such as Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
Data-integration
Business-integration
Service-oriented architecture data-services
Searching organizational data
Cloud Computing vs. Virtualization:

S.No | Cloud Computing | Virtualization
1 | Cloud computing is used to provide pooled and automated resources that can be accessed on demand. | Virtualization is used to create various simulated environments through a physical hardware system.
2 | The total cost of cloud computing is higher than virtualization. | The total cost of virtualization is lower than cloud computing.
3 | Cloud computing requires many dedicated hardware resources. | A single dedicated hardware system can do a great job in virtualization.
4 | Cloud computing provides unlimited storage space. | In virtualization, storage space depends on physical server capacity.
5 | In cloud computing, the entire server capacity is utilized and the servers are consolidated. | In virtualization, the servers are provided on demand.
VMware:
VMware’s technology is based on the concept of full virtualization, where the underlying hardware is
replicated and made available to the guest operating system, which runs unaware of such abstraction
layers and does not need to be modified. VMware implements full virtualization either in the desktop
environment, by means of Type II hypervisors, or in the server environment, by means of Type I
hypervisors. In both cases, full virtualization is made possible by means of direct execution (for
nonsensitive instructions) and binary translation (for sensitive instructions), thus allowing the
virtualization of architecture such as x86. Besides these two core solutions, VMware provides additional
tools and software that simplify the use of virtualization technology either in a desktop environment,
with tools enhancing the integration of virtual guests with the host, or in a server environment, with
solutions for building and managing virtual computing infrastructures.
Microsoft Hyper-V:
Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As
the name suggests, it uses a hypervisor-based approach to hardware virtualization, which leverages several
techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component
of Windows Server 2008 R2 that installs the hypervisor as a role within the server.
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the
resources on various pieces of hardware. The program which provides partitioning, isolation, or
abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique
that allows multiple guest operating systems (OS) to run on a single host system at the same time. A
hypervisor is sometimes also called a virtual machine manager (VMM).
ANEKA: CLOUD APPLICATION PLATFORM:
Aneka is Manjrasoft Pty. Ltd.’s solution for developing, deploying, and managing cloud applications.
Aneka consists of a scalable cloud middleware that can be deployed on top of heterogeneous computing
resources. It offers an extensible collection of services coordinating the execution of applications,
helping administrators monitor the status of the cloud, and providing integration with existing cloud
technologies. One of Aneka’s key advantages is its extensible set of application programming interfaces
(APIs) associated with different types of programming models - such as Task, Thread, and MapReduce -
used for developing distributed applications, integrating new capabilities into the cloud, and supporting
different types of cloud deployment models: public, private, and hybrid. These features differentiate
Aneka from infrastructure management software and characterize it as a platform for developing,
deploying, and managing execution of applications on various types of clouds.
Aneka is a software platform for developing cloud computing applications. Aneka is a pure PaaS
solution for cloud computing. Aneka is a cloud middleware product that can be deployed on a
heterogeneous set of resources: a network of computers, a multicore server, datacenters, virtual cloud
infrastructures, or a mixture of these. The framework provides both middleware for managing and
scaling distributed applications and an extensible set of APIs for developing them.
Framework Overview:
Figure provides a complete overview of the components of the Aneka framework. The core
infrastructure of the system provides a uniform layer that allows the framework to be deployed over
different platforms and operating systems. The physical and virtual resources representing the bare metal
of the cloud are managed by the Aneka container, which is installed on each node and constitutes the
basic building block of the middleware. A collection of interconnected containers constitutes the Aneka
cloud: a single domain in which services are made available to users and developers.
The container features three different classes of services: Fabric Services, Foundation Services, and Execution Services. These take care of infrastructure management, supporting services for the Aneka Cloud, and application management and execution, respectively.
These services are made available to developers and administrators by means of the application
management and development layer which includes interfaces and APIs for developing cloud
applications and the management tools and interfaces for controlling Aneka clouds.
Aneka implements a service-oriented architecture (SOA), and services are the fundamental components of an Aneka Cloud. Services operate at the container level and, except for the platform abstraction layer, provide developers, users, and administrators with all the features offered by the framework. Services also constitute the extension and customization points of Aneka Clouds: the infrastructure allows for the integration of new services or the replacement of existing ones with different implementations.
The framework includes the basic services for infrastructure and node management, application
execution, accounting, and system monitoring; existing services can be extended and new features can
be added to the cloud by dynamically plugging new ones into the container. Such extensible and flexible
infrastructure enables Aneka Clouds to support different programming and execution models for
applications. A programming model represents a collection of abstractions that developers can use to
express distributed applications; the runtime support for a programming model is constituted by a
collection of execution and foundation services interacting together to carry out application execution.
Thus, the implementation of a new model requires the development of the specific programming abstractions used by application developers and of the services providing runtime support for them. Programming models are just one
aspect of application management and execution. Within an Aneka Cloud environment, there are
different aspects involved in providing a scalable and elastic infrastructure and distributed runtime for
applications. These services involve:
Elasticity and Scaling: By means of the dynamic provisioning service, Aneka supports dynamically
upsizing and downsizing of the infrastructure available for applications.
Runtime Management: The runtime machinery is responsible for keeping the infrastructure up and
running and serves as a hosting environment for services. It is primarily represented by the container and
a collection of services that manage service membership and lookup, infrastructure maintenance, and
profiling.
Resource Management: Aneka is an elastic infrastructure in which resources are added and removed
dynamically according to application needs and user requirements. To provide QoS-based execution, the
system not only allows dynamic provisioning but also provides capabilities for reserving nodes for
exclusive use by specific applications.
Application Management: A specific subset of services is devoted to managing applications. These
services include scheduling, execution, monitoring, and storage management.
User Management: Aneka is a multitenant distributed environment in which multiple applications,
potentially belonging to different users, are executed. The framework provides an extensible user system
via which it is possible to define users, groups, and permissions. The services devoted to user management
build up the security infrastructure of the system and constitute a fundamental element for accounting management.
QoS / SLA Management & Billing: Within a cloud environment, application execution is metered and billed. Aneka
provides a collection of services that coordinate together to take into account the usage of resources by each application
and to bill the owning user accordingly.
Anatomy of the Aneka container:
The Aneka container constitutes the building block of Aneka Clouds and represents the runtime machinery available to services and applications. The container, the unit of deployment in Aneka Clouds, is a lightweight software layer designed to host services and interact with the underlying operating system and hardware. The main role of the container is to provide a lightweight environment in which to deploy services, and some basic capabilities such as communication channels through which it interacts with other nodes in the Aneka Cloud. Almost all operations performed within Aneka are carried out by the services managed by the container. The services installed in the Aneka container can be classified into three major categories:
1. Fabric Services
2. Foundation Services
3. Application Services
The services stack resides on top of the platform abstraction layer (PAL), which represents the interface to the underlying operating system and hardware. It provides a uniform view of the software and hardware environment in which the container is running. Persistence and security traverse the entire services stack to provide a secure and reliable infrastructure.
The Platform Abstraction Layer
The platform abstraction layer addresses the heterogeneity of hosting platforms and provides the container with a uniform interface for accessing the relevant hardware and operating system information, thus allowing the rest of the container to run unmodified on any supported platform. The PAL is responsible for detecting the supported hosting environment and providing the corresponding implementation to interact with it to support the activity of the container. The PAL provides the following features:
Uniform and platform-independent implementation interface for accessing the hosting platform
Uniform access to extended and additional properties of the hosting platform
Uniform and platform-independent access to remote nodes
Uniform and platform-independent management interfaces
The PAL is a small layer of software that comprises a detection engine, which automatically configures the container at
boot time, with the platform-specific component to access the above information and an implementation of the
abstraction layer for the Windows, Linux, and Mac OS X operating systems.
The collectible data that are exposed by the PAL are the following:
Number of cores, frequency, and CPU usage
Memory size and usage
Aggregate available disk space
Network addresses and devices attached to the node
Fabric Services
Fabric Services define the lowest level of the software stack representing the Aneka Container. They provide access to the resource-provisioning subsystem and to the monitoring facilities implemented in Aneka. Resource-provisioning services are in charge of dynamically providing new nodes on demand by relying on virtualization technologies, while monitoring services allow for hardware profiling and implement a basic monitoring infrastructure that can be used by all the services installed in the container.
Foundation Services
Fabric Services are fundamental services of the Aneka Cloud and define the basic infrastructure management features
of the system. Foundation Services are related to the logical management of the distributed system built on top of the
infrastructure and provide supporting services for the execution of distributed applications. All the supported
programming models can integrate with and leverage these services to provide advanced and comprehensive
application management. These services cover:
Storage management for applications
Accounting, billing, and resource pricing
Resource reservation
Foundation Services provide a uniform approach to managing distributed applications and allow developers to
concentrate only on the logic that distinguishes a specific programming model from the others. Together with the
Fabric Services, Foundation Services constitute the core of the Aneka middleware. These services are mostly consumed
by the execution services and Management Consoles. External applications can leverage the exposed capabilities for
providing advanced application management.
Application Services
Application Services manage the execution of applications and constitute a layer that differentiates according to the specific programming model used for developing distributed applications on top of Aneka. The types and the number of services that compose this layer for each of the programming models may vary according to the specific needs or features of the selected model. It is possible to identify two major types of activities that are common across all the supported models: scheduling and execution. Aneka defines a reference model for implementing the runtime support for programming models that abstracts these two activities in corresponding services: the Scheduling Service and the Execution Service. Moreover, it also defines base implementations that can be extended in order to integrate new models.
BUILDING ANEKA CLOUDS
Aneka is primarily a platform for developing distributed applications for clouds. As a software platform it requires
infrastructure on which to be deployed; this infrastructure needs to be managed. Infrastructure management tools are
specifically designed for this task, and building clouds is one of the primary tasks of administrators. Aneka supports
various deployment models for public, private, and hybrid clouds.
PRIVATE CLOUD DEPLOYMENT MODE
A private deployment mode is mostly constituted by local physical resources and infrastructure management software providing access to a local pool of nodes, which might be virtualized. In this scenario, Aneka Clouds are created by harnessing a heterogeneous pool of resources such as desktop machines, clusters, or workstations.
These resources can be partitioned into different groups, and Aneka can be configured to leverage these resources
according to application needs. Moreover, leveraging the Resource Provisioning Service, it is possible to integrate
virtual nodes provisioned from a local resource pool managed by systems such as XenServer, Eucalyptus, and
OpenStack.
Google AppEngine - Google App Engine is a cloud computing Platform as a Service (PaaS) which provides Web app developers and businesses with access to Google’s scalable hosting in Google-managed data centers and tier-1 Internet service. It enables developers to take full advantage of its serverless platform. Applications must be written in one of the supported languages, namely Java, Python, PHP, Go, Node.js, .NET, or Ruby. Applications in Google App Engine use the Google query language and store data in Google Bigtable.
AppEngine provides various types of storage, which operate differently depending on the volatility of the data. There are three different levels of storage: in-memory cache, storage for semistructured data, and long-term storage for static data.
Static file servers: Web applications are composed of dynamic and static data. Dynamic data are a result of the logic of the application and the interaction with the user. Static data mostly consist of the components that define the graphical layout of the application (CSS files, plain HTML files, JavaScript files, images, icons, and sound files) or data files. These files can be hosted on static file servers, since they are not frequently modified. Such servers are optimized for serving static content, and users can specify how dynamic content should be served when uploading their applications to AppEngine.
Data Store: DataStore is a service that allows developers to store semistructured data. The service is designed to
scale and optimized to quickly access data. DataStore can be considered as a large object database in which to
store objects that can be retrieved by a specified key. Both the type of the key and the structure of the object can
vary.
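A minimal sketch of how a semistructured object might be stored and read back with the legacy Python ndb client for DataStore follows; the entity and property names are illustrative, not taken from the notes.

# Sketch: storing and retrieving an object in the App Engine DataStore
# using the legacy Python ndb client library (first-generation runtime).
from google.appengine.ext import ndb

class Greeting(ndb.Model):                # the structure of the stored object can vary
    author  = ndb.StringProperty()
    content = ndb.TextProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

def save_and_load():
    key = Greeting(author="alice", content="hello datastore").put()  # write; returns the key
    return key.get()                      # retrieve the object by its key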
Services:
Applications hosted on AppEngine take the most from the services made available through the runtime
environment. These services simplify most of the common operations that are performed in Web applications:
access to data, account management, integration of external resources, messaging and communication, image
manipulation, and asynchronous computation.
1. urlFetch:
urlFetch is a service in GAE that allows you to make HTTP requests to external websites and
services.
It's commonly used to retrieve data from external APIs, scrape web content, or interact with third-
party services.
urlFetch provides secure and efficient outbound HTTP communication, and it's important for
integrating external data sources into your GAE applications.
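A minimal sketch of an outbound request with the legacy Python urlfetch API is shown below; the URL and deadline are placeholder values.

# Sketch: calling an external HTTP API from App Engine with the legacy urlfetch service.
from google.appengine.api import urlfetch

def fetch_example():
    result = urlfetch.fetch(
        url="https://example.com/api/data",   # placeholder external endpoint
        method=urlfetch.GET,
        deadline=10,                          # seconds to wait before giving up
    )
    if result.status_code == 200:
        return result.content                 # raw response body
    return None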
2. MemCache:
MemCache is a distributed, in-memory data store provided by GAE for caching frequently
accessed data.
It helps improve application performance by reducing the need to retrieve data from slower,
persistent storage solutions.
MemCache is a key-value store and can be used to store frequently accessed data like query
results, session data, and more.
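A minimal sketch of a read-through cache with the legacy Python memcache API follows; the key format, expiry time, and build_report helper are assumptions made for the example.

# Sketch: cache an expensive result with the legacy App Engine memcache service.
from google.appengine.api import memcache

def get_report(report_id):
    cache_key = "report:%s" % report_id
    data = memcache.get(cache_key)               # try the in-memory cache first
    if data is None:
        data = build_report(report_id)           # hypothetical slow query/computation
        memcache.set(cache_key, data, time=600)  # keep it cached for 10 minutes
    return data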
3. Mail and Instant Messaging:
GAE offers email and instant messaging services for communication within your applications.
Mail: You can send email notifications and messages directly from your application using the
built-in mail service.
Instant Messaging: Google Cloud's Pub/Sub service can be used for building real-time messaging
and event-driven systems in GAE.
4. Account Management:
GAE allows you to handle user account management and authentication for your applications.
You can use Google Cloud Identity-Aware Proxy (IAP) and other identity and access
management (IAM) services to control user access to your application.
User authentication and authorization are crucial for securing and personalizing your application.
5. Image Manipulation:
GAE provides various tools and libraries for image manipulation and processing.
You can resize, crop, rotate, and enhance images within your application using these tools.
Image manipulation is useful for tasks like generating thumbnails, image editing, and optimizing
media for web display.
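A minimal sketch of thumbnail generation with the legacy Python Images API follows; image_data is assumed to be raw image bytes already obtained by the application.

# Sketch: create a thumbnail with the legacy App Engine Images service.
from google.appengine.api import images

def make_thumbnail(image_data):
    # image_data: raw image bytes supplied by the application (assumption)
    return images.resize(image_data, width=120, height=120)  # scaled-down copy of the image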
These services and features in GAE enable you to build and enhance your applications with functionalities like
data retrieval, caching, communication, user management, and media processing. They contribute to the overall
functionality and user experience of your Google App Engine applications.
Compute Services:
Web applications are mostly designed to interface applications with users by means of a ubiquitous channel, that
is, the Web. Most of the interaction is performed synchronously: Users navigate the Web pages and get
instantaneous feedback in response to their actions. This feedback is often the result of some computation
happening on the Web application, which implements the intended logic to serve the user request. AppEngine
offers additional services such as Task Queues and Cron Jobs that simplify the execution of computations that are
off-bandwidth or those that cannot be performed within the timeframe of the Web request.
Task Queues:
Task Queues in Google App Engine allow you to offload and manage background tasks and
processes in your application.
Background tasks can include tasks that are time-consuming, need to be executed asynchronously, or
are not suitable for immediate request handling.
Key features:
Asynchronous Processing: Task queues execute tasks independently from user requests, which helps
maintain application responsiveness.
Scalability: Task queues automatically scale to handle varying workloads, ensuring efficient resource
utilization.
Retry Mechanism: They provide built-in retry options for failed tasks, enhancing task reliability.
Prioritization: You can assign priority levels to tasks, ensuring high-priority tasks are processed first.
Common use cases include sending emails, processing large data sets, handling data migrations, and
performing periodic maintenance tasks.
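A minimal sketch of enqueueing such a background task with the legacy Python taskqueue API follows; the worker URL and parameters are placeholders, and the actual work would be done by a separate handler mapped to that URL.

# Sketch: offload work to a push task queue with the legacy App Engine taskqueue API.
from google.appengine.api import taskqueue

def queue_email_job(user_id):
    taskqueue.add(
        url="/tasks/send_email",          # placeholder handler that performs the work
        params={"user_id": user_id},      # delivered to the handler as POST parameters
        queue_name="default",             # failed tasks are retried automatically
    )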
Cron Jobs:
Cron Jobs in Google App Engine are scheduled, automated tasks that run at specified intervals,
similar to traditional cron jobs on Unix-based systems.
You can define a schedule using cron expressions to determine when and how often a task should be
executed.
Key features:
Automation: Cron jobs automate routine tasks, reducing the need for manual intervention.
Scheduled Execution: They can be configured to run tasks daily, hourly, weekly, or at custom
intervals.
Integration: Cron jobs are often used for data backups, periodic database cleanups, and report
generation.
Fine-Grained Control: You can set custom schedules for specific tasks in your application.
Cron jobs are a valuable tool for managing recurring tasks and maintenance activities in your
application, ensuring they are executed consistently and reliably.
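A minimal sketch of a handler that such a cron entry could invoke is shown below, using the legacy Python webapp2 framework; the URL and the work performed are placeholders, and in the legacy runtime the schedule itself is declared separately in a cron.yaml file.

# Sketch: a request handler invoked by a scheduled cron entry.
# The schedule lives in cron.yaml (e.g. url: /tasks/cleanup, schedule: every 24 hours),
# shown here only as a comment.
import webapp2

class CleanupHandler(webapp2.RequestHandler):
    def get(self):
        # place the periodic maintenance work here (placeholder)
        self.response.write("cleanup done")

app = webapp2.WSGIApplication([("/tasks/cleanup", CleanupHandler)])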
Both Task Queues and Cron Jobs play crucial roles in optimizing the performance and functionality of
your Google App Engine applications. Task Queues help manage background processing efficiently,
while Cron Jobs handle the automation of repetitive tasks according to a schedule.
Application Life-Cycle:
The application life cycle in GAE involves stages of development, testing, deployment, scaling, monitoring, and
ongoing maintenance. The platform simplifies many aspects of application management, allowing developers to
focus on their code and application logic while GAE handles infrastructure and scaling.
The SDKs released by Google provide developers with most of the functionalities required by these tasks.
Currently there are two SDKs available for development: Java SDK and Python SDK.
Here's an overview of the typical application life cycle in GAE:
1. Development:
Developers write, test, and debug their application code using the GAE Software Development Kits
(SDKs), which provide a local development environment simulating the GAE platform. During this
phase, developers use their choice of supported programming languages (e.g., Java, Python, Go) to build
the application's functionality.
Java SDK: The Java SDK provides developers with the facility for building applications with the Java 5 and
Java 6 runtime environments. Alternatively, it is possible to develop applications within the Eclipse development
environment by using the Google AppEngine plug-in, which integrates the features of the SDK within the
powerful Eclipse environment. Using the Eclipse software installer, it is possible to download and install Java
SDK, Google Web Toolkit, and Google AppEngine plug-ins into Eclipse. These three components allow
developers to program powerful and rich Java applications for AppEngine. The SDK supports the development of
applications by using the servlet abstraction, which is a common development model. Together with servlets,
many other features are available to build applications. Moreover, developers can easily create Web applications
by using the Eclipse Web Platform, which provides a set of tools and components. The plug-in allows developing,
testing, and deploying applications on AppEngine. Other tasks, such as retrieving the log of applications, are
available by means of command-line tools that are part of the SDK.
Python SDK: The Python SDK allows developing Web applications for AppEngine with Python 2.5. It provides
a standalone tool, called GoogleAppEngineLauncher, for managing Web applications locally and deploying them
to AppEngine. The tool provides a convenient user interface that lists all the available Web applications, controls
their execution, and integrates them with the default code editor for editing application files. In addition, the
launcher provides access to some important services for application monitoring and analysis, such as the logs, the
SDK console, and the dashboard. The log console captures all the information that is logged by the application
while it is running. The console SDK provides developers with a Web interface via which they can see the
application profile in terms of utilized resource. This feature is particularly useful because it allows developers to
preview the behavior of the applications once they are deployed on AppEngine, and it can be used to tune
applications made available through the runtime.
2. Local Testing:
The application is tested locally on the developer's machine using the GAE SDK. Developers can verify
that their code works as expected, handle HTTP requests, interact with data storage services, and utilize
GAE's application services (e.g., Memcache, Task Queues) in a controlled environment.
3. Configuration and Deployment:
Configuration files are created to specify settings for the application, including resource allocation,
scaling parameters, and service dependencies. The application code, along with its configuration, is
deployed to Google App Engine. GAE supports multiple services within an application, and developers
configure and deploy each service separately.
4. Version Management: Google App Engine allows for versioning of applications. Developers can
deploy multiple versions of their application simultaneously. Different versions can be created
for purposes such as A/B testing, staging, and canary releases. Version management enables easy
rollbacks if issues are encountered in a new release.
5. Scaling and Resource Allocation: GAE automatically handles resource allocation and scaling
based on incoming traffic. Developers can configure the minimum and maximum number of
instances, automatic scaling policies, and performance settings based on the application's
requirements.
6. Health Monitoring and Diagnostics: GAE provides comprehensive monitoring and logging tools
for applications. Developers and operators can monitor application performance, review logs,
and receive alerts for any issues that may arise.
7. Maintenance and Updates: Ongoing maintenance, bug fixes, and feature updates can be
performed on the application as needed. Developers can deploy new versions with improvements
and enhancements. Data migrations, scheduled tasks, and maintenance activities are managed
within the application life cycle.
8. Scalability and Optimization: Application owners can optimize the application's resource usage
and scalability settings as traffic patterns change over time. The application can be configured to
handle increased workloads efficiently.
9. Security and Access Control: Access control and security measures are continually monitored
and adjusted as needed. GAE integrates with Google Cloud Identity-Aware Proxy (IAP) and
other identity and access management (IAM) services for user authentication and authorization.
10. End-of-Life or Decommissioning: When the application is no longer needed or is being replaced,
it can be decommissioned. Data and resources are appropriately managed, and the application is
shut down, ensuring that it no longer incurs costs.
Cost Model:
Google App Engine (GAE) employs a cost model that charges users based on their usage of the
platform's resources and services. The cost model for GAE is designed to be pay-as-you-go, meaning
you are charged for the specific resources you consume rather than pre-paying for a fixed infrastructure.
Here are the key aspects of the cost model in GAE: resource consumption, pricing tiers, free quotas, billing and invoicing, the pricing calculator, monitoring and alerts, and budgets and cost controls.
The cost model in GAE provides transparency and flexibility, allowing you to control and manage your
expenses based on your application's usage patterns and requirements. It's important to monitor your
resource usage and keep track of your billing to avoid surprises and optimize your application's costs.
In Google App Engine (GAE), the cost model includes various types of quotas and limits that impact
billing. These quotas help define the usage and costs associated with running applications on the
platform. Three common categories of quotas in GAE are:
1. Billable Quotas:
Billable quotas are the resource limits that, when exceeded, result in charges on your
Google Cloud Platform (GCP) bill. These are the primary factors affecting the cost of
running your GAE application.
Examples of billable quotas include the number of running instances, CPU and memory
usage, and data storage limits in services like Cloud Datastore and Cloud Storage.
When you surpass the free tier or allocated limits in these areas, you may incur additional
costs.
2. Fixed Quotas:
Fixed quotas are predefined, non-configurable resource limits that GAE imposes on all
applications. These quotas exist to ensure fair resource allocation across all users.
Examples of fixed quotas include limits on URL fetches, outbound socket connections,
and certain types of HTTP headers. These quotas cannot be modified by individual GAE
users.
3. Per-Minute Quotas:
Per-minute quotas define the maximum rate at which specific resources can be used
within a minute. They ensure efficient resource allocation and prevent overuse.
Examples of per-minute quotas include the maximum rate of creating or deleting Cloud
Datastore entities and the rate at which you can make requests to the Task Queue service.
Observations:
Google App Engine (AppEngine) is a framework for creating scalable web applications. It leverages
Google's infrastructure to provide developers with a scalable and secure environment. Key components
include a sandboxed runtime for application execution and a set of services that cover common web
development features, making it easier to build applications that can scale effortlessly.
AppEngine emphasizes simplicity with straightforward interfaces for performing optimized and scalable
operations. Developers can construct applications by utilizing these building blocks, allowing
AppEngine to handle scalability when necessary. Compared to traditional web development, creating
robust applications with AppEngine may require a shift in perspective and more effort. Developers must
familiarize themselves with AppEngine's capabilities and adapt their implementations to adhere to the
AppEngine application model.
SQL Azure:
SQL Azure is a cloud-based relational database service hosted on Microsoft Azure, built on SQL Server
technologies. It extends SQL Server's capabilities to the cloud, providing developers with a scalable,
highly available, and fault-tolerant relational database. Here's a summary of its key features and
components:
Compatibility: SQL Azure is fully compatible with SQL Server, making it easy for applications
developed for SQL Server to migrate to SQL Azure. It maintains the same interface exposed by SQL
Server.
Accessibility: It's accessible from anywhere with access to the Azure Cloud, providing flexibility in
connecting to your database.
Manageability: SQL Azure is fully manageable using REST APIs, allowing developers to control
databases and set firewall rules for accessibility.
Architecture: It uses the Tabular Data Stream (TDS) protocol for data access, and a service layer
provides provisioning, billing, and connection-routing services. SQL Azure Fabric manages the
distributed database infrastructure.
Account Activation: Developers need a Windows Azure account to use SQL Azure. Once activated,
they can create servers and configure access to servers using the Windows Azure Management Portal or
REST APIs.
Server Abstractions: SQL Azure servers closely resemble physical SQL Servers and have fully
qualified domain names under the database.windows.net domain. Multiple synchronized copies of each
server are maintained within Azure Cloud.
Billing Model: SQL Azure is billed based on space usage and edition. Two editions are available: Web
Edition for small web applications (1 GB or 5 GB databases) and Business Edition for larger
applications (10 GB to 50 GB databases). A bandwidth fee applies for data transfers outside the Azure
Cloud or region, and a monthly fee per user/database is based on peak database size during the month.
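Since SQL Azure keeps the SQL Server interface and the TDS protocol, an application can connect to it much like a regular SQL Server instance. The sketch below does this from Python with pyodbc; the server name, driver version, and credentials are placeholders, and pyodbc itself is just one possible TDS-capable client.

# Sketch: connecting to a SQL Azure (Azure SQL Database) server over TDS using pyodbc.
# Server name, driver, and credentials below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"      # fully qualified server name
    "DATABASE=mydb;UID=admin_user;PWD=secret;"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")                       # simple connectivity check
print(cursor.fetchone())
conn.close()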
b. Biology (Protein Structure Prediction and Gene Expression Data Analysis for Cancer
Diagnosis): In biology, cloud applications support tasks like protein structure prediction and
gene expression data analysis for cancer diagnosis.
Key Features:
i. Data Integration: Biological data from various sources, including DNA sequencing
and protein databases, is integrated into the cloud.
ii. Processing: Cloud-based algorithms are applied to predict protein structures and
analyze gene expression data.
iii. Machine Learning: Machine learning models help identify genetic markers for
cancer diagnosis and prognosis.
iv. Collaboration: Researchers from different locations can collaborate, share data, and
jointly analyze findings using cloud-based tools.
1. Cost-Efficiency:
- Cloud services eliminate the need for businesses to invest in and maintain their own IT infrastructure,
reducing capital expenses.
- Pay-as-you-go pricing models allow businesses to pay only for the resources they use, making it
cost-effective.
2. Scalability:
- Cloud resources can be quickly scaled up or down based on business demand, providing flexibility
during growth or seasonal fluctuations.
3. Accessibility:
- Cloud services can be accessed from anywhere with an internet connection, enabling remote work
and collaboration.
4. Security:
- Many cloud providers offer advanced security features and compliance certifications, improving data
protection.
- Regular updates and patch management help keep systems secure.
6. Collaboration:
- Cloud-based collaboration tools facilitate real-time collaboration among teams, enabling efficient
document sharing and communication.
2. Data Synchronization:
- Cloud storage solutions keep user data synchronized across multiple devices, ensuring access to files
and content from anywhere.
3. Entertainment:
- Streaming services, such as Netflix and Spotify, offer consumers a wide range of entertainment
content on-demand, without the need to download large files.
4. Collaboration:
- Cloud-based collaboration tools allow consumers to work on shared documents, making it easier for
students and professionals to collaborate on projects.
5. Communication:
- Email, messaging, and video conferencing services in the cloud facilitate communication with
friends, family, and colleagues worldwide.
Customer Relationship Management (CRM):
1. Focus: CRM is primarily focused on managing and improving customer interactions, relationships,
and sales processes.
2. Purpose: It helps businesses build and maintain strong relationships with their customers by providing
tools for tracking customer information, communication, and sales opportunities.
3. Key Features:
- Contact Management: Store and manage customer contact information and history.
- Sales and Lead Management: Track sales opportunities, leads, and customer accounts.
- Marketing Automation: Automate marketing campaigns, email communications, and lead nurturing.
- Customer Support: Provide tools for managing customer inquiries, complaints, and support tickets.
- Analytics and Reporting: Generate insights into customer behavior and sales performance.
4. Benefits: CRM enhances customer engagement, streamlines sales processes, improves customer
service, and provides valuable insights for better decision-making.
Example:
Salesforce: A popular CRM platform that helps businesses manage their customer interactions,
sales leads, and marketing campaigns.
HubSpot: Offers CRM tools with integrated marketing and sales features for businesses of all
sizes.
Media Applications: Media apps provide access to various forms of digital media, including streaming video,
music, and news.
Key Features:
Video streaming and on-demand content
Music streaming and playlists
News and articles
Personalized content recommendations
User-generated content (e.g., reviews)
Examples:
Netflix: A popular streaming platform offering a vast library of TV shows and movies.
Spotify: A music streaming service that provides access to a vast collection of songs and
playlists.
Multiplayer Online Gaming: Online gaming apps support multiplayer gaming experiences, enabling players
to connect, compete, and collaborate in real-time.
Key Features:
Game lobbies and matchmaking
Real-time multiplayer gameplay
In-game chat and communication
Virtual item purchases
Leaderboards and achievements
Examples:
Fortnite: A popular online multiplayer battle royale game available on multiple platforms.
World of Warcraft: A massively multiplayer online role-playing game (MMORPG) with a large
player community.