CC Notes

The document outlines various computing paradigms, including distributed, parallel, cluster, grid, utility, edge, fog, and cloud computing, each defined by unique characteristics and use cases. It also discusses the basics, features, advantages, disadvantages, applications, and current trends in cloud computing, emphasizing its evolution and impact on various industries. Key trends include multi-cloud adoption, edge computing, serverless computing, and the integration of AI and machine learning.


Introduction:

Computing paradigms refer to fundamental models or approaches used in the field of computer science
and information technology to solve computational problems and process data. These paradigms
encompass various principles, methodologies, and technologies that guide how computing tasks are
conceptualized and executed. Different computing paradigms are suited to different types of problems
and application domains. Some common computing paradigms include:
1. Distributed Computing:

Distributed computing is defined as a type of computing where multiple computer systems work on
a single problem. Here all the computer systems are linked together and the problem is divided into
sub-problems where each part is solved by different computer systems.
The goal of distributed computing is to increase the performance and efficiency of the system and
ensure fault tolerance.
Each processor has its own local memory, and all the processors communicate with each other over a network.
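As a minimal sketch of this idea (not part of the original notes), the example below uses Python's standard-library XML-RPC modules: a worker machine exposes a "solve sub-problem" function over the network, and a coordinator on another machine sends it part of the work. The host names and ports are placeholder assumptions.

```python
# Minimal sketch of distributed computing with Python's standard library.
# Each worker runs on its own machine with its own local memory and exposes
# a sub-problem solver over the network; hostnames/ports are placeholders.

# --- worker.py (run on each worker machine) ---
from xmlrpc.server import SimpleXMLRPCServer

def solve_subproblem(numbers):
    """Solve one part of the overall problem (here, a partial sum)."""
    return sum(numbers)

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(solve_subproblem)
server.serve_forever()

# --- coordinator.py (run on the machine that splits the problem) ---
# import xmlrpc.client
# workers = [xmlrpc.client.ServerProxy("http://worker1:8000"),
#            xmlrpc.client.ServerProxy("http://worker2:8000")]
# data = list(range(10_000))
# half = len(data) // 2
# total = sum(w.solve_subproblem(part)
#             for w, part in zip(workers, [data[:half], data[half:]]))
# print(total)
```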

2. Parallel Computing:
Parallel computing is defined as a type of computing where multiple computer systems are used
simultaneously. Here a problem is broken into sub-problems and then further broken down into
instructions. These instructions from each sub-problem are executed concurrently on different
processors.
A parallel computing system consists of multiple processors that communicate with each other and perform multiple tasks over a shared memory simultaneously.
The goal of parallel computing is to save time and provide concurrency.
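As a simplified illustration, the sketch below splits one problem (summing squares) into sub-problems and runs them concurrently on several processor cores with Python's standard multiprocessing module; the chunk count and worker count are arbitrary choices for the example.

```python
# Minimal sketch of parallel computing: one problem is broken into
# sub-problems that run concurrently on different processor cores.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each worker process executes its instructions on its own core.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = len(data) // 4
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=4) as pool:          # 4 sub-problems in parallel
        partial_results = pool.map(partial_sum_of_squares, chunks)

    print(sum(partial_results))              # combine the partial results
```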
3. Cluster Computing:
A cluster is a group of independent computers that work together to perform the tasks given.
Cluster computing is defined as a type of computing that consists of two or more independent
computers, referred to as nodes, that work together to execute tasks as a single machine.
The goal of cluster computing is to increase the performance, scalability and simplicity of the system.
All the nodes, whether parent or child, act as a single entity to perform the tasks.

4. Grid Computing:
Grid computing is defined as a type of computing that constitutes a network of computers working together to perform tasks that may be difficult for a single machine to handle. All the computers on that network work under the same umbrella and are collectively termed a virtual supercomputer.
The tasks they work on either demand high computing power or involve large data sets.
All communication between the computer systems in grid computing is done on the “data grid”.
The goal of grid computing is to solve computationally intensive problems in less time and improve productivity.
5. Utility Computing:
Utility computing is defined as the type of computing where the service provider provides the needed resources and services to the customer and charges them based on usage, as per requirement and demand, rather than at a fixed rate.
Utility computing involves the renting of resources such as hardware and software, depending on the demand and the requirement.
The goal of utility computing is to increase the utilization of resources and be more cost-efficient.

6. Edge Computing:
Edge computing is defined as the type of computing that is focused on reducing long-distance communication between the client and the server. This is done by running fewer processes in the cloud and moving those processes onto a user’s computer, an IoT device, or an edge device/server.
The goal of edge computing is to bring computation to the network’s edge, which narrows the gap between users and services and results in better, closer interaction.

7. Fog Computing:
Fog computing is defined as the type of computing that acts as a computational layer between the cloud and the data-producing devices. It is also called “fogging”.
This structure enables resources, data, and applications to be placed in locations closer to where they are needed.
The goal of fog computing is to improve overall network efficiency and performance.

8. Cloud Computing:
Cloud is defined as the use of someone else’s servers to host, process or store data.
Cloud computing is defined as the delivery of on-demand computing services over the internet on a pay-as-you-go basis. It is widely distributed, network-based and used for storage.
The types of cloud are public, private, hybrid and community, and some cloud providers are Google Cloud, AWS, Microsoft Azure and IBM Cloud.

Comparison of various Computing Technologies:


| Paradigm | Key Characteristics | Use Cases | Examples |
|---|---|---|---|
| Distributed | Geographically dispersed components | Scalable network applications; content delivery systems; distributed databases | Social media platforms; video streaming services; blockchain networks |
| Grid | Utilizes resources from multiple organizations | High-performance scientific computing; data-intensive research; distributed simulations | Large Hadron Collider (LHC) data analysis; climate modeling; drug discovery research |
| Fog | Edge computing, closer to data source and users | IoT applications; real-time analytics; low-latency services | Smart home automation; autonomous vehicles; industrial IoT sensors |
| Cloud | Provides on-demand resources through the internet | Web hosting; data storage and processing; SaaS, PaaS, IaaS services | Amazon Web Services (AWS); Google Cloud Platform (GCP); Microsoft Azure |
| Parallel | Concurrent execution of tasks or processes | Scientific simulations; multimedia processing; parallel algorithms | Weather forecasting models; video rendering software; parallel sorting algorithms |
| Cluster | Group of interconnected computers or servers | Load balancing; high availability; HPC clusters and web server farms | High-traffic e-commerce websites (load balancing); datacenter clusters; Beowulf clusters |
Cloud Computing Basics-
What is Cloud Computing?
Cloud computing is a technology paradigm that involves delivering various computing services,
including servers, storage, databases, networking, software, and more, over the internet. Instead of
owning and maintaining physical hardware and software resources, users and organizations can access
and use these resources on a pay-as-you-go basis, typically through a cloud service provider.

History of Cloud Computing


The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services.

Characteristics of Cloud Computing


There are five key characteristics of cloud computing:
On Demand Self Service
Cloud Computing allows the users to use web services and resources on demand. One can logon to a
website at any time and use them.
Broad Network Access
Since cloud computing is completely web based, it can be accessed from anywhere and at any time.
Resource Pooling
Cloud computing allows multiple tenants to share a pool of resources. One can share single physical
instance of hardware, database and basic infrastructure.
Rapid Elasticity
It is very easy to scale the resources vertically or horizontally at any time. Scaling of resources means
the ability of resources to deal with increasing or decreasing demand.
The resources being used by customers at any given point of time are automatically monitored.
Measured Service
The cloud provider controls and monitors all aspects of the cloud service; resource usage is metered, and resource optimization, billing, and capacity planning depend on this measurement.
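To make the pay-as-you-go, measured-service idea concrete, here is a small illustrative calculation in Python; the unit rates are hypothetical and not taken from any real provider's price list.

```python
# Hypothetical pay-as-you-go bill: the provider meters usage and charges
# per unit consumed. All rates below are made-up illustration values.
RATE_PER_VM_HOUR = 0.05      # currency units per VM-hour (hypothetical)
RATE_PER_GB_MONTH = 0.02     # currency units per GB stored (hypothetical)
RATE_PER_GB_EGRESS = 0.09    # currency units per GB transferred out (hypothetical)

def monthly_bill(vm_hours, storage_gb, egress_gb):
    return (vm_hours * RATE_PER_VM_HOUR
            + storage_gb * RATE_PER_GB_MONTH
            + egress_gb * RATE_PER_GB_EGRESS)

# Example: two VMs running all month (2 * 730 h), 500 GB stored, 50 GB egress.
print(round(monthly_bill(2 * 730, 500, 50), 2))   # -> 87.5
```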

Features of Cloud Computing:


1. Virtualization: Cloud computing relies on virtualization technology to abstract and pool physical
resources, such as servers and storage, into virtual instances that can be managed and allocated
dynamically.
2. Multi-Tenancy: Cloud providers serve multiple customers on the same physical infrastructure by
isolating and securing each customer's data and resources. This multi-tenancy approach ensures
efficient resource utilization.
3. Scalability: Cloud services offer the ability to scale resources both vertically (adding more power to
a single instance) and horizontally (adding more instances) to accommodate changing workloads and
user demand.
4. Self-Service Portals: Cloud providers often offer web-based interfaces and management consoles
that allow users to provision, configure, and manage cloud resources without the need for direct
human intervention.
5. Redundancy and High Availability: Cloud providers typically build redundancy and failover
mechanisms into their infrastructure to ensure high availability and reliability of services.
6. Security and Compliance: Cloud providers invest in robust security measures and compliance
certifications to protect data and ensure adherence to industry-specific regulations.
7. API Access: Cloud services often expose application programming interfaces (APIs) that allow developers to programmatically interact with and automate the management of cloud resources (see the short sketch after this list).
8. Data Backup and Recovery: Cloud providers offer data backup and recovery services, ensuring
data resilience and protection against data loss.
9. Geographic Distribution: Many cloud providers offer data centers in multiple geographic regions,
allowing users to deploy applications and store data in locations that align with their needs for
latency, data sovereignty, and disaster recovery.
10. Service Models: Cloud computing includes various service models, such as Infrastructure as a
Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), catering to different
levels of abstraction and management.
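Following up on the API Access feature above, here is a minimal sketch of programmatic resource management using AWS's boto3 SDK. It assumes boto3 is installed and credentials are already configured; the bucket name is a placeholder, and creating a bucket outside us-east-1 additionally requires a region configuration.

```python
# Minimal sketch of "API access": managing cloud resources programmatically
# instead of through a web console. Assumes the boto3 SDK is installed and
# AWS credentials are configured; the bucket name below is a placeholder.
import boto3

s3 = boto3.client("s3")

# Provision a resource (a storage bucket) through the API
# (in us-east-1; other regions also need a CreateBucketConfiguration).
s3.create_bucket(Bucket="example-notes-bucket-12345")

# Query existing resources the same way.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```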
Advantages of Cloud Computing:
1. Cost-Efficiency:
 Advantage: Cloud computing eliminates the need for organizations to invest in and maintain
physical hardware, reducing capital expenses. Users only pay for the resources they use,
leading to cost savings.
 Example: Small businesses can avoid the upfront cost of purchasing and maintaining servers
by using cloud-based services.
2. Scalability:
 Advantage: Cloud services can quickly scale up or down based on demand. This scalability
ensures that organizations can efficiently handle fluctuating workloads.
 Example: E-commerce websites can seamlessly accommodate increased traffic during
holiday seasons without overprovisioning resources.
3. Flexibility and Accessibility:
 Advantage: Cloud resources can be accessed from anywhere with an internet connection,
allowing for remote work and collaboration.
 Example: Remote teams can collaborate on projects using cloud-based productivity tools like
Google Workspace or Microsoft 365.
4. Reliability and High Availability:
 Advantage: Leading cloud providers offer redundancy and failover mechanisms, ensuring
high availability and minimizing downtime.
 Example: Critical applications and websites can maintain continuous service availability.
5. Security and Compliance:
 Advantage: Cloud providers invest in robust security measures and compliance certifications,
often surpassing what individual organizations can achieve.
 Example: Healthcare organizations can store and process sensitive patient data in compliance
with regulations like HIPAA.
6. Automatic Updates and Maintenance:
 Advantage: Cloud providers handle software updates, maintenance, and patching, reducing
the administrative burden on users.
 Example: Users of cloud-based software benefit from the latest features and security patches
without manual intervention.
Disadvantages of Cloud Computing:
1. Data Security Concerns:
 Disadvantage: Storing data in the cloud raises concerns about data privacy and security.
Organizations must trust their cloud provider's security measures.
 Example: A data breach or unauthorized access to cloud-stored data can have severe
consequences.
2. Downtime and Service Outages:
 Disadvantage: Even with high availability measures, cloud services can experience downtime
due to technical issues or outages, impacting business operations.
 Example: A major cloud provider experiencing an outage can disrupt services for many users.
3. Limited Control:
 Disadvantage: Users have less control over the underlying infrastructure and may be
constrained by the cloud provider's policies and limitations.
 Example: Customizing hardware configurations may be limited in a cloud environment.
4. Data Transfer Costs:
 Disadvantage: Moving large volumes of data in and out of the cloud can incur data transfer
costs, which can be significant.
 Example: Frequent data transfers for backup and recovery may lead to unexpected expenses.
5. Dependence on Internet Connectivity:
 Disadvantage: Cloud services require a reliable internet connection. Downtime or slow
internet can disrupt access to critical resources.
 Example: Remote users in areas with poor connectivity may experience usability issues.
6. Vendor Lock-In:
 Disadvantage: Switching cloud providers or migrating on-premises data to the cloud can be
complex and costly, potentially locking organizations into a specific vendor.
 Example: Transitioning from one cloud provider to another may involve rewriting or
reconfiguring applications.
7. Compliance and Legal Issues:
 Disadvantage: Organizations must navigate legal and compliance issues when storing
sensitive data in the cloud, ensuring compliance with industry regulations.
 Example: Financial institutions must adhere to strict regulations when using cloud
services for customer data.
Applications of Cloud Computing:
1. Web Hosting and Development:
 Many websites and web applications are hosted in the cloud, allowing for easy
scalability, high availability, and simplified deployment.
2. Data Storage and Backup:
 Cloud storage services like Dropbox, Google Drive, and Amazon S3 provide convenient
and cost-effective data storage and backup solutions for individuals and businesses.
3. Software as a Service (SaaS):
 SaaS applications, such as Google Workspace, Microsoft 365, and Salesforce, are
delivered via the cloud, enabling users to access software applications through a web
browser without the need for local installation.
4. Infrastructure as a Service (IaaS):
 IaaS providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud
Platform (GCP) offer virtualized infrastructure resources, including servers, storage, and
networking, allowing organizations to build and manage their IT environments in the
cloud.
5. Platform as a Service (PaaS):
 PaaS offerings, such as Heroku and Google App Engine, provide a platform and
development environment for developers to build, deploy, and scale applications without
worrying about underlying infrastructure.
6. Big Data and Analytics:
 Cloud computing platforms provide the computational power and storage needed for big
data processing and analytics. Tools like Amazon EMR, Google BigQuery, and Azure
HDInsight are commonly used for this purpose.
7. Machine Learning and AI:
 Cloud-based machine learning and AI services, including AWS SageMaker, Google AI
Platform, and Azure Machine Learning, enable organizations to develop and deploy
machine learning models without significant upfront investment.
8. IoT (Internet of Things):
 Cloud platforms support IoT applications by collecting, storing, and analyzing data from
connected devices. AWS IoT, Azure IoT, and Google Cloud IoT Core offer IoT-specific
services.
9. Content Delivery and Streaming:
 Content delivery networks (CDNs) like Akamai and Cloudflare use cloud infrastructure
to distribute content efficiently, reducing latency and improving the delivery of web
content and streaming media.
10. E-commerce:
 Many e-commerce businesses leverage cloud computing to handle spikes in traffic during
sales events, manage inventory, and provide a seamless online shopping experience.
11. Gaming:
 Cloud-based gaming services, such as Google Stadia and NVIDIA GeForce Now, allow
gamers to stream and play video games from the cloud, eliminating the need for high-end
gaming hardware.
12. Healthcare:
 Cloud computing aids in storing and processing vast amounts of medical data, facilitating
telemedicine, enabling medical research, and ensuring secure access to patient records.
13. Financial Services:
 The financial industry uses cloud computing for risk analysis, fraud detection,
algorithmic trading, and secure storage of financial data while adhering to strict
regulatory requirements.
14. Education:
 Cloud-based learning management systems (LMS) and collaboration tools have become
crucial in online education, making it easier for students and educators to access
resources and collaborate remotely.
15. Government and Public Services:
 Government agencies leverage cloud computing for cost-effective and secure data
storage, disaster recovery, citizen services, and infrastructure modernization.
Trends in Cloud Computing:
Multi-Cloud and Hybrid Cloud Adoption:
 Organizations are increasingly adopting multi-cloud and hybrid cloud strategies to avoid vendor
lock-in, enhance resilience, and optimize cost and performance. This trend involves using
multiple cloud providers and integrating on-premises infrastructure with cloud resources.
Edge Computing:
 Edge computing extends cloud capabilities to the edge of the network, closer to data sources and
users. It reduces latency and enables real-time processing, making it essential for applications
like IoT, autonomous vehicles, and augmented reality.
Serverless Computing:
 Serverless computing, often referred to as Function as a Service (FaaS), allows developers to
focus on writing code without managing server infrastructure. This trend simplifies development,
enhances scalability, and reduces operational overhead.
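As an illustration of the FaaS model, a minimal AWS Lambda-style handler in Python is sketched below; the event shape is an assumption, and packaging/deployment details are omitted.

```python
# Minimal sketch of a FaaS (serverless) function in the AWS Lambda style:
# the developer writes only this handler; the provider provisions, scales,
# and bills per invocation. The event payload shape here is an assumption.
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```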
Containerization and Kubernetes:
 Containers, particularly Docker, and orchestration platforms like Kubernetes, are becoming
standard for deploying and managing applications in the cloud. Containers offer consistency
across environments and improve resource utilization.
AI and Machine Learning Integration:
 Cloud providers are offering AI and machine learning services that enable organizations to build
and deploy models for various applications, including data analytics, natural language
processing, and computer vision.
Quantum Computing in the Cloud:
 Some cloud providers are experimenting with quantum computing services, allowing researchers
and organizations to access and experiment with quantum computing power for complex
problem-solving.
Blockchain as a Service (BaaS):
 BaaS offerings provide pre-configured blockchain infrastructure, making it easier for
organizations to develop and deploy blockchain applications for supply chain management,
finance, and more.
Serverless Databases:
 In addition to serverless compute, serverless databases are emerging, offering automatic scaling
and cost savings based on actual usage. These databases are suitable for applications with
unpredictable workloads.
Green Cloud Computing:
 Environmental concerns are driving the adoption of green cloud computing practices. Cloud
providers are investing in renewable energy sources and energy-efficient data centers to reduce
their carbon footprint.
Security and Compliance Services:
 With increased cyber threats, cloud providers are enhancing their security and compliance
offerings. This includes identity and access management, encryption, and compliance with
industry-specific regulations.
Data Analytics and Data Lakes:
 Cloud platforms are expanding their data analytics capabilities, enabling organizations to analyze
large volumes of data using tools like Apache Spark and Hadoop. Data lakes are becoming
central repositories for structured and unstructured data.
DevOps and Continuous Integration/Continuous Deployment (CI/CD):
 Cloud-native development practices, along with DevOps principles and CI/CD pipelines, are
standardizing application development and deployment, improving agility and collaboration.
5G Integration:
 The rollout of 5G networks is expected to further enhance cloud computing capabilities by
providing faster and more reliable connectivity for edge computing and IoT applications.
Compliance Automation:
 Automation tools are emerging to help organizations ensure compliance with regulatory
requirements in the cloud, streamlining audits and reducing compliance risks.
Leading Cloud Platform Service Providers.
1. Amazon Web Services (AWS): One of the most successful cloud-based businesses is Amazon Web Services (AWS), an Infrastructure as a Service (IaaS) offering in which customers pay rent for virtual computers on Amazon’s infrastructure.
2. Microsoft Azure Platform: Microsoft created the Azure platform, which enables .NET Framework applications to run over the internet as an alternative platform for Microsoft developers. This is the classic Platform as a Service (PaaS).
3. Google: Google has built a worldwide network of data centers to service its search engine. Through this service, Google has captured a large share of the world’s advertising revenue. Using that revenue, Google offers free software to users based on this infrastructure. This is called Software as a Service (SaaS).
4. IBM Cloud is a collection of cloud computing services for businesses provided by the IBM
Corporation. It provides infrastructure as a service, software as a service, and platform as a
service.
5. Oracle Cloud is a collection of cloud services offered by Oracle Corporation, including
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
6. Alibaba Cloud is the cloud computing arm of Alibaba Group, providing a comprehensive suite of
global cloud computing services to power both their international customers’ online businesses
and Alibaba Group’s own e-commerce ecosystem.
7. Tencent Cloud is a cloud service platform provided by Tencent. It provides a range of services
such as virtual machines, storage, databases, and analytics.
8. Rackspace is a provider of hybrid cloud computing, founded in 1998. It provides managed
hosting, cloud hosting, and email and app services.
9. Salesforce – A cloud-based customer relationship management (CRM) platform used for sales,
marketing, and customer service.
10. VMware Cloud – A cloud platform by VMware, offering services such as virtualization, cloud
management, and network virtualization.
11. DigitalOcean – A cloud platform focused on providing easy-to-use, scalable computing services.
12. Red Hat OpenShift – A cloud platform by Red Hat, offering container-based application
development and management.
13. Cisco Cloud – A cloud platform by Cisco, offering a range of services including networking,
security, and application development.
14. HP Helion – A cloud platform by HP, offering services such as computing, storage, and
networking.
15. SAP Cloud Platform – A cloud platform by SAP, offering services such as analytics, application
development, and integration.
16. Fujitsu Cloud – A cloud platform by Fujitsu, offering services such as computing, storage, and
networking.
17. OVHcloud – A cloud platform offering a range of services including computing, storage, and
networking.
18. CenturyLink Cloud – A cloud platform offering a range of services including computing,
storage, and networking.
19. Joyent – A cloud platform offering services such as computing, storage, and container-based
application development.
20. NTT Communications Cloud – A cloud platform offering services such as computing, storage,
and networking.
Cloud Architecture:

Cloud Service Models:


Cloud Computing can be defined as the practice of using a network of remote servers hosted on the
Internet to store, manage, and process data, rather than a local server or a personal computer. Companies
offering such kinds of cloud computing services are called cloud providers and typically charge for
cloud computing services based on usage. Grids and clusters are the foundations for cloud computing.
Types of Cloud Computing: Most cloud computing services fall into five broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything/Everything as a service (XaaS)
5. Function as a Service (FaaS)
These are sometimes called the cloud computing stack because they are built on top of one another. Knowing what they are and how they differ makes it easier to accomplish your goals. These abstraction layers can also be viewed as a layered architecture in which services of a higher layer can be composed of services of the underlying layer, e.g., a SaaS offering can be built on top of PaaS and IaaS services.
Software as a Service(SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet. Instead
of installing and maintaining software, we simply access it via the Internet, freeing ourselves from the
complex software and hardware management. It removes the need to install and run applications on our
own computers or in the data centers eliminating the expenses of hardware as well as software
maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud
service provider. Most SaaS applications can be run directly from a web browser without any downloads
or installations required. The SaaS applications are sometimes called Web-based software, on-demand
software, or hosted software.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser without needing
to download and install any software. This reduces the time spent in installation and
configuration and can reduce the issues that can get in the way of the software deployment.
3. Accessibility: We can Access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a SaaS provider to
automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
The various companies providing Software as a Service are Cloud9 Analytics, Salesforce.com, CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua, Dropbox, and CloudTran.
Disadvantages of SaaS:
1. Limited customization: SaaS solutions are typically not as customizable as on-premises
software, meaning that users may have to work within the constraints of the SaaS provider’s
platform and may not be able to tailor the software to their specific needs.
2. Dependence on internet connectivity: SaaS solutions are typically cloud-based, which means
that they require a stable internet connection to function properly. This can be problematic for
users in areas with poor connectivity or for those who need to access the software in offline
environments.
3. Security concerns: SaaS providers are responsible for maintaining the security of the data stored
on their servers, but there is still a risk of data breaches or other security incidents.
4. Limited control over data: SaaS providers may have access to a user’s data, which can be a
concern for organizations that need to maintain strict control over their data for regulatory or
other reasons.
Platform as a Service
PaaS is a category of cloud computing that provides a platform and environment to allow developers to
build applications and services over the internet. PaaS services are hosted in the cloud and accessed by
users simply via their web browser.

A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees users
from having to install in-house hardware and software to develop or run a new application. Thus, the
development and deployment of the application take place independent of the hardware.
The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment. To make it simple, take the example of an annual day function: you have two options, either to build a venue or to rent one, but the function is the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other IT services,
which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating the
expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web application
lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the overall
development of the application can be more effective.
5. The various companies providing Platform as a Service are AWS Elastic Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees and IBM SmartCloud.
Disadvantages of PaaS:
1. Limited control over infrastructure: PaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that users have
less control over the environment and may not be able to make certain customizations.
2. Dependence on the provider: Users are dependent on the PaaS provider for the availability,
scalability, and reliability of the platform, which can be a risk if the provider experiences outages
or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate certain types of workloads
or applications, which can limit the value of the solution for certain organizations.
Infrastructure as a Service
Infrastructure as a service (IaaS) is a service model that delivers computer infrastructure on an
outsourced basis to support various operations. Typically IaaS is a service where infrastructure is
provided as outsourcing to enterprises such as networking equipment, devices, database, and web
servers.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-use basis, typically by the hour, week, or month. Some providers also charge customers based on the amount of virtual machine space they use.
IaaS simply provides the underlying operating systems, security, networking, and servers for developing and deploying applications, services, development tools, databases, etc.
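To make the IaaS model concrete, the sketch below provisions a virtual server through the AWS SDK for Python (boto3); the image ID and key-pair name are placeholders, and credentials/region are assumed to be configured already.

```python
# Minimal sketch of IaaS: renting a raw virtual machine by API call.
# Assumes boto3 is installed and AWS credentials/region are configured;
# the image ID and key-pair name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # small general-purpose instance
    KeyName="my-keypair",              # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```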
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers pay on a per-use basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than traditional web
hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing software.
4. Maintenance: There is no need to manage the underlying data center or the introduction of new
releases of the development or underlying software. This is all handled by the IaaS Cloud
Provider.
5. The various companies providing Infrastructure as a Service are Amazon Web Services, Bluestack, IBM, OpenStack, Rackspace, and VMware.
Disadvantages of IaaS:
1. Limited control over infrastructure: IaaS providers typically manage the underlying
infrastructure and take care of maintenance and updates, but this can also mean that users have
less control over the environment and may not be able to make certain customizations.
2. Security concerns: Users are responsible for securing their own data and applications, which
can be a significant undertaking.
3. Limited access: Cloud computing may not be accessible in certain regions and countries due to
legal policies.
Anything as a Service
It is also known as Everything as a Service. Most of the cloud service providers nowadays offer
anything as a service that is a compilation of all of the above services including some additional
services.
Advantages of XaaS:
1. Scalability: XaaS solutions can be easily scaled up or down to meet the changing needs of an
organization.
2. Flexibility: XaaS solutions can be used to provide a wide range of services, such as storage,
databases, networking, and software, which can be customized to meet the specific needs of an
organization.
3. Cost-effectiveness: XaaS solutions can be more cost-effective than traditional on-premises
solutions, as organizations only pay for the services.
Disadvantages of XaaS:
1. Dependence on the provider: Users are dependent on the XaaS provider for the availability,
scalability, and reliability of the service, which can be a risk if the provider experiences outages
or other issues.
2. Limited flexibility: XaaS solutions may not be able to accommodate certain types of workloads
or applications, which can limit the value of the solution for certain organizations.
3. Limited integration: XaaS solutions may not be able to integrate with existing systems and data
sources, which can limit the value of the solution for certain organizations.
Comparison of different service models:

| Basis Of | IAAS | PAAS | SAAS |
|---|---|---|---|
| Stands for | Infrastructure as a service. | Platform as a service. | Software as a service. |
| Uses | IAAS is used by network architects. | PAAS is used by developers. | SAAS is used by the end user. |
| Access | IAAS gives access to resources like virtual machines and virtual storage. | PAAS gives access to a runtime environment and to deployment and development tools for applications. | SAAS gives access to the end user. |
| Model | It is a service model that provides virtualized computing resources over the internet. | It is a cloud computing model that delivers tools used for the development of applications. | It is a service model in cloud computing that hosts software to make it available to clients. |
| Technical understanding | It requires technical knowledge. | Some knowledge is required for the basic setup. | There is no requirement for technical knowledge; the company handles everything. |
| Popularity | It is popular among developers and researchers. | It is popular among developers who focus on the development of apps and scripts. | It is popular among consumers and companies, such as file sharing, email, and networking. |
| Percentage rise | It has around a 12% increment. | It has about a 27% rise in the cloud computing model. | It has around a 32% increment. |
| Usage | Used by skilled developers to develop unique applications. | Used by mid-level developers to build applications. | Used among users of entertainment. |
| Cloud services | Amazon Web Services, Sun, vCloud Express. | Facebook, and Google search engine. | MS Office web, Facebook and Google Apps. |
| Enterprise services | AWS virtual private cloud. | Microsoft Azure. | IBM cloud analysis. |
| Outsourced cloud services | Salesforce | Force.com, Gigaspaces. | AWS, Terremark |
| User controls | Operating system, runtime, middleware, and application data | Data of the application | Nothing |
| Others | It is highly scalable and flexible. | It is highly scalable to suit different businesses according to resources. | It is highly scalable to suit small, mid and enterprise level businesses. |
Cloud Deployment model:


A cloud deployment model is a specific configuration of environment parameters such as the
accessibility and proprietorship of the deployment infrastructure and storage size. This means
that deployment types vary depending on who controls the infrastructure and where it's located.
Cloud deployment models can be briefly classified into 4 types:
1. Public
2. Private
3. Hybrid
4. Community

1. Public clouds are available to the general public, and data are created and stored on third-party
servers.
Server infrastructure belongs to the service providers that manage it and administer pooled resources, which is why there is no need for user companies to buy and maintain their own hardware. Provider companies offer resources as a service, either free of charge or on a pay-per-use basis, via the Internet. Users can scale resources as required.
The public cloud deployment model is the first choice for businesses with low privacy concerns. When
it comes to popular public cloud deployment models, examples are Amazon Elastic Compute Cloud
(Amazon EC2 — the top service provider according to ZDNet), Microsoft Azure, Google App Engine,
IBM Cloud, Salesforce Heroku and others.
The Advantages of a Public Cloud
 Hassle-free infrastructure management. Having a third party running your cloud infrastructure is
convenient: you do not need to develop and maintain your software because the service provider does
it for you. In addition, the infrastructure setup and use are uncomplicated.
 High scalability. You can easily extend the cloud’s capacity as your company requirements increase.
 Reduced costs. You pay only for the service you use, so there’s no need to invest in hardware or
software.
 24/7 uptime. The extensive network of your provider’s servers ensures your infrastructure is
constantly available and has improved operation time.
The Disadvantages of a Public Cloud
 Compromised reliability. That same server network is also meant to ensure against failure, but often enough public clouds experience outages and malfunctions, as in the case of the 2016 Salesforce CRM disruption that caused a storage collapse.
 Data security and privacy issues give rise to concern. Although access to data is easy, a public
deployment model deprives users of knowing where their information is kept and who has access to
it.
 The lack of a bespoke service. Service providers have only standardized service options, which is
why they often fail to satisfy more complex requirements.
2. Private Cloud
There is little to no difference between a public and a private model from the technical point of view, as
their architectures are very similar. However, as opposed to a public cloud that is available to the general
public, only one specific company owns a private cloud. That is why it is also called
an internal or corporate model.
The server can be hosted externally or on the premises of the owner company. Regardless of their
physical location, these infrastructures are maintained on a designated private network and use software
and hardware that are intended for use only by the owner company.
A clearly defined scope of people have access to the information kept in a private repository, which
prevents the general public from using it. In light of numerous breaches in recent years, a growing
number of large corporations has decided on a closed private cloud model, as this minimizes data
security issues.
Compared to the public model, the private cloud provides wider opportunities for customizing the
infrastructure to the company’s requirements. A private model is especially suitable for companies that
seek to safeguard their mission-critical operations or for businesses with constantly changing
requirements.
Multiple public cloud service providers, including Amazon, IBM, Cisco, Dell and Red Hat, also provide
private solutions.

The Advantages of a Private Cloud


All the benefits of this deployment model result from its autonomy. They are the following:
 Bespoke and flexible development and high scalability, which allows companies to customize their
infrastructures in accordance with their requirements
 High security, privacy and reliability, as only authorized persons can access resources
The Disadvantages of a Private Cloud
The major disadvantage of the private cloud deployment model is its cost, as it requires considerable
expense on hardware, software and staff training. That is why this secure and flexible computing
deployment model is not the right choice for small companies.
3. Community Cloud
A community deployment model largely resembles the private one; the only difference is the set of
users. Whereas only one company owns the private cloud server, several organizations with similar
backgrounds share the infrastructure and related resources of a community cloud.
If all the participating organizations have uniform security, privacy and performance requirements, this
multi-tenant data center architecture helps these companies enhance their efficiency, as in the case of
joint projects. A centralized cloud facilitates project development, management and implementation. The
costs are shared by all users.

The advantages of a Community Cloud


 Cost reduction
 Improved security, privacy and reliability
 Ease of data sharing and collaboration
The disadvantages of a Community Cloud
 High cost compared to the public deployment model
 Sharing of fixed storage and bandwidth capacity
 Not commonly used yet
4. Hybrid Cloud
As is usually the case with any hybrid phenomenon, a hybrid cloud encompasses the best features of the
abovementioned deployment models (public, private and community). It allows companies to mix and
match the facets of the three types that best suit their requirements.
As an example, a company can balance its load by locating mission-critical workloads on a secure private cloud and deploying less sensitive ones to a public one. The hybrid cloud deployment model not only safeguards and controls strategically important assets but does so in a cost- and resource-effective way. In addition, this approach facilitates data and application portability.
The Benefits of a Hybrid Cloud
 Improved security and privacy
 Enhanced scalability and flexibility
 Reasonable price
However, the hybrid deployment model only makes sense if companies can split their data into mission-
critical and non-sensitive.
The Comparison of Top Cloud Deployment Models
To facilitate your choice of a deployment model, we have created a comparative table that provides an
overview of the most business-critical features of each type of cloud.
The comparative analysis of the best deployment models

| | Public | Private | Community | Hybrid |
|---|---|---|---|---|
| Ease of setup and use | Easy | Requires IT proficiency | Requires IT proficiency | Requires IT proficiency |
| Data security and privacy | Low | High | Comparatively high | High |
| Data control | Little to none | High | Comparatively high | Comparatively high |
| Reliability | Low | High | Comparatively high | High |
| Scalability and flexibility | High | High | Fixed capacity | High |
| Cost-effectiveness | The cheapest | Cost-intensive; the most expensive model | Cost is shared among community members | Cheaper than a private model but more costly than a public one |
| Demand for in-house hardware | No | Depends | Depends | Depends |

Cloud Computing Architecture- Layered Architecture of Cloud.

The cloud reference model:


Cloud computing supports any IT service that can be consumed as a utility and delivered through a
network, most likely the Internet. Such characterization includes quite different aspects: infrastructure,
development platforms, application and services.
It is possible to organize all the concrete realizations of cloud computing into a layered view covering the entire stack, from hardware appliances to software systems.
Layered Architecture of cloud
Application Layer
1. The application layer, which is at the top of the stack, is where the actual cloud apps are
located. Cloud applications, as opposed to traditional applications, can take advantage of
the automatic-scaling functionality to gain greater performance, availability, and lower
operational costs.
2. This layer consists of different Cloud Services which are used by cloud users. Users can
access these applications according to their needs. Applications are divided into Execution
layers and Application layers.
3. In order for an application to transfer data, the application layer determines whether
communication partners are available. Whether enough cloud resources are accessible for
the required communication is decided at the application layer. Applications must
cooperate in order to communicate, and an application layer is in charge of this.
4. The application layer, in particular, handles application-level protocols carried over IP, such as Telnet and FTP. Other examples of application layer systems include web browsers, the SNMP protocol, HTTP, and HTTPS (HTTP secured with TLS).
Platform Layer
1. The operating system and application software make up this layer.
2. Users should be able to rely on the platform to provide them with scalability, dependability, and security protection, giving them a space to create their apps, test operational processes, and keep track of execution outcomes and performance. This layer also serves as the foundation on which SaaS applications are implemented.
3. The objective of this layer is to deploy applications directly on virtual machines.
4. Operating systems and application frameworks make up the platform layer, which is built on top of the infrastructure layer. The platform layer’s goal is to lessen the difficulty of deploying programs directly into VM containers.
5. By way of illustration, Google App Engine functions at the platform layer to provide API
support for implementing storage, databases, and business logic of ordinary web apps.

Infrastructure Layer
1. It is a layer of virtualization where physical resources are divided into a collection of
virtual resources using virtualization technologies like Xen, KVM, and VMware.
2. This layer serves as the Central Hub of the Cloud Environment, where resources are
constantly added utilizing a variety of virtualization techniques.
3. It provides a base upon which to create the platform layer, constructed from virtualized network, storage, and computing resources, and gives users the flexibility they want.
4. Automated resource provisioning is made possible by virtualization, which also improves
infrastructure management.
5. The infrastructure layer sometimes referred to as the virtualization layer, partitions the
physical resources using virtualization technologies like Xen, KVM, Hyper-V, and
VMware to create a pool of compute and storage resources.

6. The infrastructure layer is crucial to cloud computing since virtualization technologies are
the only ones that can provide many vital capabilities, like dynamic resource assignment.

Datacenter Layer
 In a cloud environment, this layer is responsible for Managing Physical Resources such as
servers, switches, routers, power supplies, and cooling systems.
 Providing end users with services requires all resources to be available and managed in
data centers.
 Physical servers connect through high-speed devices such as routers and switches to the
data center.
 In software application designs, the division of business logic from the persistent data it
manipulates is well-established. This is due to the fact that the same data cannot be
incorporated into a single application because it can be used in numerous ways to support
numerous use cases. The requirement for this data to become a service has arisen with the
introduction of microservices.
 A single database used by many microservices creates a very close coupling. As a result, it
is hard to deploy new or emerging services separately if such services need database
modifications that may have an impact on other services. A data layer containing many
databases, each serving a single microservice or perhaps a few closely related
microservices, is needed to break complex service interdependencies.
Virtualization
Virtualization is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
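As a toy illustration of that idea (a conceptual sketch, not any real hypervisor API), the snippet below maps logical resource names to a shared physical resource and resolves the pointer only when a tenant asks for its logical name.

```python
# Toy illustration of virtualization's core trick: tenants refer to a
# logical name, and a mapping layer resolves it to a shared physical
# resource on demand. This is a conceptual sketch, not a hypervisor API.
physical_storage = {"disk-array-1": []}          # one shared physical resource

logical_to_physical = {
    "tenant-a-volume": "disk-array-1",           # both logical volumes point
    "tenant-b-volume": "disk-array-1",           # at the same physical disk
}

def write(logical_name, data):
    # Resolve the logical name to the physical resource only when demanded.
    backing = logical_to_physical[logical_name]
    physical_storage[backing].append((logical_name, data))

write("tenant-a-volume", "invoice.pdf")
write("tenant-b-volume", "photo.jpg")
print(physical_storage)
```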

The Multitenant architecture offers virtual isolation among the multiple tenants. Hence, the
organizations can use and customize their application as though they each have their instances running.
Features of Virtualization;
 Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host programs.
 Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
 Sharing: Virtualization allows the creation of a separate computing environment within the
same host.
 Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process.
Types of Virtualizations-

1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

1. Application Virtualization: Application virtualization helps a user to have remote access to an


application from a server. The server stores all personal information and other characteristics of the
application but can still run on a local workstation through the internet. An example of this would be a
user who needs to run two different versions of the same software. Technologies that use application
virtualization are hosted applications and packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control and data plane. They co-exist together on top of one physical network and can be managed by individual parties that are potentially confidential to each other. Network virtualization provides a facility to create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security within days or even weeks.

Network Virtualization
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on a
server in the data center. It allows the user to access their desktop virtually, from any location by a
different machine. Users who want specific operating systems other than Windows Server will need to
have a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and
easy management of software installation, updates, and patches.

4. Storage Virtualization: Storage virtualization is an array of servers managed by a virtual storage system. The servers aren’t aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which server resources are masked. Here, the central (physical) server is divided into multiple virtual servers by changing the identity number and processors, so each system can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by splitting the main server’s resources into sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, etc.

Server Virtualization
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without the user needing to know technical details such as how the data is collected, stored and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many large companies provide such services, including Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
 Data-integration
 Business-integration
 Service-oriented architecture data-services
 Searching organizational data

Virtualization and Cloud Computing:


Virtualization plays an important role in cloud computing since it allows for the appropriate degree of
customization, security, isolation, and manageability that are fundamental for delivering IT services on
demand. Virtualization technologies are primarily used to offer configurable computing environments
and storage.

Difference between Cloud computing and Virtualization: -

| S.No | Cloud Computing | Virtualization |
|---|---|---|
| 1. | Cloud computing is used to provide pools of automated resources that can be accessed on demand. | It is used to make various simulated environments through a physical hardware system. |
| 2. | Cloud computing setup is tedious and complicated. | Virtualization setup is simple compared to cloud computing. |
| 3. | Cloud computing is highly scalable. | Virtualization is less scalable compared to cloud computing. |
| 4. | Cloud computing is very flexible. | Virtualization is less flexible than cloud computing. |
| 5. | For disaster recovery, cloud computing relies on multiple machines. | It relies on a single peripheral device. |
| 6. | In cloud computing, the workload is stateless. | In virtualization, the workload is stateful. |
| 7. | The total cost of cloud computing is higher than virtualization. | The total cost of virtualization is lower than cloud computing. |
| 8. | Cloud computing requires much dedicated hardware. | A single piece of dedicated hardware can do a great job in it. |
| 9. | Cloud computing provides unlimited storage space. | Storage space depends on physical server capacity in virtualization. |
| 10. | Cloud computing is of two types: public cloud and private cloud. | Virtualization is of two types: hardware virtualization and application virtualization. |
| 11. | In cloud computing, configuration is image based. | In virtualization, configuration is template based. |
| 12. | In cloud computing, we utilize the entire server capacity and the servers are consolidated. | In virtualization, the servers are provided on demand. |
| 13. | In cloud computing, the pricing follows a pay-as-you-go model, and consumption is the metric on which billing is done. | In virtualization, the pricing is totally dependent on infrastructure costs. |

Pros of Virtualization in Cloud Computing:


• Utilization of hardware efficiently –
With the help of virtualization, hardware is used efficiently by both the user and the cloud service provider. The user needs less physical hardware, which lowers cost, while the provider virtualizes its own hardware so that fewer physical machines are needed to serve users. Before virtualization, companies and organizations had to set up their own servers, which required extra space, engineers to monitor performance, and additional hardware cost; with virtualization, these limitations are removed because the cloud vendor provides the equivalent services without the customer setting up any physical hardware.
• Availability increases with virtualization –
One of the main benefits of virtualization is that it provides advanced features that allow virtual instances to be available at all times. It also makes it possible to move a virtual instance from one virtual server to another, a task that is tedious and risky in a purely server-based system. During migration of data from one server to another, its safety is ensured, and information can be accessed from any location, at any time, from any device.
• Disaster recovery is efficient and easy –
With virtualization, data recovery, backup, and duplication become very easy. In the traditional approach, if a server is damaged in a disaster, the chance of recovering the data is low. With virtualization tools, real-time backup, recovery, and mirroring become straightforward and provide near-zero data loss.
• Virtualization saves energy –
Moving from physical servers to virtual servers reduces the number of servers, which lowers monthly power and cooling costs and therefore saves money. Reduced cooling also means less carbon production by devices, resulting in a cleaner, less polluted environment.
• Quick and easy setup –
In traditional methods, setting up physical systems and servers is very time-consuming: hardware must be purchased in bulk, shipped, set up, and then the required software installed. With virtualization, the entire process is completed in far less time, resulting in a productive setup.
• Cloud migration becomes easy –
Many companies that have already invested heavily in their own servers hesitate to shift to the cloud. However, it is usually more cost-effective to do so, because the data on their servers can easily be migrated to cloud servers, saving on maintenance charges, power consumption, cooling costs, and the cost of server maintenance engineers.
Cons of Virtualization:
 Data can be at risk –
Running virtual instances on shared resources means that data is hosted on third-party infrastructure, which leaves it in a vulnerable position. An attacker may target the data or attempt unauthorized access; without a proper security solution, the data is at risk.
 Learning new infrastructure –
As organizations shift from their own servers to the cloud, they need staff who can work with the cloud comfortably. They must either hire new IT staff with the relevant skills or train existing staff, which increases the company's costs.
 High initial investment –
Although virtualization reduces costs over time, the cloud can require a high initial investment. It offers numerous services that may not be needed, and an inexperienced organization setting up in the cloud may purchase unnecessary services it does not actually require.
Technology Examples: Xen (Paravirtualization), VMware (Full Virtualization), Microsoft Hyper-V:
A wide range of virtualization technology is available especially for virtualizing computing
environments.
Xen: Paravirtualization:
Xen is an open-source initiative implementing a virtualization platform based on paravirtualization.
Initially developed by a group of researchers at the University of Cambridge in the United Kingdom,
Xen now has a large open-source community backing it. Citrix also offers it as a commercial solution,
XenSource. Xen-based technology is used for either desktop virtualization or server virtualization, and
recently it has also been used to provide cloud computing solutions by means of Xen Cloud Platform
(XCP). At the basis of all these solutions is the Xen Hypervisor, which constitutes the core technology of
Xen. Recently Xen has been advanced to support full virtualization using hardware-assisted
virtualization. Xen is the most popular implementation of paravirtualization, which, in contrast with full
virtualization, allows high-performance execution of guest operating systems. This is made possible by
eliminating the performance loss while executing instructions that require special management. This is
done by modifying portions of the guest operating systems run by Xen with reference to the execution of
such instructions. Therefore, it is not a transparent solution for implementing virtualization. This is
particularly true for x86, which is the most popular architecture on commodity machines and servers.
VMware: Full Virtualization:
VMware’s technology is based on the concept of full virtualization, where the underlying hardware is
replicated and made available to the guest operating system, which runs unaware of such abstraction
layers and does not need to be modified. VMware implements full virtualization either in the desktop
environment, by means of Type II hypervisors, or in the server environment, by means of Type I
hypervisors. In both cases, full virtualization is made possible by means of direct execution (for
nonsensitive instructions) and binary translation (for sensitive instructions), thus allowing the
virtualization of architecture such as x86. Besides these two core solutions, VMware provides additional
tools and software that simplify the use of virtualization technology either in a desktop environment,
with tools enhancing the integration of virtual guests with the host, or in a server environment, with
solutions for building and managing virtual computing infrastructures.
Microsoft Hyper-V:
Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As
the name recalls, it uses a hypervisor-based approach to hardware virtualization, which leverages several
techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component
of Windows Server 2008 R2 that installs the hypervisor as a role within the server.

A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the
resources on various pieces of hardware. The program which provides partitioning, isolation, or
abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique
that allows multiple guest operating systems (OS) to run on a single host system at the same time. A
hypervisor is sometimes also called a virtual machine manager (VMM).
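As an illustration of how management software talks to a hypervisor, the following is a minimal sketch using the libvirt Python bindings to list the guest operating systems running on a host. It assumes the libvirt-python package is installed and that a hypervisor is reachable at the connection URI shown; the URI is only an example and is not tied to any specific product discussed above.

# Minimal sketch: querying a hypervisor (VMM) for its guest VMs via libvirt.
# Assumes the libvirt Python bindings are installed; the URI is illustrative.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)                   # connect to the hypervisor
    try:
        for dom in conn.listAllDomains():      # every defined guest OS
            state, max_mem, mem, vcpus, cpu_time = dom.info()
            print("%s: vCPUs=%d, memory=%d MB" % (dom.name(), vcpus, mem // 1024))
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()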
ANEKA: CLOUD APPLICATION PLATFORM:
Aneka is Manjrasoft Pty. Ltd.’s solution for developing, deploying, and managing cloud applications.
Aneka consists of a scalable cloud middleware that can be deployed on top of heterogeneous computing
resources. It offers an extensible collection of services coordinating the execution of applications,
helping administrators monitor the status of the cloud, and providing integration with existing cloud
technologies. One of Aneka’s key advantages is its extensible set of application programming interfaces
(APIs) associated with different types of programming models - such as Task, Thread, and MapReduce -
used for developing distributed applications, integrating new capabilities into the cloud, and supporting
different types of cloud deployment models: public, private, and hybrid. These features differentiate
Aneka from infrastructure management software and characterize it as a platform for developing,
deploying, and managing execution of applications on various types of clouds.

Aneka is a software platform for developing cloud computing applications. Aneka is a pure PaaS
solution for cloud computing. Aneka is a cloud middleware product that can be deployed on a
heterogeneous set of resources: a network of computers, a multicore server, datacenters, virtual cloud
infrastructures, or a mixture of these. The framework provides both middleware for managing and
scaling distributed applications and an extensible set of APIs for developing them.
Framework Overview:
Figure provides a complete overview of the components of the Aneka framework. The core
infrastructure of the system provides a uniform layer that allows the framework to be deployed over
different platforms and operating systems. The physical and virtual resources representing the bare metal
of the cloud are managed by the Aneka container, which is installed on each node and constitutes the
basic building block of the middleware. A collection of interconnected containers constitutes the Aneka
cloud: a single domain in which services are made available to users and developers.
The container features three different classes of services: Fabric Services, Foundation Services, and Execution Services. These take care of infrastructure management, supporting services for the Aneka Cloud, and application management and execution, respectively.
These services are made available to developers and administrators by means of the application
management and development layer which includes interfaces and APIs for developing cloud
applications and the management tools and interfaces for controlling Aneka clouds.
Aneka implements a service-oriented architecture (SOA), and services are the fundamental components of an Aneka Cloud. Services operate at the container level and, except for the platform abstraction layer, provide developers, users, and administrators with all the features offered by the framework. Services also constitute the extension and customization points of Aneka Clouds: the infrastructure allows for the integration of new services or the replacement of existing ones with different implementations.
The framework includes the basic services for infrastructure and node management, application
execution, accounting, and system monitoring; existing services can be extended and new features can
be added to the cloud by dynamically plugging new ones into the container. Such extensible and flexible
infrastructure enables Aneka Clouds to support different programming and execution models for
applications. A programming model represents a collection of abstractions that developers can use to
express distributed applications; the runtime support for a programming model is constituted by a
collection of execution and foundation services interacting together to carry out application execution.
Thus, the implementation of a new model requires the development of the specific programming abstractions used by application developers and of the services that provide runtime support for them. Programming models are just one
aspect of application management and execution. Within an Aneka Cloud environment, there are
different aspects involved in providing a scalable and elastic infrastructure and distributed runtime for
applications. These services involve:
 Elasticity and Scaling: By means of the dynamic provisioning service, Aneka supports dynamically
upsizing and downsizing of the infrastructure available for applications.
 Runtime Management: The runtime machinery is responsible for keeping the infrastructure up and
running and serves as a hosting environment for services. It is primarily represented by the container and
a collection of services that manage service membership and lookup, infrastructure maintenance, and
profiling.
 Resource Management: Aneka is an elastic infrastructure in which resources are added and removed
dynamically according to application needs and user requirements. To provide QoS-based execution, the
system not only allows dynamic provisioning but also provides capabilities for reserving nodes for
exclusive use by specific applications.
 Application Management: A specific subset of services is devoted to managing applications. These
services include scheduling, execution, monitoring, and storage management.
 User Management: Aneka is a multitenant distributed environment in which multiple applications,
potentially belonging to different users, are executed. The framework provides an extensible user system
via which it is possible to define users, groups, and permissions. The services devoted to user management
build up the security infrastructure of the system and constitute a fundamental element for accounting management.
 QoS / SLA Management & Billing: Within a cloud environment, application execution is metered and billed. Aneka
provides a collection of services that coordinate together to take into account the usage of resources by each application
and to bill the owning user accordingly.
Anatomy of the Aneka container:
The Aneka container constitutes the building block of Aneka Clouds and represents the runtime machinery available to services and applications. The container, the unit of deployment in Aneka Clouds, is a lightweight software layer designed to host services and interact with the underlying operating system and hardware. The main role of the container is to provide a lightweight environment in which to deploy services, together with some basic capabilities such as the communication channels through which it interacts with other nodes in the Aneka Cloud. Almost all operations performed within Aneka are carried out by the services managed by the container. The services installed in the Aneka container can be classified into three major categories:
1. Fabric Services
2. Foundation Services
3. Application Services
The services stack resides on top of the platform abstraction layer (PAL), which represents the interface to the underlying operating system and hardware. It provides a uniform view of the software and hardware environment in which the container is running. Persistence and security traverse the entire services stack to provide a secure and reliable infrastructure.
The Platform Abstraction Layer

The platform abstraction layer addresses the heterogeneity of hosting platforms and provides the container with a uniform interface for accessing the relevant hardware and operating system information, thus allowing the rest of the container to run unmodified on any supported platform. The PAL is responsible for detecting the supported hosting environment and
providing the corresponding implementation to interact with it to support the activity of the container. The PAL
provides the following features:
 Uniform and platform-independent implementation interface for accessing the hosting platform
 Uniform access to extended and additional properties of the hosting platform
 Uniform and platform-independent access to remote nodes
 Uniform and platform-independent management interfaces
The PAL is a small layer of software that comprises a detection engine, which automatically configures the container at
boot time, with the platform-specific component to access the above information and an implementation of the
abstraction layer for the Windows, Linux, and Mac OS X operating systems.
The collectible data that are exposed by the PAL are the following:
 Number of cores, frequency, and CPU usage
 Memory size and usage
 Aggregate available disk space
 Network addresses and devices attached to the node
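The snippet below is an illustrative Python sketch (not Aneka's actual implementation, which is .NET-based) of how this kind of node information can be gathered in a platform-independent way, assuming the third-party psutil package is available.

# Illustrative only: collecting the kind of node data the PAL exposes.
# Requires the third-party psutil package (pip install psutil).
import socket
import psutil

def collect_node_info():
    return {
        "cores": psutil.cpu_count(logical=True),                        # number of cores
        "cpu_usage_percent": psutil.cpu_percent(interval=1.0),          # CPU usage
        "memory_total_mb": psutil.virtual_memory().total // (1024 * 1024),
        "memory_used_mb": psutil.virtual_memory().used // (1024 * 1024),
        "disk_free_gb": psutil.disk_usage("/").free // (1024 ** 3),     # available disk space
        "hostname": socket.gethostname(),
        "network_interfaces": list(psutil.net_if_addrs().keys()),       # attached devices
    }

if __name__ == "__main__":
    print(collect_node_info())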
Fabric Services
Fabric Services define the lowest level of the software stack representing the Aneka Container. They provide access to the resource-provisioning subsystem and to the monitoring facilities implemented in Aneka.
Resource-provisioning services are in charge of dynamically providing new nodes on demand by relying on
virtualization technologies, while monitoring services allow for hardware profiling and implement a basic monitoring
infrastructure that can be used by all the services installed in the container.
Foundation Services
Fabric Services are fundamental services of the Aneka Cloud and define the basic infrastructure management features
of the system. Foundation Services are related to the logical management of the distributed system built on top of the
infrastructure and provide supporting services for the execution of distributed applications. All the supported
programming models can integrate with and leverage these services to provide advanced and comprehensive
application management. These services cover:
 Storage management for applications
 Accounting, billing, and resource pricing
 Resource reservation
Foundation Services provide a uniform approach to managing distributed applications and allow developers to
concentrate only on the logic that distinguishes a specific programming model from the others. Together with the
Fabric Services, Foundation Services constitute the core of the Aneka middleware. These services are mostly consumed
by the execution services and Management Consoles. External applications can leverage the exposed capabilities for
providing advanced application management.
Application Services
Application Services manage the execution of applications and constitute a layer that differentiates according to the specific programming model used for developing distributed applications on top of Aneka. The types and the number of services that compose this layer may vary for each programming model according to the specific needs or features of the selected model, but it is possible to identify two major types of activities that are common across all the supported models: scheduling and execution. Aneka defines a reference model for implementing the runtime support for programming models that abstracts these two activities into corresponding services, the Scheduling Service and the Execution Service. Moreover, it also defines base implementations that can be extended in order to integrate new models.
BUILDING ANEKA CLOUDS
Aneka is primarily a platform for developing distributed applications for clouds. As a software platform it requires
infrastructure on which to be deployed; this infrastructure needs to be managed. Infrastructure management tools are
specifically designed for this task, and building clouds is one of the primary tasks of administrators. Aneka supports
various deployment models for public, private, and hybrid clouds.
PRIVATE CLOUD DEPLOYMENT MODE
A private deployment mode is mostly constituted by local physical resources and infrastructure management software
providing access to a local pool of nodes, which might be virtualized. In this scenario Aneka Clouds are created by
harnessing a heterogeneous pool of resources such as desktop machines, clusters, or workstations.
These resources can be partitioned into different groups, and Aneka can be configured to leverage these resources
according to application needs. Moreover, leveraging the Resource Provisioning Service, it is possible to integrate
virtual nodes provisioned from a local resource pool managed by systems such as XenServer, Eucalyptus, and
OpenStack.
Figure: Private Cloud Deployment Mode
The above Figure shows a common deployment for a private Aneka Cloud. This deployment is acceptable for a
scenario in which the workload of the system is predictable and a local virtual machine manager can easily address
excess capacity demand. Most of the Aneka nodes are constituted of physical nodes with a long lifetime and a static
configuration and generally do not need to be reconfigured often. The different nature of the machines harnessed in a
private environment allows for specific policies on resource management and usage that can be accomplished by means
of the Reservation Service.
For example, desktop machines that are used during the day for office automation can be exploited outside the standard
working hours to execute distributed applications. Workstation clusters might have some specific legacy software that
is required for supporting the execution of applications and should be preferred for the execution of applications with
special requirements.
PUBLIC CLOUD DEPLOYMENT MODE
Public Cloud deployment mode features the installation of Aneka master and worker nodes over a completely virtualized infrastructure that is hosted on the infrastructure of one or more resource providers such as Amazon EC2 or GoGrid. In this case it is possible to have a static deployment where the nodes are provisioned beforehand and used as though they were real machines. This deployment merely replicates a classic Aneka installation on a physical infrastructure without any dynamic provisioning capabilities. More interesting is the use of the elastic features of IaaS providers and the creation of a cloud that is completely dynamic; the figure below provides an overview of this scenario.
The deployment is generally contained within the infrastructure boundaries of a single IaaS provider. The reasons for this are to minimize the data transfer between different providers, which is generally priced at a higher cost, and to obtain better network performance. In this scenario it is possible to deploy an Aneka Cloud composed of only one node and to leverage dynamic provisioning to elastically scale the infrastructure on demand. A fundamental role is played by the Resource Provisioning Service, which can be configured with different images and templates to instantiate new nodes. Other important services that have to be included in the master node are the accounting and reporting services. These provide details about resource utilization by users and applications and are fundamental in a multitenant cloud where users are billed according to their consumption of cloud capabilities.
Dynamic instances provisioned on demand will mostly be configured as worker nodes, and in the specific case of Amazon EC2 different images featuring different hardware setups can be made available to instantiate worker containers. Applications with specific requirements for computing capacity or memory can provide additional information to the scheduler, which will trigger the appropriate provisioning request. Application execution is not the only use of dynamic instances; any service requiring elastic scaling can leverage dynamic provisioning. Another example is the Storage Service. In multitenant clouds, multiple applications can leverage the support for storage; in this scenario it is possible to hit bottlenecks or simply reach the quota limits allocated for storage on the node. Dynamic provisioning can easily solve this issue, as it does for increasing the computing capability of an Aneka Cloud.
Figure: Public Cloud Deployment Mode
Deployments using different providers are unlikely to happen because of the data transfer costs among providers, but
they might be a possible scenario for federated Aneka Clouds. In this scenario resources can be shared or leased among
providers under specific agreements and more convenient prices. In this case the specific policies installed in the
Resource Provisioning Service can discriminate among different resource providers, mapping different IaaS providers
to provide the best solution to a provisioning request.
HYBRID CLOUD DEPLOYMENT MODE
The hybrid deployment model constitutes the most common deployment of Aneka. In many cases, there is an existing
computing infrastructure that can be leveraged to address the computing needs of applications. This infrastructure will
constitute the static deployment of Aneka that can be elastically scaled on demand when additional resources are
required. An overview of this deployment is presented in the figure below.
This scenario constitutes the most complete deployment for Aneka that is able to leverage all the capabilities of the
framework:
 Dynamic Resource Provisioning
 Resource Reservation
 Workload Partitioning
 Accounting, Monitoring, and Reporting
Figure: Hybrid Cloud Deployment Mode
Moreover, if the local premises offer some virtual machine management capabilities, it is possible to achieve a very efficient use of resources, thus minimizing the expenditure for application execution.
In a hybrid scenario, heterogeneous resources can be used for different purposes. As in the case of the private cloud deployment, desktop machines can be reserved for low-priority workloads outside the common working hours, while the majority of the applications will be executed on workstations and clusters, which are the nodes constantly connected to the Aneka Cloud. Any additional demand for computing capability can primarily be addressed by the local virtualization facilities, and if more computing power is required it is possible to leverage external IaaS providers.
Differently from the Aneka public cloud deployment, in this case it can make sense to leverage a variety of resource providers to provision virtual resources. Since part of the infrastructure is local, a cost in transferring data to the external IaaS infrastructure cannot be avoided; it is therefore important to select the most suitable option to address application needs. The Resource Provisioning Service implemented in Aneka exposes the capability of leveraging several resource pools at the same time and of configuring specific policies to select the most appropriate pool for satisfying a provisioning request. These features simplify the development of custom policies that can better serve the needs of a specific hybrid deployment.
CLOUD PROGRAMMING AND MANAGEMENT
Aneka’s primary purpose is to provide a scalable middleware product in which to execute distributed applications.
Application development and management constitute the two major features that are exposed to developers and system
administrators. Aneka provides developers with a comprehensive and extensible set of APIs and administrators with
powerful and intuitive management tools. The APIs for development are mostly concentrated in the Aneka SDK;
management tools are exposed through the Management Console.
1. Aneka SDK: Aneka provides APIs for developing applications on top of existing programming models,
implementing new programming models, and developing new services to integrate into the Aneka Cloud. The
development of applications mostly focuses on the use of existing features and leveraging the services of the
middleware, while the implementation of new programming models or new services enriches the features of
Aneka. The SDK provides support for both programming models and services by means of the Application
Model and the Service Model. The former covers the development of applications and new programming
models; the latter defines the general infrastructure for service development.
a) Application Model: The Application Model represents the minimum set of APIs that is common to all the
programming models for representing and programming distributed applications on top of Aneka. This model is further
specialized according to the needs and the particular features of each of the programming models.
b) Service Model: The Aneka Service Model defines the basic requirements to implement a service that can be hosted
in an Aneka Cloud. The container defines the runtime environment in which services are hosted. Each service that is
hosted in the container must be compliant with the IService interface, which exposes the following methods and
properties:
 Name and status
 Control operations such as Start, Stop, Pause, and Continue methods
 Message handling by means of the HandleMessage method
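The actual Aneka SDK is .NET-based, so the following is only an illustrative Python sketch of the shape of a service that matches the IService contract described above (name and status, lifecycle control operations, and message handling). The class and method names are chosen purely for illustration.

# Illustrative sketch of an IService-like contract (the real Aneka SDK is .NET).
from abc import ABC, abstractmethod

class ServiceBase(ABC):
    """Hypothetical analogue of the IService interface described above."""

    def __init__(self, name):
        self.name = name          # service name
        self.status = "Stopped"   # current service status

    @abstractmethod
    def handle_message(self, message):
        """Process a message delivered by the container (HandleMessage)."""

    def start(self):
        self.status = "Running"   # Start control operation

    def stop(self):
        self.status = "Stopped"   # Stop control operation

    def pause(self):
        self.status = "Paused"    # Pause control operation

    def resume(self):
        self.status = "Running"   # corresponds to the Continue operation

class LoggingService(ServiceBase):
    """Toy service that simply logs every message it receives."""
    def handle_message(self, message):
        print("[%s] received: %s" % (self.name, message))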
Management Tools
Aneka is a pure PaaS implementation and requires virtual or physical hardware to be deployed. This layer also includes
capabilities for managing services and applications running in the Aneka Cloud.
a) Infrastructure Management: Aneka leverages virtual and physical hardware in order to deploy Aneka Clouds. Virtual
hardware is generally managed by means of the Resource Provisioning Service, which acquires resources on demand
according to the need of applications, while physical hardware is directly managed by the Administrative Console by
leveraging the Aneka management API of the PAL. The management features are mostly concerned with the
provisioning of physical hardware and the remote installation of Aneka on the hardware.
b) Platform Management: provides the basic layer on top of which Aneka Clouds are deployed. The creation of Clouds
is orchestrated by deploying a collection of services on the physical infrastructure that allows the installation and the
management of containers. A collection of connected containers defines the Platform on top of which applications are
executed. The features available for platform management are mostly concerned with the logical organization and structure of Aneka Clouds; it is possible to partition the available hardware into several clouds variably configured for different purposes. Services implement the core features of the Aneka Cloud, and the management layer exposes operations for some of them, such as cloud monitoring, resource provisioning and reservation, user management, and application profiling.
c) Application Management: Applications identify the user contribution to the cloud. The management APIs provide administrators with monitoring and profiling features that help them track the usage of resources and relate them to users and applications. This is an important feature in a cloud computing scenario in which users are billed for their resource usage. Aneka exposes capabilities for giving summary and detailed information about application execution and resource utilization. All these features are made accessible through the Aneka Cloud Management Studio, which constitutes the main administrative console for the cloud.
Cloud Platforms in Industry: Amazon Web Services:
Amazon Web Services (AWS) is a platform that allows the development of flexible applications by providing solutions for elastic infrastructure scalability, messaging, and data storage. The platform is accessible through SOAP or RESTful Web service interfaces.
Amazon SES provides AWS users with a scalable email service that leverages the AWS infrastructure. Once users
are signed up for the service, they have to provide an email that SES will use to send emails on their behalf.
Additional services:
Besides compute, storage, and communication services, AWS provides a collection of services that allow users to
utilize services in aggregation. The two relevant services are Amazon CloudWatch and Amazon Flexible Payment
Service (FPS). Amazon CloudWatch is a service that provides a comprehensive set of statistics that help
developers understand and optimize the behavior of their application hosted on AWS. CloudWatch collects
information from several other AWS services: EC2, S3, SimpleDB, CloudFront, and others. Using CloudWatch,
developers can see a detailed breakdown of their usage of the service they are renting on AWS and can devise
more efficient and cost-saving applications. Earlier services of CloudWatch were offered only through
subscription, but now it is made available for free to all the AWS users. Amazon FPS infrastructure allows AWS
users to leverage Amazon’s billing infrastructure to sell goods and services to other AWS users. Using Amazon
FPS, developers do not have to set up alternative payment methods, and they can charge users via a billing
service. The payment models available through FPS include one-time payments and delayed and periodic
payments, required by subscriptions and usage-based services, transactions, and aggregate multiple payments.
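As a concrete illustration of consuming CloudWatch statistics programmatically, the sketch below uses the boto3 library to fetch the average CPU utilization of an EC2 instance over the last hour. The instance ID, region, and time window are placeholders, and valid AWS credentials are assumed to be configured.

# Sketch: reading EC2 CPU statistics from Amazon CloudWatch with boto3.
# Assumes AWS credentials are configured; the instance ID is a placeholder.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # one data point every 5 minutes
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")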

Google AppEngine: Google App Engine is a cloud computing Platform as a Service (PaaS) that provides Web application developers and businesses with access to Google's scalable hosting in Google-managed data centers and tier-1 Internet service. It enables developers to take full advantage of its serverless platform. Applications must be written in one of the supported languages, namely Java, Python, PHP, Go, Node.js, .NET, or Ruby. Applications in Google App Engine use the Google query language and store data in Google Bigtable.
Architecture and Core Concepts:
Figure: GAE Architecture
An App Engine application is created under a Google Cloud Platform project when the application resource is created. The Application part of GAE is a top-level container that includes the service, version, and instance resources that make up the app. When you create an App Engine application, all your resources are created in the user-defined region, including the app code and a collection of settings, credentials, and your app's metadata.
Each GAE application includes at least one service, the default service, which can hold many versions, depending on your app's billing status.
The following diagram shows the hierarchy of a GAE application running with two services. In this
diagram, the app has 2 services that contain different versions, and two of those versions are actively
running on different instances:
Figure: Google AppEngine Platform Architecture
Runtime Environment:
1. The runtime environment represents the execution context of applications hosted on AppEngine.
Sandboxing refers to the practice of isolating and restricting the execution environment of a program or
application to enhance security and stability. In the context of cloud platforms like Google App Engine,
sandboxing is a critical security and resource isolation mechanism.
Working of sandboxing in Google App Engine:
 When you deploy your application to Google App Engine, it runs in a controlled, isolated
environment (the "sandbox").
 The sandbox imposes restrictions on what your application can do to prevent malicious or
unintentional actions that could harm the system or other applications.
 For example, the sandbox might restrict access to certain system resources, limit the use of
certain libraries, or control outbound network connections.
 This isolation helps ensure that one application's actions do not impact the security and stability
of other applications running on the platform.
2. While sandboxing enhances security, it can also impose limitations on what your application can
do. Developers need to be aware of these limitations when building and deploying applications
in a PaaS environment like Google App Engine.
Supported Runtimes:
Supported Runtimes refer to the programming languages, frameworks, and libraries that a platform, like
Google App Engine, officially supports for application development.
In the context of Google App Engine, the platform has historically supported several runtimes,
including:
 Python
 Java
 Go
 Node.js
 PHP (in the earlier days, but this runtime has been deprecated)
Each of these runtimes comes with a set of libraries and tools that are optimized and configured to work
seamlessly within the App Engine environment.
The choice of runtime impacts the language and framework you use to build your application. It's
essential to select a supported runtime that aligns with your application's requirements and your team's
expertise.
In addition to the officially supported runtimes, Google App Engine's Flexible environment allows you
to run custom runtimes and containers, offering more flexibility in choosing the technology stack for
your application.

AppEngine provides various types of storage, which operate differently depending on the volatility of the data. There are three different levels of storage: the in-memory cache, storage for semistructured data, and long-term storage for static data.
Static file servers: Web applications are composed of dynamic and static data. Dynamic data are a result of the
logic of the application and the interaction with the user. Static data often are mostly constituted of the
components that define the graphical layout of the application (CSS files, plain HTML files, JavaScript files,
images, icons, and sound files) or data files. These files can be hosted on static file servers, since they are not
frequently modified. Such servers are optimized for serving static content, and users can specify how dynamic
content should be served when uploading their applications to AppEngine
Data Store: DataStore is a service that allows developers to store semistructured data. The service is designed to
scale and optimized to quickly access data. DataStore can be considered as a large object database in which to
store objects that can be retrieved by a specified key. Both the type of the key and the structure of the object can
vary.
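The following is a minimal sketch of storing and retrieving a semistructured entity in the Datastore using the ndb library of the classic Python runtime; the entity kind and properties are illustrative.

# Sketch: defining and using a Datastore entity with the classic GAE ndb API.
# The kind name and properties are illustrative.
from google.appengine.ext import ndb

class Note(ndb.Model):
    author = ndb.StringProperty()
    content = ndb.TextProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

def save_note(author, content):
    # Store the object and return its key, which can later be used to fetch it.
    return Note(author=author, content=content).put()

def latest_notes(author, limit=10):
    # Retrieve entities by property, most recent first.
    return Note.query(Note.author == author).order(-Note.created).fetch(limit)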
Application Services:
Applications hosted on AppEngine take the most from the services made available through the runtime
environment. These services simplify most of the common operations that are performed in Web applications:
access to data, account management, integration of external resources, messaging and communication, image
manipulation, and asynchronous computation.
1. urlFetch:
 urlFetch is a service in GAE that allows you to make HTTP requests to external websites and
services.
 It's commonly used to retrieve data from external APIs, scrape web content, or interact with third-
party services.
 urlFetch provides secure and efficient outbound HTTP communication, and it's important for
integrating external data sources into your GAE applications.
2. MemCache:
 MemCache is a distributed, in-memory data store provided by GAE for caching frequently
accessed data.
 It helps improve application performance by reducing the need to retrieve data from slower,
persistent storage solutions.
 MemCache is a key-value store and can be used to store frequently accessed data like query
results, session data, and more.
3. Mail and Instant Messaging:
 GAE offers email and instant messaging services for communication within your applications.
 Mail: You can send email notifications and messages directly from your application using the
built-in mail service.
 Instant Messaging: Google Cloud's Pub/Sub service can be used for building real-time messaging
and event-driven systems in GAE.
4. Account Management:
 GAE allows you to handle user account management and authentication for your applications.
 You can use Google Cloud Identity-Aware Proxy (IAP) and other identity and access
management (IAM) services to control user access to your application.
 User authentication and authorization are crucial for securing and personalizing your application.
5. Image Manipulation:
 GAE provides various tools and libraries for image manipulation and processing.
 You can resize, crop, rotate, and enhance images within your application using these tools.
 Image manipulation is useful for tasks like generating thumbnails, image editing, and optimizing
media for web display.
These services and features in GAE enable you to build and enhance your applications with functionalities like
data retrieval, caching, communication, user management, and media processing. They contribute to the overall
functionality and user experience of your Google App Engine applications.
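To make the first two of these services more concrete, the sketch below shows how a classic (first-generation) Python App Engine application might fetch an external resource with urlFetch and cache the result in MemCache; the URL and cache key are placeholders.

# Sketch: combining urlFetch and MemCache in a classic Python GAE app.
# The URL and cache key are placeholders.
from google.appengine.api import memcache, urlfetch

def get_exchange_rates():
    cached = memcache.get("exchange_rates")        # try the in-memory cache first
    if cached is not None:
        return cached

    result = urlfetch.fetch("https://example.com/rates.json", deadline=10)
    if result.status_code == 200:
        # Cache the response body for 10 minutes to avoid repeated fetches.
        memcache.set("exchange_rates", result.content, time=600)
        return result.content
    raise RuntimeError("upstream returned %d" % result.status_code)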
 Compute Services:
Web applications are mostly designed to interface applications with users by means of a ubiquitous channel, that
is, the Web. Most of the interaction is performed synchronously: Users navigate the Web pages and get
instantaneous feedback in response to their actions. This feedback is often the result of some computation
happening on the Web application, which implements the intended logic to serve the user request. AppEngine
offers additional services such as Task Queues and Cron Jobs that simplify the execution of computations that are
off-bandwidth or those that cannot be performed within the timeframe of the Web request.
Task Queues:
 Task Queues in Google App Engine allow you to offload and manage background tasks and
processes in your application.
 Background tasks can include tasks that are time-consuming, need to be executed asynchronously, or
are not suitable for immediate request handling.
 Key features:
 Asynchronous Processing: Task queues execute tasks independently from user requests, which helps
maintain application responsiveness.
 Scalability: Task queues automatically scale to handle varying workloads, ensuring efficient resource
utilization.
 Retry Mechanism: They provide built-in retry options for failed tasks, enhancing task reliability.
 Prioritization: You can assign priority levels to tasks, ensuring high-priority tasks are processed first.
 Common use cases include sending emails, processing large data sets, handling data migrations, and
performing periodic maintenance tasks.
Cron Jobs:
 Cron Jobs in Google App Engine are scheduled, automated tasks that run at specified intervals,
similar to traditional cron jobs on Unix-based systems.
 You can define a schedule using cron expressions to determine when and how often a task should be
executed.
 Key features:
 Automation: Cron jobs automate routine tasks, reducing the need for manual intervention.
 Scheduled Execution: They can be configured to run tasks daily, hourly, weekly, or at custom
intervals.
 Integration: Cron jobs are often used for data backups, periodic database cleanups, and report
generation.
 Fine-Grained Control: You can set custom schedules for specific tasks in your application.
 Cron jobs are a valuable tool for managing recurring tasks and maintenance activities in your
application, ensuring they are executed consistently and reliably.
Both Task Queues and Cron Jobs play crucial roles in optimizing the performance and functionality of
your Google App Engine applications. Task Queues help manage background processing efficiently,
while Cron Jobs handle the automation of repetitive tasks according to a schedule.
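As a brief illustration, the sketch below enqueues a background task from application code using the classic Python Task Queue API. The worker URL, queue name, and parameters are placeholders; the schedule for a corresponding Cron Job would normally be declared in the application's cron.yaml configuration file.

# Sketch: enqueueing a background task with the classic GAE Task Queue API.
# The worker URL, queue name, and parameters are placeholders.
from google.appengine.api import taskqueue

def schedule_report(user_id):
    # The task is executed asynchronously by a worker handler mapped to /tasks/report.
    taskqueue.add(
        url="/tasks/report",
        params={"user_id": user_id},
        queue_name="default",      # tasks can also be routed to named queues
        countdown=60,              # delay execution by 60 seconds
    )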
Application Life-Cycle:
The application life cycle in GAE involves stages of development, testing, deployment, scaling, monitoring, and
ongoing maintenance. The platform simplifies many aspects of application management, allowing developers to
focus on their code and application logic while GAE handles infrastructure and scaling.
The SDKs released by Google provide developers with most of the functionalities required by these tasks.
Currently there are two SDKs available for development: Java SDK and Python SDK.
Here's an overview of the typical application life cycle in GAE:
1. Development:
Developers write, test, and debug their application code using the GAE Software Development Kits
(SDKs), which provide a local development environment simulating the GAE platform. During this
phase, developers use their choice of supported programming languages (e.g., Java, Python, Go) to build
the application's functionality.
Java SDK: The Java SDK provides developers with the facility for building applications with the Java 5 and
Java 6 runtime environments. Alternatively, it is possible to develop applications within the Eclipse development
environment by using the Google AppEngine plug-in, which integrates the features of the SDK within the
powerful Eclipse environment. Using the Eclipse software installer, it is possible to download and install Java
SDK, Google Web Toolkit, and Google AppEngine plug-ins into Eclipse. These three components allow
developers to program powerful and rich Java applications for AppEngine. The SDK supports the development of
applications by using the servlet abstraction, which is a common development model. Together with servlets,
many other features are available to build applications. Moreover, developers can easily create Web applications
by using the Eclipse Web Platform, which provides a set of tools and components. The plug-in allows developing,
testing, and deploying applications on AppEngine. Other tasks, such as retrieving the log of applications, are
available by means of command-line tools that are part of the SDK.
Python SDK: The Python SDK allows developing Web applications for AppEngine with Python 2.5. It provides
a standalone tool, called GoogleAppEngineLauncher, for managing Web applications locally and deploying them
to AppEngine. The tool provides a convenient user interface that lists all the available Web applications, controls
their execution, and integrates them with the default code editor for editing application files. In addition, the
launcher provides access to some important services for application monitoring and analysis, such as the logs, the
SDK console, and the dashboard. The log console captures all the information that is logged by the application
while it is running. The SDK console provides developers with a Web interface via which they can see the application profile in terms of utilized resources. This feature is particularly useful because it allows developers to preview the behavior of the applications once they are deployed on AppEngine, and it can be used to tune the applications made available through the runtime. (A minimal handler sketch in this style is shown after the life-cycle list below.)
2. Local Testing:
The application is tested locally on the developer's machine using the GAE SDK. Developers can verify
that their code works as expected, handle HTTP requests, interact with data storage services, and utilize
GAE's application services (e.g., Memcache, Task Queues) in a controlled environment.
3. Configuration and Deployment:
Configuration files are created to specify settings for the application, including resource allocation,
scaling parameters, and service dependencies. The application code, along with its configuration, is
deployed to Google App Engine. GAE supports multiple services within an application, and developers
configure and deploy each service separately.
4. Version Management: Google App Engine allows for versioning of applications. Developers can
deploy multiple versions of their application simultaneously. Different versions can be created
for purposes such as A/B testing, staging, and canary releases. Version management enables easy
rollbacks if issues are encountered in a new release.
5. Scaling and Resource Allocation: GAE automatically handles resource allocation and scaling
based on incoming traffic. Developers can configure the minimum and maximum number of
instances, automatic scaling policies, and performance settings based on the application's
requirements.
6. Health Monitoring and Diagnostics: GAE provides comprehensive monitoring and logging tools
for applications. Developers and operators can monitor application performance, review logs,
and receive alerts for any issues that may arise.
7. Maintenance and Updates: Ongoing maintenance, bug fixes, and feature updates can be
performed on the application as needed. Developers can deploy new versions with improvements
and enhancements. Data migrations, scheduled tasks, and maintenance activities are managed
within the application life cycle.
8. Scalability and Optimization: Application owners can optimize the application's resource usage
and scalability settings as traffic patterns change over time. The application can be configured to
handle increased workloads efficiently.
9. Security and Access Control: Access control and security measures are continually monitored
and adjusted as needed. GAE integrates with Google Cloud Identity-Aware Proxy (IAP) and
other identity and access management (IAM) services for user authentication and authorization.
10. End-of-Life or Decommissioning: When the application is no longer needed or is being replaced,
it can be decommissioned. Data and resources are appropriately managed, and the application is
shut down, ensuring that it no longer incurs costs.
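For reference, a minimal application in the classic Python runtime described in step 1 looks like the sketch below; the handler and route names are illustrative, and the app would be deployed together with its accompanying app.yaml configuration.

# Minimal sketch of a classic (Python 2.7) App Engine request handler.
# Route and handler names are illustrative; app.yaml maps requests to `app`.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("Hello from Google App Engine!")

app = webapp2.WSGIApplication([("/", MainPage)], debug=True)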

Cost Model:
Google App Engine (GAE) employs a cost model that charges users based on their usage of the
platform's resources and services. The cost model for GAE is designed to be pay-as-you-go, meaning
you are charged for the specific resources you consume rather than pre-paying for a fixed infrastructure.
Here are the key aspects of the cost model in GAE: resource consumption, pricing tiers, free quotas, billing and invoicing, the pricing calculator, monitoring and alerts, and budgets and cost controls.
The cost model in GAE provides transparency and flexibility, allowing you to control and manage your
expenses based on your application's usage patterns and requirements. It's important to monitor your
resource usage and keep track of your billing to avoid surprises and optimize your application's costs.
In Google App Engine (GAE), the cost model includes various types of quotas and limits that impact
billing. These quotas help define the usage and costs associated with running applications on the
platform. Three common categories of quotas in GAE are:
1. Billable Quotas:
 Billable quotas are the resource limits that, when exceeded, result in charges on your
Google Cloud Platform (GCP) bill. These are the primary factors affecting the cost of
running your GAE application.
 Examples of billable quotas include the number of running instances, CPU and memory
usage, and data storage limits in services like Cloud Datastore and Cloud Storage.
 When you surpass the free tier or allocated limits in these areas, you may incur additional
costs.
2. Fixed Quotas:
 Fixed quotas are predefined, non-configurable resource limits that GAE imposes on all
applications. These quotas exist to ensure fair resource allocation across all users.
 Examples of fixed quotas include limits on URL fetches, outbound socket connections,
and certain types of HTTP headers. These quotas cannot be modified by individual GAE
users.
3. Per-Minute Quotas:
 Per-minute quotas define the maximum rate at which specific resources can be used
within a minute. They ensure efficient resource allocation and prevent overuse.
 Examples of per-minute quotas include the maximum rate of creating or deleting Cloud
Datastore entities and the rate at which you can make requests to the Task Queue service.
Observations:
Google App Engine (AppEngine) is a framework for creating scalable web applications. It leverages
Google's infrastructure to provide developers with a scalable and secure environment. Key components
include a sandboxed runtime for application execution and a set of services that cover common web
development features, making it easier to build applications that can scale effortlessly.
AppEngine emphasizes simplicity with straightforward interfaces for performing optimized and scalable
operations. Developers can construct applications by utilizing these building blocks, allowing
AppEngine to handle scalability when necessary. Compared to traditional web development, creating
robust applications with AppEngine may require a shift in perspective and more effort. Developers must
familiarize themselves with AppEngine's capabilities and adapt their implementations to adhere to the
AppEngine application model.
SQL Azure:
SQL Azure is a cloud-based relational database service hosted on Microsoft Azure, built on SQL Server
technologies. It extends SQL Server's capabilities to the cloud, providing developers with a scalable,
highly available, and fault-tolerant relational database. Here's a summary of its key features and
components:
Compatibility: SQL Azure is fully compatible with SQL Server, making it easy for applications
developed for SQL Server to migrate to SQL Azure. It maintains the same interface exposed by SQL
Server.
Accessibility: It's accessible from anywhere with access to the Azure Cloud, providing flexibility in
connecting to your database.
Manageability: SQL Azure is fully manageable using REST APIs, allowing developers to control
databases and set firewall rules for accessibility.
Architecture: It uses the Tabular Data Stream (TDS) protocol for data access, and a service layer
provides provisioning, billing, and connection-routing services. SQL Azure Fabric manages the
distributed database infrastructure.
Account Activation: Developers need a Windows Azure account to use SQL Azure. Once activated,
they can create servers and configure access to servers using the Windows Azure Management Portal or
REST APIs.
Server Abstractions: SQL Azure servers closely resemble physical SQL Servers and have fully
qualified domain names under the database.windows.net domain. Multiple synchronized copies of each
server are maintained within Azure Cloud.
Billing Model: SQL Azure is billed based on space usage and edition. Two editions are available: Web
Edition for small web applications (1 GB or 5 GB databases) and Business Edition for larger
applications (10 GB to 50 GB databases). A bandwidth fee applies for data transfers outside the Azure
Cloud or region, and a monthly fee per user/database is based on peak database size during the month.
Figure: SQL Azure Architecture
SQL Azure simplifies database management in the cloud, offering compatibility with SQL Server and
scalability while handling essential management tasks for developers. It's a valuable tool for hosting and
managing relational databases in the cloud.
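Because SQL Azure exposes the same TDS interface as SQL Server, an application can connect to it with an ordinary SQL Server driver. The sketch below uses the pyodbc library with a placeholder server name under the database.windows.net domain; the server, database, and credentials are illustrative.

# Sketch: connecting to a SQL Azure database over TDS using pyodbc.
# Server name, database, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;"
    "UID=myuser;"
    "PWD=mypassword;"
    "Encrypt=yes;"
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")   # same T-SQL surface as SQL Server
for row in cursor.fetchall():
    print(row.name)
conn.close()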
Windows Azure Platform Appliance.
The Windows Azure Platform Appliance was a solution provided by Microsoft to enable organizations
to deploy Azure cloud services in their own data centers, bringing the cloud to their own infrastructure.
Here's a brief note on this concept:
 The Windows Azure Platform Appliance was designed to extend the capabilities of Microsoft's
Azure cloud platform to on-premises data centers and hosting service providers.
 It allowed organizations to build and operate their own cloud services using Azure technology,
providing cloud-like services within their private data centers.
 The appliance integrated Microsoft's Windows Azure, SQL Azure, and other Azure services into
a unified platform.
 It included a combination of hardware and software components, offering scalable compute and
storage resources to meet the demand of cloud-based applications and services.
 By implementing the Windows Azure Platform Appliance, organizations could take advantage of
Azure's development and management tools, and leverage a consistent platform for both private
and public cloud environments.
 Microsoft provided this solution to large enterprises, governments, and service providers who
required the flexibility and control of a private cloud, but also wanted to harness the power and
features of the Azure cloud platform.
The Windows Azure Platform Appliance aimed to bridge the gap between on-premises data centers and the public cloud, offering a unified approach to cloud computing for organizations with diverse needs and requirements. However, it is worth noting that Microsoft later shifted its focus from the Azure Platform Appliance to other cloud-related initiatives.
Cloud Applications: Cloud computing has revolutionized how scientific applications are developed and
deployed. It offers scalability, computational power, and accessibility that can significantly enhance
various scientific domains. Here are three examples of cloud applications for scientific use cases:
I) Scientific Applications- Scientific applications are a sector that is increasingly using cloud computing
systems and technologies. The immediate benefit seen by researchers and academics is the potentially
infinite availability of computing resources and storage at sustainable prices compared to a complete in-
house deployment.
a. Healthcare (ECG Analysis in the Cloud): In healthcare, cloud-based applications are used for
Electrocardiogram (ECG) analysis. ECG data from medical devices is securely transmitted to the
cloud for real-time processing.
Key Features:
i. Data Ingestion: ECG data is collected from monitoring devices in hospitals and patient
homes.
ii. Cloud Processing: The cloud application analyzes ECG data, detecting anomalies,
arrhythmias, and other cardiac conditions.
iii. Real-time Alerts: If irregularities are detected, healthcare professionals receive real-time
alerts for immediate action.
iv. Data Storage: ECG data is securely stored in the cloud, allowing for longitudinal patient
monitoring and research.

b. Biology (Protein Structure Prediction and Gene Expression Data Analysis for Cancer
Diagnosis): In biology, cloud applications support tasks like protein structure prediction and
gene expression data analysis for cancer diagnosis.
Key Features:
i. Data Integration: Biological data from various sources, including DNA sequencing
and protein databases, is integrated into the cloud.
ii. Processing: Cloud-based algorithms are applied to predict protein structures and
analyze gene expression data.
iii. Machine Learning: Machine learning models help identify genetic markers for
cancer diagnosis and prognosis.
iv. Collaboration: Researchers from different locations can collaborate, share data, and
jointly analyze findings using cloud-based tools.

c. Geoscience (Satellite Image Processing): In geoscience, cloud applications support satellite image processing, including land cover classification, climate modeling, and disaster monitoring.
Key Features:
i. Data Collection: Satellite images are collected from various sources and transmitted
to the cloud.
ii. Image Processing: Cloud-based applications employ image processing algorithms to
classify land cover, track weather patterns, and assess environmental changes.
iii. Scalability: Cloud resources are scaled up during disaster events for rapid response
and resource allocation.
iv. Visualization: Researchers, governments, and organizations can access visualizations
and real-time data for decision-making.
These cloud applications leverage the cloud's computational power, scalability, and collaboration
capabilities to advance research and scientific discoveries. They offer real-time processing, secure data
storage, and the ability to analyze vast datasets, benefiting healthcare, biology, geoscience, and various
other scientific domains.
Business and Consumer Applications- Cloud-based business and consumer applications leverage
cloud computing resources and services to provide flexibility, scalability, and accessibility.
Cloud computing offers numerous benefits to both businesses and consumers, transforming the way they
access, store, and use data and applications. Here's how the cloud helps in both business and consumer
contexts:
Benefits for Business:
1. Cost-Efficiency:
- Cloud services eliminate the need for businesses to invest in and maintain their own IT infrastructure,
reducing capital expenses.
- Pay-as-you-go pricing models allow businesses to pay only for the resources they use, making it
cost-effective (a short worked example follows this list).
2. Scalability:
- Cloud resources can be quickly scaled up or down based on business demand, providing flexibility
during growth or seasonal fluctuations.
3. Accessibility:
- Cloud services can be accessed from anywhere with an internet connection, enabling remote work
and collaboration.
4. Security:
- Many cloud providers offer advanced security features and compliance certifications, improving data
protection.
- Regular updates and patch management help keep systems secure.
5. Backup and Disaster Recovery:
- Data stored in the cloud is automatically backed up and can be easily restored in case of data loss or
disasters.
6. Collaboration:
- Cloud-based collaboration tools facilitate real-time collaboration among teams, enabling efficient
document sharing and communication.
7. Innovation and Agility:
- Cloud allows rapid deployment of new applications and services, fostering innovation.
- Businesses can stay competitive by quickly adapting to market changes.
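The pay-as-you-go model from benefit 1 can be made concrete with a small cost calculation. In the Python sketch below, the hourly VM rate, storage rate, and usage figures are hypothetical numbers chosen only to show that the bill scales with actual consumption.

# Hypothetical pay-as-you-go bill: rates and usage figures are assumed, not real prices.
VM_RATE_PER_HOUR = 0.10      # USD per VM-hour (assumed)
STORAGE_RATE_PER_GB = 0.02   # USD per GB-month (assumed)

def monthly_cost(vm_hours, storage_gb):
    return vm_hours * VM_RATE_PER_HOUR + storage_gb * STORAGE_RATE_PER_GB

# Quiet month vs. seasonal peak: the business pays only for what it actually used.
print(monthly_cost(vm_hours=2 * 24 * 30, storage_gb=100))   # 2 VMs all month  -> 146.0
print(monthly_cost(vm_hours=10 * 24 * 30, storage_gb=100))  # 10 VMs all month -> 722.0

Scaling from two to ten VMs during a peak month raises the bill accordingly, with no idle capacity to pay for during quieter months.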
Examples:
1. Salesforce:
 Type: CRM (Customer Relationship Management)
 Description: Salesforce is a cloud-based CRM platform that helps businesses manage
customer relationships, sales, and marketing.
2. Microsoft 365 (formerly Office 365):
 Type: Productivity Suite
 Description: Microsoft 365 offers cloud-based tools like Word, Excel, and Teams for
business productivity and collaboration.
3. Amazon Web Services (AWS):
 Type: Cloud Infrastructure and Services
 Description: AWS provides a wide range of cloud services, including computing power,
storage, and machine learning, for businesses of all sizes.
4. Slack:
 Type: Team Collaboration
 Description: Slack is a cloud-based messaging platform for team communication, file
sharing, and collaboration.
5. Zoom:
 Type: Video Conferencing
 Description: Zoom is a cloud-based video conferencing and communication platform
used for virtual meetings and webinars.
Consumer Cloud Applications:
Benefits for Consumers:
1. Convenience:
- Cloud-based services, like email and social media, are accessible from any device with an internet
connection, providing convenience for users.
2. Data Synchronization:
- Cloud storage solutions keep user data synchronized across multiple devices, ensuring access to files
and content from anywhere.
3. Entertainment:
- Streaming services, such as Netflix and Spotify, offer consumers a wide range of entertainment
content on-demand, without the need to download large files.
4. Collaboration:
- Cloud-based collaboration tools allow consumers to work on shared documents, making it easier for
students and professionals to collaborate on projects.
5. Communication:
- Email, messaging, and video conferencing services in the cloud facilitate communication with
friends, family, and colleagues worldwide.
6. Backup and Sharing:
- Consumers can easily back up and share personal photos, videos, and documents with friends and
family through cloud storage and sharing services.
7. Accessibility and Mobility:
- Mobile apps and cloud services enable users to access content and applications on the go, improving
productivity and entertainment options.
Examples:
1. Netflix:
 Type: Streaming Entertainment
 Description: Netflix is a cloud-based streaming service that offers a vast library of TV
shows and movies for consumers.
2. Spotify:
 Type: Music Streaming
 Description: Spotify is a cloud-based music streaming service that allows users to access
and listen to a vast collection of songs and playlists.
3. Dropbox:
 Type: Cloud Storage and File Sharing
 Description: Dropbox is a cloud-based file storage and sharing service that enables users
to store and access files from anywhere.
4. Google Drive:
 Type: Cloud-Based Office Suite and Storage
 Description: Google Drive provides cloud storage and a suite of office applications like
Docs and Sheets for consumers.
5. Facebook:
 Type: Social Networking
 Description: Facebook is a cloud-based social networking platform that connects
individuals and allows them to share content and interact with others.
In both business and consumer contexts, the cloud has become an integral part of daily life, offering
convenience, flexibility, and innovation. Its benefits are far-reaching and continue to evolve as new
cloud technologies and services are developed.
CRM and ERP:
Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) are two distinct
but interconnected software systems that play crucial roles in managing different aspects of a business.
Here's an overview of each and how they differ:
Customer Relationship Management (CRM):
1. Focus: CRM is primarily focused on managing and improving customer interactions, relationships,
and sales processes.
2. Purpose: It helps businesses build and maintain strong relationships with their customers by providing
tools for tracking customer information, communication, and sales opportunities.
3. Key Features:
- Contact Management: Store and manage customer contact information and history.
- Sales and Lead Management: Track sales opportunities, leads, and customer accounts.
- Marketing Automation: Automate marketing campaigns, email communications, and lead nurturing.
- Customer Support: Provide tools for managing customer inquiries, complaints, and support tickets.
- Analytics and Reporting: Generate insights into customer behavior and sales performance.
4. Benefits: CRM enhances customer engagement, streamlines sales processes, improves customer
service, and provides valuable insights for better decision-making.
Examples:
 Salesforce: A popular CRM platform that helps businesses manage their customer interactions,
sales leads, and marketing campaigns.
 HubSpot: Offers CRM tools with integrated marketing and sales features for businesses of all
sizes.
Enterprise Resource Planning (ERP):
1. Focus: ERP is focused on the integration and management of core business processes and data across
an entire organization.
2. Purpose: It helps businesses optimize and streamline their operations, ensuring that different
departments can work together efficiently and share a unified view of business data.
3. Key Features:
- Financial Management: Includes accounting, budgeting, and financial reporting.
- Human Resources: Manages employee data, payroll, and workforce planning.
- Supply Chain and Inventory: Tracks inventory, procurement, and logistics.
- Manufacturing: Streamlines production and quality control processes.
- Sales and Distribution: Manages sales orders, customer deliveries, and order fulfillment.
- Analytics and Reporting: Provides insights into various aspects of the business.
4. Benefits: ERP improves efficiency, reduces manual data entry, and provides a holistic view of the
business, allowing for better decision-making and resource allocation.
Examples:
 SAP ERP: A comprehensive suite of ERP applications for managing financials, supply chain,
HR, and more in large enterprises.
 NetSuite: A cloud-based ERP solution that integrates financials, e-commerce, CRM, and more for
businesses.
Key Differences:
- Scope: CRM is more customer-centric, while ERP is broader in scope, encompassing various business
functions.
- Data Integration: CRM primarily deals with customer data, while ERP integrates data across multiple
departments (e.g., finance, HR, supply chain).
- Primary Users: CRM is often used by sales, marketing, and customer service teams. ERP is used by
various departments, including finance, manufacturing, and operations.
- Goal: CRM focuses on improving customer relationships and sales, while ERP focuses on overall
business process optimization.
- Overlap: There can be areas of overlap, such as sales order management, where both CRM and ERP
systems may interact.
Many businesses integrate CRM and ERP systems to ensure seamless data flow between customer-
related processes and core business operations. This integration provides a comprehensive view of
customer interactions and business performance.
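A minimal sketch of such an integration is shown below: it pulls customer records from a CRM's REST API and pushes them into an ERP's accounts endpoint. The URLs, field names, and token are hypothetical placeholders; real integrations would use each vendor's published APIs and connectors.

# Minimal CRM -> ERP synchronization sketch (endpoints and field names are assumed).
import requests

CRM_API = "https://crm.example.com/api/customers"   # hypothetical CRM endpoint
ERP_API = "https://erp.example.com/api/accounts"    # hypothetical ERP endpoint
HEADERS = {"Authorization": "Bearer <token>"}       # placeholder credentials

def sync_customers_to_erp():
    customers = requests.get(CRM_API, headers=HEADERS, timeout=10).json()
    for customer in customers:
        # Map CRM fields onto the ERP's account schema (field names assumed).
        account = {
            "external_id": customer["id"],
            "name": customer["company_name"],
            "billing_email": customer["email"],
        }
        requests.post(ERP_API, json=account, headers=HEADERS, timeout=10).raise_for_status()

if __name__ == "__main__":
    sync_customers_to_erp()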
Productivity:
Productivity applications replicate in the cloud many of the tasks we are used to performing on the desktop,
from document storage to office automation and complete desktop environments hosted in the cloud.
Examples: Dropbox, iCloud, Google Docs, and cloud desktops such as EyeOS and XIOS/3.
These cloud-based productivity tools offer various capabilities to users:
 Storage and File Sharing: Dropbox and iCloud are popular for storing files in the cloud and
sharing them with others.
 Collaboration: Google Docs provides a collaborative environment for creating and editing
documents in real time.
 Device Synchronization: iCloud is designed for syncing data across Apple devices, ensuring
that users have access to their data and content wherever they go.
 Cloud Desktops: EyeOS and XIOS/3 provide cloud-based desktop environments, enabling users
to run applications and access files from a web browser.
These tools cater to different needs, and users can choose the one that aligns with their requirements,
whether it's for file storage, document collaboration, or cloud-based desktop computing.
1. Dropbox:
 Type: Cloud Storage and File Sharing
 Description: Dropbox is a widely used cloud storage and file-sharing service. It allows
users to store, sync, and share files and documents across various devices. It also
provides collaboration features, making it easy to work on documents with others (a small
upload/download sketch follows this list).
2. iCloud:
 Type: Cloud Storage and Syncing
 Description: iCloud is Apple's cloud service, which primarily focuses on syncing and
storing data across Apple devices. It enables users to store photos, documents, contacts,
and more in the cloud and access them from any Apple device.
3. Google Docs:
 Type: Cloud-based Office Suite
 Description: Google Docs is part of Google Workspace (formerly G Suite). It offers
cloud-based word processing, spreadsheet, and presentation tools. Users can collaborate
in real time and store documents in Google Drive.
4. Cloud Desktops: EyeOS:
 Type: Cloud-Based Desktop Environment
 Description: EyeOS is an open-source cloud desktop platform. It provides a web-based
desktop environment where users can access applications, files, and resources remotely
via a web browser. It is particularly useful for creating virtual desktops in the cloud.
5. Cloud Desktops: XIOS/3:
 Type: Cloud-Based Desktop Environment
 Description: XIOS/3 is an open-source cloud desktop platform similar to EyeOS. It
offers a web-based desktop experience, allowing users to run applications and access files
in a cloud-based environment.
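As a small illustration of the storage and file-sharing capability, the sketch below uploads and then re-downloads a file with the Dropbox Python SDK. The access token and file paths are placeholders, and exact method names can vary between SDK versions, so treat this as an assumption-laden example rather than production code.

# File backup/retrieval sketch with the Dropbox Python SDK (pip install dropbox).
import dropbox

ACCESS_TOKEN = "<your-access-token>"   # placeholder; issued via the Dropbox developer console
dbx = dropbox.Dropbox(ACCESS_TOKEN)

# Upload a local file into the app folder, overwriting any previous version.
with open("report.docx", "rb") as f:
    dbx.files_upload(f.read(), "/notes/report.docx",
                     mode=dropbox.files.WriteMode.overwrite)

# Download it again to another local path (e.g., on a second device).
dbx.files_download_to_file("report_copy.docx", "/notes/report.docx")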
Social Networking: Social networking apps connect individuals, allowing them to create profiles, share
content, and interact with friends, colleagues, and communities.
Key Features:
 User profiles and connections
 Status updates and posts
 Messaging and chat
 Content sharing (photos, videos, links)
 Groups and events
Examples:
 Facebook: A global social networking platform for connecting with friends, sharing updates, and
joining interest-based groups.
 LinkedIn: A professional social network that focuses on career and business connections.
Media Applications: Media apps provide access to various forms of digital media, including streaming video,
music, and news.
Key Features:
 Video streaming and on-demand content
 Music streaming and playlists
 News and articles
 Personalized content recommendations (a small recommendation sketch follows the examples below)
 User-generated content (e.g., reviews)
Examples:
 Netflix: A popular streaming platform offering a vast library of TV shows and movies.
 Spotify: A music streaming service that provides access to a vast collection of songs and
playlists.
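Personalized recommendations of the kind listed above are often built on similarity between users' viewing or listening histories. The toy sketch below uses cosine similarity over a made-up ratings matrix; it is not any provider's actual algorithm.

# Toy user-based recommendation via cosine similarity (made-up ratings matrix).
import numpy as np

titles = ["Title A", "Title B", "Title C", "Title D"]
ratings = np.array([
    [5, 4, 0, 1],   # user 0 (target)
    [4, 5, 5, 0],   # user 1 (similar taste to user 0)
    [0, 1, 5, 4],   # user 2
], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

target = 0
neighbors = [u for u in range(len(ratings)) if u != target]
sims = [cosine(ratings[target], ratings[u]) for u in neighbors]
best = neighbors[int(np.argmax(sims))]

# Recommend titles the most similar user rated highly but the target has not rated yet.
recs = [titles[i] for i in range(len(titles))
        if ratings[target, i] == 0 and ratings[best, i] >= 4]
print("recommend to user 0:", recs)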
Multiplayer Online Gaming: Online gaming apps support multiplayer gaming experiences, enabling players
to connect, compete, and collaborate in real time.
Key Features:
 Game lobbies and matchmaking (a minimal matchmaking sketch follows the examples below)
 Real-time multiplayer gameplay
 In-game chat and communication
 Virtual item purchases
 Leaderboards and achievements
Examples:
 Fortnite: A popular online multiplayer battle royale game available on multiple platforms.
 World of Warcraft: A massively multiplayer online role-playing game (MMORPG) with a large
player community.
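To illustrate the lobby and matchmaking feature in the simplest possible terms, the sketch below sorts a queue of players by a hypothetical skill rating and pairs neighbours so that opponents are evenly matched. Real services layer latency, region, and party-size constraints on top of this basic idea.

# Toy matchmaking: pair queued players with the closest skill ratings.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    rating: int  # hypothetical skill rating

queue = [Player("ava", 1520), Player("bob", 980), Player("cleo", 1490),
         Player("dan", 1010), Player("eve", 2010), Player("fay", 1995)]

def make_matches(players, match_size=2):
    """Sort by rating and group adjacent players so opponents are evenly matched."""
    ordered = sorted(players, key=lambda p: p.rating)
    return [ordered[i:i + match_size] for i in range(0, len(ordered), match_size)]

for match in make_matches(queue):
    print(" vs ".join(f"{p.name}({p.rating})" for p in match))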