
Unit 2: Cloud Computing Architecture

Scalability and Fault Tolerance


Scalability in cloud computing refers to the ability of a system to handle an increasing amount of
work by adding more resources.

This can be done by adding more servers to a network, increasing the amount of storage, or
increasing the amount of bandwidth.

In a cloud computing environment, this can be accomplished by simply spinning up additional virtual
machines or increasing the size of existing ones.

Fault tolerance in cloud computing refers to the ability of a system to continue functioning properly
in the event of a failure.

This is achieved through various techniques such as redundancy and replication. For example, by
replicating data across multiple servers, if one server fails, the data can still be accessed from
another server.

In cloud computing, this can be achieved by using services such as load balancers and auto-scaling
groups which can automatically detect and respond to failures. Additionally, cloud providers often
have multiple data centers in different geographic locations, so if one data center goes down, the
services can be seamlessly switched to another one.
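
As a rough illustration (not part of the original notes), the following Python sketch uses the boto3 AWS SDK to create an auto-scaling group that spreads instances across two availability zones for fault tolerance. The group name, launch template, region, and sizes are hypothetical.

    # Illustrative sketch: an Auto Scaling group spread across two zones.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",  # hypothetical group name
        LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
        MinSize=2,    # keep at least two instances so one failure is tolerated
        MaxSize=10,   # scale out up to ten instances under load
        AvailabilityZones=["us-east-1a", "us-east-1b"],  # spread across zones
    )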

Ready for the cloud

1. Machine image design

A machine image in cloud computing is a pre-configured template that contains a specific set of software, libraries, and configurations, and is used to create new virtual machines in a cloud environment. Machine image design is the process of creating, maintaining, and updating these images.

There are different ways of creating machine images, but one common approach is to use a tool such as Packer, which automates the process. The tool allows you to define the software, libraries, and configurations that you want to include in the image, and then builds the image and makes it available in your cloud provider's image library.

When creating machine images, it is important to consider the size of the image, as larger images can take longer to download and start up. Additionally, it's important to keep the image updated with the latest security patches and software versions, and to make sure the image complies with the licenses of the software it contains.
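
As an illustrative sketch only: a machine image can also be baked programmatically. The snippet below assumes the boto3 AWS SDK and registers an AMI from an already-configured EC2 instance; the instance ID and image name are hypothetical.

    # Illustrative sketch: baking an AMI from a configured EC2 instance.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # instance already configured with software
        Name="web-server-image-2024-01",   # version the image name for traceability
        Description="Base image with web server and security patches applied",
    )
    print(response["ImageId"])             # ID of the newly registered image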

Another approach is containerization, using container images (for example, Docker images orchestrated with Kubernetes). This makes it possible to scale and update individual components at runtime without rebuilding the whole machine image.
2. Privacy design

Privacy design in cloud computing refers to the measures and practices that are implemented to
protect sensitive data and user information when using cloud services.

One important aspect of privacy design in the cloud is data encryption.

Data encryption ensures that sensitive information is protected from unauthorized access by
encoding it so that only authorized parties can read it.

Encryption can be applied to data at rest, data in transit, or both. Cloud providers usually offer different encryption options, such as server-side encryption and client-side encryption, and it's important to choose the one that best suits the needs of the application.
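
For example, a minimal sketch of requesting server-side encryption when writing an object to Amazon S3, assuming the boto3 AWS SDK (the bucket, key, and choice of a KMS-managed key are hypothetical):

    # Illustrative sketch: server-side encryption for an S3 object.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="example-app-data",
        Key="customers/records.json",
        Body=b'{"id": 1}',
        ServerSideEncryption="aws:kms",  # encrypt at rest with a KMS-managed key
    )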

Another important aspect of privacy design in the cloud is access control.

Access control is the process of determining who has access to what data. This can be done through
authentication, authorization, and role-based access control.

Cloud providers offer different ways of implementing access control, such as using identity and
access management (IAM) policies, security groups, and VPCs.
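
As an illustration (not from the notes), a least-privilege IAM policy that only allows reading objects from a single S3 bucket could be created with boto3; the policy name, bucket, and permissions are hypothetical:

    # Illustrative sketch: a least-privilege IAM policy for read-only access.
    import json
    import boto3

    iam = boto3.client("iam")

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::example-app-data/*",
            }
        ],
    }

    iam.create_policy(
        PolicyName="ReadOnlyAppData",
        PolicyDocument=json.dumps(policy_document),
    )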

It's also important to consider data sovereignty and compliance when designing privacy in the cloud.
Data sovereignty refers to the laws and regulations that govern data storage and access based on
geographic location.

Some countries have strict laws and regulations regarding data storage and access, so it's important
to choose a cloud provider that can comply with these laws and regulations.

Finally, it's important to have a robust incident management and disaster recovery plan in place to
respond to security breaches and data loss in a timely manner. This includes regular backups,
testing, and updating the plan, and also incident response training for the team.

3. Database design

Database management in cloud computing refers to the process of managing and maintaining
databases that are hosted on cloud infrastructure.

One common way of managing databases in the cloud is to use a managed database service.

These services are offered by cloud providers and they handle the underlying infrastructure and
scaling of the databases, allowing developers to focus on building their applications.

Examples of managed database services include Amazon RDS, Azure SQL Database, and Google Cloud SQL.
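
As a rough sketch (assuming the boto3 AWS SDK), a managed PostgreSQL instance could be provisioned on Amazon RDS like this; all identifiers and sizes are hypothetical:

    # Illustrative sketch: provisioning a managed PostgreSQL database on RDS.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="postgres",
        DBInstanceClass="db.t3.micro",
        MasterUsername="appadmin",
        MasterUserPassword="change-me-immediately",  # use a secrets manager in practice
        AllocatedStorage=20,        # size in GiB
        MultiAZ=True,               # standby replica in another AZ for failover
        BackupRetentionPeriod=7,    # days of automated backups
    )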

Another way of managing databases in the cloud is by using a database-as-a-service (DBaaS) provider.

DBaaS providers offer a range of databases, such as MySQL, PostgreSQL, and MongoDB, that can be easily provisioned and scaled. Examples of DBaaS providers include MongoDB Atlas, Amazon DocumentDB, and Azure Cosmos DB.

When managing databases in the cloud, it's important to consider data security and compliance, as
well as disaster recovery and backup.

Cloud providers usually offer different options for data encryption and access control, and it's
important to choose the one that best suits the needs of the application.

Additionally, it's important to have a disaster recovery and backup plan in place to ensure that data
can be recovered in the event of a failure or data loss.

Another important aspect is data migration, when moving from on-premises to cloud, or between
cloud providers.

It is important to choose a well-suited strategy and tool for the migration process, to ensure
minimal disruption and downtime, and also to test the migration before the actual process.
Unit 3: Defining Cloud for Enterprise

Scaling a cloud architecture
1. Capacity planning
2. Cloud scaling

Capacity planning in cloud infrastructure refers to the process of determining the necessary
resources (such as CPU, memory, and storage) to handle the current and future workloads of an
application. This involves analyzing the current usage patterns and predicting future usage to ensure
that the infrastructure is appropriately sized to handle the anticipated load. This can be done by
using tools such as monitoring and log analytics software to collect usage data and make projections.

Cloud scaling refers to the process of adding or removing resources to meet the changing demands
of an application. This can be done in two ways:

Vertical scaling, also known as scaling up, involves adding more resources to a single machine, such
as increasing the amount of memory or CPU.

Horizontal scaling, also known as scaling out, involves adding more machines to a network, such as
adding more virtual machines to a cloud infrastructure.

Both approaches have their own advantages and disadvantages, and the best approach will depend
on the specific requirements of the application.

Cloud providers often offer auto-scaling services, which can automatically add or remove resources
based on predefined rules and metrics. This can help to ensure that the application has the
resources it needs to handle the load, without the need for manual intervention. Additionally, cloud
providers often offer load balancers, which can distribute incoming traffic across multiple machines,
to help ensure that the application can handle the load.
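
As an illustrative sketch, a target-tracking auto-scaling policy can be defined with the boto3 AWS SDK so that the group grows and shrinks around a CPU target; the group name and target value are hypothetical:

    # Illustrative sketch: keep the group's average CPU near 50%.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # add instances above 50% CPU, remove them below
        },
    )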

It's important to have a good monitoring and alerting strategy in place, to detect when the resources
are over or under-utilized, so you can take appropriate action.
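
For example (illustrative only, assuming boto3 and Amazon CloudWatch), an alarm that flags sustained under-utilization might look like this; all names and thresholds are hypothetical:

    # Illustrative sketch: alert when average CPU stays below 10% for 30 minutes.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-underutilized",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,              # evaluate in 5-minute windows
        EvaluationPeriods=6,     # i.e. sustained for 30 minutes
        Threshold=10.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
    )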

Disaster in cloud

1. Disaster recovery planning

Disaster recovery planning in cloud computing refers to the process of creating a plan to ensure
that an organization's applications and data can be recovered in the event of a disaster, such as a
natural disaster, cyber attack, or hardware failure.

One common approach to disaster recovery in the cloud is to use a multi-region or multi-availability
zone strategy. This involves replicating data across multiple geographic locations or availability
zones, so that if one location becomes unavailable, the data can still be accessed from another
location. Cloud providers usually offer different services to help with this, such as Amazon S3's Cross-
Region Replication, Azure Site Recovery and Google Cloud's Regional and Multi-Regional Replication.

Another approach is to use a backup and restore strategy, which involves regularly taking snapshots
of the data and storing them in a separate location. This allows you to restore the data in the event
of a disaster. Cloud providers such as Amazon, Azure and Google Cloud offer different services for
this, such as Amazon EBS Snapshots, Azure Backup and Google Cloud Backup.
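
A minimal sketch of the snapshot step in such a backup routine, assuming the boto3 AWS SDK (the volume ID is hypothetical):

    # Illustrative sketch: taking an EBS snapshot as part of a nightly backup.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of the application data volume",
    )
    print(snapshot["SnapshotId"], snapshot["State"])  # e.g. snap-..., "pending"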

It's important to have a well-defined incident management and disaster recovery plan in place,
which outlines the steps to be taken in the event of a disaster, and assigns roles and responsibilities
to different team members. The plan should also include regular testing and updates to ensure that
it stays up to date with the current environment and requirements.

Finally, it's important to ensure that the disaster recovery plan complies with any relevant regulations and industry standards, such as HIPAA, SOC 2, and PCI DSS.

2. Disaster in cloud

A disaster in cloud computing refers to any event that disrupts the normal functioning of an
organization's cloud-based infrastructure and applications. Disasters can take many forms, including
natural disasters, cyber attacks, hardware failures, and software bugs.

Natural disasters such as floods, hurricanes, earthquakes, and tornadoes can cause physical damage
to data centers and disrupt the power supply, making it difficult to access data and applications.

Cyber attacks can also cause significant damage to cloud-based infrastructure and applications, by
stealing or corrupting data, disrupting access to applications, or launching a denial-of-service attack.

Hardware failures, such as disk failures, network outages, or power failures, can also disrupt access
to data and applications.

Software bugs can also cause issues, causing the application to stop working or to produce incorrect
results, and can cause data loss or corruption.

Disasters can have a significant impact on an organization's ability to operate, and can lead to lost
revenue, damage to reputation, and legal and regulatory penalties. Therefore, it's important for
organizations to have a well-defined disaster recovery plan in place that outlines the steps to be
taken in the event of a disaster, and assigns roles and responsibilities to different team members.

3. Disaster management

Disaster management in cloud computing refers to the process of planning for, responding to, and
recovering from disasters that can impact an organization's cloud-based infrastructure and
applications.

A key aspect of disaster management in the cloud is to have a well-defined disaster recovery plan in
place.

This should include a detailed description of the organization's cloud-based infrastructure and
applications, and the steps that need to be taken in the event of a disaster.

The plan should also include details of the team members responsible for different tasks, such as
activating the plan, communicating with stakeholders, and restoring services.
Another key aspect of disaster management in the cloud is to have a robust monitoring and alerting
system in place, which can detect and notify of potential issues in the infrastructure and
applications, and help prevent or minimize the impact of disasters.

Disaster management in the cloud also involves testing and updating the disaster recovery plan
regularly, to ensure that it stays up-to-date with the current environment and requirements, and
that the team members are prepared and familiar with the plan.

Finally, it's important to ensure that the disaster management plan is compliant with any relevant
regulations and industry standards, such as HIPAA, SOC2, and PCI-DSS.

Overall, having a well-defined disaster management plan in place, along with regular testing and
updates, can help organizations to minimize the impact of disasters and ensure a faster recovery.
Unit 4: Aneka: Cloud Application Platform

1. Aneka framework overview:


 Aneka is a framework for building and deploying applications on cloud computing
environments. It provides a set of tools and APIs for creating, deploying, and managing
cloud-based applications. Aneka supports both public and private clouds, and is designed to
be highly scalable and fault-tolerant.
 The framework allows developers to build and deploy applications using a variety of
programming languages and frameworks, such as .NET, Java, and Python. Additionally,
Aneka provides built-in support for load balancing, resource management, and security.
 Overall, Aneka aims to make it easy for developers to build and deploy applications on the
cloud, while also providing the necessary tools for managing and scaling those applications.

Aneka Container

Aneka Container is a feature of the Aneka Cloud platform that allows users to deploy and manage
containerized applications on the cloud. Containers are a lightweight, portable, and self-sufficient
way of packaging and deploying software applications. They provide a consistent runtime
environment and allow applications to be deployed on any system that supports the container
technology.

Aneka Container provides a platform for deploying, managing, and scaling containerized applications
in a cloud environment. It allows users to create and manage container clusters, deploy applications
in containers, and monitor the performance of the deployed applications. It also provides features
such as automatic scaling, load balancing, and failover to ensure high availability of applications.

By using Aneka Container, users can take advantage of the benefits of containerization, such as improved resource utilization, faster deployment times, and better application isolation. It makes it easy to move containerized applications from development to production and provides a consistent environment across the different stages of the application lifecycle.

2. Container service
 Aneka's container service is a feature that allows developers to package and deploy their
applications as containers. Containers are a lightweight alternative to virtual machines,
which can be easily deployed, scaled, and managed. By packaging an application as a
container, developers can ensure that the application runs consistently across different
environments, and can be easily deployed to different cloud platforms or on-premises
infrastructure.
 The Aneka container service provides a set of APIs and command-line tools for managing
containers, such as creating, starting, stopping, and deleting them. It also supports container
orchestration, which allows developers to easily scale and manage multiple containers in a
distributed environment. Additionally, the service provides built-in support for load
balancing and resource management, which allows containers to automatically scale up or
down based on demand.
 Overall, Aneka's container service allows developers to easily package and deploy their
applications in containers, and provides the necessary tools for managing and scaling those
containers in a cloud environment.

3. Fabric services

Aneka's container service is built on top of the Fabric services. The Fabric services are a set of
distributed services that are responsible for managing and scheduling the containers across a cluster
of machines. These services provide the underlying infrastructure for the container service, and
handle tasks such as:

 Node management: The Fabric services keep track of the nodes in the cluster, and ensure
that the containers are running on healthy and available machines.
 Container scheduling: The Fabric services are responsible for scheduling the containers on
the nodes, and making sure that the resources are being used efficiently.
 Load balancing: The Fabric services handle load balancing, by distributing the incoming
traffic among the containers, ensuring that no single container becomes a bottleneck.
 Resource management: The Fabric services monitor the resource usage of the containers
and the nodes, and make sure that the resources are being used efficiently.
 Security: The Fabric services provide built-in security features, such as authentication and
authorization, to ensure that only authorized users can access the containers.
In summary, the Fabric services form the foundation of Aneka's container service and provide the necessary infrastructure for managing and scheduling containers across a cluster of machines. They are responsible for tasks such as node management, container scheduling, load balancing, resource management, and security.

4. Foundation service

In Aneka, the Foundation services are a set of low-level services that provide the basic infrastructure
for the framework. They form the foundation for other services in Aneka and include:

 Task Management: The Task Management service provides a way to submit and manage
tasks on the Aneka cluster. It is responsible for scheduling and dispatching tasks to the
nodes, and for monitoring the status of the tasks.
 Data Management: The Data Management service provides a way to store and retrieve data
on the Aneka cluster. It is responsible for managing the storage resources, and for providing
the necessary APIs for accessing the data.
 Security: The Security service provides a way to secure the Aneka cluster. It is responsible for
managing the authentication and authorization of users and applications.
 Resource Management: The Resource Management service provides a way to manage the
resources on the Aneka cluster. It is responsible for monitoring and allocating the resources,
and for providing the necessary APIs for accessing the resources.
 Monitoring: The Monitoring service provides a way to monitor the Aneka cluster. It is
responsible for collecting and aggregating the monitoring data, and for providing the
necessary APIs for accessing the monitoring data.

Overall, the Foundation services in Aneka provide the basic infrastructure for the framework, and
are responsible for tasks such as task management, data management, security, resource
management, and monitoring. They form the foundation for other services in Aneka, such as the
container service, and are responsible for providing the necessary functionality for building and
deploying cloud-based applications.

5. Application service

In Aneka, the Application services are a set of higher-level services that provide additional
functionality for building and deploying applications on the Aneka framework. These services build
on top of the foundation services, and include:

 Workflow Management: The Workflow Management service provides a way to create, execute, and manage workflows on the Aneka cluster. It is responsible for coordinating the execution of tasks, and for providing the necessary APIs for creating and managing workflows.
 Cloud Services: The Cloud Services service provides a way to interact with different cloud
providers such as AWS, Azure, and Google Cloud. It is responsible for providing the
necessary APIs for creating and managing cloud resources, such as virtual machines and
storage.
 Big Data Services: The Big Data service provides a way to process and analyze large data sets
on the Aneka cluster. It is responsible for providing the necessary APIs for processing and
analyzing data using technologies such as Hadoop and Spark.
 Machine Learning Services: The Machine Learning service provides a way to run machine
learning workloads on the Aneka cluster. It is responsible for providing the necessary APIs
for training and deploying machine learning models.

Overall, the Application services in Aneka provide additional functionality for building and deploying
applications on the framework. They build on top of the foundation services, and are responsible for
tasks such as workflow management, cloud services, big data processing, and machine learning.
These services are designed to make it easy for developers to build and deploy applications on the
Aneka framework, while also providing the necessary tools for managing and scaling those
applications.

6. Cloud programming and management

i. Aneka SDK

The Aneka SDK is a Software Development Kit provided by the Aneka framework, designed to make it easier for developers to build and deploy applications on the framework.
The SDK provides a set of APIs and libraries that developers can use to interact with the Aneka
framework and its services. It also includes samples, documentation, and other resources that
developers can use to learn how to use the SDK.

The Aneka SDK provides a set of libraries that developers can use to interact with the various
services in the Aneka framework. For example, developers can use the SDK to submit and manage
tasks, store and retrieve data, and manage resources.

The SDK also provides a set of APIs for interacting with the foundation services, such as the Task
Management service, Data Management service, and Resource Management service.

The SDK also includes a set of tools for managing and monitoring the Aneka cluster. For example,
developers can use the SDK to monitor the status of the nodes, the tasks, and the resources in the
cluster. Additionally, the SDK provides a set of command-line tools for managing and monitoring the
Aneka cluster.

Overall, the Aneka SDK is designed to make it easier for developers to build and deploy applications
on the Aneka framework, by providing a set of APIs, libraries, and tools that developers can use to
interact with the framework and its services. It also includes samples, documentation, and other
resources that developers can use to learn how to use the SDK.

ii. Management tools:

Aneka provides a set of management tools that allow administrators to monitor and manage the
Aneka cluster. These tools provide a way to monitor the status of the nodes, the tasks, and the
resources in the cluster. Some of the management tools provided by Aneka include:

 Aneka Management Console: The Aneka Management Console is a web-based interface that
allows administrators to monitor and manage the Aneka cluster. It provides a dashboard
that displays the status of the nodes, the tasks, and the resources in the cluster, and also
provides a way to manage the cluster resources and services.
 Aneka Command Line Interface (CLI): The Aneka CLI is a command-line interface that allows
administrators to monitor and manage the Aneka cluster. It provides a set of commands that
can be used to monitor and manage the cluster, such as starting and stopping nodes,
submitting and managing tasks, and managing resources.
 Aneka API: The Aneka API provides a set of RESTful APIs that can be used to interact with the
Aneka cluster. Administrators can use the API to monitor and manage the cluster, such as
starting and stopping nodes, submitting and managing tasks, and managing resources.
 Aneka Monitoring: Aneka provides built-in monitoring capabilities for the cluster nodes, tasks, and resources; the monitoring data can be accessed via the management console or the API.

Overall, the management tools provided by Aneka allow administrators to monitor and manage the
Aneka cluster. These tools provide a way to monitor the status of the nodes, the tasks, and the
resources in the cluster, and also provide a way to manage the cluster resources and services.
Unit 5: Cloud Application

1. Cloud platforms in industry


i. AWS

Amazon Web Services (AWS) is a collection of remote computing services (also called web services)
that make up a cloud computing platform, offered by Amazon.com.

These services operate from data center regions across the world. They provide a variety of services such as storage, networking, databases, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things (IoT).

Users can access these services through APIs or the AWS Management Console.

AWS is designed to help businesses scale and grow by providing a highly reliable, scalable, low-cost
infrastructure platform in the cloud.

It is one of the most popular cloud computing platforms used by companies of all sizes.

AWS architecture is a collection of services that work together to provide a flexible and scalable
infrastructure for building, deploying, and running applications.

At the foundation of the architecture is the global infrastructure, which includes a network of data
centers, called regions and availability zones (AZs). These regions and AZs provide the physical
infrastructure for the services and allow users to run resources in multiple locations for high
availability and disaster recovery.

The AWS Management Console is a web-based interface that allows users to access and manage the
services. Users can also access the services through the AWS Command Line Interface (CLI) or
through APIs.

The services can be broadly grouped into four categories: compute, storage, databases, and
networking.

 Compute services, such as Elastic Compute Cloud (EC2) and Lambda, provide the ability to
run virtual machines and execute code.
 Storage services, such as Simple Storage Service (S3) and Elastic Block Store (EBS), provide
scalable and durable storage for data.
 Database services, such as Amazon Relational Database Service (RDS) and Amazon
DynamoDB, provide managed and scalable database solutions.
 Networking services, such as Amazon Virtual Private Cloud (VPC) and Route 53, provide the
ability to create isolated networks and manage DNS.

AWS also offers a wide range of additional services such as security, analytics, machine learning, and
artificial intelligence to help users build, deploy, and manage their applications.

Architecture and services can be combined and coordinated through AWS CloudFormation and AWS Elastic Beanstalk to automate the process of provisioning and managing AWS resources.
ii. Elastic Compute Cloud (EC2)

Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the
cloud. It allows users to launch virtual machines (VMs), called instances, on demand. Each instance is
a virtual server that can run a variety of operating systems and applications.

EC2 instances are categorized into different types based on their computing power, memory, and
storage capacity. Users can choose the type of instance that best fits their requirements, and can
also scale the number of instances up or down as needed.

EC2 instances are launched in a virtual network called a Virtual Private Cloud (VPC), which allows
users to isolate their instances from other users and control the IP ranges, subnets, and network
gateways.

EC2 instances are protected by security groups, which act as a firewall that controls inbound and
outbound traffic to the instances. Users can also use Elastic IP addresses, which are static public IP
addresses that can be associated with an instance.

Here's a diagram that illustrates the basic components of an EC2 deployment:

[EC2 Diagram]

Users interact with the EC2 service through the AWS Management Console, the AWS Command Line
Interface (CLI), or the EC2 API.

Users can launch instances in one or more availability zones, which are physically separate locations
within a region.

Each instance runs within a security group, which controls the inbound and outbound network traffic
to the instance.

Each instance can be associated with an Elastic IP address, which is a static public IP address that can be remapped to any instance.

Each instance can be associated with one or more Elastic Block Store (EBS) volumes, which provide
persistent block-level storage for the instance.

EC2 provides a highly scalable, flexible, and cost-effective solution for running instances in the cloud,
which enables users to easily scale their computing resources as needed.
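
A minimal sketch of launching an instance with the boto3 AWS SDK (the AMI ID, key pair, security group, and subnet are hypothetical placeholders):

    # Illustrative sketch: launching a single EC2 instance.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",       # machine image to boot from
        InstanceType="t3.micro",               # instance type (CPU/memory size)
        KeyName="my-key-pair",                 # SSH key pair for access
        SecurityGroupIds=["sg-0123456789abcdef0"],
        SubnetId="subnet-0123456789abcdef0",   # subnet inside a VPC
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])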

iii. Amazon Simple Storage Service (S3)

Amazon Simple Storage Service (S3) is a web service that provides object storage through a simple web service interface. It allows users to store and retrieve any amount of data, at any time, from anywhere on the web. S3 is designed to provide 99.999999999% (eleven nines) durability by storing data redundantly across multiple devices in multiple facilities.

S3 stores data as objects within buckets. Each object is made up of a file and optionally any
metadata that describes that file.
S3 provides a number of features such as:

 Data durability and availability through replication
 Data archiving through Amazon Glacier
 Data management through lifecycle policies
 Data security through access controls and encryption
 Data access through a web interface, the AWS SDKs, or the S3 API

S3 organizes data into "buckets", which are containers for storing objects. Buckets can be created in any of the available regions and can be configured to be private or public.
S3 also provides a feature called "Amazon S3 Select" which allows users to retrieve only the data
they need from an object, rather than the entire object, reducing the cost and improving the
performance.

S3 is a popular and powerful storage solution, that enables users to store and retrieve large amounts
of data at a low cost, with high durability and availability. It's widely used for storing files, backups,
and for serving content through Content Delivery Networks (CDNs).
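
A minimal sketch of storing and retrieving an object with the boto3 AWS SDK (the bucket name and key are hypothetical):

    # Illustrative sketch: upload an object to S3 and read it back.
    import boto3

    s3 = boto3.client("s3")

    # Upload a small object into a bucket.
    s3.put_object(Bucket="example-static-site", Key="index.html",
                  Body=b"<h1>Hello from S3</h1>")

    # Download it again and read the contents.
    obj = s3.get_object(Bucket="example-static-site", Key="index.html")
    print(obj["Body"].read().decode())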

iv. Amazon Virtual Private Cloud (VPC)

Amazon Virtual Private Cloud (VPC) is a service that allows users to create a virtual network in the
AWS cloud. It enables users to launch AWS resources into a virtual network that they've defined. A
VPC allows users to have complete control over their virtual networking environment, including
selection of IP address ranges, creation of subnets, and configuration of route tables and network
gateways.

A VPC consists of a range of IP addresses (a CIDR block) that you select, and routing components, such as an internet gateway, that connect it to the Internet.
You can launch Amazon Elastic Compute Cloud (EC2) instances and other AWS resources in your
VPC, and configure security and network access control lists.

VPCs can be connected to a user's own data center through a VPN or Direct Connect link, and this
enables the user to extend their existing IT infrastructure into the cloud.

A VPC can be broken down into subnets, and each subnet can be associated with a different security
group. This allows for granular control over inbound and outbound traffic.

VPC also provides a feature called "Security Groups", which acts as a virtual firewall for the
instances, controlling inbound and outbound traffic at the instance level, and "Network Access
Control Lists" (NACLs) which act as a firewall for subnets, controlling inbound and outbound traffic at
the subnet level.

Amazon VPC enables users to create a virtual network in the AWS cloud, which allows them to
launch AWS resources in a virtual network that they've defined. It provides a high level of control
and security over a user's network infrastructure, and allows users to connect their VPC to their own
data center, enabling them to extend their existing IT infrastructure into the cloud.
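
A minimal sketch, assuming the boto3 AWS SDK, of creating a VPC, a subnet, and an internet gateway (all CIDR ranges and names are hypothetical):

    # Illustrative sketch: a VPC with one subnet and internet access.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Carve out a subnet from the VPC's address range.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                      AvailabilityZone="us-east-1a")

    # Attach an internet gateway so public subnets can reach the Internet.
    igw = ec2.create_internet_gateway()
    ec2.attach_internet_gateway(
        InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
        VpcId=vpc_id,
    )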
v. Google App Engine

Google App Engine (GAE) is a platform for developing and hosting web applications in Google-
managed data centers. It is a fully managed platform that allows developers to build, test, and
deploy their applications without the need to manage the underlying infrastructure.

GAE provides a number of features such as:

 Automatic scaling: GAE automatically scales the number of instances of an application up or down based on the traffic it receives.
 Easy deployment: GAE provides an easy-to-use web interface and command-line tools for
deploying and managing applications.
 Support for multiple languages: GAE supports several programming languages such as Java,
Python, PHP, C#, Go, and Node.js.
 Built-in services: GAE provides a number of built-in services such as a NoSQL datastore, a
caching service, and a task queue service that can be used by applications.
 Integration with other Google services: GAE can be easily integrated with other Google
services such as Google Cloud Storage, Google Cloud SQL, and Google Cloud Datastore.

GAE is designed to make it easy for developers to build and deploy web applications quickly, without
having to worry about the underlying infrastructure. It provides automatic scaling and built-in
services, allowing developers to focus on writing code and delivering features to their users. It's a
fully managed platform, which makes it easy for developers to build and deploy their applications,
and it's also easy to integrate with other Google services.
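
A minimal sketch of the kind of Python web application that can run on App Engine's standard environment, assuming Flask as the web framework (an app.yaml file naming the runtime would accompany it; this is an illustration, not an official example):

    # Illustrative sketch: a tiny web app suitable for App Engine.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def home():
        # App Engine routes incoming HTTP requests to this handler and
        # scales the number of instances up or down with traffic.
        return "Hello from Google App Engine"

    if __name__ == "__main__":
        # Local development server; in production App Engine runs the app itself.
        app.run(host="127.0.0.1", port=8080)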

Architecture:

The architecture of Google App Engine (GAE) is designed to provide a fully managed platform for
developing and hosting web applications. It consists of several components that work together to
provide a scalable and reliable infrastructure for running applications.

Here is an overview of the main components of the GAE architecture:

 Front End: This component is responsible for handling incoming requests and routing them
to the appropriate instances of the application. It also performs load balancing and
automatic scaling to ensure that the application can handle the traffic it receives.
 Application Instances: These are the instances of the application that handle the requests.
They can run on either a standard or a flexible environment. The standard environment is
built on top of a container-based infrastructure, while the flexible environment allows the
use of custom runtime environments.
 Datastore: GAE provides a built-in NoSQL datastore, which is a highly-scalable and low-
latency data storage service. Applications can store and retrieve data from the datastore
using the Google Cloud Datastore API.
 Memcache: GAE also provides a built-in caching service called Memcache, which allows
applications to cache data in memory for faster access.
 Task Queue: GAE provides a built-in task queue service, which allows applications to
enqueue background tasks to be executed at a later time.
 Search: GAE provides a search service that allows applications to perform full-text and
structured searches on their data.
 Logging and Monitoring: GAE provides built-in logging and monitoring services, which allow
developers to view logs, trace requests, and monitor the performance of their applications.

GAE architecture is designed to handle the scaling and management of web applications
automatically, making it easy for developers to focus on writing code. It provides built-in services
such as NoSQL datastore, caching service, and task queue service, which allow developers to easily
store and retrieve data, cache data in memory, and perform background tasks, respectively. It's also
easily integrated with other Google services, allowing developers to leverage the power of Google's
infrastructure and services.

vi. Microsoft Azure

Microsoft Azure is a cloud computing platform and infrastructure created by Microsoft for building,
deploying, and managing applications and services through a global network of Microsoft-managed
data centers. It provides a wide range of services such as compute, storage, databases, networking,
analytics, machine learning, and internet of things (IoT) that can be used to build and deploy
applications of all types and sizes.

Azure provides a variety of tools and services to help users build, deploy, and manage their
applications, including:

 Virtual Machines (VMs) for running Windows and Linux operating systems
 Azure Kubernetes Service (AKS) for container orchestration
 Azure Functions for serverless computing
 Azure Storage for storing and managing data
 Azure SQL Database and Cosmos DB for managed relational and NoSQL databases
 Azure Virtual Network for creating isolated networks and managing DNS

Azure also provides a wide range of additional services such as security, analytics, machine learning,
and artificial intelligence to help users build, deploy, and manage their applications.

Azure also provides a number of management and development tools, including Azure Portal, Azure
PowerShell, Azure CLI, and Visual Studio, that allow users to easily manage and deploy their
resources and applications.

Azure is a popular choice among companies of all sizes. It offers a wide range of services, tools, and features that allow users to build, deploy, and manage their applications and services at scale, in a reliable and cost-effective way.
Azure core components:

The core components of Microsoft Azure are:

 Compute: Azure provides a variety of compute services such as Virtual Machines (VMs), Azure Kubernetes Service (AKS), and Azure Functions that allow users to run and manage their applications and services.

 Storage: Azure provides a variety of storage services such as Azure Storage, Azure Files, and
Azure Disks that allow users to store and manage their data.
 Databases: Azure provides a variety of database services such as Azure SQL Database, Azure
Cosmos DB, and Azure Database for MySQL that allow users to store and manage their data
in a managed and scalable way.
 Networking: Azure provides a variety of networking services such as Azure Virtual Network,
Azure ExpressRoute, and Azure DNS that allow users to create isolated networks and
manage DNS.
 Security: Azure provides a variety of security services such as Azure Active Directory, Azure
Key Vault, and Azure Security Center that allow users to secure their applications and
services.
 Analytics: Azure provides a variety of analytics services such as Azure Stream Analytics,
Azure Data Factory, and Azure Machine Learning that allow users to analyze and gain
insights from their data.
 Management: Azure provides a variety of management services such as Azure Resource
Manager, Azure Portal, and Azure PowerShell that allow users to manage their resources
and applications.

These core components work together to provide a comprehensive cloud computing platform that
allows users to build, deploy, and manage their applications and services in a reliable and cost-
effective way. They also provide a high level of scalability, security, and integration with other
services, which makes it a popular choice for companies of all sizes.
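
A minimal sketch of using one Azure service from Python, assuming the azure-storage-blob SDK (the connection string, container, and file names are hypothetical placeholders):

    # Illustrative sketch: uploading a file to Azure Blob Storage.
    from azure.storage.blob import BlobServiceClient

    conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
    service = BlobServiceClient.from_connection_string(conn_str)

    container = service.get_container_client("app-data")
    # container.create_container()   # uncomment on first use

    # Upload local file content as a blob.
    with open("report.csv", "rb") as data:
        container.upload_blob(name="reports/report.csv", data=data, overwrite=True)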

vii. SQL Azure

Azure SQL is a cloud-based relational database service provided by Microsoft as part of the Azure
platform. It is based on the SQL Server database engine and provides a fully managed, scalable, and
highly available solution for running SQL Server databases in the cloud.

SQL Azure provides features such as:

 Automatic backups and point-in-time restore
 Automatic failover and high availability
 Automatic software patching and upgrades
 Automatic data encryption at rest
 Automatic monitoring and alerting
 Automatic scaling of compute and storage resources
 Dynamic Data Masking (DDM)
 Threat detection and Advanced Data Security (ADS)

SQL Azure also provides a feature called "Elastic pools" which is a way to manage and scale multiple
databases together as a single pooled resource. It allows users to share resources, such as CPU and
storage, among databases in the pool, which can help reduce costs and improve performance.

SQL Azure is a fully-managed relational database service that allows users to easily create, manage
and scale SQL Server databases in the cloud. It provides automatic backups and high availability,
automatic software patching and upgrades, automatic data encryption and security features,
automatic scaling of compute and storage resources and more, making it a cost-effective and highly
available solution for running SQL databases in the cloud.
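
A minimal sketch of connecting to an Azure SQL database from Python, assuming the pyodbc driver is installed (server, database, and credentials are hypothetical placeholders):

    # Illustrative sketch: querying an Azure SQL database over ODBC.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=example-server.database.windows.net;"
        "DATABASE=appdb;"
        "UID=appadmin;"
        "PWD=change-me;"
        "Encrypt=yes;"                  # Azure SQL requires encrypted connections
    )

    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 name FROM sys.tables")   # list a few tables
    for row in cursor.fetchall():
        print(row[0])
    conn.close()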
