Cloud Platforms and Cloud Applications

AWS (Amazon Web Services) is a comprehensive cloud computing platform that offers a wide range of services and components enabling organizations to build and deploy various applications and infrastructures in the cloud. Some key AWS services include Amazon EC2 for virtual servers, Amazon S3 for scalable object storage, Amazon RDS for managed relational databases, and Amazon DynamoDB for fast NoSQL database services. AWS Lambda and Amazon VPC allow for serverless computing and custom virtual networks.


Sources:

https://www.pluralsight.com/resources/blog/cloud/what-is-amazon-web-services-aws

https://www.techtarget.com/searchaws/definition/Amazon-Web-Services

AWS (Amazon Web Services) is a comprehensive cloud computing platform provided by Amazon. It offers
a wide range of services and components that enable organizations to build and deploy various types of
applications and infrastructures. Here are some key components and services of AWS:

1. **Amazon EC2 (Elastic Compute Cloud)**: EC2 provides resizable virtual servers in the cloud, known
as EC2 instances. Users can choose from various instance types, each with different compute capacities,
memory, and storage options. EC2 allows users to quickly scale up or down their compute resources
based on demand.

2. **Amazon S3 (Simple Storage Service)**: S3 is an object storage service that offers scalable and
durable storage for data and files. It provides high availability, durability, and low latency access to stored
objects. S3 is commonly used for data backup, static website hosting, and storing and retrieving large
amounts of data.

3. **Amazon RDS (Relational Database Service)**: RDS is a managed database service that simplifies the
administration of relational databases. It supports popular database engines such as MySQL, PostgreSQL,
Oracle, and Microsoft SQL Server. RDS handles tasks like automated backups, software patching, and
database scaling, allowing users to focus on their applications.

4. **Amazon DynamoDB**: DynamoDB is a fully managed NoSQL database service that provides fast
and flexible document and key-value data storage. It is highly scalable, automatically replicates data
across multiple availability zones for durability, and provides low-latency access to data. DynamoDB is
suitable for high-performance, real-time applications.

5. **AWS Lambda**: Lambda is a serverless computing service that allows users to run code without
provisioning or managing servers. It executes code in response to events and automatically scales to
handle the load. Lambda is often used for building microservices, event-driven architectures, and
serverless applications.

6. **Amazon VPC (Virtual Private Cloud)**: VPC enables users to create isolated virtual networks within
AWS. It allows customization of network configurations, including IP address ranges, subnets, route
tables, and network gateways. VPC provides secure and private connectivity between resources and
supports VPN connections to on-premises networks.

7. **Amazon CloudFront**: CloudFront is a content delivery network (CDN) service that delivers static
and dynamic web content, including videos, applications, and APIs, with low latency and high transfer
speeds. It caches content at edge locations worldwide, reducing the load on origin servers and improving
user experience.

8. **Amazon SNS (Simple Notification Service)**: SNS is a messaging service that enables the publishing
and delivery of messages to various endpoints, including email, SMS, mobile push notifications, and
HTTP endpoints. It provides flexible and reliable communication between distributed components of
applications.
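
The fan-out behavior described above can be illustrated with a small in-process publish/subscribe sketch. This is a local stand-in for the pattern, not the SNS API; the topic and endpoints are hypothetical:

```python
# Minimal in-process sketch of the publish/subscribe fan-out pattern
# that SNS implements (illustration only, not the SNS API).

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []   # endpoints: plain callables here

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        # SNS pushes a copy of each message to every subscribed endpoint
        # (email, SMS, HTTP, SQS, ...); here the endpoints just record it.
        for endpoint in self.subscribers:
            endpoint(message)

received_email, received_sms = [], []
alerts = Topic("alerts")                 # hypothetical topic name
alerts.subscribe(received_email.append)  # stand-in for an email endpoint
alerts.subscribe(received_sms.append)    # stand-in for an SMS endpoint
alerts.publish("disk usage at 90%")      # both endpoints receive a copy
```

The key point is that the publisher does not know or care how many endpoints are subscribed; SNS handles delivery to each of them.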

9. **Amazon SQS (Simple Queue Service)**: SQS is a fully managed message queuing service that
decouples the components of distributed applications. It allows messages to be stored in a queue and
processed asynchronously. SQS ensures reliable and scalable message-based communication between
application components.
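
The decoupling SQS provides can be sketched locally with Python's standard `queue` module. This illustrates the pattern only; the real service is accessed through the SQS API:

```python
from queue import Queue

# Local stand-in for the decoupling pattern SQS provides: the producer
# enqueues messages and returns immediately; the consumer processes them
# asynchronously, so neither component depends on the other's availability.
orders = Queue()

# Producer side: enqueue work.
for order_id in (101, 102, 103):          # hypothetical order IDs
    orders.put({"order_id": order_id})

# Consumer side: drain the queue whenever capacity is available.
processed = []
while not orders.empty():
    message = orders.get()
    processed.append(message["order_id"])
    orders.task_done()                    # acknowledge, like deleting an SQS message
```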

10. **AWS CloudFormation**: CloudFormation is an infrastructure-as-code service that enables the
provisioning and management of AWS resources using declarative templates. It allows users to define
their infrastructure in a text file, automating the resource creation and configuration process.
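
To make the declarative idea concrete, a minimal template can be assembled as plain data and serialized to the JSON form CloudFormation accepts. The resource and bucket names below are hypothetical examples, not references to real resources:

```python
import json

# CloudFormation's model: the desired infrastructure is described as
# data, and the service creates resources to match. A minimal template
# with one S3 bucket, built as a dict and serialized to JSON.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Provision a single S3 bucket declaratively.",
    "Resources": {
        "ExampleBucket": {                       # hypothetical logical ID
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-declarative-bucket"},
        }
    },
}

template_body = json.dumps(template, indent=2)   # the text you would upload
```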

These are just a few examples of the wide range of services and components offered by AWS. AWS
provides many more services, including AI and machine learning, analytics, serverless computing,
storage, networking, security, and management tools, allowing organizations to build and deploy
complex and scalable cloud-based solutions.
Amazon SimpleDB is a NoSQL data store that offers high availability and flexibility while reducing the
workload of database administration. By using web service requests, developers can easily store and query data
items without worrying about the technicalities of managing a database.

Unlike relational databases, Amazon SimpleDB is not restricted by strict requirements like a fixed schema,
pre-defined indexes, or complex join operations, allowing for greater adaptability and reduced administrative
burden. The system manages multiple replicas of our data across different locations, ensuring high availability
and data durability. The charges incurred are based solely on the resources consumed in storing data and
serving requests.

With Amazon SimpleDB, we can change our data model on the fly, and data is automatically indexed. This
allows our team to focus on application development instead of worrying about tasks like infrastructure
provisioning, software maintenance, schema and index management, or performance tuning.

Features
Amazon SimpleDB offers a range of features that make it a powerful and flexible data storage solution:

 Efficiency: SimpleDB provides us with simple and fast data retrieval and storage.
 Flexibility: With SimpleDB, we can easily add new attributes without worrying about predefined data
formats.
 Reliability: SimpleDB creates multiple replicas of each data item and distributes them across different
locations. If one replica fails, the system can fail over to another replica.
 Budget-friendly: SimpleDB's economic model allows for payment only for the specific resources
utilized, including machine utilization, structured data storage, and data transfer.
 Scalability: We can create new domains to accommodate increasing data volumes as our business
grows.
 Smooth integration: We can smoothly integrate SimpleDB with other Amazon Web Services such as
EC2 and S3.

Benefits
 Eliminates operational complexity: We don't need to worry about provisioning servers or managing
their infrastructure, as AWS handles everything for us. This saves our time and energy so that we can
work on other essential tasks.
 No schema required for data storage: We can store data in SimpleDB without defining a schema
beforehand. This makes adding new data to our database easy without modifying its structure.
 Reduces administrative burden: Since SimpleDB is a managed service, we don't need to perform
maintenance tasks like backup and recovery or software upgrades. With AWS, our team can reduce
their administrative workload as the platform takes care of these tasks on our behalf.
 Simple API for accessing and storing data: The SimpleDB API is easy to use, allowing us to
quickly access and store data without needing to learn complex query languages or database
management systems.
 Data is automatically indexed: When we store data in SimpleDB, the service indexes it for faster
querying and retrieval. This saves our time and effort, as we don't need to configure indexes manually
for our database.

SimpleDB Data Model
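
The SimpleDB data model is a hierarchy of domains containing items, where each item holds attribute name/value pairs; attributes may be multi-valued, and items in the same domain need not share a schema. It can be sketched with plain dictionaries (an illustration, not the SimpleDB API; the domain contents are hypothetical):

```python
# SimpleDB's data model: a domain contains items, and each item holds
# attribute name/value pairs. Attributes can be multi-valued, and items
# in the same domain need not share a schema.
products = {                       # domain
    "item_001": {                  # item name
        "category": ["clothes"],   # single-valued attribute
        "color": ["red", "blue"],  # multi-valued attribute
    },
    "item_002": {                  # different attributes: no fixed schema
        "category": ["books"],
        "author": ["J. Doe"],
    },
}

# "Query" by scanning attribute values, the way a Select over a domain would.
red_items = [name for name, attrs in products.items()
             if "red" in attrs.get("color", [])]
```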

Drawbacks
 Weaker forms of consistency: SimpleDB's eventual consistency model means that updates to data
may take time to reflect across all nodes in the system, leading to potential data inconsistency. This can
be a drawback for applications requiring strong consistency guarantees.
 Storage limitations: SimpleDB limits the amount of data we can store in a single domain and limits
the size of individual attributes and the number of attributes per item. This can be a challenge for
applications with large or complex data requirements, requiring careful planning and management.

Amazon SimpleDB is a distributed database service developed by AWS in
the Erlang programming language. Released on December 13, 2007, it is an
AWS component that works in conjunction with Amazon Simple Storage
Service and Amazon Elastic Compute Cloud. Amazon SimpleDB is a NoSQL database.
Amazon SimpleDB, also known as a key value data store, is a highly available and
flexible non-relational database that allows developers to request and store data, with
minimal database management and administrative responsibility.
As its name implies, Amazon SimpleDB is better suited to less complex databases
where users quickly look up and access data in NoSQL format. As with other AWS
services, Amazon SimpleDB charges customers based on the resources used to store
their SimpleDB data and the requests made to access that data.

What are the features of Amazon SimpleDB?


Amazon SimpleDB features include:

 Flexibility
It allows users to easily add new attributes without predefined data formats.

 Efficiency
Amazon SimpleDB provides quick and easy data storage and retrieval.

 Scalability
Facilitates new domain creation to accommodate increases in data volume

 Easy integration
It is designed for easy integration with other Amazon Web services, such as Amazon
EC2 and Amazon S3

 Cost effective
Users only pay for actual consumed resources. Amazon usage types include
machine utilization, structured data storage, and data transfer.

Amazon SimpleDB benefits


 Operational complexity is eliminated.
 No schema is required.
 A simple application programming interface (API) is used for access and
storage.
 Data is automatically indexed.
 Administrative burden is reduced.

Amazon SimpleDB has several drawbacks, including weaker forms of consistency and
storage limitations.
Why should I use SimpleDB?

So far we have had an overview of Amazon SimpleDB's services and what SimpleDB
can do. It is a great piece of technology that allows you to create scalable applications
with ease that are capable of using massive amounts of data, and you can put this
power and simplicity to use in your own applications.

 Architect for the cloud
 Build flexibility into your applications
 Create high-performance web applications
 Scale your applications on demand
 Make your applications simpler to architect
 Take advantage of lower costs

Amazon SimpleDB Alternatives & Comparisons

MySQL
The MySQL software package delivers a very fast, multi-threaded, multi-user, and
robust SQL (Structured Query Language) database server. MySQL Server is intended
for mission-critical, heavy-load production systems as well as for embedding into
mass-deployed software.

MongoDB
MongoDB stores data in JSON-like documents that can vary in structure,
providing a dynamic, flexible schema. MongoDB was also designed for high
availability and scalability, with built-in replication and auto-sharding.

Amazon DynamoDB
With it, you can offload the administrative burden of operating and scaling a
highly available distributed database cluster, while paying a low price for
only what you use.

Cloud Firestore
Cloud Firestore is a NoSQL document database that lets you easily store,
sync, and query data for your mobile and web apps at global scale.

Azure Cosmos DB
Azure Cosmos DB (formerly DocumentDB) is a fully managed NoSQL database service
built for fast and predictable performance, high availability, elastic scaling,
global distribution, and ease of development.

How is SimpleDB priced?


Amazon provides a free tier for SimpleDB; charges apply only to usage above the free-tier
limit. The charges depend on the utilization of each SimpleDB request, along with the
amount of machine capacity used to complete the request, normalized to the hourly
capacity of a 1.8 GHz Xeon processor. The first 25 machine hours, 1 GB of data transfer,
and 1 GB of storage consumed every month incur no charges. This is a significant amount
of usage provided free each month by AWS.

Note that Amazon SimpleDB is no longer among the services actively offered by Amazon
Web Services (AWS). SimpleDB was a highly available and scalable NoSQL data store provided by AWS,
but it was deprecated and stopped accepting new customers in June 2019.

AWS currently offers other managed database services that you may find useful:

1. **Amazon DynamoDB**: DynamoDB is a fast and flexible NoSQL database service that provides
low-latency access to data at any scale. It offers automatic scaling, built-in security, and seamless
integration with other AWS services.

2. **Amazon Aurora**: Aurora is a fully managed relational database service compatible with MySQL
and PostgreSQL. It offers high performance, scalability, and durability while minimizing the
administrative overhead.

3. **Amazon RDS**: Amazon RDS (Relational Database Service) is a managed service that supports
various relational database engines, including MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB. It
simplifies database administration tasks such as backups, software patching, and scaling.

4. **Amazon DocumentDB**: DocumentDB is a fully managed NoSQL database service that is
compatible with the MongoDB API. It provides a scalable, highly available, and fully managed document
database for applications requiring JSON-like document storage.

These are just a few examples of the managed database services available on AWS. The platform offers a
wide range of database options to cater to different application requirements, whether they are
relational, NoSQL, or specialized use cases. I recommend consulting the official AWS documentation for
the most up-to-date information on the available database services.
Introduction to AWS Storage Services
Amazon Simple Storage Service (Amazon S3) is the most widely used object storage service,
adopted by companies from startups to enterprises because of its scalability, data availability,
security, and performance. Data stored in S3 is protected, secure, and always available,
regardless of volume, for a range of use cases such as websites, data-heavy mobile
applications, application backups, IoT devices, and big data analytics. All of this comes at
a very minimal cost, with 99.999999999% (eleven 9s) durability. S3 also serves as storage
for many other AWS services, such as CodeCommit and streaming services.

Types of AWS Storage Services


AWS offers seven types of storage services, with choices for backup, archiving, and
recovery of lost data. Let's see what those services are and their features:

1. Simple Storage Service (S3)


Amazon S3 is an object storage service that stores data of any type and size. It can store
data for any business need, such as web applications, mobile applications, backup, archive,
and analytics. It also provides easy access-control management for all your specific
requirements and is almost 100% durable; by almost, I mean 99.999999999% (eleven 9s).
It can also be used to store all kinds of file formats, as you would with Dropbox. S3
also provides a simple web-based file explorer to upload files, create folders, or
delete them.

2. Elastic Block Storage (EBS)


EBS provides block storage, similar to hard drives, to store any kind of data persistently.
An EBS volume can be attached to any EC2 instance and used as block storage, which even
allows you to install an operating system. EBS volumes are replicated within their
availability zone to prevent loss of data due to single-component failures. They provide
low-latency performance, and you can scale your resources up or down as required. EBS is
available in both SSD and HDD formats, depending on your requirements for speed and volume.

3. Elastic File System (EFS)


EFS is a managed network file system that is easy to set up right from the Amazon console
or CLI. When multiple EC2 instances need to access the same file system, EFS provides just
that. Unlike EBS, EFS is built using the NFSv4.x protocol on SSDs and has a much higher
throughput. This also means that EFS is more expensive than EBS, as it can be used for very
large analytical workloads. EFS scales up or down based on the size of the files you store
and is also accessible from multiple availability zones. The distributed nature of the file
system can tempt you to use it as a CDN, but the costs outweigh the benefits; it is better
to use a CDN and use EFS in conjunction with it for files that can't be stored on a CDN.



4. Amazon FSx for Lustre


Lustre is a file system used for compute-intensive workloads. It mainly comes into the
picture when you run machine learning operations on large data sets or need to run
media-encoding workloads. Running Lustre separately requires a lot of expertise to set up
and configure for the right workloads. Amazon FSx avoids this: a simple interface on the
console helps you quickly get started and begin working on your data. The ability to
connect it seamlessly to S3 and the option of running it in a VPC provide a low-cost yet
performant way to run compute-intensive workloads on Lustre without the administrative
overhead of operating it.

5. Amazon S3 Glacier
Glacier is used mainly for archival and long-term data storage. Because data is retrieved
infrequently, it is offered at an extremely cheap rate. It also comes with compliance-grade
security features to encrypt your data. Glacier allows you to run queries and analytics on
data directly, and you are charged only for the few minutes or hours when you read the
data. In terms of durability, it offers 99.999999999% (eleven 9s), which is among the
highest in the industry. Glacier aims to replace legacy on-premises tape-based backup with
a much more cost-effective and durable solution.

6. Amazon FSx for Windows File Server


Whenever you need to run Windows-specific software that must access the proprietary
Windows file system in the cloud, AWS provides Amazon FSx for Windows File Server to
achieve that easily. Windows-based .NET applications, ERPs, and CRMs require shared file
storage to move workloads between them. Amazon FSx also supports all native Windows
technologies, such as NTFS, the SMB protocol, Active Directory (AD), and the Distributed
File System (DFS). As with Lustre, Amazon FSx eliminates the administrative overhead of
setting up and maintaining a Windows file server and provides a simple, cost-effective way
to run one on AWS.

7. AWS Storage Gateway


Storage Gateway is a simple way to let your on-premises applications store, access, or
archive data in the AWS cloud. It works by running the storage gateway on a hypervisor on
one of the machines in your data center, which then connects to S3, Glacier, or EBS on
AWS. It provides a highly optimized, network-resilient, and low-cost way to move data from
on-premises to the cloud. Local caching is also available on-premises to allow fast access
to the more active data. Storage Gateway also supports legacy backup stores such as tapes,
as virtual tapes backed up directly into AWS Glacier.

Why AWS Database?
Why not? Let’s take a step back and understand how databases were deployed and
used when cloud databases were not available. Earlier, companies used to deploy
databases on in-house servers. Obviously, they would need to hire people for the
day-to-day maintenance and performance troubleshooting of these in-house
databases, right?

Now, imagine yourself to be a startup enthusiast. You start an e-commerce company; then,
all your business revenue comes from your app, right? Therefore, it is extremely essential
for your application to be up and running all the time with the maximum performance
possible.

How would you ensure this?

You will have to:

 Make sure that all your servers are equipped with the latest hardware possible
 Continuously monitor your hardware for any faults or performance issues
 Continuously improve your application code, and think of new features to further
expand your business
Seems like too much work? This is exactly why AWS Database Services came into
the picture.

With AWS Database Services, you just have to focus on your application code and
business expansion. The rest will be managed and taken care of by AWS, i.e., all the
hardware upgrades, security patches, and even the software upgrades are taken care of
by AWS, and that too free of cost!

When I say free, I mean that you just have to pay AWS for the time you used
their database services, nothing extra for hardware upgrades, etc.

That sounds amazing, doesn’t it? Imagine the amount of savings this model will
bring to you: You don’t have to maintain a Database Maintenance team, no need to
buy expensive hardware for your servers, and most importantly, you can just focus
on your business problems and leave everything for AWS to handle.

And this is not all; there are huge additional benefits that you get with AWS. Let’s
discuss them.


Advantages of Amazon AWS Databases


 Highly Scalable: You can scale your database as your application grows without any
downtime!
 Fully Managed: Everything, from maintenance to hardware upgrades, is managed
by AWS.
 Enterprise Class: You get the same world-class infrastructure used by Amazon's
giant e-commerce platform.
 Distributed: Now that your application and database exist on separate machines,
your application becomes highly fault-tolerant.
 Workforce Reduction: Since everything is managed by AWS, you don’t need a
Database Maintenance team in your organization.

Summarizing, AWS Database Services are highly scalable, fully managed services
provided by AWS for the Internet-scale applications on a pay-as-you-go model. They
offer various databases like relational databases, non-relational databases, etc.
Moving further, let’s discuss about the types of AWS Database Services.

Types of AWS Database Services
AWS provides a wide range of fully managed, purpose-built, and both relational and
non-relational database services specially designed to handle any kind of
application requirements. From fully managed database services, and a data
warehouse for analytics, to an in-memory data store for caching, AWS has got it all.

You will find an AWS Database Service for just about any kind of database
requirement. One can import an existing MySQL, Oracle, or Microsoft SQL database
into Amazon’s databases or even build their own relational or NoSQL databases
from scratch.

The following are different types of database services provided by AWS:

1. Relational Database: In relational databases, the data is usually stored in a tabular
format. Relational databases use structured query language (SQL) to run queries that
perform operations such as insertion, updating, deletion, and more. AWS provides the
following relational database services.

 Amazon RDS
 Amazon Redshift
 Amazon Aurora

2. Key–Value Database: The key–value database is a type of NoSQL database that stores
data as values attached to keys, meaning the data is composed of two elements: keys
and values.

 Amazon DynamoDB

3. In-memory Database: This type of database relies primarily on main memory for data
storage; an in-memory database keeps the whole dataset in RAM. Each time you access
the data, you only access main memory and not any disk, and because main memory is
faster than any disk, in-memory databases are very popular.

 Amazon ElastiCache

Moving forward, let’s get acquainted with all these database services starting with
Relational Database Services.


What Is AWS RDS?

One of the most commonly used database services provided by AWS, falling under the
category of relational databases, is Amazon RDS. Amazon RDS is a service that supports
various relational database engines, including open-source products and the database
engines provided by AWS itself. RDS is used to set up, operate, and scale a relational
database in the cloud. It automates administrative tasks such as hardware provisioning,
database setup, backups, and more.

Following are some of the benefits of using RDS:

 It provides high performance and is fast to scale.
 It provides high availability as a result of two distinct replication features, namely,
Multi-AZ deployments and Read Replicas.
 It automatically takes care of backups and restores, and patches the database software.
 It also takes care of maintenance and upgrades, automatically.

What Is Amazon AWS Redshift?


Amazon Redshift is a fast and fully managed data warehouse service in the cloud.
Amazon affirms that the Redshift data warehouse delivers ten times faster performance
than other data warehouses by utilizing machine learning techniques. The Redshift data
warehouse can be scaled up to a petabyte or more as per requirements.

Following are some of the benefits of using Amazon Redshift:

 Parallel queries across multiple nodes can be performed
 Automatically backed up to Amazon S3
 Cost-effective over other data warehouse solutions
 Built-in security as Amazon Redshift provides end-to-end encryption and enables
users to configure firewall rules

What Is Amazon AWS Aurora Database?

Amazon Aurora is fully managed by Amazon RDS. It is a relational database engine
built for the cloud, and it is fully compatible with MySQL. Since Amazon Aurora is
managed by RDS, all administrative tasks such as database setup, patching, backups,
and more are automated.
Following are some of the benefits of using Amazon Aurora:

 It provides high performance and scalability.
 It’s highly secure.
 It offers high availability and durability.
 It’s fully managed.

What Is Amazon AWS DynamoDB?


Amazon DynamoDB is a fast, fully managed, and flexible NoSQL database. It also
supports document-based data. AWS affirms that DynamoDB delivers single-digit
millisecond performance at any scale. DynamoDB comes with built-in Security,
Backup, and Restore features.

Since DynamoDB is a NoSQL database, it doesn't require any schema. In DynamoDB,
there are basically three core components:

1. Tables: The collection of data is called a table in DynamoDB. It's not a structured
table with a fixed number of rows and columns.

2. Items: Tables in DynamoDB contain one or more items. Items are made up of a
group of uniquely identifiable attributes.

3. Attributes: Attributes are the data elements or values that reside in each item.
They are equivalent to data values in a relational database that reside in a
particular cell of a table.

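
The table/item/attribute hierarchy above can be sketched with plain dictionaries. This illustrates the data model only, not the DynamoDB API, and the table contents are hypothetical:

```python
# The table / item / attribute hierarchy, sketched with dicts.
# Each item is identified by its key and carries its own attributes;
# unlike a relational table, items need not share the same attributes.
music_table = {                                   # table
    ("Beatles", "Let It Be"): {                   # item key (partition, sort)
        "year": 1970, "genre": "rock",            # attributes
    },
    ("Miles Davis", "Kind of Blue"): {
        "year": 1959,                             # no "genre" attribute:
    },                                            # no fixed schema
}

item = music_table[("Beatles", "Let It Be")]      # look up one item by key
```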
Following are some of the benefits of using Amazon DynamoDB:

 Easy to set up and manage
 Data is automatically replicated across multiple Availability Zones
 Supports both key–value and document-based data models

What Is AWS ElastiCache?


Amazon ElastiCache is a fully managed caching service that offers high-performance,
cost-effective, and scalable caching solutions. Amazon ElastiCache provides two caching
engines, namely, Memcached and Redis.

Why Do We Need AWS ElastiCache?


There are various benefits of using Amazon ElastiCache. Besides being easy to set
up and deploy, ElastiCache also improves applications’ performance as it reduces
disk reads. Following are some of the main reasons why ElastiCache is immensely
useful:

 Response time: ElastiCache reduces the response time as it retrieves data from a
fast in-memory system. It reduces the dependence on disk-based databases which
are usually slower.
 Scalability: Amazon ElastiCache is designed to be able to modify itself,
automatically. It can scale out or scale up depending on the fluctuating application
requirements.
 Complete management: Amazon ElastiCache is fully managed, so the common
administrative tasks such as hardware provisioning, failure recovery, backups, and
more are all automated.
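
The read-through caching pattern these benefits rest on can be sketched in a few lines. Here `slow_database` is a hypothetical stand-in for a disk-based backend, and a plain dict stands in for Memcached/Redis:

```python
# Sketch of the caching pattern ElastiCache enables: check a fast
# in-memory store before falling back to the slower database.
calls = {"db": 0}

def slow_database(key):
    calls["db"] += 1          # count expensive disk-backed reads
    return f"value-for-{key}"

cache = {}                    # stand-in for Memcached/Redis

def get(key):
    if key not in cache:      # cache miss: read through to the database
        cache[key] = slow_database(key)
    return cache[key]         # cache hit: served from memory

first = get("user:42")        # miss: hits the database once
second = get("user:42")       # hit: served entirely from the cache
```

Only the first read touches the database; every repeat read is served from memory, which is exactly how ElastiCache reduces response time and disk reads.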

This is just a brief introduction to all the main database services provided by
Amazon Web Services. All these AWS Database Services will be discussed and
explained individually, and in-depth, in their respective blogs. If you wish to learn
more, do check out AWS Certification and get an in-depth understanding of
Amazon Web Services.

What is DynamoDB?
o Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that
require consistent single-digit millisecond latency at any scale.
o It is a fully managed database that supports both document and key-value data models.
o Its flexible data model and performance make it a great fit for mobile, web, gaming,
ad-tech, IoT, and many other applications.
o It stores data on SSD storage.
o It is spread across three geographically distinct data centres.

Because of its availability in three geographically distinct data centres, it supports
two different types of consistency models:

o Eventual Consistent Reads
o Strongly Consistent Reads


Eventual Consistent Reads


It maintains consistency across all the copies of data, which is usually reached within a
second. If you read data from a DynamoDB table, the response might not reflect the most
recently completed write operation; if you repeat the read after a short period, the
response will reflect the latest update. This is the best model for read performance.

Strongly Consistent Reads

A strongly consistent read returns a result that reflects all writes that received a
successful response prior to the read.

Note: If your application needs the latest data from the DynamoDB table immediately,
choose the Strongly Consistent Read model. If it can tolerate a delay of about a second,
choose the Eventually Consistent model.
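With the AWS SDK for Python (boto3), this choice is a single `ConsistentRead` flag on read calls. The helper below only builds the request parameters so the trade-off is visible without an AWS account; the table and key names are illustrative, not from the source.

```python
def build_get_item_request(table, key, strongly_consistent=False):
    """Return kwargs for a DynamoDB GetItem call.

    ConsistentRead=False (the default) requests an eventually consistent
    read; True requests a strongly consistent read, which consumes twice
    as many read capacity units.
    """
    return {
        "TableName": table,
        "Key": key,
        "ConsistentRead": strongly_consistent,
    }

# Illustrative usage with boto3 (hypothetical table/key names, not run here):
# import boto3
# client = boto3.client("dynamodb")
# client.get_item(**build_get_item_request(
#     "Users", {"UserId": {"S": "42"}}, strongly_consistent=True))

req = build_get_item_request("Users", {"UserId": {"S": "42"}}, True)
```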

AWS DynamoDB Throughput Capacity


DynamoDB throughput capacity depends on the read/write capacity mode selected for
performing read/write operations on tables.

There are two types of read/write capacity modes:

o Provisioned mode
o On-demand mode

Provisioned mode
o It defines the maximum amount of capacity that an application can use from a specified
table.
o In a provisioned mode, you need to specify the number of reads and writes per second
required by the application.
o If the limit of Provisioned mode throughput capacity is exceeded, then this leads to
request throttling.
o A provisioned mode is good for applications that have predictable and consistent traffic.

The Provisioned mode consists of two capacity units:

o Read Capacity unit


o Write Capacity unit

Read Capacity Unit

o The total number of read capacity units depends on the item size, and read consistency
model.
o Read Capacity unit represents two types of consistency models:
o Strongly Consistent model: Read Capacity Unit represents one strong
consistent read per second for an item up to 4KB in size.
o Eventually Consistent model: Read Capacity Unit represents two eventually
consistent reads per second for an item up to 4KB in size.
o DynamoDB will require additional read capacity units when an item size is greater than
4KB. For example, if the size of an item is 8KB, 2 read capacity units are required for
strongly consistent read while 1 read capacity unit is required for eventually consistent
read.

Write Capacity Unit


o The total number of write capacity units depends on the item size.
o Only 1 write capacity unit is required for an item up to 1KB in size.
o DynamoDB will require additional write capacity units when size is greater than 1KB. For
example, if an item size is 2KB, two write capacity units are required to perform 1 write
per second.
o For example, if you create a table with 20 write capacity units, then you can perform 20
writes per second for an item up to 1KB in size.
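The capacity rules above reduce to simple ceiling arithmetic. The helpers below are a sketch of that math (not an AWS API call): reads are billed in 4KB units, halved for eventually consistent reads, and writes in 1KB units.

```python
import math

def read_capacity_units(item_size_kb, strongly_consistent=True):
    """RCUs needed for one read per second of an item of this size."""
    units = math.ceil(item_size_kb / 4)    # 1 RCU covers a 4KB strong read
    if not strongly_consistent:
        units = math.ceil(units / 2)       # eventual reads cost half as much
    return units

def write_capacity_units(item_size_kb):
    """WCUs needed for one write per second of an item of this size."""
    return math.ceil(item_size_kb / 1)     # 1 WCU covers 1KB

print(read_capacity_units(8))         # 2 (strongly consistent, 8KB item)
print(read_capacity_units(8, False))  # 1 (eventually consistent, 8KB item)
print(write_capacity_units(2))        # 2 (two 1KB units for a 2KB item)
```

These values match the 8KB read and 2KB write examples in the text.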

On-Demand mode
o DynamoDB on-demand mode is a flexible billing option capable of serving thousands of
requests per second without any capacity planning.
o On-Demand mode offers pay-per-request pricing for read and write requests, so you
pay only for what you use, making it easy to balance costs and performance.
o In On-Demand mode, DynamoDB accommodates the customer's workload instantly as
the traffic level increases or decreases.
o On-Demand mode supports all the DynamoDB features, such as encryption and point-in-
time recovery, except auto-scaling.
o If you do not perform any reads or writes, you pay for data storage only.
o On-Demand mode is useful for applications whose traffic is unpredictable and
difficult to forecast.
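The mode is chosen at table creation via the `BillingMode` parameter of CreateTable (and can be switched later). The sketch below only assembles the request for each mode; table names and the single-attribute key schema are illustrative. Note that `ProvisionedThroughput` is supplied only in provisioned mode.

```python
def build_create_table_request(table, on_demand=False, rcu=5, wcu=5):
    """Return kwargs for a DynamoDB CreateTable call (illustrative schema)."""
    req = {
        "TableName": table,
        "AttributeDefinitions": [{"AttributeName": "Id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "Id", "KeyType": "HASH"}],
    }
    if on_demand:
        req["BillingMode"] = "PAY_PER_REQUEST"   # On-Demand mode
    else:
        req["BillingMode"] = "PROVISIONED"       # Provisioned mode
        req["ProvisionedThroughput"] = {
            "ReadCapacityUnits": rcu,
            "WriteCapacityUnits": wcu,
        }
    return req

provisioned = build_create_table_request("Orders", rcu=20, wcu=20)
on_demand = build_create_table_request("Clicks", on_demand=True)
```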
What is DynamoDB?
Amazon DynamoDB is a cloud-native NoSQL primarily key-value database.
Let’s define each of those terms.

 DynamoDB is cloud-native in that it does not run on-premises or even in
a hybrid cloud; it only runs on Amazon Web Services (AWS). This enables
it to scale as needed without requiring a customer’s capital investment in
hardware. It also has attributes common to other cloud-native
applications, such as elastic infrastructure deployment (meaning that
AWS will provision more servers in the background as you request
additional capacity).
 DynamoDB is NoSQL in that it does not support ANSI Structured Query
Language (SQL). Instead, it uses a proprietary API based on JavaScript
Object Notation (JSON). This API is generally not called directly by user
developers, but invoked through AWS Software Developer Kits (SDKs) for
DynamoDB written in various programming languages (C++, Go, Java,
JavaScript, Microsoft .NET, Node.js, PHP, Python and Ruby).
 DynamoDB is primarily a key-value store in the sense that its data model
consists of key-value pairs in a schemaless, very large, non-relational
table of rows (records). It does not support relational database
management systems (RDBMS) methods to join tables through foreign
keys. It can also support a document store data model using JavaScript
Object Notation (JSON).
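The schemaless key-value model described above means two items in the same table can carry entirely different attributes, including nested JSON-style documents. A minimal illustration (the item contents are invented, and a dict stands in for the table):

```python
# Two items in the same hypothetical table: no shared schema beyond the key.
item_a = {"UserId": "42", "Name": "Ada", "Logins": 7}
item_b = {
    "UserId": "43",
    "Prefs": {"theme": "dark", "alerts": ["email", "sms"]},  # nested document
}

# Key-value lookup: the partition key maps directly to the item.
table = {item["UserId"]: item for item in (item_a, item_b)}
print(table["43"]["Prefs"]["theme"])  # dark
```

There is no foreign-key join here; any relationship between items must be modeled into the keys and attributes themselves.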

DynamoDB’s NoSQL design is oriented towards simplicity and scalability,
which appeal to developers and devops teams respectively. It can be used for a
wide variety of semistructured data-driven applications prevalent in modern and
emerging use cases beyond traditional databases, from the Internet of Things
(IoT) to social apps or massive multiplayer games. With its broad programming
language support, it is easy for developers to get started and to create very
sophisticated applications using DynamoDB.


What is a DynamoDB Database?


Outside of Amazon employees, the world doesn’t know much about the exact
nature of this database. There is a development version known as DynamoDB
Local, written in Java and intended to run on developer laptops, but the
cloud-native database architecture is proprietary and closed-source.

While we cannot describe exactly what DynamoDB is, we can describe how
you interact with it. When you set up DynamoDB on AWS, you do not
provision specific servers or allocate set amounts of disk. Instead, you provision
throughput — you define the database based on provisioned capacity — how
many transactions and how many kilobytes of traffic you wish to support per
second. Users specify a service level of read capacity units (RCUs) and write
capacity units (WCUs).

As stated above, users generally do not directly make DynamoDB API calls.
Instead, they will integrate an AWS SDK into their application, which will
handle the back-end communications with the server.

DynamoDB data modeling needs to be denormalized. For developers used to
working with both SQL and NoSQL databases, the process of rethinking their
data model is nontrivial, but also not insurmountable.

History of the Amazon DynamoDB Database


DynamoDB was inspired by the seminal Dynamo white paper (2007) written by
a team of Amazon developers. This white paper cited and contrasted itself with
Google’s Bigtable (2006) paper published the year before.

The original Dynamo database was used internally at Amazon as a completely
in-house, proprietary solution. DynamoDB, by contrast, is a customer-oriented
Database as a Service (DBaaS) that runs on Amazon Web Services (AWS)
Elastic Compute Cloud (EC2) instances. DynamoDB was released in 2012, five
years after the original white paper that inspired it.

While DynamoDB was inspired by the original paper, it was not beholden to it.
Many things had changed in the world of Big Data over the intervening years
since the paper was published. It was designed to build on top of a “core set of
strong distributed systems principles resulting in an ultra-scalable and highly
reliable database system.”

https://www.scylladb.com/learn/dynamodb/introduction-to-dynamodb/

https://www.amazonaws.cn/en/dynamodb/
What is Microsoft Azure: How Does It Work and Services
Lesson 1 of 5 | By Shyamli Jha
Last updated on Jun 1, 2023

Table of Contents
What is Cloud Computing?

Why is Cloud Computing Important?

What is Microsoft Azure?

What are the Various Azure Services and How does Azure Work?

Why Use Azure?


Today, cloud computing applications and platforms are rapidly growing across all
industries, serving as the IT infrastructure that drives new digital businesses. These
platforms and applications have revolutionized the ways in which businesses function,
and have made processes easier. In fact, more than 77 percent of businesses today
have at least some portion of their computing infrastructure in the cloud.

While there are many cloud computing platforms available, two platforms dominate the
cloud computing industry. Amazon Web Services (AWS) and Microsoft Azure are the
two giants in the world of cloud computing.
While AWS is the largest cloud computing platform, Microsoft Azure is the fastest-
growing and second-largest. This article focuses on Microsoft Azure and what is Azure
—its services and uses.

Before diving into what is Azure, you should first know what cloud computing is.


What is Cloud Computing?

Cloud computing is a technology that provides access to various computing resources
over the internet. All you need to do is use your computer or mobile device to connect to
your cloud service provider through the internet. Once connected, you get access to
computing resources, which may include serverless computing, virtual machines,
storage, and various other things.

Basically, cloud service providers have massive data centers that contain hundreds of
servers, storage systems and components that are crucial for many kinds of
organizations. These data centers are in secure locations and store a large amount of
data. The users connect to these data centers to collect data or use it when required.
Users can take advantage of various services; for example, if you want a notification
every time someone sends you a text or an email, cloud services can help you. The best
part about cloud platforms is that you pay only for the services you use, and there are
no charges upfront.

Cloud computing can be used for various purposes: machine learning, data analysis,
storage and backup, streaming media content and so much more. Here’s an interesting
fact about the cloud: all the shows and movies that you see on Netflix are actually
stored in the cloud. Also, the cloud can be beneficial for creating and testing
applications, automating software delivery, and hosting blogs.

Why is Cloud Computing Important?

Let’s assume that you have an idea for a revolutionary application that can provide great
user experience and can become highly profitable. For the application to become
successful, you will need to release it on the internet for people to find it, use it, and
spread the word about its advantages. However, releasing an application on the internet
is not as easy as it seems.

To do so, you will need various components, like servers, storage devices, developers,
dedicated networks, and application security to ensure that your solution works the way
it is intended to. These are a lot of components, which can be problematic.

Buying each of these components individually is very expensive and risky. You would
need a huge amount of capital to ensure that your application works properly. And if the
application doesn’t become popular, you would lose your investment. On the flip side, if
the application becomes immensely popular, you will have to buy more servers and
storage to cater to more users, which can again increase your costs. This is where
cloud computing can come to the rescue. It has many benefits, including offering safe
storage and scalability all at once.


What is Microsoft Azure?


Azure is a cloud computing platform and an online portal that allows you to access and
manage cloud services and resources provided by Microsoft. These services and
resources include storing your data and transforming it, depending on your
requirements. To get access to these resources and services, all you need to have is an
active internet connection and the ability to connect to the Azure portal.

Things that you should know about Azure:

 It was launched on February 1, 2010, significantly later than its main competitor, AWS.

 It’s free to start and follows a pay-per-use model, which means you pay only for the
services you opt for.

 Interestingly, 80 percent of the Fortune 500 companies use Azure services for their
cloud computing needs.

 Azure supports multiple programming languages, including Java, Node.js, and C#.

 Another benefit of Azure is the number of data centers it has around the world. There
are 42 Azure data centers spread around the globe, which is the highest number of
data centers for any cloud platform. Also, Azure is planning to get 12 more data
centers, which will increase the number of data centers to 54, shortly.

Why Use Azure?

Now that you know more about Azure and the services it provides, you might be
interested in exploring the various uses of Azure.

 Application development: You can create any web application in Azure.

 Testing: After developing an application successfully on the platform, you can
test it.
 Application hosting: Once the testing is done, Azure can help you host the
application.

 Create virtual machines: You can create virtual machines in any configuration
you want with the help of Azure. 

 Integrate and sync features: Azure lets you integrate and sync virtual devices
and directories. 

 Collect and store metrics: Azure lets you collect and store metrics, which can
help you find what works. 

 Virtual hard drives: These are extensions of the virtual machines; they provide
a huge amount of data storage.

https://www.codeguru.com/azure/azure-cloud-fundamentals/

Azure is a cloud computing platform provided by Microsoft that offers a wide range of services for
building, deploying, and managing applications and services. Here are some core concepts and
components in Azure:

1. **Azure Regions**: Azure operates in multiple geographic regions worldwide. Each region is a
separate and independent data center that contains one or more Azure data centers. Regions are
strategically located to provide high availability and fault tolerance.

2. **Azure Resource Manager (ARM)**: ARM is the deployment and management layer of Azure. It
provides a consistent management framework for deploying, managing, and organizing Azure resources.
ARM allows you to define resources as templates and deploy them together as a group, known as an
Azure Resource Group.

3. **Azure Subscriptions**: Azure subscriptions are the billing and management containers that allow
users to access and consume Azure resources. Subscriptions define the billing boundaries and resource
quotas for organizations and individuals using Azure.

4. **Azure Resource Groups**: Resource Groups are logical containers that group related Azure
resources together. They enable you to manage and organize resources collectively, apply access
controls, and monitor them as a single entity.

5. **Azure Virtual Machines (VMs)**: Azure VMs provide virtualized computing instances in the cloud.
They offer scalable and flexible compute capacity, allowing you to run a wide range of workloads,
including Windows and Linux-based applications.

6. **Azure App Service**: Azure App Service is a fully managed platform for building, deploying, and
scaling web and mobile applications. It supports multiple programming languages, frameworks, and
tools and provides built-in capabilities for application hosting, auto-scaling, and continuous deployment.

7. **Azure Storage**: Azure Storage offers scalable and durable cloud storage services for various types
of data. It includes Blob storage for storing unstructured data, Table storage for NoSQL key-value data,
Queue storage for reliable messaging, and File storage for file-based workloads.

8. **Azure SQL Database**: Azure SQL Database is a fully managed relational database service that
offers high-performance, scalable, and secure database capabilities. It supports the SQL Server database
engine and provides features such as automatic backups, high availability, and advanced security.

9. **Azure Functions**: Azure Functions is a serverless compute service that allows you to run code
without provisioning or managing servers. It enables you to execute code in a serverless environment,
triggered by various events and integrations.

10. **Azure Networking**: Azure Networking provides various services for connecting and securing
applications and resources. It includes Virtual Network (VNet) for creating isolated virtual networks, Load
Balancer for distributing traffic, Virtual Network Gateway for connecting on-premises networks to Azure,
and Azure Firewall for network security.
These are just a few core concepts and components in Azure. Azure offers a vast array of services
covering compute, storage, networking, databases, analytics, artificial intelligence, and more, allowing
organizations to build and deploy diverse cloud-based solutions. It's recommended to explore the Azure
documentation for a comprehensive understanding of all the services and capabilities provided by the
platform.
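Item 2 above describes ARM templates; a minimal template skeleton deploying a single storage account looks roughly like this. The resource name, SKU, and API version are illustrative and would need adjusting for a real deployment.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "examplestorage123",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Deploying such a template into a Resource Group creates (or updates) every resource in its `resources` array as one unit, which is the grouping behavior ARM provides.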

What are the Various Azure Services and How does Azure
Work?

Azure provides more than 200 services, which are divided into 18 categories. These
categories include computing, networking, storage, IoT, migration, mobile, analytics,
containers, artificial intelligence and machine learning, integration, management
tools, developer tools, security, databases, DevOps, media, identity, and web services.
Let’s take a look at some of the major Azure services by category:

Compute Services  

 Virtual Machine

This service enables you to create a virtual machine running Windows, Linux, or any
other configuration in seconds.

 Cloud Service

This service lets you create scalable applications within the cloud. Once the
application is deployed, everything, including provisioning, load balancing, and health
monitoring, is taken care of by Azure. 

 Service Fabric

With Service Fabric, the process of developing microservices is immensely simplified.
A microservice-based application is composed of smaller applications bundled together.

 Functions

With functions, you can create applications in any programming language. The best
part about this service is that you need not worry about hardware requirements while
developing applications because Azure takes care of that. All you need to do is
provide the code.

Networking

 Azure CDN

Azure CDN (Content Delivery Network) delivers content to users with high bandwidth;
content can be transferred to any person around the globe.
The CDN service uses a network of servers placed strategically around the globe so
that users can access data as quickly as possible.

 Express Route 

This service lets you connect your on-premises network to the Microsoft cloud, or any
other services that you want, through a private connection. The only
communication that happens here is between the enterprise network and the
service that you want. 

 Virtual network

The virtual network allows you to have any of the Azure services communicate with
one another privately and securely. 

 Azure DNS

This service allows you to host your DNS domains or system domains on Azure.

Storage

 Disk Storage 

This service allows you to choose from either HDD (Hard Disk Drive) or SSD (Solid
State Drive) as your storage option along with your virtual machine.
 Blob Storage 

This service is optimized to store a massive amount of unstructured data, including
text and even binary data. 

 File Storage

This is a managed file storage service that can be accessed via the industry-standard
SMB (Server Message Block) protocol. 

 Queue Storage 

With queue storage, you can provide stable message queuing for a large workload.
This service can be accessed from anywhere in this world.
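Each Blob storage object is addressable over HTTPS at a well-known endpoint pattern. The helper below only assembles that URL (the account, container, and blob names are invented); actual uploads and downloads would go through the `azure-storage-blob` SDK, shown in the comment.

```python
def blob_url(account, container, blob):
    """Public HTTPS endpoint pattern for an Azure Blob storage object."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

# Illustrative SDK usage (requires azure-storage-blob and credentials, not run here):
# from azure.storage.blob import BlobClient
# client = BlobClient.from_blob_url(
#     blob_url("myacct", "media", "cat.png"), credential="<sas-or-key>")
# client.download_blob()

url = blob_url("myacct", "media", "cat.png")
print(url)  # https://myacct.blob.core.windows.net/media/cat.png
```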


https://intellipaat.com/blog/tutorial/microsoft-azure-tutorial/azure-sql/

"Windows Azure" was the original name of Microsoft Azure. In March 2014, Microsoft rebranded the
platform as "Microsoft Azure" to better reflect its broad range of services beyond just Windows-based
offerings.

Microsoft Azure is a comprehensive cloud computing platform provided by Microsoft. It offers a wide
range of services and capabilities for building, deploying, and managing applications and services in the
cloud. The platform supports various operating systems, programming languages, frameworks, and tools,
enabling organizations to develop and deploy diverse cloud-based solutions.

Here are some key components and services within the Microsoft Azure platform:

1. **Azure Virtual Machines**: Azure VMs provide scalable and flexible compute capacity, allowing you
to run virtualized instances of Windows or Linux-based applications.

2. **Azure App Service**: Azure App Service is a fully managed platform for building, deploying, and
scaling web and mobile applications. It supports various programming languages, frameworks, and tools.

3. **Azure Storage**: Azure Storage provides scalable and durable cloud storage services for different
types of data, including Blob storage, Table storage, Queue storage, and File storage.

4. **Azure SQL Database**: Azure SQL Database is a managed relational database service that offers
high-performance, scalable, and secure database capabilities based on the SQL Server engine.

5. **Azure Functions**: Azure Functions is a serverless compute service that allows you to run event-
triggered code without managing infrastructure.

6. **Azure Networking**: Azure Networking provides services such as Virtual Network (VNet) for
creating isolated virtual networks, Load Balancer for distributing traffic, Virtual Network Gateway for
connecting on-premises networks, and Azure Firewall for network security.

7. **Azure Active Directory (Azure AD)**: Azure AD is a cloud-based identity and access management
service that provides secure authentication and authorization for Azure resources and applications.

8. **Azure DevOps**: Azure DevOps provides a set of development tools and services for application
development, testing, and deployment, including source control, build automation, release
management, and project management.

9. **Azure AI and Machine Learning**: Azure offers various services for artificial intelligence and
machine learning, including Azure Machine Learning, Cognitive Services, and Bot Services.
10. **Azure IoT**: Azure IoT provides a comprehensive set of services and capabilities for building and
managing Internet of Things (IoT) solutions, including device connectivity, data ingestion, analytics, and
visualization.

These are just a few examples of the components and services available within the Microsoft Azure
platform. Azure continues to evolve and expand its offerings to cater to different use cases and industry
requirements. Consult the official Microsoft Azure documentation for the most up-to-date information
and a comprehensive understanding of the platform.

Azure SQL Database, formerly known as SQL Azure, is a fully managed relational database service
provided by Microsoft Azure. It is based on the SQL Server database engine and offers a cloud-based
solution for storing and managing relational data. Here are some key features and capabilities of Azure
SQL Database:

1. **Fully Managed Service**: Azure SQL Database is a fully managed service, which means that
Microsoft handles the infrastructure management, patching, backups, and high availability. This allows
developers and database administrators to focus on their applications and data rather than worrying
about infrastructure maintenance.

2. **Scalability and Elasticity**: Azure SQL Database offers flexible scalability options to meet changing
workload demands. You can easily scale up or down your database resources based on performance
requirements. Additionally, Azure SQL Database supports automatic and manual scaling options to
handle varying workloads effectively.

3. **High Availability and Business Continuity**: Azure SQL Database provides built-in high availability
and data protection features. It replicates your database across multiple Azure data centers to ensure
data durability and offers options for automatic and manual backups. Azure SQL Database also supports
point-in-time restore and geo-restore capabilities for disaster recovery scenarios.
4. **Security and Compliance**: Azure SQL Database includes various security features to protect your
data, including network isolation, encryption at rest and in transit, and built-in threat detection. It also
helps you comply with industry standards and regulations, such as GDPR, HIPAA, and ISO.

5. **Compatibility and Integration**: Azure SQL Database is based on the SQL Server engine and
supports T-SQL, so existing SQL Server applications and skills can be easily migrated to Azure SQL
Database. It integrates with other Azure services and tools, allowing seamless data movement and
integration with Azure Data Factory, Azure Logic Apps, Power BI, and more.

6. **Intelligent Performance**: Azure SQL Database incorporates intelligent performance features such
as automatic tuning and intelligent query processing. These features optimize query performance,
adaptively tune your database, and provide insights and recommendations to improve performance.

7. **Flexible Deployment Options**: Azure SQL Database offers different deployment options to meet
your specific needs. It includes single databases for individual applications, elastic pools for managing
multiple databases with varying workloads, and managed instances for a fully isolated environment with
broader compatibility.

8. **Developer Tools and Productivity**: Azure SQL Database integrates with popular development tools
such as Azure Data Studio, SQL Server Management Studio (SSMS), and Visual Studio. It provides
features like intelligent query performance insights, built-in monitoring, and diagnostics for easier
database development and management.

These are some of the key features and capabilities of Azure SQL Database. It is designed to provide a
reliable, scalable, and secure platform for running your relational databases in the cloud. For more
details and the latest information, refer to the official Azure SQL Database documentation.
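Because Azure SQL Database speaks the same T-SQL over the SQL Server protocol (point 5 above), connecting is just a standard ODBC connection string aimed at the `*.database.windows.net` endpoint. The helper below assembles one; the server, database, and credential values are invented, and the exact ODBC driver name/version may differ on your machine.

```python
def azure_sql_conn_str(server, database, user, password):
    """ODBC connection string for Azure SQL Database (driver name may vary)."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )

# Illustrative usage (requires pyodbc and a real server, not run here):
# import pyodbc
# conn = pyodbc.connect(azure_sql_conn_str("myserver", "mydb", "dbadmin", "secret"))
# conn.cursor().execute("SELECT 1")

cs = azure_sql_conn_str("myserver", "mydb", "dbadmin", "secret")
```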
Healthcare ECG Analysis in the Cloud
Healthcare is an area where computer technology impacts a wide variety of
concerns, from supporting business operations to aiding scientific progress.
With recent technological developments such as cell phones and cloud computing,
a range of services and devices have been developed to provide health care. In a
cloud system, medical data can be gathered and distributed automatically to
medical practitioners anywhere in the world. From there, doctors in the field
can return feedback to specific patients.

An ECG is a visual record of the electrical activity of the heart muscle as it
varies over time, traditionally printed on paper for easier study. Like other
muscles, the heart contracts in response to electrical depolarization of its
muscle cells. It is this electrical activity, amplified and recorded for just a
few seconds at a time, that we call a heart rhythm.

With ECG data collection and monitoring, it is possible to investigate chest
pain, low-grade heart rhythm disturbances, arrhythmias, and more. An ECG
(electrocardiogram) is the electrical expression of the contractile movement of
the myocardium.

The widespread availability of the internet has made cloud computing an
attractive choice for building health monitoring systems. Analysis of the shape
of the waveform is used to classify arrhythmias and is the most common way of
detecting heart disease. A patient with a cardiac arrhythmia (or another
abnormal heart rate) can therefore be monitored continuously through ECG tests.
Because ECG alerts provide immediate notification to doctors and first-aid
workers, care can be delivered without restricting the patient's movement.
Cloud computing technologies allow a patient's heartbeat to be monitored
remotely via the internet.
 The collected readings are sent to the patient's mobile device. Upon
signing in, the mobile device sends the information to the cloud-based
services to review the results.
 The online component of this platform consists of three layers internal to
a cloud: the front end, the middle back end, and the host server (i.e., "the
cloud" for the IT service that supports this project).

Advantages of Healthcare ECG Analysis in the Cloud
The following are the advantages of healthcare ECG analysis in the cloud:

 Since cloud computing systems are readily available and deliver services in
less time, they have the potential to be a massive disruptor to how
technology is distributed.
 The doctor does not need to put huge effort into computing, since the
necessary software runs in the cloud.
 Cloud infrastructure is highly scalable; it can be scaled up and down
according to the needs of each user.
 Cloud computing systems aim to provide reliable services to consumers in
less time.
 The doctor's office does not need to invest in a broad computer system.

Performing ECG (electrocardiogram) analysis in the cloud can offer several
advantages, including scalability, accessibility, and collaboration. Here's an outline
of the steps involved in conducting ECG analysis in the cloud:

1. **Data Acquisition**: ECG data is typically acquired using sensors or electrodes
placed on the patient's body. The ECG signals are captured and converted into
digital format for further processing and analysis.

2. **Data Transmission**: The acquired ECG data needs to be transmitted to the
cloud for analysis. This can be achieved through secure data transfer protocols,
such as HTTPS or MQTT, leveraging network connectivity or wireless
technologies.

3. **Cloud Storage**: The ECG data is stored securely in cloud storage services,
such as Amazon S3 or Azure Blob Storage. Cloud storage ensures data durability,
scalability, and accessibility for analysis purposes.

4. **Data Preprocessing**: Preprocessing techniques are applied to the ECG data
to remove noise, filter artifacts, and enhance the signal quality. This step helps to
improve the accuracy of subsequent analysis tasks.

5. **Feature Extraction**: Relevant features are extracted from the preprocessed
ECG data. This may include extracting waveform characteristics, heart rate
variability (HRV), rhythm analysis, or any specific measurements required for
diagnostic purposes.

6. **Machine Learning and Analysis**: Machine learning algorithms or statistical
techniques are applied to the extracted features to detect abnormalities, classify
arrhythmias, identify heart conditions, or perform other relevant analyses. This
step may involve building and training predictive models using cloud-based
machine learning services like Amazon SageMaker or Azure Machine Learning.

7. **Visualization and Reporting**: The analyzed results and diagnostic insights
are visualized and presented in a user-friendly format. This can include
generating reports, graphs, or charts summarizing the ECG analysis findings.

8. **Integration and Collaboration**: Cloud-based ECG analysis systems can
facilitate integration with other healthcare systems, Electronic Health Records
(EHR), or telemedicine platforms for seamless data exchange and collaboration
among healthcare professionals.

9. **Security and Compliance**: Robust security measures, such as data
encryption, access controls, and compliance with privacy regulations (e.g., HIPAA,
GDPR), should be implemented to ensure the confidentiality and integrity of the
ECG data.
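The middle of this pipeline (steps 4 to 6: preprocessing, feature extraction, and analysis) can be sketched with a toy example. The snippet below smooths a synthetic trace with a moving average, detects R-peaks with a naive threshold rule on the raw signal, and derives an average heart rate from the R-R intervals. The synthetic signal, window size, and threshold are all assumptions for illustration; clinical systems use validated detectors such as Pan-Tompkins.

```python
def moving_average(signal, window=5):
    """Simple smoothing filter to suppress high-frequency noise (preprocessing)."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

def detect_r_peaks(signal, threshold):
    """Naive R-peak detection: local maxima above a fixed threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

def heart_rate_bpm(peak_indices, sampling_rate_hz):
    """Average heart rate from the R-R intervals between detected peaks."""
    if len(peak_indices) < 2:
        return None
    intervals = [(b - a) / sampling_rate_hz
                 for a, b in zip(peak_indices, peak_indices[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic 250 Hz trace: flat baseline with an R-peak every 250 samples.
fs = 250
signal = [0.0] * 1000
for p in range(100, 1000, 250):
    signal[p] = 1.0                       # R-peak
    signal[p - 1] = signal[p + 1] = 0.4   # flanking samples

smoothed = moving_average(signal)         # preprocessing step
peaks = detect_r_peaks(signal, threshold=0.5)
print(peaks)                              # [100, 350, 600, 850]
print(heart_rate_bpm(peaks, fs))          # 60.0
```

One beat per second at 250 Hz gives R-R intervals of exactly one second, so the sketch reports 60 bpm.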

By leveraging cloud computing resources and services, ECG analysis can benefit
from the scalability, computational power, and collaboration opportunities
provided by the cloud. It enables healthcare organizations to process and analyze
large volumes of ECG data efficiently, leading to improved diagnostic accuracy
and patient care.
Cloud computing is an emerging technology that provides various computing services
on demand. It provides convenient access to a shared pool of higher-level services and
other system resources. Nowadays, cloud computing has a great significance in the
fields of geology, biology, and other scientific research areas.
Protein structure prediction is a prime example of a research area that makes use of
cloud applications for its computation and storage.
A protein is composed of long chains of amino acids joined together by peptide bonds.
The various structures of proteins help in the design of new drugs, and the prediction
of a protein's three-dimensional structure from its amino acid sequence is known as
protein structure prediction.
First, the primary structures of proteins are determined, and then the secondary,
tertiary, and quaternary structures are predicted from the primary one. In this way,
predictions of protein structures are made. Protein structure prediction also makes use
of various other technologies such as artificial neural networks, artificial intelligence,
machine learning, and probabilistic techniques, and it holds great importance in fields
like theoretical chemistry and bioinformatics.
Various algorithms and tools exist for protein structure prediction. CASP (Critical
Assessment of protein Structure Prediction) is a well-known community experiment that
evaluates prediction methods, including automated web servers, and results are published
on cloud-hosted services such as the CAMEO (Continuous Automated Model Evaluation)
server. These servers can be accessed by anyone, from any place, as required. Some of
the tools or servers used in protein structure prediction are Phobius, FoldX, LOMETS,
Prime, PredictProtein, SignalP, BBSP, EVfold, Biskit, HHpred, Phyre, and ESyPred3D.
Using these tools, new structures are predicted and the results are placed on
cloud-based servers.

What is Protein Structure Prediction in Cloud Computing?
Protein structure prediction is the inference of a protein's three-dimensional
structure from its amino acid sequence, that is, the prediction of its secondary
and tertiary structure from the primary structure.
Proteins are large amino acid-based molecules that our bodies and the cells in
our bodies need to function correctly. Significant quantities of protein are present
in our muscles, skin, bones, and many other areas of the body. The prediction of
protein structure is the best example in the field of science that makes use of
cloud applications for computing and storage. First, the primary protein
structures are formed, and then the secondary, tertiary, and quaternary structures
are predicted from the primary structure.

Predictions of protein structures are carried out in this manner. The prediction
of protein structures also makes use of numerous other technologies, such as
artificial neural networks, artificial intelligence, machine learning, and probabilistic
approaches, and is also of great significance in fields such as theoretical
chemistry and bioinformatics.

Fig: Composition of the Protein

Why do we need Protein Structure Prediction?
Applications or software that require high computing capability and operate on
large data sets generate heavy I/O. The prediction of protein structure will
enable medical scientists to produce new drugs.
System administrators should take advantage of a range of resources to monitor
and manage the deployed technology. This can be a public cloud accessible to
everyone via the internet, or a private cloud created from a group of
access-restricted nodes.

These approaches divide the problem into three phases: initialization,
classification, and a final phase. Using grid middleware, researchers can not
only deploy their prediction applications easily in a distributed environment,
but also track and control their execution there.
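As a toy illustration of such a three-phase split, the sketch below initializes (validates) a primary sequence, classifies each residue with a deliberately crude hydrophobic/polar labeling, and finalizes a summary for downstream work. The residue grouping is an assumption chosen for the example; real predictors use far richer models and data.

```python
# Illustrative hydrophobic residues (a common coarse grouping; exact sets vary by scale).
HYDROPHOBIC = set("AVLIMFWC")

def initialize(sequence):
    """Phase 1: validate and normalize the primary amino acid sequence."""
    seq = sequence.strip().upper()
    valid = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard one-letter codes
    if not seq or any(aa not in valid for aa in seq):
        raise ValueError("not a valid amino acid sequence")
    return seq

def classify(seq):
    """Phase 2: label each residue hydrophobic (H) or polar (P)."""
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in seq)

def finalize(labels):
    """Phase 3: summarize the classification for downstream prediction steps."""
    return {"length": len(labels),
            "hydrophobic_fraction": round(labels.count("H") / len(labels), 2)}

seq = initialize("mkvlaa")
labels = classify(seq)
print(labels)             # HPHHHH
print(finalize(labels))   # {'length': 6, 'hydrophobic_fraction': 0.83}
```

Each phase is independent, which is what makes this kind of pipeline easy to distribute across grid or cloud nodes.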

What is Satellite Image Processing?
Satellite image processing is commonly used in engineering to design
infrastructure, to track environmental conditions, or to detect the response to
an imminent disaster. Advanced positioning and remote sensing techniques now
produce increasingly varied datasets, and to extract knowledge from them,
remote sensing scientists need better and more efficient computing and storage.
Cloud computing is a good fit because it offers all the requisite computing
resources (compute power), possibly in the most cost-effective way, as a
service accessible online. To see which current cloud platform is suitable for
the complex analysis of remote sensing (RS) data, we present here a comparative
study between two popular cloud platforms, Amazon and Microsoft, and the newer
rival CloudSigma.
Cloud computing is a pooled network of computer systems sharing various resources
and higher-level services hosted on internet servers. Various cloud applications such as
Google Drive, Gmail, Dropbox, Microsoft SkyDrive (now OneDrive), Prime Desk, SOS Online
Backup, and many more are available to users free of cost and can be accessed at any
time from any corner of the world. Cloud computing therefore has great significance in
sharing and storing large amounts of data, and it saves users time and energy.
CRM and ERP in Cloud Computing
What is CRM?
CRM stands for Customer Relationship Management; cloud CRM software is hosted in the
cloud so that users can access its information over the internet. CRM software provides
a high level of security and scalability to its users and can easily be used on mobile
phones to access data.
Nowadays, many business vendors and service providers use CRM software to manage
resources so that users can access them via the internet. Moving business computation
from the desktop to the cloud is proving a beneficial step in both IT and non-IT
fields. Some major CRM vendors include Oracle Siebel, Mothernode CRM, Microsoft
Dynamics CRM, Infor CRM, Sage CRM, and NetSuite CRM.
Advantages: 
Few advantages of using CRM are as follows: 
 
 High reliability and scalability
 Easy to use
 Highly secure
 Provides flexibility to users and service providers
 Easily accessible
What is ERP?
ERP is an abbreviation for Enterprise Resource Planning, software similar to CRM that is
hosted on cloud servers and helps enterprises manage and manipulate their business data
as per their needs and user requirements. ERP software follows a pay-per-use model of
payment: at the end of the month, the enterprise pays for the cloud resources it
actually utilized. Various ERP vendors are available, such as Oracle, SAP, Epicor, Sage,
Microsoft Dynamics, Lawson Software, and many more.
Advantages:
Few advantages of using ERP software are:

 Cost effective
 High mobility
 Increase in productivity
 Improved security
 Scalable and efficient
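The pay-per-use billing described above can be sketched in a few lines: total a month's metered usage against per-unit rates. The rate card and meter names below are invented for the example, not any vendor's actual pricing.

```python
# Hypothetical per-unit rates; real ERP/cloud vendors publish their own pricing.
RATES = {
    "compute_hours": 0.12,      # per instance-hour
    "storage_gb_month": 0.02,   # per GB stored for the month
    "api_calls_1k": 0.40,       # per 1,000 API calls
}

def monthly_bill(usage):
    """Sum metered usage against per-unit rates (pay-per-use billing)."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

usage = {"compute_hours": 300, "storage_gb_month": 500, "api_calls_1k": 120}
print(monthly_bill(usage))  # 300*0.12 + 500*0.02 + 120*0.40 = 94.0
```

The enterprise pays only for what its meters recorded, which is what distinguishes this model from fixed licensing.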

CRM (Customer Relationship Management) and ERP (Enterprise Resource Planning) are two distinct but
interconnected systems that can be effectively deployed and utilized in a cloud computing environment.
Here's a brief overview of how CRM and ERP systems can benefit from cloud computing:

CRM in Cloud Computing:

1. **Scalability and Flexibility**: Cloud-based CRM solutions offer scalability to accommodate changing
business needs. The cloud allows organizations to scale up or down resources based on user demand,
ensuring optimal performance and user experience.

2. **Accessibility and Mobility**: Cloud-based CRM systems enable access to customer data and sales
information from anywhere, at any time, through web browsers or dedicated mobile apps. This
enhances productivity and allows sales teams to access critical information while on the go.
3. **Cost Savings**: Cloud CRM solutions typically operate on a subscription-based model, eliminating
the need for upfront hardware and software investments. Additionally, organizations can avoid the costs
associated with infrastructure maintenance, upgrades, and data backups, as these responsibilities are
handled by the cloud provider.

4. **Integration Capabilities**: Cloud CRM systems can easily integrate with other cloud-based services
and applications, such as email marketing platforms, social media platforms, and customer support
systems. This enables a seamless flow of data and enhances customer engagement and support
processes.

5. **Data Security and Backup**: Cloud CRM providers employ advanced security measures, including
encryption, access controls, and regular backups, to protect customer data. Cloud providers also adhere
to industry-standard compliance certifications, ensuring data security and regulatory compliance.

ERP in Cloud Computing:

1. **Scalability and Resource Optimization**: Cloud-based ERP systems offer scalability to accommodate
growing businesses or fluctuating resource requirements. Organizations can scale up or down computing
resources as needed, without the need for significant infrastructure investments.

2. **Collaboration and Accessibility**: Cloud ERP solutions enable real-time collaboration and access to
enterprise data from multiple locations, facilitating seamless collaboration among different departments,
branches, or remote teams. Employees can access and update information anytime, improving
productivity and decision-making.

3. **Reduced IT Overhead**: With cloud-based ERP, organizations can offload the responsibilities of
infrastructure management, software updates, and security to the cloud provider. This reduces the
burden on internal IT teams, allowing them to focus on more strategic tasks.

4. **Integration and Data Consistency**: Cloud ERP systems provide integration capabilities to connect
with other cloud-based applications or services, such as CRM, HR management, or supply chain systems.
This ensures data consistency across various business functions and eliminates the need for manual data
entry or data synchronization.

5. **Disaster Recovery and Business Continuity**: Cloud ERP solutions offer robust backup and disaster
recovery mechanisms, ensuring that critical business data is protected and accessible even in the event
of a system failure or natural disaster. This helps organizations maintain business continuity and minimize
data loss.

By leveraging cloud computing, both CRM and ERP systems can benefit from enhanced scalability,
accessibility, collaboration, data security, and cost efficiency. Cloud-based deployments of CRM and ERP
systems allow organizations to focus on their core business processes while relying on cloud providers to
handle the infrastructure and operational aspects of these critical systems.

Cloud computing plays a crucial role in enabling and supporting social networking platforms. Here are
some key aspects of how cloud computing enhances the social networking experience:

1. **Scalability and Performance**: Social networking platforms experience varying levels of user
activity and demand, which can fluctuate rapidly. Cloud computing provides the scalability to handle
these fluctuations by dynamically allocating computing resources as needed. This ensures that the
platform can accommodate a growing user base and maintain optimal performance even during peak
usage periods.

2. **Storage and Data Management**: Social networking platforms generate and handle vast amounts
of data, including user profiles, posts, images, videos, and interactions. Cloud storage solutions offer
virtually unlimited storage capacity to store and manage this data efficiently. Cloud-based databases,
such as NoSQL databases or document databases, can handle the structured and unstructured data
generated by social networks.

3. **High Availability and Reliability**: Cloud platforms provide robust infrastructure with built-in
redundancy and failover mechanisms. This ensures high availability of social networking platforms,
minimizing downtime and disruptions for users. Cloud providers have multiple data centers across
different regions, enabling replication and backup of data for disaster recovery purposes.

4. **Global Accessibility**: Cloud-based social networking platforms are accessible from anywhere with
an internet connection, allowing users to connect and interact with others globally. Cloud infrastructure
supports the distribution of content and data across multiple regions, reducing latency and ensuring a
responsive user experience regardless of the user's geographic location.

5. **Multimedia Content Management**: Social networking platforms heavily rely on multimedia
content, including photos, videos, and live streaming. Cloud-based services offer efficient content
delivery networks (CDNs) to distribute and deliver multimedia content to users worldwide, ensuring fast
and reliable content access.

6. **Integration with Third-Party Services**: Cloud platforms enable easy integration with various third-
party services, such as authentication providers, payment gateways, analytics tools, and social media
APIs. This integration enhances the functionality and features of social networking platforms, providing
users with a seamless and comprehensive experience.

7. **Security and Privacy**: Cloud providers implement robust security measures to protect user data,
including encryption, access controls, and regular security audits. Compliance with industry-standard
regulations, such as GDPR (General Data Protection Regulation), ensures that user privacy is maintained
and data handling practices adhere to established guidelines.

8. **Social Graph Analysis and Machine Learning**: Cloud computing enables advanced analytics and
machine learning capabilities, which can be applied to analyze social network data. These technologies
help uncover insights about user behavior, social connections, content preferences, and sentiment
analysis. This information can be leveraged to enhance user experiences, personalize content
recommendations, and improve targeted advertising.

Overall, cloud computing empowers social networking platforms with scalability, high availability,
efficient data management, global accessibility, and advanced analytics capabilities. It forms the
foundation for robust and feature-rich social networking experiences, connecting users worldwide and
enabling seamless communication and collaboration.
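The social graph analysis mentioned in point 8 can be illustrated with a toy "people you may know" ranking: score each non-friend by the number of mutual friends. The graph and names below are made up, and production systems use far richer signals (interactions, shared groups, recency) than mutual-friend counts.

```python
from collections import Counter

# Toy social graph: user -> set of friends (undirected, so edges appear on both sides).
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "eve", "dave"},
    "dave":  {"bob", "carol"},
    "eve":   {"carol"},
}

def suggest_friends(user, top=2):
    """Rank non-friends by number of mutual friends ('people you may know')."""
    counts = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:           # friends-of-friends
            if fof != user and fof not in graph[user]:
                counts[fof] += 1
    return counts.most_common(top)

print(suggest_friends("alice"))  # [('dave', 2), ('eve', 1)]
```

At web scale this same idea runs as a distributed graph computation on cloud infrastructure rather than an in-memory dictionary.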

Google App Engine
By Ben Lutkevich, Technical Features Writer

What is Google App Engine?


Google App Engine (GAE) is a platform-as-a-service product that
provides web app developers and enterprises with access to
Google's scalable hosting and tier 1 internet service.

Originally, GAE required that applications be written in Java or Python, store data in
Google Bigtable and use the Google query language; noncompliant applications
required modification to use GAE. Later versions of the platform broadened the
supported languages and runtimes, as described below.

GAE provides more infrastructure than other scalable hosting services, such
as Amazon Elastic Compute Cloud (EC2). GAE also eliminates some system
administration and development tasks to make writing scalable applications
easier.
Google provides GAE free up to a certain amount of use for the following
resources:

 processor (CPU)

 storage

 application programming interface (API) calls

 concurrent requests

Users exceeding the per-day or per-minute rates can pay for more of these
resources.
Amazon Elastic Beanstalk is a rival platform as a service to Google App Engine; it
supports a different set of programming languages and runtimes than GAE.
How is GAE used?
GAE is a fully managed, serverless platform that is used to host, build and
deploy web applications. Users can create a GAE account, set up a software
development kit and write application source code. They can then use GAE to
test and deploy the code in the cloud.
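As a hedged sketch of the workflow just described, the snippet below is roughly the smallest deployable unit for the GAE Python standard environment: a pure-WSGI `main.py` with no framework. The runtime name in the comment and the assumption that GAE's default entrypoint serves the module-level `app` object should be checked against the current documentation before relying on them.

```python
# main.py -- minimal WSGI handler for the GAE Python standard environment.
# A matching app.yaml could be as small as:
#   runtime: python312    # runtime name is an assumption; check current GAE docs

def app(environ, start_response):
    """Respond to every request with a plain-text greeting."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from App Engine"]
```

With an `app.yaml` in the same directory, `gcloud app deploy` pushes the service to the cloud, and GAE handles scaling and routing from there.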

One way to use GAE is building scalable mobile application back ends that
adapt to workloads as needed. Application testing is another way to use GAE.
Users can route traffic to different application versions to A/B test them and
see which version performs better under various workloads.

Serverless platforms like Google App Engine are inherently scalable and allow for quick
software deployments.

What are GAE's key features?


Key features of GAE include the following:

API selection. GAE has several built-in APIs, including the following five:
 Blobstore for serving large data objects;

 GAE Cloud Storage for storing data objects;

 Page Speed Service for automatically speeding up webpage load times;

 URL Fetch Service to issue HTTP requests and receive responses for efficiency and scaling; and

 Memcache for a fully managed in-memory data store.

Managed infrastructure. Google manages the back-end infrastructure for
users. This approach makes GAE a serverless platform and simplifies API
management.

Several programming languages. GAE supports a number of languages,
including Go, PHP, Java, Python, Node.js, .NET and Ruby. It also supports
custom runtimes.

Support for legacy runtimes. GAE supports legacy runtimes, which are
versions of programming languages no longer maintained. Examples include
Python 2.7, Java 8 and Go 1.11.

Application diagnostics. GAE lets users record data and run diagnostics on
applications to gauge performance.

Security features. GAE enables users to define access policies with the GAE
firewall and managed Secure Sockets Layer/Transport Layer
Security certificates for free.

Traffic splitting. GAE lets users route requests to different application
versions.
Versioning. Applications in Google App Engine function as a set
of microservices that refer back to the main source code. Every time code is
deployed to a service with the corresponding GAE configuration files, a
version of that service is created.

Google App Engine benefits and challenges


GAE extends the benefits of cloud computing to application development, but
it also has drawbacks.

Benefits of GAE

 Ease of setup and use. GAE is fully managed, so users can write
code without considering IT operations and back-end infrastructure.
The built-in APIs enable users to build different types of applications.
Access to application logs also facilitates debugging and monitoring
in production.

 Pay-per-use pricing. GAE's billing scheme only charges users daily
for the resources they use. Users can monitor their resource usage
and bills on a dashboard.

 Scalability. Google App Engine automatically scales as workloads
fluctuate, adding and removing application instances or application
resources as needed.

 Security. GAE supports the ability to specify a range of
acceptable Internet Protocol (IP) addresses. Users
can allowlist specific networks and services and blocklist specific IP
addresses.
GAE challenges

 Lack of control. Although a managed infrastructure has
advantages, if a problem occurs in the back-end infrastructure, the
user is dependent on Google to fix it.
 Performance limits. CPU-intensive operations are slow and
expensive to perform using GAE. This is because one physical
server may be serving several separate, unrelated app engine users
at once who need to share the CPU.

 Limited access. Developers have limited, read-only access to the
GAE filesystem.

 Java limits. Java apps cannot create new threads and can only use
a subset of the Java runtime environment standard edition classes.
Examples of Google App Engine
One example of an application created in GAE is an Android messaging app
that stores user log data. The app can store user messages and write event
logs to the Firebase Realtime Database and use it to automatically
synchronize data across devices.

Java servers in the GAE flexible environment connect to Firebase and receive
notifications from it. Together, these components create a back-end streaming
service to collect messaging log data.

GAE can be used in many different application contexts. Additional sample
application code in GitHub includes the following:

 a Python application that uses Blobstore;

 a program that uses MySQL connections from GAE to Google Cloud
Platform SQL; and

 code that shows how to set up unit tests in GAE.

GAE lets users structure applications as microservices that can be versioned
and deployed independently.
https://www.geeksforgeeks.org/what-is-google-app-engine-gae/
OpenStack is an open-source cloud computing platform that provides a set of software tools for building
and managing private and public clouds. Its architecture is designed to be modular and scalable, allowing
users to customize and configure their cloud infrastructure based on their specific requirements. Here is
an overview of the key components and layers in the OpenStack architecture:

1. **Compute Layer (Nova)**: Nova is the primary compute service in OpenStack, responsible for
managing and provisioning virtual machines (VMs) and bare metal instances. It provides an interface to
launch, terminate, and monitor instances, as well as managing compute resources such as CPU, memory,
and networking.

2. **Networking Layer (Neutron)**: Neutron handles the networking functionality in OpenStack. It
provides network connectivity between instances and allows users to define and manage virtual
networks, subnets, routers, and security groups. Neutron supports various networking technologies,
including Software-Defined Networking (SDN) and Network Function Virtualization (NFV).

3. **Storage Layer (Cinder and Swift)**:

- Cinder: Cinder is the block storage service in OpenStack. It allows users to create and manage
persistent block storage volumes that can be attached to instances. Cinder supports different storage
backends, including local disks, iSCSI, Fiber Channel, and more.

- Swift: Swift is the object storage service in OpenStack, designed for storing and retrieving large
amounts of unstructured data. It provides highly scalable and redundant storage for various types of
data, such as images, videos, and backups.

4. **Identity Layer (Keystone)**: Keystone is the identity service in OpenStack, responsible for
authentication, authorization, and service catalog management. It provides authentication mechanisms,
such as username/password or token-based authentication, and supports integration with external
identity providers. Keystone also manages user roles and permissions to control access to OpenStack
services.

5. **Dashboard Layer (Horizon)**: Horizon is the web-based user interface for OpenStack, offering a
graphical interface for users and administrators to interact with the cloud infrastructure. It provides a
dashboard that allows users to manage and monitor their instances, networks, storage, and other
resources.

6. **Orchestration Layer (Heat)**: Heat is the orchestration service in OpenStack, enabling users to
define and manage cloud infrastructure resources using templates. It automates the provisioning and
configuration of complex application stacks, including instances, networks, storage, and associated
resources.

7. **Image Layer (Glance)**: Glance handles the management of virtual machine images in OpenStack.
It provides a repository for storing, discovering, and retrieving virtual machine images. Glance supports
various image formats, including raw, qcow2, VHD, and ISO.

8. **Telemetry Layer (Ceilometer)**: Ceilometer is the telemetry service in OpenStack, responsible for
collecting and processing metering data, such as resource utilization, performance metrics, and billing
information. It provides monitoring and metering capabilities for different OpenStack components and
services.

9. **Database Layer (Trove)**: Trove is the database as a service (DBaaS) component in OpenStack. It
offers users the ability to provision and manage database instances, including relational databases like
MySQL and PostgreSQL, as well as NoSQL databases like MongoDB.

These components work together to provide a comprehensive cloud infrastructure platform. They can be
combined and customized to meet specific cloud deployment needs and integrate with other
technologies and services. OpenStack's modular architecture allows users to scale their cloud
infrastructure, add new functionalities, and build a flexible and robust cloud computing environment.
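As an illustration of how Heat ties the other services together, the hypothetical Heat Orchestration Template (HOT) below declares a single Nova server attached to a Neutron network. The image name `ubuntu-22.04` and network name `private-net` are assumptions and must match resources already registered in Glance and Neutron.

```yaml
# server.yaml -- minimal illustrative HOT template, not a production example
heat_template_version: 2018-08-31

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04          # assumed Glance image name
      flavor: { get_param: flavor }
      networks:
        - network: private-net     # assumed existing Neutron network

outputs:
  server_ip:
    value: { get_attr: [app_server, first_address] }
```

A stack based on this template could be launched with `openstack stack create -t server.yaml my-stack`, after which Heat provisions the Nova server and wires it to the Neutron network.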

https://www.javatpoint.com/openstack-architecture
