
A SEMINAR REPORT ON

“CLOUD COMPUTING”

SUBMITTED TO JAI NARAIN VYAS UNIVERSITY,


JODHPUR
IN PARTIAL FULFILLMENT FOR AWARD OF DEGREE
OF
BACHELOR IN COMPUTER APPLICATIONS
(BATCH 2022-2025)

SUBMITTED BY
Priyanka Gehlot

UNDER THE GUIDANCE OF


Guide Name
(POST OF YOUR GUIDE)

LUCKY INSTITUTE OF PROFESSIONAL STUDIES


Affiliated to
JAI NARAIN VYAS UNIVERSITY, JODHPUR

Faculty of Information Technology


Lucky Institute of Professional Studies

Jodhpur

CERTIFICATE

This is to certify that the seminar report titled “Cloud Computing” is an original
work carried out by Priyanka Gehlot under the supervision of [Name of the
Supervisor]. The report has been completed and submitted as part of the
requirements for the award of Bachelor in Computer Applications from Lucky
Institute of Professional Studies.

This report has been carried out during the academic year 2024-2025 and is an
authentic record of the work done by Priyanka Gehlot.

Guide Name
Designation
Faculty of Information Technology

Date:
Acknowledgment

The success and final outcome of this seminar report required a great deal of
guidance and assistance from many people, and I am extremely privileged to have
received it all throughout the completion of the report. All that I have done was
possible only because of such supervision and assistance, and I would not forget
to thank them.

I am grateful to my mentor, Mr./Mrs./Ms./Dr. Guide Name (Asst. Prof.), for
providing the guidelines that made this report successful, and for the interest
and attention so graciously lavished upon this work.

I extend my thanks to Dr. Saurabh Khatri (HoD, IT), whose cooperation,
guidance, encouragement, inspiration, support and attention led to the
completion of this report.

I would like to give sincere thanks to Dr. Manish Kachhawaha (Director) and
Mr. Arjun Singh Sankhala (Principal) for providing a cordial environment in
which to exhibit my abilities to the fullest.

Yours Sincerely,

Priyanka Gehlot
Declaration

I hereby declare that this Seminar Report is a record of original work done by me
under the supervision and guidance of {guide name}. I further certify that this
report has not formed the basis for the award of any Degree/Diploma or similar
title to any candidate of any university, and that no part of this report is
reproduced verbatim from any source without permission.

Student name:

Roll no:

Date:
CLOUD COMPUTING

Abstract
The term “cloud computing” is a recent buzzword in the IT world. Behind this fancy poetic
phrase lies a true picture of the future of computing, from both a technical and a social
perspective. Though the term “cloud computing” is recent, the idea of centralizing computation
and storage in distributed data centers maintained by third-party companies is not new; it
emerged in the 1990s along with distributed computing approaches such as grid computing.
Cloud computing aims to provide IT as a service to cloud users on demand, with greater
flexibility, availability, reliability and scalability under a utility computing model. This new
paradigm of computing has immense potential for use in e-governance and in rural development
in developing countries like India.

Contents

1. Introduction
2. Cloud Computing Basics
   2.1 Types of Cloud
   2.2 Cloud Stakeholders
   2.3 Advantages of using Cloud
3. Motivation towards Cloud in recent time
4. Cloud Architecture
   4.1 Comparison between Cloud Computing and Grid Computing
   4.2 Relation between Cloud Computing and Utility Computing
   4.3 Types of utility cloud services
5. Popular Cloud Applications: A Case Study
   5.1 Amazon S3 Services
   5.2 Amazon API Gateway
   5.3 Amazon SQS and SNS
6. Conclusion

1 Introduction
Cloud computing is a new and fast-growing way of using computers over the internet, but the
idea isn't brand new. Back in 1969, a scientist named L. Kleinrock predicted that one day,
computers would work like utilities (like electricity or water), providing services to homes and
businesses. This idea is now what we call "cloud computing."

In the mid-1990s, a similar concept called grid computing was introduced, which allowed people
to access computing power whenever they needed it. Over time, this evolved into what we now
know as cloud computing. The term "cloud computing" became popular around 2006, especially
after Google's CEO Eric Schmidt started using it.

Cloud computing works by connecting many computers (called servers) that are spread out
across different locations. These computers work together to provide services like storage,
software, or platforms that users can access over the internet. A key technology that makes this
possible is virtualization, which allows these servers to act as virtual machines. This means users
can use computing resources without having to worry about the physical machines behind them.

Cloud services are offered on a pay-as-you-go basis, meaning you only pay for what you use,
similar to paying for electricity or water. These services are often grouped as "XaaS," where "X"
can be Software (SaaS), Platform (PaaS), or Infrastructure (IaaS). This way, users don’t have to
buy or maintain expensive hardware and software—they can simply rent it from cloud providers.

Cloud computing is becoming very popular because it is flexible, reliable, and can easily grow
(scale) based on demand. While businesses are the main users of cloud computing, it can also
help with social issues. For example, in countries like India, where many people depend on
agriculture, cloud computing could improve farming methods and government services.

In short, cloud computing saves money, makes things easier for users, and can be used in many
different ways. Its growth is driven by its ability to offer services that are both convenient and
cost-effective, without the need for users to manage complex IT infrastructure themselves; those
aspects are handled by the cloud provider.

Cloud computing is growing nowadays in the interest of technical and business organizations, but it
can also be beneficial for solving social issues. In recent times e-governance has been implemented in
developing countries to improve the efficiency and effectiveness of governance. This approach can be
improved considerably by using cloud computing instead of traditional ICT. In India, the economy is
agriculture-based and most citizens live in rural areas.

The standard of living, agricultural productivity, etc. can be enhanced by utilizing cloud computing in
a proper way. Both of these applications of cloud computing have technological as well as social
challenges to overcome. In this report we try to clarify some of these ideas: Why is cloud computing a
buzzword today, i.e., what benefits do the provider and the users get from the cloud? Though the idea
dates back to the 1990s, what situation has made it indispensable today? How is a cloud built? What
differentiates it from similar terms like grid computing and utility computing? What different services
are provided by cloud providers? And though cloud computing nowadays concerns business
enterprises rather than non-profit organizations, how can this new paradigm be used in services like
e-governance and in the social development of rural India?

2. Cloud Computing Basics


Cloud computing is a paradigm of distributed computing that provides customers with on-demand,
utility-based computing services. Cloud users can in turn provide more reliable, available and
up-to-date services to their own clients. The cloud itself consists of physical machines in the data
centers of cloud providers. Virtualization is provided on top of these physical machines, and the
resulting virtual machines are provided to the cloud users. Different cloud providers offer cloud
services at different abstraction levels: Amazon EC2, for example, lets users handle very low-level
details, whereas Google App Engine provides a development platform for developers to build their
applications on. Accordingly, cloud services are divided into types such as Software as a Service,
Platform as a Service and Infrastructure as a Service. These services are available over the Internet
worldwide, with the cloud acting as the single point of access for serving all customers. Cloud
computing architecture addresses the difficulties of large-scale data processing.
2.1 Types of Cloud

Clouds can be of three types:

1. Private Cloud – This type of cloud is maintained within an organization and used solely for
its internal purposes, so the utility model is not a big factor in this scenario. Many companies
are moving towards this setting, and experts consider it the first step for an organization
moving into the cloud. Security and network bandwidth are not critical issues for a private
cloud.

2. Public Cloud – In this type, an organization rents cloud services from cloud providers on
demand. Services are provided to the users using a utility computing model.

3. Hybrid Cloud – This type of cloud is composed of multiple internal or external clouds. This
is the scenario when an organization moves from its internal private cloud into the public
cloud computing domain.

2.2 Cloud Stakeholders


To understand why cloud computing is used, let us first look at who uses it, and then discuss
what advantages they gain. There are three types of stakeholders: cloud providers, cloud users
and end users [Figure 1]. Cloud providers provide cloud services to the cloud users. These
services take the form of utility computing, i.e., the cloud users consume them on a
pay-as-you-go model. The cloud users develop their products using these services and deliver
those products to the end users.
Figure 1: Interconnection between cloud stakeholders

2.3 Advantages of using Cloud


The advantages of using cloud services can be technical, architectural, business-related, etc. [5, 6].

1. Cloud Providers’ point of view

(a) Most data centers today are underutilized, often running at only around 15% utilization. These
data centers need spare capacity just to cope with the occasional huge spikes in server usage. Large
companies with such data centers can easily rent that computing power out to other organizations,
profiting from it and also making proper use of the resources (such as power) needed to run the data
center.

(b) Companies with large data centers have already deployed the resources, so providing cloud
services requires very little additional investment and the cost is incremental.

2. Cloud Users’ point of view

(a) Cloud users need not take care of the hardware and software they use, nor worry about
maintenance. Users are no longer tied to a single traditional system.

(b) Virtualization technology gives users the illusion of having all the resources available.

(c) Cloud users can use resources on demand and pay only for what they use, so they can plan to
reduce usage and minimize expenditure.

(d) Scalability is one of the major advantages to cloud users. Scalability is provided dynamically:
users get as many resources as they need. This model thus fits perfectly with managing rare
spikes in demand.
3. Motivation towards Cloud in recent time
Cloud computing is not a new idea but an evolution of older paradigms of distributed computing.
The recent enthusiasm about cloud computing is due to certain technology trends and business
models:

1. High demand for interactive applications – Applications with real-time response, capable
of providing information from other users or from non-human sensors, are gaining more
and more popularity today. These are generally attracted to the cloud not only because of
its high availability but also because such services are generally data-intensive and
require analyzing data across different sources.

2. Parallel batch processing – The cloud inherently supports batch processing and can
analyze terabytes of data very efficiently. Programming models like Google's MapReduce
[18] and Yahoo!'s open-source counterpart Hadoop can be used for this, hiding the
operational complexity of parallel processing across hundreds of cloud computing
servers.

3. New trends in the business world and scientific community – In recent times, business
enterprises are interested in discovering customers' needs, buying patterns and supply
chains to support top-management decisions. This requires analysis of very large amounts
of online data, which can be done very easily with the help of the cloud. The Yahoo!
homepage is a good example: it shows the hottest news in the country, and it changes the
ads and other sections of the page according to the user's interests. Beyond business,
many scientific experiments, such as the LHC (Large Hadron Collider), need very
time-consuming data-processing jobs that can be done in the cloud.

4. Extensive desktop applications – Some desktop applications, like Matlab and
Mathematica, are becoming so compute-intensive that a single desktop machine is no
longer enough to run them. They are therefore being developed to use cloud computing
for extensive evaluations.

4. Cloud Architecture
The cloud providers maintain physical data centers in order to provide virtualized services to
their users over the Internet, often providing separation between application and data. This
scenario is shown in Figure 2. The underlying physical machines are generally organized in grids
and are usually geographically distributed. Virtualization plays an important role in the cloud
scenario: the data center hosts provide the physical hardware on which the virtual machines
reside, and users can potentially use any OS supported by those virtual machines.

Operating systems are designed for specific hardware and software, which results in a lack of
portability of the operating system and software from one machine to another machine with a
different instruction set architecture. The concept of the virtual machine solves this problem by
acting as an interface between the hardware and the operating system; such VMs are called
system VMs [21]. Another category, the process virtual machine, acts as an abstraction layer
between the operating system and applications.

Virtualization can very roughly be described as software that translates the hardware instructions
generated by conventional software into a format understandable by the physical hardware.
Virtualization also includes the mapping of virtual resources, like registers and memory, to real
hardware resources.

The underlying platform in virtualization is generally referred to as the host, and the software
that runs in the VM environment is called the guest. Figure 3 shows the very basics of
virtualization: the virtualization layer covers the physical hardware, and the operating system
accesses the physical hardware through it. Applications can issue instructions using the OS
interface as well as directly through the virtualization layer interface, a design that lets users run
applications not compatible with the operating system. Virtualization also enables the migration
of a virtual image from one physical machine to another. This feature is useful for the cloud,
since data locality makes many optimizations possible; it also helps with taking backups in
different locations, and it enables the provider to shut down some of the data center's physical
machines to reduce power consumption.

4.1 Relation between Cloud Computing and Utility Computing


Cloud users enjoy a utility computing model when interacting with cloud service providers, but
utility computing is not essentially the same as cloud computing. Utility computing is the
aggregation of computing resources, such as computation and storage, as a metered service
similar to a traditional public utility like electricity, water or the telephone network. Such a
service might be provided by a dedicated computer cluster built specifically to be rented out, or
even by an under-utilized supercomputer. The cloud is one such option for providing utility
computing to users.

4.3 Types of utility cloud services


Utility computing services provided by the cloud provider can be classified by the type of
service. These services are typically represented as XaaS, where X can be replaced by
Infrastructure, Platform, Hardware, Software, Desktop, Data, etc. Three main types of service
are most widely accepted - Software as a Service, Platform as a Service and Infrastructure as a
Service. These services provide different levels of abstraction and flexibility to the cloud users,
as shown in Figure 4.

Table 1: Comparison between Grid and Cloud Computing

Business model
  Grid: Project-oriented; users consume resource allocations such as CPU hours.
  Cloud: Uses a pay-as-you-go model.

Resource management
  Grid: Schedules dedicated resources through a queuing service; a job waits in the queue until
  all the resources specified by the LRM (Local Resource Manager) are available. Interactive
  and latency-intensive applications therefore do not execute efficiently on a grid.
  Cloud: Shares all resources simultaneously among all users at the same time, which allows
  latency-intensive and interactive applications to run natively in the cloud.

Virtualization
  Grid: No virtualization; the data centers are handled by the individual organizations on their
  own and are generally managed physically rather than through virtualization, although some
  companies, such as Nimbus, are making efforts toward virtualization for dynamic deployment
  and abstraction.
  Cloud: Virtualization is one of the essential components, providing abstraction and
  encapsulation to the users of the cloud.

Application model
  Grid: Executing tasks may be small or large, loosely coupled or tightly coupled,
  compute-intensive or data-intensive.
  Cloud: Supports only loosely coupled, transaction-oriented, mostly interactive jobs.

Security model
  Grid: Grids build on the assumption that resources are heterogeneous and dynamic, so security
  is engineered into the fundamental grid infrastructure.
  Cloud: Cloud security is still in its infancy.
We will now discuss some salient features of these models:

1. SaaS (Software as a Service) – Delivers a single application through the web
browser to thousands of customers using a multitenant architecture. On the
customer side, it means no upfront investment in servers or software licensing; on
the provider side, with just one application to maintain, cost is low compared to
conventional hosting. Under SaaS, the software publisher (seller) runs and
maintains all necessary hardware and software, and the customer accesses the
applications through the Internet. For example, Salesforce.com, with yearly
revenues of over $300M, offers on-demand Customer Relationship Management
software solutions. This application runs on Salesforce.com's own infrastructure
and is delivered directly to the users over the Internet. Salesforce does not sell
perpetual licenses but charges a monthly subscription fee starting at
$65/user/month [10]. Google Docs is also a very nice example of SaaS, where
users can create, edit, delete and share their documents, spreadsheets or
presentations, while Google has the responsibility of maintaining the software
and hardware.
E.g. Google Apps, Zoho Office

2. PaaS (Platform as a Service) – Delivers a development environment as a
service. One can build one's own applications, running on the provider's
infrastructure, that support transactions, uniform authentication, robust
scalability and availability. Applications built using PaaS are offered as SaaS
and consumed directly from the end users' web browsers. This gives the ability
to integrate or consume third-party web services from other service platforms.
E.g. Google App Engine

3. IaaS (Infrastructure as a Service) – IaaS gives cloud users flexibility at a lower
level than the other services, offering even CPU cycles with OS-level control to
the developers.
E.g. Amazon EC2 and S3.

5. Popular Cloud Applications: A Case Study


1. Amazon S3 (Simple Storage Service)
Amazon Simple Storage Service (Amazon S3) is a highly scalable,
durable, and secure cloud-based storage solution provided by Amazon Web Services (AWS).
It is designed to store and retrieve any amount of data from anywhere on the web, offering
developers and businesses a flexible, cost-effective way to manage their data. S3 operates on
a pay-as-you-go model, allowing users to pay only for the storage they use, which makes it
an attractive option for organizations of all sizes.

Key Features of Amazon S3

1. Scalability: Amazon S3 can handle virtually unlimited amounts of data. Whether you’re
storing a few gigabytes or several petabytes, S3 automatically scales storage
infrastructure to meet your needs without manual intervention.

2. Durability and Availability: S3 is designed for 99.999999999% (11 nines) durability,
ensuring that your data is safe even in the event of hardware failures. Data is
automatically replicated across multiple geographically separated locations to ensure high
availability.
3. Security: S3 offers robust security features, including data encryption both in transit and
at rest. Users can define access policies to control who can access their data. Integration
with AWS Identity and Access Management (IAM) allows fine-grained access controls.

4. Cost-Effective: With a pay-as-you-go pricing model, S3 allows users to pay only for the
storage and data transfer they use. Additionally, it offers multiple storage classes such as
S3 Standard, S3 Intelligent-Tiering, S3 Glacier, and S3 Glacier Deep Archive to optimize
costs based on data access patterns

5. Ease of Integration: S3 integrates seamlessly with a wide range of AWS services,
including Amazon EC2, AWS Lambda, and Amazon RDS. It also supports SDKs for
multiple programming languages, making it easy to integrate into applications.

6. Data Management and Analytics: S3 provides tools for data management, including
lifecycle policies that automate data transfer between storage classes. Users can also run
analytics directly on data stored in S3 using services like Amazon Athena, which allows
querying data using standard SQL.
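The pay-as-you-go model across storage classes can be sketched with a small cost estimator. The per-GB monthly prices below are hypothetical placeholders chosen for illustration, not actual AWS rates:

```python
# Toy cost estimator for tiered, pay-as-you-go object storage.
# NOTE: the per-GB monthly prices are HYPOTHETICAL, not real AWS pricing.
PRICE_PER_GB_MONTH = {
    "S3 Standard": 0.023,
    "S3 Intelligent-Tiering": 0.022,
    "S3 Glacier": 0.004,
    "S3 Glacier Deep Archive": 0.001,
}

def monthly_cost(storage_class: str, gigabytes: float) -> float:
    """Pay-as-you-go: cost scales with the storage actually used."""
    return round(PRICE_PER_GB_MONTH[storage_class] * gigabytes, 2)

def cheapest_class(gigabytes: float) -> str:
    """Pick the storage class with the lowest monthly cost for a volume."""
    return min(PRICE_PER_GB_MONTH, key=lambda c: PRICE_PER_GB_MONTH[c] * gigabytes)

print(monthly_cost("S3 Standard", 500))  # 11.5
print(cheapest_class(500))               # S3 Glacier Deep Archive
```

The archival classes come out cheapest per GB but trade retrieval latency for that cost, which is why access patterns, not volume alone, drive the choice of class.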

Introduction to Amazon API Gateway

Amazon API Gateway is a fully managed service that enables developers to create, publish,
maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to
access data, business logic, or functionality from backend services. API Gateway allows you to
create RESTful APIs, WebSocket APIs, and HTTP APIs, providing the flexibility to meet the
needs of your application.
With Amazon API Gateway, you can create APIs that connect to a variety of backend services,
including AWS Lambda functions, Amazon EC2 instances, and other web services. It also offers
features like traffic management, authorization and access control, monitoring, and API version
management.

Key Concepts
1. API Endpoints

An API Endpoint is a unique URL where your API can be accessed by client applications. API
Gateway provides several types of endpoints depending on the API type you are using:
 REST API Endpoints: These are used for RESTful APIs and are accessed using
standard HTTP methods like GET, POST, PUT, DELETE, etc.
 WebSocket API Endpoints: These are used for WebSocket APIs, which provide two-
way communication channels between the client and the server.
 HTTP API Endpoints: These are lightweight, low-latency endpoints that are used to
create HTTP APIs. HTTP APIs are simpler and more cost-effective compared to REST
APIs.
API Gateway endpoints are typically structured as follows:
https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}

Where:
 {api-id}: The unique identifier for your API.
 {region}: The AWS region where your API is hosted.
 {stage}: The deployment stage of your API (e.g., dev, test, prod).
 {resource}: The resource path that you define in your API.
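The endpoint structure above can be assembled programmatically. A minimal sketch, where the component values are made-up examples:

```python
def endpoint_url(api_id: str, region: str, stage: str, resource: str) -> str:
    """Build an API Gateway invoke URL from its four components:
    https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"""
    return (f"https://{api_id}.execute-api.{region}.amazonaws.com"
            f"/{stage}/{resource.lstrip('/')}")

print(endpoint_url("abc123", "us-east-1", "prod", "/users"))
# https://abc123.execute-api.us-east-1.amazonaws.com/prod/users
```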
2. Resources

Resources in API Gateway represent the various parts of your API, such as collections or
specific data objects. Each resource is identified by a path (e.g., /users, /orders/{orderId}), and
you can define multiple methods (GET, POST, DELETE, etc.) for each resource.
 Resource Paths: Resources are defined hierarchically, similar to a folder structure, and
can have child resources. For example, /orders/{orderId}/items could be a resource
within an /orders/{orderId} resource.
 Methods: Each resource can have one or more methods associated with it, corresponding
to HTTP methods like GET, POST, DELETE, PUT, etc. Methods define how the API
should handle requests sent to that resource.
 Integration: Each method is linked to a backend integration, which could be an AWS
Lambda function, an HTTP endpoint, or another AWS service. This integration defines
how API Gateway should process and route the request.
3. API Stages and Deployment

 Stages: Stages represent different versions or environments of your API (e.g., dev, test,
prod). Each stage has its own URL endpoint and can have different settings, such as
caching or logging.
 Deployment: Deploying an API in API Gateway means making it available at a specific
stage. Once you deploy an API, you can access it via the stage URL, and it becomes
publicly accessible (depending on your configuration).
4. Security and Authorization
API Gateway provides several options for securing your APIs, including:
 API Keys: You can require clients to include API keys in their requests to identify and
authenticate them.
 IAM Permissions: Use AWS Identity and Access Management (IAM) policies to control
access to your API.
 Cognito User Pools: Integrate with Amazon Cognito to manage user authentication and
authorization.
 Lambda Authorizers: Use custom AWS Lambda functions to control access to your
API by implementing custom authorization logic.
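As a sketch of the last option: a Lambda authorizer returns a principal plus an IAM policy document that allows or denies `execute-api:Invoke` on the requested method. The function below only builds that response shape; the token check is a stand-in for real validation (e.g., verifying a JWT):

```python
def authorize(token: str, method_arn: str) -> dict:
    """Toy Lambda authorizer: allow callers presenting the expected token.
    A real authorizer would validate a signed token or look it up instead."""
    effect = "Allow" if token == "valid-token" else "Deny"  # stand-in check
    return {
        "principalId": "user|anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }
```

API Gateway caches the returned policy for a configurable TTL, so the custom logic does not run on every request.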

Getting Started with Amazon API Gateway


Here’s a step-by-step guide to getting started with Amazon API Gateway:

Step 1: Creating a REST API


1. Log in to the AWS Management Console:
o Navigate to the Amazon API Gateway service.
2. Create a New API:
o Click on "Create API".
o Choose "REST API" and then select "Build".
o Choose an API type: "HTTP API" for lightweight APIs or "REST API" for fully-featured APIs.
o Enter an API name and description.
o Click "Create API".
Step 2: Define Resources and Methods
1. Create a Resource:
o In the API Gateway console, under your newly created API, click on "Resources".
o Click "Create Resource" to add a new resource.
o Enter a resource name and path (e.g., /users).
2. Add Methods:
o Select the resource you just created.
o Click "Create Method".
o Choose an HTTP method (e.g., GET, POST).
o Select the integration type. You can choose from options like "Lambda Function",
"HTTP", "Mock", or "AWS Service".
o If you choose "Lambda Function", select your Lambda function to integrate with
this method.
3. Set Up Method Response and Integration Response:
o Configure the method response to define how the API responds to the client.
o Configure the integration response to define how API Gateway should handle
responses from the backend service.
Step 3: Deploy the API
1. Create a Stage:
o After defining your resources and methods, you need to deploy your API to a
stage.
o Click on "Deploy API".
o Choose "Deploy API" from the Actions menu.
o Create a new stage (e.g., dev, prod) and provide a description.
2. Access Your API:
o After deployment, you will receive an endpoint URL for the stage.
o You can use this URL to test and access your API.
Step 4: Secure and Monitor Your API
1. Enable Security:
o Implement security measures like requiring API keys, setting up IAM
permissions, or integrating with Amazon Cognito.
2. Monitor API Usage:
o Use Amazon CloudWatch to monitor API usage and performance metrics like
request count, latency, and error rates.
o Set up alarms to notify you of any unusual activity or performance degradation.
Step 5: Versioning and API Lifecycle Management
1. Create New Versions:
o As you make updates to your API, you can create new versions or stages to
manage the API lifecycle.
o Deploy new versions to different stages for testing or production use.
2. Documentation and SDK Generation:
o Generate API documentation and SDKs (Software Development Kits) for client
developers to integrate with your API.
o Use the built-in documentation editor to provide details on API usage, methods,
and parameters.
Advanced Features and Use Cases
Caching:
 API Gateway provides the option to cache responses at the stage level to improve
performance and reduce backend load. You can configure caching settings such as time-
to-live (TTL) and cache key parameters.
Throttling and Rate Limiting:
 You can set up throttling limits to control the number of requests that clients can make to
your API within a certain timeframe. This helps protect your backend services from being
overwhelmed by excessive traffic.
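Throttling of this kind is commonly implemented with a token bucket: each request consumes a token, and tokens refill at a steady rate up to a burst limit. A minimal sketch of the idea (not API Gateway's internal implementation; the clock is passed in explicitly to keep it deterministic):

```python
class TokenBucket:
    """Token-bucket throttle: `rate` tokens refill per second, up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)   # start full
        self.last = 0.0              # time of the last call

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1         # spend one token for this request
            return True
        return False                 # throttled (would be an HTTP 429)

bucket = TokenBucket(rate=1.0, burst=2)
print(bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0))  # True True False
```

Two back-to-back requests fit within the burst; the third is rejected until the bucket refills.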
Cross-Origin Resource Sharing (CORS):
 Enable CORS on your API to allow cross-domain requests from browsers, which is
essential for web applications that interact with your API from different domains.
API Gateway WebSocket APIs:
 Use WebSocket APIs for real-time communication applications, such as chat apps or live
data feeds. API Gateway manages WebSocket connections and can trigger backend
services based on WebSocket events.
Integrating with Other AWS Services:
 API Gateway can be integrated with other AWS services such as AWS Lambda, Amazon
DynamoDB, Amazon S3, and more to build comprehensive serverless applications.

Conclusion
Amazon API Gateway is a powerful service that allows you to create and manage APIs with
ease. By understanding the key concepts such as API endpoints, resources, and methods, and
following the steps to get started, you can leverage API Gateway to build scalable, secure, and
efficient APIs for your applications. Whether you're building RESTful APIs, WebSocket APIs,
or lightweight HTTP APIs, API Gateway provides the tools and features needed to manage the
entire API lifecycle, from development to deployment and monitoring.

3. Introduction to Amazon SQS and SNS

Amazon SQS (Simple Queue Service) and Amazon SNS (Simple Notification Service) are fully
managed messaging services offered by AWS that are designed to facilitate communication
between distributed systems, decouple microservices, and provide reliable and scalable message
processing capabilities.
 Amazon SQS: A message queuing service that allows you to decouple and scale
microservices, distributed systems, and serverless applications. It enables you to send,
store, and receive messages between software components, ensuring that the components
are loosely coupled, scalable, and highly available.
 Amazon SNS: A notification service that allows you to send messages or notifications to
multiple subscribers, including applications, distributed systems, and end-users. It
supports various notification formats, including SMS, email, and HTTP/HTTPS
endpoints, and can integrate with other AWS services.
Both services play crucial roles in building robust, scalable, and decoupled architectures, making
them essential tools in modern cloud-based application development.
Key Concepts
Amazon SQS
1. Queues
 Standard Queue: Offers nearly unlimited throughput, best-effort ordering (messages are
generally delivered in the order they are sent), and at-least-once delivery (a message may
occasionally be delivered more than once).
 FIFO Queue: Guarantees that messages are processed exactly once and in the exact
order that they are sent. FIFO queues are designed to ensure that the order of message
processing is preserved.
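The practical difference between the two queue types can be seen in a small local simulation (this is an illustration of the guarantees, not the AWS API; real FIFO queues deduplicate within a 5-minute window and also support message group IDs, neither of which is modeled here):

```python
# Local illustration of two FIFO-queue guarantees: strict ordering and
# deduplication by message deduplication ID. Not the AWS API.
from collections import deque

class FifoQueueSim:
    def __init__(self):
        self.messages = deque()
        self.seen_dedup_ids = set()

    def send(self, body, dedup_id):
        # A message whose deduplication ID was already seen is silently dropped.
        if dedup_id in self.seen_dedup_ids:
            return False
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)
        return True

    def receive_all(self):
        # Messages come out in exactly the order they were accepted.
        out = list(self.messages)
        self.messages.clear()
        return out

q = FifoQueueSim()
q.send("order-1", dedup_id="a")
q.send("order-2", dedup_id="b")
q.send("order-1", dedup_id="a")   # duplicate: dropped
assert q.receive_all() == ["order-1", "order-2"]
```

A standard queue, by contrast, would accept the duplicate and might deliver the two messages in either order.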
2. Messages
 Message Attributes: Additional metadata that you can include with your messages.
Attributes provide structured information about the message that consumers can use for
filtering or routing.
 Message Retention: The period during which SQS retains a message if it’s not processed
and deleted by a consumer. The retention period can be configured from 1 minute to 14
days.
 Visibility Timeout: The time period during which a message is invisible to other
consumers after being received by a consumer. If the message is not deleted within this
time, it becomes visible again in the queue.
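The visibility-timeout mechanism can be sketched locally as follows (a simplified model, not the SQS API; timestamps are passed in explicitly so the behavior is easy to follow):

```python
# Local sketch of the visibility timeout: a received message is hidden from
# other consumers until the timeout elapses or the message is deleted.
import itertools
import time

class QueueSim:
    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self._store = {}              # message_id -> [body, invisible_until]
        self._ids = itertools.count()

    def send(self, body):
        self._store[next(self._ids)] = [body, 0.0]

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for mid, rec in self._store.items():
            if rec[1] <= now:         # message is currently visible
                rec[1] = now + self.visibility_timeout
                return mid, rec[0]
        return None

    def delete(self, mid):
        self._store.pop(mid, None)

q = QueueSim(visibility_timeout=30.0)
q.send("job-1")
mid, body = q.receive(now=0.0)
assert q.receive(now=10.0) is None        # hidden while being processed
assert q.receive(now=31.0) is not None    # timeout elapsed: redelivered
```

This is why a consumer must delete a message promptly after processing it: if processing outlives the visibility timeout, another consumer will receive the same message again.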
3. Dead-Letter Queues (DLQ)
A Dead-Letter Queue is a special type of queue used to handle messages that cannot be
processed successfully. When a message fails to be processed after a specified number of
attempts, it is moved to a DLQ for further investigation or reprocessing. This feature helps in
identifying problematic messages and preventing them from blocking the processing of other
messages.
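The redrive behavior described above can be sketched in a few lines (a local illustration of the concept; in SQS the maximum receive count is set in the queue's redrive policy, which is not modeled here):

```python
# Local sketch of a redrive policy: after maxReceiveCount failed attempts,
# a message is moved to the dead-letter queue instead of being retried forever.
MAX_RECEIVE_COUNT = 3

main_queue = [{"body": "bad-message", "receive_count": 0}]
dead_letter_queue = []

def process(message):
    raise ValueError("cannot parse message")   # simulated permanent failure

while main_queue:
    msg = main_queue.pop(0)
    msg["receive_count"] += 1
    try:
        process(msg)
    except ValueError:
        if msg["receive_count"] >= MAX_RECEIVE_COUNT:
            dead_letter_queue.append(msg)      # give up: park it for inspection
        else:
            main_queue.append(msg)             # redeliver and retry

assert dead_letter_queue[0]["receive_count"] == 3
```

Without the dead-letter queue, the failing message would be redelivered indefinitely, wasting consumer capacity on every retry.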
4. Long Polling
Long Polling reduces the cost and latency of retrieving messages from SQS by allowing the
consumer to wait for a message to become available instead of polling the queue continuously.
When a long poll request is made, the request waits until a message is available or the long
polling timeout period is reached.
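The waiting behavior can be illustrated locally (in real SQS the waiting happens server-side via the `WaitTimeSeconds` request parameter; the polling loop below is only a stand-in for that):

```python
# Local sketch of long polling: instead of returning immediately when the
# queue is empty, the receive call waits up to wait_time seconds for a message.
import threading
import time

queue = []
lock = threading.Lock()

def long_poll(wait_time, interval=0.05):
    deadline = time.monotonic() + wait_time
    while time.monotonic() < deadline:
        with lock:
            if queue:
                return queue.pop(0)
        time.sleep(interval)   # in real SQS the server does this waiting
    return None

def produce():
    with lock:
        queue.append("hello")

# A producer delivers a message shortly after the poll starts.
threading.Timer(0.2, produce).start()
result = long_poll(wait_time=2.0)
assert result == "hello"
```

With short polling, the same receive call would have returned an empty response immediately and the consumer would have had to retry, paying for each empty request.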
Amazon SNS
1. Topics
 Topics: Amazon SNS topics are logical access points that act as communication channels
for sending messages to multiple subscribers. A topic can support multiple subscriber
endpoints, and each message sent to the topic is delivered to all subscribers.
 Topic Attributes: Metadata about a topic, including settings such as display name,
delivery policy, and access control. You can configure these attributes to manage how
messages are delivered and who can publish to or subscribe to the topic.
2. Subscriptions
 Subscription Endpoints: These are the destinations where messages published to an
SNS topic are delivered. Endpoints can include Amazon SQS queues, AWS Lambda
functions, HTTP/HTTPS endpoints, email addresses, SMS, and mobile push
notifications (e.g., Apple Push Notification Service, Firebase Cloud Messaging).
 Filtering Policies: Filtering policies allow subscribers to receive only specific messages
that match certain attributes, reducing the volume of irrelevant messages that subscribers
need to process.
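The matching logic behind a filter policy can be sketched as follows (a simplified model covering only exact string matching; real SNS filter policies also support prefix, numeric-range, and anything-but operators, and the policy/attribute names below are illustrative):

```python
# Local sketch of SNS attribute-based filtering: a subscription's filter policy
# lists, per attribute, the values it accepts. A message matches only if every
# policy attribute is present on the message with an accepted value.
def matches(filter_policy, message_attributes):
    return all(
        message_attributes.get(attr) in allowed
        for attr, allowed in filter_policy.items()
    )

policy = {"event_type": ["order_placed", "order_cancelled"], "region": ["eu"]}

assert matches(policy, {"event_type": "order_placed", "region": "eu"})
assert not matches(policy, {"event_type": "order_shipped", "region": "eu"})
assert not matches(policy, {"event_type": "order_placed"})   # missing attribute
```

Filtering happens inside SNS itself, so non-matching messages are never delivered and the subscriber is not billed for discarding them.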
3. Message Formats
 Raw Message Delivery: This feature allows SNS to send the exact message payload to
subscribers without any additional SNS-generated metadata. It is useful when subscribers
need to receive the raw message content as-is.
 Message Structure: SNS supports structured messages that can include different formats
or content for each type of subscriber endpoint. This allows you to tailor the message
content to suit the capabilities and requirements of different subscriber types.
4. Fan-Out Pattern
The Fan-Out Pattern allows you to distribute messages to multiple endpoints simultaneously by
publishing a single message to an SNS topic that has multiple SQS queues or other endpoints
subscribed to it. This is commonly used for distributing workload to multiple processing nodes
or delivering notifications to multiple systems.
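In miniature, the fan-out pattern looks like this (a local illustration only; the topic class and queue names are invented for the example, and real SNS delivers copies over the network to each subscribed SQS queue):

```python
# Local sketch of the fan-out pattern: one publish to a topic delivers an
# independent copy of the message to every subscribed queue.
class TopicSim:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:   # each subscriber gets its own copy
            queue.append(dict(message))

orders_topic = TopicSim()
billing_queue, shipping_queue, analytics_queue = [], [], []
for q in (billing_queue, shipping_queue, analytics_queue):
    orders_topic.subscribe(q)

orders_topic.publish({"order_id": 42, "total": 99.5})
assert billing_queue == [{"order_id": 42, "total": 99.5}]
assert shipping_queue == billing_queue and analytics_queue == billing_queue
```

Because each queue holds its own copy, the billing, shipping, and analytics consumers can process the same order at their own pace and fail independently.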
Getting Started with Amazon SQS
Step 1: Create an SQS Queue
1. Log in to the AWS Management Console and navigate to the Amazon SQS service.
2. Create a New Queue:
o Choose between a Standard Queue or a FIFO Queue depending on your
application’s requirements.
o Configure queue settings, such as the queue name, visibility timeout, message
retention period, and whether to enable encryption.
Step 2: Send and Receive Messages
1. Send Messages:
o Use the AWS Management Console, AWS CLI, or SDKs to send messages to the
queue.
o When sending a message, you can optionally include message attributes and set a
delay time for the message.
2. Receive and Process Messages:
o Use a consumer application or service to poll the queue for new messages.
o After receiving a message, process it and then delete it from the queue to prevent
it from being processed again.
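Using the AWS SDK for Python (boto3), Step 2 might look like the following sketch. The queue URL, attribute names, and delay value are placeholders, and the function will only run against a real queue with valid AWS credentials, which is why nothing is executed at import time:

```python
# Hedged boto3 sketch of the send / receive / process / delete cycle.
# Requires boto3 and AWS credentials; all identifiers are placeholders.
def send_and_process(queue_url):
    import boto3   # imported lazily so the module loads without boto3 installed

    sqs = boto3.client("sqs")

    # Send a message with an optional attribute and a 5-second delivery delay.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody="hello",
        DelaySeconds=5,
        MessageAttributes={
            "event_type": {"DataType": "String", "StringValue": "greeting"}
        },
    )

    # Poll for messages (long polling via WaitTimeSeconds), process, delete.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Deleting acknowledges the message so it is not redelivered.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Note the final `delete_message` call: receiving a message only hides it for the visibility timeout, so a consumer that forgets to delete will see the same message again.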
Step 3: Implementing Dead-Letter Queues
1. Create a Dead-Letter Queue:
o Configure a secondary queue to act as a DLQ for your main queue.
o Set the maximum number of receive attempts for the main queue before a
message is moved to the DLQ.
2. Monitor and Handle Failed Messages:
o Monitor the DLQ for messages that couldn’t be processed successfully.
o Investigate and resolve the issues with these messages or reprocess them as
needed.
Getting Started with Amazon SNS
Step 1: Create an SNS Topic
1. Log in to the AWS Management Console and navigate to the Amazon SNS service.
2. Create a New Topic:
o Choose a topic type (Standard or FIFO) based on your requirements.
o Name the topic and configure topic attributes, such as display name and access
policies.
Step 2: Add Subscriptions
1. Subscribe Endpoints to the Topic:
o Create subscriptions by specifying the endpoint type (e.g., SQS, Lambda, email)
and the endpoint address (e.g., email address, queue ARN).
o Confirm subscriptions if required (e.g., email subscriptions require confirmation).
2. Configure Filtering Policies:
o Set up filtering policies for each subscription if you want to limit the messages
that are delivered to specific subscribers based on message attributes.
Step 3: Publish Messages to the Topic
1. Publish Messages:
o Use the AWS Management Console, AWS CLI, or SDKs to publish messages to
the SNS topic.
o You can include message attributes to allow for filtering and use structured
messages for different subscriber types.
2. Monitor and Analyze Delivery:
o Monitor message delivery success and failures using CloudWatch metrics and
logs.
o Analyze delivery logs to troubleshoot any issues with message delivery or to gain
insights into the performance of your messaging system.
Advanced Use Cases and Integration
1. Integrating SQS and SNS
 Fan-Out with SQS and SNS: Use SNS to fan out a message to multiple SQS queues.
This pattern is useful when you need to distribute the same message to multiple
processing nodes or different systems.
 Event-Driven Architectures: Use SNS to publish events from your application, and then
have different microservices or applications subscribe to these events through SQS,
Lambda, or HTTP endpoints.
2. Combining with AWS Lambda
 Lambda Triggers: Set up Lambda functions to trigger in response to messages arriving
in SQS or SNS. This is particularly useful for real-time processing, data transformations,
and integrating with other AWS services.
 Serverless Architectures: Build serverless architectures where SNS and SQS handle
message delivery and queuing, and Lambda functions process these messages without the
need for dedicated servers.
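A Lambda function triggered by SQS receives batches of messages in the documented `Records` event shape. The handler below is a hedged sketch (the field contents and return value are illustrative; only the `Records`/`body` structure follows the actual SQS-to-Lambda event format):

```python
# Sketch of a Lambda handler triggered by an SQS event source mapping.
# The order_id payload and return shape are illustrative assumptions.
import json

def handler(event, context):
    processed = []
    for record in event["Records"]:   # one invocation can carry a batch of records
        payload = json.loads(record["body"])
        processed.append(payload["order_id"])
    # Returning normally signals success, and Lambda deletes the batch
    # from the queue; raising an exception makes the batch visible again.
    return {"processed": processed}

# Simulate an invocation locally with a minimal SQS event.
event = {"Records": [{"body": json.dumps({"order_id": 42})}]}
result = handler(event, None)
assert result == {"processed": [42]}
```

Because a failed invocation makes the whole batch redeliverable, handlers like this should be idempotent, and pairing the queue with a dead-letter queue keeps poison messages from being retried forever.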
3. Monitoring and Logging
 CloudWatch Metrics: Use Amazon CloudWatch to monitor key metrics for your SQS
queues and SNS topics, such as message count, delivery success rate, and throttling.
 Logging: Implement logging to track message flow and diagnose issues within your
messaging system. CloudWatch Logs can be used to capture logs for further analysis.
Conclusion
Amazon SQS and SNS are fundamental services for building scalable, decoupled, and resilient
cloud applications. By mastering the key concepts and following the steps to get started, you can
leverage SQS and SNS to create powerful messaging systems that handle distributed workloads,
coordinate microservices, and improve the reliability of your applications. Whether you’re using
SQS for decoupling components or SNS for broadcasting notifications, these services are
essential for modern cloud-based architectures.