Seminar Report: Cloud Computing
“CLOUD COMPUTING”
SUBMITTED BY
Priyanka Gehlot
Jodhpur
CERTIFICATE
This is to certify that the seminar report titled “Cloud Computing” is an original
work carried out by Priyanka Gehlot under the supervision of [Name of the
Supervisor]. The report has been completed and submitted as part of the
requirements for the award of Bachelor of Computer Applications from Lucky
Institute of Professional Studies.
This report was carried out during the academic year 2024-2025 and is an
authentic record of the work done by Priyanka Gehlot.
Guide Name
Designation
Faculty of Information Technology
Date:
Acknowledgment
The success and final outcome of this seminar report required a great deal of
guidance and assistance from many people, and I am extremely privileged to have
received it all along the completion of the report. All that I have done is due to
such supervision and assistance, and I will not forget to thank them.
I extend my thanks to Dr. Saurabh Khatri (HoD, IT), whose cooperation,
guidance, encouragement, inspiration, support, and attention led to the completion
of this report.
I would like to give sincere thanks to Dr. Manish Kachhawaha (Director) and
Mr. Arjun Singh Sankhala (Principal) for providing a cordial environment in
which to exhibit my abilities to the fullest.
Yours Sincerely,
Student Name
Declaration
I hereby declare that this seminar report is a record of original work done by me
under the supervision and guidance of {guide name}. I further certify that this
report has not formed the basis for the award of any Degree/Diploma or similar
title to any candidate of any university, and that no part of this report is
reproduced verbatim from any source without permission.
Student name:
Roll no:
Date:
CLOUD COMPUTING
Abstract
The term “cloud computing” is a recent buzzword in the IT world. Behind this fancy phrase
lies a true picture of the future of computing, from both a technical and a social perspective.
Though the term “cloud computing” is recent, the idea of centralizing computation and storage
in distributed data centers maintained by third-party companies is not new; it emerged back in
the 1990s along with distributed computing approaches such as grid computing. Cloud computing
aims to provide IT as a service to cloud users on an on-demand basis, with greater flexibility,
availability, reliability, and scalability, using a utility computing model. This new paradigm
of computing has immense potential for use in the fields of e-governance and rural development
in developing countries such as India.
Contents
1. Introduction
2. Cloud Computing Basics
   2.1 Types of Cloud
   2.2 Cloud Stakeholders
   2.3 Advantages of using Cloud
3. Motivation towards Cloud in recent time
4. Cloud Architecture
   4.1 Comparison between Cloud Computing and Grid Computing
   4.2 Relation between Cloud Computing and Utility Computing
   4.3 Types of utility cloud services
5. Popular Cloud Applications: A Case Study
   5.1 Amazon S3 Services
   5.2 Amazon API Gateway
   5.3 Amazon SQS and SNS
6. Conclusion
1. Introduction
Cloud computing is a new and fast-growing way of using computers over the internet, but the
idea isn't brand new. Back in 1969, a scientist named L. Kleinrock predicted that one day,
computers would work like utilities (like electricity or water), providing services to homes and
businesses. This idea is now what we call "cloud computing."
In the mid-1990s, a similar concept called grid computing was introduced, which allowed people
to access computing power whenever they needed it. Over time, this evolved into what we now
know as cloud computing. The term "cloud computing" became popular around 2006, especially
after Google's CEO Eric Schmidt started using it.
Cloud computing works by connecting many computers (called servers) that are spread out
across different locations. These computers work together to provide services like storage,
software, or platforms that users can access over the internet. A key technology that makes this
possible is virtualization, which allows these servers to act as virtual machines. This means users
can use computing resources without having to worry about the physical machines behind them.
Cloud services are offered on a pay-as-you-go basis, meaning you only pay for what you use,
similar to paying for electricity or water. These services are often grouped as "XaaS," where "X"
can be Software (SaaS), Platform (PaaS), or Infrastructure (IaaS). This way, users don’t have to
buy or maintain expensive hardware and software—they can simply rent it from cloud providers.
Cloud computing is becoming very popular because it is flexible, reliable, and can easily grow
(scale) based on demand. While businesses are the main users of cloud computing, it can also
help with social issues. For example, in countries like India, where many people depend on
agriculture, cloud computing could improve farming methods and government services.
In short, cloud computing saves money, makes things easier for users, and can be used in many
different ways. Its growth is driven by its ability to offer services that are both convenient and
cost-effective, without users needing to manage complex IT infrastructure themselves; those
aspects are handled by the cloud provider.
Cloud computing is growing nowadays in the interest of technical and business organizations, but it
can also be beneficial for solving social issues. In recent times, e-governance has been implemented in
developing countries to improve the efficiency and effectiveness of governance. This approach can be
improved considerably by using cloud computing instead of traditional ICT. India's economy is
agriculture-based, and most of its citizens live in rural areas.
The standard of living, agricultural productivity, etc., can be enhanced by utilizing cloud computing in a
proper way. Both of these applications of cloud computing have technological as well as social
challenges to overcome. In this report we try to clarify some of these questions: Why is cloud
computing a buzzword today, i.e., what benefits do providers and users get from the cloud?
Though the idea dates back to the 1990s, what circumstances have made it indispensable today? How is a
cloud built? What differentiates it from similar terms like grid computing and utility computing? What
different services are provided by cloud providers? Though cloud computing today addresses
business enterprises rather than non-profit organizations, how can this new paradigm be used in
services like e-governance and in the social development of rural India?
2. Cloud Computing Basics
2.1 Types of Cloud
1. Private Cloud – This type of cloud is maintained within an organization and used solely for
its internal purposes, so the utility model plays a smaller role in this scenario. Many companies are
moving towards this setting, and experts consider it the first step for an organization moving
into the cloud. Security and network bandwidth are not critical issues for a private cloud.
2. Public Cloud – In this type, an organization rents cloud services from cloud providers on an
on-demand basis. Services are provided to users using a utility computing model.
3. Hybrid Cloud – This type of cloud is composed of multiple internal or external clouds. This is
the scenario when an organization moves from its internal private cloud into the public cloud
computing domain.
2.3 Advantages of using Cloud
Advantages to cloud providers:
(a) Most data centers today are underutilized; they are typically only about 15% utilized. These data
centers need spare capacity just to cope with the occasional huge spikes in server usage. Large
companies owning such data centers can rent that spare computing power to other organizations,
profiting from it while also making proper use of the resources (such as power) needed to run the
data center.
(b) Companies with large data centers have already deployed the resources; to provide cloud
services they would need very little additional investment, and the cost would be incremental.
Advantages to cloud users:
(a) Cloud users need not take care of the hardware and software they use, and they do not have
to worry about maintenance. Users are no longer tied to a single traditional system.
(b) Virtualization technology gives users the illusion that all the resources they need are
available.
(c) Cloud users can use resources on an on-demand basis and pay only for what they use, so
they can plan to reduce their usage and minimize their expenditure.
(d) Scalability is one of the major advantages for cloud users. Scalability is provided dynamically:
users get as many resources as they need. This model therefore fits perfectly the management of
rare spikes in demand.
3. Motivation towards Cloud in recent time
Cloud computing is not a new idea; it is an evolution of older paradigms of distributed computing.
The recent enthusiasm about cloud computing is due to several recent technology trends and
business models:
1. High demand for interactive applications – Applications with real-time response and the
capability of providing information from other users or from non-human sensors are
gaining more and more popularity today. These are attracted to the cloud not only
because of high availability but also because these services are generally data-intensive
and require analyzing data across different sources.
2. Parallel batch processing – The cloud inherently supports batch processing and can analyze
terabytes of data very efficiently. Programming models like Google’s map-reduce [18]
and Yahoo!’s open-source counterpart Hadoop can be used for this, hiding the operational
complexity of parallel processing across hundreds of cloud computing servers.
3. New trends in the business world and the scientific community – In recent times business
enterprises have become interested in discovering customers' needs, buying patterns, and
supply chains to support top-management decisions. This requires analysis of very large
amounts of online data, which can be done very easily with the help of the cloud. The
Yahoo! homepage is a good example: it shows the hottest news in the country, and
changes the ads and other sections of the page according to users' interests. Beyond this,
many scientific experiments, such as those at the LHC (Large Hadron Collider), need
very time-consuming data-processing jobs. These can be run on the cloud.
4. Extensive desktop applications – Some desktop applications like Matlab and Mathematica
are becoming so compute-intensive that a single desktop machine is no longer enough to
run them. They are therefore being developed to be capable of using cloud computing to
perform extensive evaluations.
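The map-reduce model mentioned in item 2 can be sketched on a single machine. This is a toy illustration of the two phases only; real frameworks such as Hadoop distribute them across many servers and handle shuffling, fault tolerance, and storage.

```python
# A minimal, single-machine sketch of the map-reduce idea: a map phase
# that emits (key, value) pairs and a reduce phase that aggregates them.
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: sum the counts for each distinct word.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["cloud computing", "grid computing and cloud"]
print(reduce_phase(map_phase(docs)))
# {'cloud': 2, 'computing': 2, 'grid': 1, 'and': 1}
```

In a real deployment the framework runs many mappers and reducers in parallel and groups the intermediate pairs by key between the two phases; the program structure, however, stays this simple.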
4. Cloud Architecture
Cloud providers have physical data centers from which they provide virtualized services to
their users through the Internet. Cloud providers often provide separation between application
and data; this scenario is shown in Figure 2. The underlying physical machines are generally
organized in grids and are usually geographically distributed. Virtualization plays an
important role in the cloud scenario: the data center hosts provide the physical hardware on
which the virtual machines reside, and users can potentially use any OS supported by the
virtual machines used.
Operating systems are designed for specific hardware and software, which results in a lack of
portability of operating systems and software from one machine to another machine with a
different instruction set architecture. The concept of the virtual machine solves this problem by
acting as an interface between the hardware and the operating system; such VMs are called
system VMs [21]. Another category of virtual machine, called the process virtual machine, acts
as an abstraction layer between the operating system and applications.
Virtualization can very roughly be described as software that translates the hardware instructions
generated by conventional software into a format understandable by the physical hardware.
Virtualization also includes the mapping of virtual resources, like registers and memory, to real
hardware resources.
The underlying platform in virtualization is generally referred to as the host, and the software
that runs in the VM environment is called the guest. Figure 3 shows the very basics of
virtualization: the virtualization layer covers the physical hardware, and the operating system
accesses the physical hardware through the virtualization layer. Applications can issue
instructions using the OS interface as well as directly using the virtualization layer interface.
This design enables users to run applications not compatible with the operating system.
Virtualization also enables the migration of a virtual image from one physical machine to
another; this feature is useful for the cloud because data locality makes many optimizations
possible, and it is also helpful for taking backups in different locations. It further enables the
provider to shut down some of the data center's physical machines to reduce power consumption.
4.1 Comparison between Cloud Computing and Grid Computing
Security model: Grids are built on the assumption that resources are heterogeneous and
dynamic, so security is engineered into the fundamental grid infrastructure. Cloud security,
in contrast, is still in its infancy.
4.3 Types of utility cloud services
Utility cloud services are often described as XaaS, where X can be replaced by Infrastructure,
Platform, Hardware, Software, Desktop, Data, etc. There are three main, most widely accepted
types of service: Software as a Service, Platform as a Service, and Infrastructure as a Service.
These services provide different levels of abstraction and flexibility to cloud users, as shown
in Figure 4.
We now discuss some salient features of these models:
3. IaaS (Infrastructure as a Service) – IaaS gives cloud users greater, lower-level
flexibility than the other services. It offers developers even CPU clocks with
OS-level control. E.g., Amazon EC2 and S3.
5. Popular Cloud Applications: A Case Study
5.1 Amazon S3 Services
1. Scalability: Amazon S3 can handle virtually unlimited amounts of data. Whether you’re
storing a few gigabytes or several petabytes, S3 automatically scales its storage
infrastructure to meet your needs without manual intervention.
4. Cost-Effective: With a pay-as-you-go pricing model, S3 allows users to pay only for the
storage and data transfer they use. Additionally, it offers multiple storage classes such as
S3 Standard, S3 Intelligent-Tiering, S3 Glacier, and S3 Glacier Deep Archive to optimize
costs based on data access patterns.
6. Data Management and Analytics: S3 provides tools for data management, including
lifecycle policies that automate data transfer between storage classes. Users can also run
analytics directly on data stored in S3 using services like Amazon Athena, which allows
querying data using standard SQL.
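The pay-as-you-go model with multiple storage classes can be sketched with a small cost calculator. The per-GB-month prices below are hypothetical placeholders chosen for illustration, not real AWS rates; the point is only that cost scales with what you actually store in each class.

```python
# Sketch of S3-style pay-as-you-go billing across storage classes.
# NOTE: these per-GB-month prices are made up for illustration only.
PRICE_PER_GB_MONTH = {
    "STANDARD": 0.023,
    "INTELLIGENT_TIERING": 0.023,
    "GLACIER": 0.004,
    "DEEP_ARCHIVE": 0.001,
}

def monthly_storage_cost(usage_gb_by_class):
    # You pay only for the gigabytes stored, per class, per month.
    return sum(PRICE_PER_GB_MONTH[cls] * gb
               for cls, gb in usage_gb_by_class.items())

cost = monthly_storage_cost({"STANDARD": 100, "GLACIER": 1000})
print(f"${cost:.2f}")  # 100*0.023 + 1000*0.004 = $6.30
```

Lifecycle policies automate exactly this trade-off: moving rarely accessed data into the cheaper classes lowers the monthly total without any change in the application.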
5.2 Amazon API Gateway
Amazon API Gateway is a fully managed service that enables developers to create, publish,
maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to
access data, business logic, or functionality from backend services. API Gateway allows you to
create RESTful APIs, WebSocket APIs, and HTTP APIs, providing the flexibility to meet the
needs of your application.
With Amazon API Gateway, you can create APIs that connect to a variety of backend services,
including AWS Lambda functions, Amazon EC2 instances, and other web services. It also offers
features like traffic management, authorization and access control, monitoring, and API version
management.
Key Concepts
1. API Endpoints
An API Endpoint is a unique URL where your API can be accessed by client applications. API
Gateway provides several types of endpoints depending on the API type you are using:
REST API Endpoints: These are used for RESTful APIs and are accessed using
standard HTTP methods like GET, POST, PUT, DELETE, etc.
WebSocket API Endpoints: These are used for WebSocket APIs, which provide two-
way communication channels between the client and the server.
HTTP API Endpoints: These are lightweight, low-latency endpoints that are used to
create HTTP APIs. HTTP APIs are simpler and more cost-effective compared to REST
APIs.
API Gateway endpoints are typically structured as follows:
https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}
Where:
{api-id}: The unique identifier for your API.
{region}: The AWS region where your API is hosted.
{stage}: The deployment stage of your API (e.g., dev, test, prod).
{resource}: The resource path that you define in your API.
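A small helper makes the endpoint structure above concrete by assembling an invoke URL from its four parts. The `api-id`, region, stage, and resource values in the example are made up for illustration.

```python
# Assemble an API Gateway invoke URL from its components, following the
# endpoint structure shown above. Example values are hypothetical.
def invoke_url(api_id: str, region: str, stage: str, resource: str) -> str:
    resource = resource.lstrip("/")  # avoid a double slash after the stage
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"

url = invoke_url("abc123", "us-east-1", "prod", "/users")
print(url)  # https://abc123.execute-api.us-east-1.amazonaws.com/prod/users
```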
2. Resources
Resources in API Gateway represent the various parts of your API, such as collections or
specific data objects. Each resource is identified by a path (e.g., /users, /orders/{orderId}), and
you can define multiple methods (GET, POST, DELETE, etc.) for each resource.
Resource Paths: Resources are defined hierarchically, similar to a folder structure, and
can have child resources. For example, /orders/{orderId}/items could be a resource
within an /orders/{orderId} resource.
Methods: Each resource can have one or more methods associated with it, corresponding
to HTTP methods like GET, POST, DELETE, PUT, etc. Methods define how the API
should handle requests sent to that resource.
Integration: Each method is linked to a backend integration, which could be an AWS
Lambda function, an HTTP endpoint, or another AWS service. This integration defines
how API Gateway should process and route the request.
3. API Stages and Deployment
Stages: Stages represent different versions or environments of your API (e.g., dev, test,
prod). Each stage has its own URL endpoint and can have different settings, such as
caching or logging.
Deployment: Deploying an API in API Gateway means making it available at a specific
stage. Once you deploy an API, you can access it via the stage URL, and it becomes
publicly accessible (depending on your configuration).
4. Security and Authorization
API Gateway provides several options for securing your APIs, including:
API Keys: You can require clients to include API keys in their requests to identify and
authenticate them.
IAM Permissions: Use AWS Identity and Access Management (IAM) policies to control
access to your API.
Cognito User Pools: Integrate with Amazon Cognito to manage user authentication and
authorization.
Lambda Authorizers: Use custom AWS Lambda functions to control access to your
API by implementing custom authorization logic.
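The essence of a Lambda authorizer is a function that inspects the incoming request and returns an IAM-style policy allowing or denying the call. The sketch below shows that decision shape only; the token check is a stand-in for real validation (e.g. verifying a JWT signature), and the token value is hypothetical.

```python
# Minimal sketch of a Lambda authorizer's decision: look at the token in
# the event and return an Allow or Deny policy for the requested method.
VALID_TOKENS = {"secret-token"}  # hypothetical; real code verifies signatures

def authorizer_handler(event):
    effect = "Allow" if event.get("authorizationToken") in VALID_TOKENS else "Deny"
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }

resp = authorizer_handler({
    "authorizationToken": "secret-token",
    "methodArn": "arn:aws:execute-api:us-east-1:123:api/prod/GET/users",
})
print(resp["policyDocument"]["Statement"][0]["Effect"])  # Allow
```

API Gateway caches the returned policy for a configurable period, so the authorization logic does not have to run on every single request.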
Conclusion
Amazon API Gateway is a powerful service that allows you to create and manage APIs with
ease. By understanding the key concepts such as API endpoints, resources, and methods, and
following the steps to get started, you can leverage API Gateway to build scalable, secure, and
efficient APIs for your applications. Whether you're building RESTful APIs, WebSocket APIs,
or lightweight HTTP APIs, API Gateway provides the tools and features needed to manage the
entire API lifecycle, from development to deployment and monitoring.
5.3 Amazon SQS and SNS
Amazon SQS (Simple Queue Service) and Amazon SNS (Simple Notification Service) are fully
managed messaging services offered by AWS that are designed to facilitate communication
between distributed systems, decouple microservices, and provide reliable and scalable message
processing capabilities.
Amazon SQS: A message queuing service that allows you to decouple and scale
microservices, distributed systems, and serverless applications. It enables you to send,
store, and receive messages between software components, ensuring that the components
are loosely coupled, scalable, and highly available.
Amazon SNS: A notification service that allows you to send messages or notifications to
multiple subscribers, including applications, distributed systems, and end-users. It
supports various notification formats, including SMS, email, and HTTP/HTTPS
endpoints, and can integrate with other AWS services.
Both services play crucial roles in building robust, scalable, and decoupled architectures, making
them essential tools in modern cloud-based application development.
Key Concepts
Amazon SQS
1. Queues
Standard Queue: Offers unlimited throughput, best-effort ordering (messages are
generally delivered in the order sent), and at-least-once delivery (messages might be
delivered more than once).
FIFO Queue: Guarantees that messages are processed exactly once and in the exact
order that they are sent. FIFO queues are designed to ensure that the order of message
processing is preserved.
2. Messages
Message Attributes: Additional metadata that you can include with your messages.
Attributes provide structured information about the message that consumers can use for
filtering or routing.
Message Retention: The period during which SQS retains a message if it’s not processed
and deleted by a consumer. The retention period can be configured from 1 minute to 14
days.
Visibility Timeout: The time period during which a message is invisible to other
consumers after being received by a consumer. If the message is not deleted within this
time, it becomes visible again in the queue.
3. Dead-Letter Queues (DLQ)
A Dead-Letter Queue is a special type of queue used to handle messages that cannot be
processed successfully. When a message fails to be processed after a specified number of
attempts, it is moved to a DLQ for further investigation or reprocessing. This feature helps in
identifying problematic messages and preventing them from blocking the processing of other
messages.
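The visibility timeout and dead-letter redrive described above can be modeled with a toy in-memory queue. This is a simplification of SQS behavior, not its API: time is passed in explicitly instead of read from a clock, and the receive-count threshold plays the role of the queue's redrive policy.

```python
# Toy model of two SQS behaviors: the visibility timeout (a received
# message is hidden, not deleted) and redrive to a dead-letter queue
# after too many failed receive attempts.
class ToyQueue:
    def __init__(self, visibility_timeout=30, max_receives=3, dlq=None):
        self.messages = []  # each entry: [body, receive_count, invisible_until]
        self.visibility_timeout = visibility_timeout
        self.max_receives = max_receives
        self.dlq = dlq

    def send(self, body):
        self.messages.append([body, 0, 0])

    def receive(self, now):
        for msg in self.messages:
            if now >= msg[2]:               # currently visible?
                msg[1] += 1
                if msg[1] > self.max_receives and self.dlq is not None:
                    self.messages.remove(msg)   # too many attempts: redrive
                    self.dlq.send(msg[0])
                    return None
                msg[2] = now + self.visibility_timeout  # hide from others
                return msg[0]
        return None

    def delete(self, body):  # a consumer deletes after successful processing
        self.messages = [m for m in self.messages if m[0] != body]

dlq = ToyQueue()
q = ToyQueue(visibility_timeout=30, max_receives=2, dlq=dlq)
q.send("order-42")
assert q.receive(now=0) == "order-42"   # first receive; message now hidden
assert q.receive(now=10) is None        # still within the visibility timeout
assert q.receive(now=40) == "order-42"  # visible again: it was never deleted
q.receive(now=80)                       # third attempt exceeds max: redrive
print([m[0] for m in dlq.messages])     # ['order-42']
```

Calling `delete` after successful processing is what prevents the redelivery at `now=40`; forgetting that step is the classic cause of duplicate processing with at-least-once delivery.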
4. Long Polling
Long Polling reduces the cost and latency of retrieving messages from SQS by allowing the
consumer to wait for a message to become available instead of polling the queue continuously.
When a long poll request is made, the request waits until a message is available or the long
polling timeout period is reached.
Amazon SNS
1. Topics
Topics: Amazon SNS topics are logical access points that act as communication channels
for sending messages to multiple subscribers. A topic can support multiple subscriber
endpoints, and each message sent to the topic is delivered to all subscribers.
Topic Attributes: Metadata about a topic, including settings such as display name,
delivery policy, and access control. You can configure these attributes to manage how
messages are delivered and who can publish to or subscribe to the topic.
2. Subscriptions
Subscription Endpoints: These are the destinations where messages published to an
SNS topic are delivered. Endpoints can include Amazon SQS queues, AWS Lambda
functions, HTTP/HTTPS endpoints, email addresses, and mobile push notifications (e.g.,
SMS, Apple Push Notification Service).
Filtering Policies: Filtering policies allow subscribers to receive only specific messages
that match certain attributes, reducing the volume of irrelevant messages that subscribers
need to process.
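The core of a filtering policy is a match test: a subscription's policy maps attribute names to lists of accepted values, and a message is delivered only if every policy attribute matches. The sketch below implements just that exact-match subset; real SNS policies also support numeric ranges, prefix matching, and "anything-but" operators.

```python
# Simplified SNS-style filter matching: every attribute named in the
# policy must have a message value in the policy's accepted list.
def matches(policy, message_attributes):
    return all(
        message_attributes.get(name) in accepted
        for name, accepted in policy.items()
    )

policy = {"event_type": ["order_placed", "order_cancelled"]}
print(matches(policy, {"event_type": "order_placed", "region": "eu"}))  # True
print(matches(policy, {"event_type": "order_shipped"}))                 # False
```

Attributes the policy does not mention (like `region` above) are ignored, which is what lets one topic serve many subscribers with different interests.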
3. Message Formats
Raw Message Delivery: This feature allows SNS to send the exact message payload to
subscribers without any additional SNS-generated metadata. It is useful when subscribers
need to receive the raw message content as-is.
Message Structure: SNS supports structured messages that can include different formats
or content for each type of subscriber endpoint. This allows you to tailor the message
content to suit the capabilities and requirements of different subscriber types.
4. Fan-Out Pattern
The Fan-Out Pattern allows you to distribute messages to multiple endpoints simultaneously by
publishing a single message to an SNS topic that has multiple SQS queues or other endpoints
subscribed to it. This is commonly used for distributing workload to multiple processing nodes
or delivering notifications to multiple systems.
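The fan-out pattern can be shown with a toy topic object: one publish call delivers a copy of the message to every subscribed queue. In AWS the topic would be SNS and the queues SQS; here both are plain Python objects standing in for them.

```python
# Toy fan-out: a "topic" that copies each published message to every
# subscribed "queue" (modeled as a plain Python list).
class ToyTopic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:  # every subscriber gets a copy
            queue.append(message)

billing, shipping = [], []
topic = ToyTopic()
topic.subscribe(billing)
topic.subscribe(shipping)
topic.publish({"order_id": 42})
print(billing, shipping)  # [{'order_id': 42}] [{'order_id': 42}]
```

The publisher never knows how many consumers exist, which is the decoupling benefit: adding a new processing system is just one more subscription, with no change to the publishing code.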
Getting Started with Amazon SQS
Step 1: Create an SQS Queue
1. Log in to the AWS Management Console and navigate to the Amazon SQS service.
2. Create a New Queue:
o Choose between a Standard Queue or a FIFO Queue depending on your
application’s requirements.
o Configure queue settings, such as the queue name, visibility timeout, message
retention period, and whether to enable encryption.
Step 2: Send and Receive Messages
1. Send Messages:
o Use the AWS Management Console, AWS CLI, or SDKs to send messages to the
queue.
o When sending a message, you can optionally include message attributes and set a
delay time for the message.
2. Receive and Process Messages:
o Use a consumer application or service to poll the queue for new messages.
o After receiving a message, process it and then delete it from the queue to prevent
it from being processed again.
Step 3: Implementing Dead-Letter Queues
1. Create a Dead-Letter Queue:
o Configure a secondary queue to act as a DLQ for your main queue.
o Set the maximum number of receive attempts for the main queue before a
message is moved to the DLQ.
2. Monitor and Handle Failed Messages:
o Monitor the DLQ for messages that couldn’t be processed successfully.
o Investigate and resolve the issues with these messages or reprocess them as
needed.
Getting Started with Amazon SNS
Step 1: Create an SNS Topic
1. Log in to the AWS Management Console and navigate to the Amazon SNS service.
2. Create a New Topic:
o Choose a topic type (Standard or FIFO) based on your requirements.
o Name the topic and configure topic attributes, such as display name and access
policies.
Step 2: Add Subscriptions
1. Subscribe Endpoints to the Topic:
o Create subscriptions by specifying the endpoint type (e.g., SQS, Lambda, email)
and the endpoint address (e.g., email address, queue ARN).
o Confirm subscriptions if required (e.g., email subscriptions require confirmation).
2. Configure Filtering Policies:
o Set up filtering policies for each subscription if you want to limit the messages
that are delivered to specific subscribers based on message attributes.
Step 3: Publish Messages to the Topic
1. Publish Messages:
o Use the AWS Management Console, AWS CLI, or SDKs to publish messages to
the SNS topic.
o You can include message attributes to allow for filtering and use structured
messages for different subscriber types.
2. Monitor and Analyze Delivery:
o Monitor message delivery success and failures using CloudWatch metrics and
logs.
o Analyze delivery logs to troubleshoot any issues with message delivery or to gain
insights into the performance of your messaging system.
Advanced Use Cases and Integration
1. Integrating SQS and SNS
Fan-Out with SQS and SNS: Use SNS to fan out a message to multiple SQS queues.
This pattern is useful when you need to distribute the same message to multiple
processing nodes or different systems.
Event-Driven Architectures: Use SNS to publish events from your application, and then
have different microservices or applications subscribe to these events through SQS,
Lambda, or HTTP endpoints.
2. Combining with AWS Lambda
Lambda Triggers: Set up Lambda functions to trigger in response to messages arriving
in SQS or SNS. This is particularly useful for real-time processing, data transformations,
and integrating with other AWS services.
Serverless Architectures: Build serverless architectures where SNS and SQS handle
message delivery and queuing, and Lambda functions process these messages without the
need for dedicated servers.
3. Monitoring and Logging
CloudWatch Metrics: Use Amazon CloudWatch to monitor key metrics for your SQS
queues and SNS topics, such as message count, delivery success rate, and throttling.
Logging: Implement logging to track message flow and diagnose issues within your
messaging system. CloudWatch Logs can be used to capture logs for further analysis.
Conclusion
Amazon SQS and SNS are fundamental services for building scalable, decoupled, and resilient
cloud applications. By mastering the key concepts and following the steps to get started, you can
leverage SQS and SNS to create powerful messaging systems that handle distributed workloads,
coordinate microservices, and improve the reliability of your applications. Whether you’re using
SQS for decoupling components or SNS for broadcasting notifications, these services are
essential for modern cloud-based architectures.