Assignment 2

The document compares the cloud services of AWS, Azure, and GCP in terms of features, components, and services, highlighting their respective offerings in compute, storage, databases, networking, security, AI/ML, and more. It also provides an overview of AWS, detailing its key features, core components, and use cases, along with explanations of specific services like EC2, Elastic Beanstalk, and Amazon S3. Additionally, it discusses various storage methods, including block, image, and file storage, with examples from each cloud provider.

Uploaded by

Vishakha Dhake


1] Compare GCP, AWS & Azure w.r.t. Features, Components & Services.

Launch Year: AWS: 2006 | Azure: 2010 | GCP: 2008
Parent Company: AWS: Amazon | Azure: Microsoft | GCP: Google
Compute Services: AWS: Amazon EC2, Lambda, Elastic Beanstalk | Azure: Virtual Machines, Azure Functions, App Services | GCP: Compute Engine, Cloud Functions, App Engine
Storage Services: AWS: S3, EBS, Glacier | Azure: Blob Storage, Disk Storage, Archive Storage | GCP: Cloud Storage, Persistent Disks, Nearline/Coldline
Database Services: AWS: RDS, DynamoDB, Redshift, Aurora | Azure: Azure SQL, Cosmos DB, Table Storage, Synapse Analytics | GCP: Cloud SQL, Bigtable, Spanner, Firestore
Networking: AWS: VPC, Route 53, CloudFront, Elastic Load Balancer | Azure: Virtual Network, Load Balancer, Traffic Manager | GCP: VPC, Cloud Load Balancing, Cloud CDN
Security & Identity: AWS: IAM, KMS, AWS Shield, Cognito | Azure: Azure Active Directory, Key Vault, Security Center | GCP: IAM, Cloud Identity, Security Command Center
AI/ML Services: AWS: SageMaker, Lex, Polly, Rekognition | Azure: Azure ML, Bot Service, Cognitive Services | GCP: Vertex AI, AutoML, Cloud Vision, Dialogflow
DevOps & CI/CD: AWS: CodePipeline, CodeBuild, CodeDeploy | Azure: Azure DevOps, Pipelines, Repos | GCP: Cloud Build, Cloud Source Repositories, Deployment Manager
Containers & Kubernetes: AWS: ECS, EKS (Kubernetes), Fargate | Azure: AKS (Azure Kubernetes Service), Container Instances | GCP: GKE (Google Kubernetes Engine), Cloud Run
Serverless: AWS: Lambda | Azure: Azure Functions | GCP: Cloud Functions
Analytics & Big Data: AWS: Redshift, Athena, EMR | Azure: HDInsight, Synapse Analytics | GCP: BigQuery, Dataflow, Dataproc
Monitoring & Logging: AWS: CloudWatch, CloudTrail | Azure: Azure Monitor, Log Analytics | GCP: Stackdriver (now part of the Cloud Operations suite)
Hybrid Cloud: AWS: Outposts, Snowball | Azure: Azure Arc, Azure Stack | GCP: Anthos
Compliance & Security: AWS: extensive compliance offerings (HIPAA, GDPR, etc.) | Azure: strong enterprise compliance (ISO, SOC, HIPAA, etc.) | GCP: strong compliance focus (ISO, SOC, GDPR, etc.)
Global Reach: AWS: widest coverage, 100+ Availability Zones | Azure: available in 60+ regions | GCP: available in 35+ regions
Free Tier: AWS: 12-month free tier + always-free products | Azure: 12-month free tier + always-free services | GCP: always-free tier + $300 credits for 90 days
Pricing: AWS: pay-as-you-go, Spot Instances, Savings Plans | Azure: pay-as-you-go, reserved instances, Cost Management tools | GCP: pay-as-you-go, sustained-use discounts, committed-use discounts
Market Position: AWS: market leader | Azure: strong enterprise presence | GCP: strong in AI/ML and data analytics
2] What is AWS?
AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by
Amazon. It includes a mixture of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and
packaged software-as-a-service (SaaS) offerings. AWS offers tools such as compute power, database storage
and content delivery services.
Key Features of AWS
 Scalability: Automatically scales resources up or down based on demand.
 Global Infrastructure: Operates in 33 regions and 105 availability zones (as of early 2025),
providing low-latency access worldwide.
 Pay-as-You-Go Pricing: Users pay only for the resources they consume, with no upfront costs.
 Security: Offers robust security features like encryption, identity management (IAM), and
compliance with global standards (e.g., GDPR, HIPAA).
 Wide Service Range: Covers compute, storage, databases, networking, AI/ML, analytics, and more.
Core Components and Services
1. Compute:
o Amazon EC2 (Elastic Compute Cloud): Virtual servers for running applications.
o AWS Lambda: Serverless computing for running code without managing servers.
o Elastic Kubernetes Service (EKS): Managed Kubernetes for containerized workloads.
2. Storage:
o Amazon S3 (Simple Storage Service): Highly durable object storage.
o Elastic Block Store (EBS): Block storage for EC2 instances.
o Glacier: Low-cost archival storage.
3. Networking:
o Amazon VPC (Virtual Private Cloud): Isolated cloud networks.
o CloudFront: Content Delivery Network (CDN) for fast content delivery.
o Route 53: Scalable DNS service.
4. Databases:
o Amazon RDS: Managed relational database service (e.g., MySQL, PostgreSQL).
o DynamoDB: Fully managed NoSQL database.
o Redshift: Data warehousing for analytics.
5. AI and Machine Learning:
o SageMaker: Platform for building, training, and deploying ML models.
o Rekognition: Image and video analysis tool.
6. Analytics:
o Amazon Kinesis: Real-time data streaming and analytics.
o AWS Glue: ETL (Extract, Transform, Load) service.
Use Cases
 Hosting websites and applications (e.g., Netflix, Airbnb).
 Big data processing and analytics.
 Machine learning and AI development.
 Backup, disaster recovery, and storage solutions.
 Enterprise IT and hybrid cloud deployments.

Networking and Subnetting in AWS


Networking in AWS is primarily handled through Amazon VPC (Virtual Private Cloud), which allows
users to create isolated virtual networks within the AWS cloud. A VPC enables users to define their own IP
address ranges, create subnets, and configure route tables, internet gateways, and security settings to control
inbound and outbound traffic.
Key components of AWS networking include:
 VPC: A logically isolated section of the AWS Cloud where resources are launched.
 Subnets: Segments within a VPC that divide the network into smaller ranges. Subnets can be:
o Public: Connected to the internet via an Internet Gateway.
o Private: Not directly accessible from the internet.
 Route Tables: Define how traffic is directed within the VPC.
 Internet Gateway (IGW): Allows internet access for resources in a public subnet.
 NAT Gateway/Instance: Allows instances in a private subnet to access the internet (for updates,
etc.) without exposing them to inbound internet traffic.
 Security Groups and Network ACLs: Act as virtual firewalls to control traffic at the instance and
subnet level, respectively.
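The default-deny behaviour of these virtual firewalls can be sketched as an allow-list check (a deliberate simplification: real security groups are stateful and also match source IP ranges; the rules below are invented examples):

```python
# Toy allow-list model of security-group inbound rules.
# Each rule is (protocol, port); real rules also carry source CIDRs.
inbound_rules = {("tcp", 22), ("tcp", 443)}

def is_allowed(protocol, port):
    """Security groups are default-deny: traffic passes only if a rule matches."""
    return (protocol, port) in inbound_rules

print(is_allowed("tcp", 443))   # True: HTTPS is explicitly allowed
print(is_allowed("tcp", 3306))  # False: no rule for MySQL traffic
```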

Subnetting in AWS
Subnetting in AWS involves dividing a VPC’s IP address range into smaller, more manageable sub-
networks. This helps in organizing resources and improving security and performance.
 When you create a VPC, you assign it a CIDR block (e.g., 10.0.0.0/16).
 Subnets are then created using smaller CIDR blocks within that range (e.g., 10.0.1.0/24, 10.0.2.0/24).
 AWS recommends placing resources in different Availability Zones by creating subnets in multiple
AZs for high availability and fault tolerance.
 Subnets can be assigned as public or private, depending on whether they route traffic through an
Internet Gateway.
Example: If you have a VPC with 10.0.0.0/16, you can create:
 10.0.1.0/24 for a public subnet
 10.0.2.0/24 for a private subnet
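The CIDR arithmetic above can be checked with Python's standard ipaddress module; this is a local sketch of the math, not an AWS API call:

```python
import ipaddress

# The VPC's CIDR block from the example: 10.0.0.0/16 (65,536 addresses)
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC range into /24 subnets (256 addresses each)
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))              # 256 possible /24 subnets
print(subnets[1], subnets[2])    # 10.0.1.0/24 10.0.2.0/24

# Both example subnets from the text fit inside the VPC range
public = ipaddress.ip_network("10.0.1.0/24")
private = ipaddress.ip_network("10.0.2.0/24")
print(public.subnet_of(vpc), private.subnet_of(vpc))  # True True
```

Note that a /24 provides 256 addresses, of which AWS reserves five in every subnet (the first four and the last), leaving 251 usable.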

3] Explain EC2 (Elastic Compute Cloud)?


EC2 stands for Elastic Compute Cloud. EC2 is an on-demand computing service on the AWS cloud platform. It provides everything a computing device can offer, with the flexibility of a virtual environment. It allows users to configure instances to their requirements, i.e. to allocate the CPU, memory, and storage needed for the current task, and to terminate an instance once its task is complete and it is no longer required. AWS bills for these scalable resources at the end of every month, and the amount depends entirely on your usage. In short, EC2 lets you rent virtual computers; provisioning servers through EC2 is one of the easiest ways to get compute on the AWS Cloud. EC2 has resizable capacity and offers security, reliability, high performance, and cost-effective infrastructure to meet demanding business needs.

Features:
1. Virtual Servers (Instances): EC2 provides a variety of instance types optimized for different use
cases, such as general-purpose computing, memory-intensive applications, compute-intensive tasks,
and storage-intensive workloads.
2. Scalability: Users can easily scale the number of instances up or down based on demand, ensuring
that they only pay for the capacity they actually use.
3. Flexibility: EC2 supports a wide range of operating systems, including various versions of Linux
and Windows, and allows users to choose the instance type, storage, and networking configuration
that best suits their needs.
4. Elastic IP Addresses: Users can allocate static IP addresses that can be associated with their
instances, making it easier to manage network configurations.
5. Security: EC2 provides robust security features, including Virtual Private Cloud (VPC) for network
isolation, security groups for controlling inbound and outbound traffic, and key pairs for secure SSH
access to instances.
6. Load Balancing: Elastic Load Balancing (ELB) automatically distributes incoming application
traffic across multiple instances to ensure high availability and fault tolerance.
7. Auto Scaling: Auto Scaling automatically adjusts the number of instances in response to changing
demand, helping to maintain performance and reduce costs.
8. Storage Options: EC2 offers various storage options, including Amazon Elastic Block Store (EBS)
for persistent block storage and instance store for temporary storage.
9. Integration with AWS Services: EC2 integrates seamlessly with other AWS services, such as
Amazon S3, Amazon RDS, Amazon DynamoDB, and more, enabling users to build comprehensive
and scalable applications.

4] What is Elastic Beanstalk?


Amazon Elastic Beanstalk is a web infrastructure management service. It handles deployment and scaling
for web applications and services.
Elastic Beanstalk can automatically manage setup, configuration, scaling and provisioning for other AWS
services.
How Elastic Beanstalk Works:
Elastic Beanstalk is a fully managed service provided by AWS that makes it easy to deploy and manage applications in the cloud without worrying about the underlying infrastructure. You first create an application, select and configure an environment, and then deploy the application.

Elastic Beanstalk Features


 Elastic Beanstalk offers preconfigured runtime environments and deployment tools, which make it easy to deploy applications.
 It supports numerous platforms and programming languages, such as Go, Python, and Java.
 Elastic Beanstalk scales your application automatically when demand increases, using auto-scaling rules.
 Elastic Beanstalk can integrate with databases such as MySQL, Oracle, and Microsoft SQL Server.
 Elastic Beanstalk provides access control via AWS Identity and Access Management (IAM) and built-in security features like SSL/TLS encryption.

5] What is Amazon S3?


Amazon S3 (Simple Storage Service) is an AWS service that stores files of different types, such as photos, audio, and videos, as objects, providing high scalability and security. It allows users to store and retrieve any amount of data at any point in time from anywhere on the web. It offers features such as extremely high availability, security, and simple integration with other AWS services.
How Does Amazon S3 Work?
Amazon S3 organizes data into uniquely named S3 buckets, which can be customized with access controls. It allows users to store objects inside S3 buckets, with supporting features such as versioning and lifecycle management of stored data at scale. The following are a few of the main features of Amazon S3:
1. Amazon S3 Buckets and Objects
Amazon S3 Bucket: Data in S3 is stored in containers called buckets. Each bucket has its own set of policies and configurations, which gives users more control over their data. Bucket names must be globally unique, and a bucket can be thought of as a parent folder for data. There is a default limit of 100 buckets per AWS account, which can be increased on request through AWS Support.
Amazon S3 Objects: The fundamental entities stored in AWS S3. You can store as many objects as you want; the maximum size of a single object is 5 TB. An object consists of the following:
 Key
 Version ID
 Value
 Metadata
 Subresources
 Access control information
 Tags
2. Amazon S3 Versioning and Access Control
S3 Versioning: Versioning means keeping a record of previously uploaded versions of files in S3. Versioning is not enabled by default; once enabled, it applies to all objects in a bucket. Versioning keeps every copy of your file, so it adds cost for storing multiple copies of your data; for example, 10 copies of a 1 GB file will be billed as 10 GB of S3 space. Versioning is helpful to prevent unintended overwrites and deletions. Objects with the same key can be stored in a bucket if versioning is enabled, since each copy has a unique version ID.
Access control lists (ACLs): A document for verifying access to S3 buckets from outside your AWS
account. An ACL is specific to each bucket. You can utilize S3 Object Ownership, an Amazon S3 bucket-
level feature, to manage who owns the objects you upload to your bucket and to enable or disable ACLs.
3. Bucket policies and Life Cycles
Bucket Policies: A document for verifying the access to S3 buckets from within your AWS account,
controls which services and users have what kind of access to your S3 bucket. Each bucket has its own
Bucket Policies.
Lifecycle Rules: A cost-saving practice that can move your files to AWS Glacier (the AWS data-archive service) or to another, cheaper S3 storage class for old data, or delete the data entirely after a specified time.
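A lifecycle rule is essentially an age-based policy on objects. The sketch below models one such rule; the day thresholds and the Standard / Standard-IA / Glacier / delete progression are illustrative choices, not AWS defaults:

```python
def storage_class_for(age_days, transition_days=30, archive_days=90, expire_days=365):
    """Illustrative lifecycle rule: Standard -> Standard-IA -> Glacier -> deleted."""
    if age_days >= expire_days:
        return "EXPIRED"       # object is deleted by the rule
    if age_days >= archive_days:
        return "GLACIER"       # archived for cheap long-term storage
    if age_days >= transition_days:
        return "STANDARD_IA"   # infrequent-access tier
    return "STANDARD"

print([storage_class_for(d) for d in (0, 45, 120, 400)])
# ['STANDARD', 'STANDARD_IA', 'GLACIER', 'EXPIRED']
```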
4. Keys and Null Objects
Keys: The key, in S3, is a unique identifier for an object in a bucket. For example, if in a bucket ‘ABC’ your GFG.java file is stored at javaPrograms/GFG.java, then ‘javaPrograms/GFG.java’ is the object key for GFG.java.
Null Object: The version ID for objects in a bucket where versioning is suspended is null; such objects may be referred to as null objects.
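The key and versioning behaviour described in this section can be modelled with a tiny in-memory sketch (purely illustrative; it calls no AWS API):

```python
import uuid

class MiniVersionedBucket:
    """Toy model of S3 versioning semantics: one key, many versions."""
    def __init__(self, versioning_enabled=False):
        self.versioning_enabled = versioning_enabled
        self.objects = {}  # key -> list of (version_id, value)

    def put(self, key, value):
        # With versioning disabled or suspended, the version ID is "null"
        vid = str(uuid.uuid4()) if self.versioning_enabled else "null"
        versions = self.objects.setdefault(key, [])
        if not self.versioning_enabled:
            versions.clear()   # plain overwrite: only the latest copy is kept
        versions.append((vid, value))
        return vid

    def get(self, key):
        # A GET without a version ID returns the latest version
        return self.objects[key][-1][1]

bucket = MiniVersionedBucket(versioning_enabled=True)
bucket.put("javaPrograms/GFG.java", "v1 contents")
bucket.put("javaPrograms/GFG.java", "v2 contents")
print(len(bucket.objects["javaPrograms/GFG.java"]))  # 2 versions kept (both billed)
print(bucket.get("javaPrograms/GFG.java"))           # v2 contents
```

Because every version is retained, 10 uploads of a 1 GB object really do consume 10 GB of billable storage, as noted above.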
6] What are the different methods used for block, image, or file storage?
Block Storage:
Block storage is a type of storage that divides data into fixed-sized blocks, each with a unique identifier. It is
typically used for structured data and is ideal for applications that require low-latency and high-performance
storage, such as databases and virtual machines.
Methods for Block Storage:
1. Amazon Elastic Block Store (EBS): Provides persistent block storage volumes for use with
Amazon EC2 instances. EBS volumes are highly available and reliable, and can be easily attached to
instances.
2. Google Persistent Disk: Offers high-performance block storage for Google Cloud Platform (GCP)
instances. It supports both HDD and SSD options.
3. Azure Managed Disks: Provides managed disk storage for Azure Virtual Machines (VMs). It
simplifies disk management and offers various performance tiers.
4. OpenStack Cinder: An open-source block storage service that integrates with OpenStack compute
(Nova) to provide block storage to VMs.
Image Storage:
Image storage refers to the storage of virtual machine images, container images, and other types of disk
images. These images are used to create and deploy virtual machines or containers.
Methods for Image Storage:
1. Amazon Machine Images (AMIs): Pre-configured templates for EC2 instances that include the
operating system, application server, and applications. AMIs are stored in Amazon S3.
2. Google Cloud Images: Pre-configured images for Google Compute Engine (GCE) instances. These
images are stored in Google Cloud Storage.
3. Azure VM Images: Pre-configured images for Azure VMs. These images are stored in Azure Blob
Storage.
4. Docker Registry: A storage and distribution system for Docker container images. Docker Hub is a
popular public registry, but private registries can also be set up.
5. OpenStack Glance: An open-source image service that stores and manages disk images for use with
OpenStack compute instances.
File Storage:
File storage is a type of storage that organizes data in a hierarchical file and folder structure. It is typically
used for unstructured data and is ideal for shared file systems, home directories, and content repositories.
Methods for File Storage:
1. Amazon Elastic File System (EFS): Provides scalable and elastic file storage for use with AWS
EC2 instances. EFS supports NFS (Network File System) and is designed to be highly available and
durable.
2. Google Cloud Filestore: Offers high-performance file storage for GCP instances. It supports NFS
and is ideal for applications that require shared file systems.
3. Azure Files: Provides fully managed file shares in the cloud that are accessible via the SMB (Server
Message Block) protocol. Azure Files can be used with Azure VMs and on-premises servers.
4. Network Attached Storage (NAS): A dedicated file storage device that provides file-level storage
to multiple clients over a network. NAS devices support protocols like NFS and SMB.
5. OpenStack Manila: An open-source file storage service that provides shared file systems to
OpenStack compute instances.

7] Explain IAM (Identity and Access Management). What are the roles and functions of IAM?
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to
AWS resources. With IAM, you can manage permissions that control which AWS resources users can
access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use
resources. IAM provides the infrastructure necessary to control authentication and authorization for your
AWS accounts.
 Roles of IAM :
1. Access Management: Controls who (users, applications, or services) can access AWS resources and
what actions they can perform.
2. Security Enforcement: Ensures secure access to AWS services by enforcing policies like least
privilege, MFA, and password policies.
3. Resource Protection: Safeguards AWS resources (e.g., EC2 instances, S3 buckets) by restricting
unauthorized access.
4. Compliance and Auditing: Helps meet regulatory requirements by providing detailed access logs
and audit trails through integration with AWS CloudTrail.
5. Federation and Identity Management: Enables integration with external identity providers (e.g.,
Microsoft Active Directory, Google, Facebook) for federated access.
6. Temporary Access Management: Provides temporary security credentials for roles, allowing short-
term access to resources.
7. Cross-Account Access: Manages access between multiple AWS accounts, enabling secure resource
sharing.
 Functions of IAM :
1. User Management: Create and manage IAM users (individuals or applications) and assign unique
credentials (passwords or access keys).
2. Group Management: Organize users into groups and assign permissions to the group, simplifying
permission management.
3. Role Management: Define roles with specific permissions that can be assumed by users,
applications, or AWS services (e.g., EC2 instances, Lambda functions).
4. Policy Management: Create and manage JSON-based policies that define permissions (allow or
deny actions on specific resources).
5. Multi-Factor Authentication (MFA): Add an extra layer of security by requiring a second form of
authentication (e.g., a code from a mobile device).
6. Access Key Management: Generate and manage access keys for programmatic access to AWS
services (e.g., using AWS CLI, SDKs, or APIs).
7. Password Policies: Enforce strong password requirements (e.g., minimum length, complexity, and
rotation).

8. Federation: Enable users to log in to AWS using external identity providers (e.g., SAML 2.0, OIDC,
or corporate directories).
9. Permissions Boundaries: Set the maximum permissions a user or role can have, ensuring they
cannot exceed specified limits.
10. Auditing and Monitoring: Track and log access to AWS resources using AWS CloudTrail for
auditing and compliance purposes.
11. Temporary Security Credentials: Issue temporary credentials for roles, which expire after a set
period, enhancing security.
12. Cross-Account Access: Allow users or services from one AWS account to access resources in
another account securely.
13. Service-Linked Roles: Create roles that are linked to specific AWS services, allowing them to
perform actions on your behalf.
14. Identity Providers: Integrate with external identity providers (e.g., Google, Facebook, or corporate
systems) for federated access.
15. Resource-Based Policies: Attach policies directly to AWS resources (e.g., S3 buckets) to control
access.
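As a concrete illustration of the JSON-based policies mentioned under Policy Management, here is a minimal read-only S3 policy built in Python. The bucket name example-bucket is a made-up placeholder; the Version string and the two s3: actions are real IAM policy elements:

```python
import json

# Hypothetical policy: allow read-only access to a single S3 bucket.
policy = {
    "Version": "2012-10-17",   # standard IAM policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself (for ListBucket)
                "arn:aws:s3:::example-bucket/*",   # objects inside it (for GetObject)
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attached to a user, group, or role, a document like this grants exactly the listed actions and implicitly denies everything else.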

8] What is Amazon RDS? Explain DynamoDB. Compare RDS and DynamoDB.
Amazon RDS is a fully managed database service by AWS that takes care of a lot of the heavy lifting when
it comes to managing databases. It works with popular engines like MySQL, PostgreSQL, MariaDB, Oracle,
and SQL Server. The great thing about RDS is that it automates time-consuming tasks like backups,
software updates, scaling, and replication. This means you don’t have to worry about the finer details of
maintaining a database and can instead focus on building your app, knowing that your data is secure,
available, and ready to scale as needed.
DynamoDB allows users to create databases capable of storing and retrieving any amount of data, and it can serve any level of traffic. It dynamically manages each customer’s requests and provides high performance by automatically distributing data and traffic across servers. It is a fully managed NoSQL database service that is fast, predictable in performance, and seamlessly scalable. It relieves users of the administrative burdens of operating and scaling a distributed database: there is no need to worry about hardware provisioning, software patching, or cluster scaling. It also eliminates the operational burden and complexity of protecting sensitive data by providing encryption at rest.

Definition: RDS: managed relational database service supporting SQL-based engines like MySQL, PostgreSQL, etc. | DynamoDB: fully managed NoSQL database for key-value and document data models.
Data Model: RDS: relational (structured tables with rows and columns) | DynamoDB: NoSQL (key-value pairs and documents, flexible schema).
Scalability: RDS: vertical scaling (instance size limits), supports read replicas for read scalability | DynamoDB: horizontal scaling, automatic scaling with on-demand and provisioned capacity modes.
Performance: RDS: depends on database engine and configuration; optimized for complex queries and ACID transactions | DynamoDB: millisecond response times, optimized for high throughput and low-latency performance.
Transactions: RDS: full ACID compliance, supports complex, multi-step transactions | DynamoDB: supports transactions but limited compared to RDS; best for simple, single-step transactions.
Pricing: RDS: based on instance type, storage, and I/O; can be costly at large scale | DynamoDB: based on throughput (read/write capacity units) or storage; cost-effective for high-scale workloads.
Use Cases: RDS: ideal for CRM systems, financial applications, and complex queries requiring relationships between data | DynamoDB: best for real-time apps like gaming leaderboards, IoT, session management, and high-traffic workloads.
Strengths: RDS: supports SQL, complex queries, and strong consistency with ACID transactions | DynamoDB: seamless scaling, low latency, flexible schema, cost-effective for high throughput.
Weaknesses: RDS: costly at scale, less flexibility for unstructured data, limited automatic scaling | DynamoDB: limited support for complex queries and transactions, eventual consistency in some cases.
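The two data models can be contrasted locally: SQL with a JOIN (the RDS style, stood in for here by Python's built-in sqlite3) versus single-key lookups on flexible-schema items (the DynamoDB style, stood in for by a plain dict). This is a sketch of the access patterns, not AWS code; the table and key names are invented:

```python
import sqlite3

# Relational style (RDS-like): fixed schema, rows, and a JOIN query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
db.execute("INSERT INTO users VALUES (1, 'Asha')")
db.execute("INSERT INTO orders VALUES (10, 1, 99.5)")
row = db.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id"
).fetchone()
print(row)  # ('Asha', 99.5)

# Key-value style (DynamoDB-like): one flexible-schema item per
# (partition key, sort key), fetched in a single lookup with no joins.
table = {
    ("USER#1", "PROFILE"): {"name": "Asha"},
    ("USER#1", "ORDER#10"): {"total": 99.5},
}
print(table[("USER#1", "ORDER#10")]["total"])  # 99.5
```

The relational side answers ad-hoc questions across tables; the key-value side trades that flexibility for constant-time lookups that scale horizontally.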

9] Explain Route 53. Explain how networking and subnetting are handled in AWS.


Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is
designed for developers and corporations to route the end users to Internet applications by translating
human-readable names like www.geeksforgeeks.org into the numeric IP addresses like 192.0.1.1 that
computers use to connect. You cannot use Amazon Route 53 to connect your on-premises network
with AWS Cloud.

The following are some of the main features that explain how Amazon Route 53 works:
 Domain Registration And Management: Amazon Route 53 allows users to register and maintain domain names through its user-friendly interface. Users can transfer an existing domain to Route 53 or register a new one. Once registered, users may freely configure DNS settings, including mail server setups (MX records), domain name aliases, and more.
 Global DNS Resolution: Route 53 uses a worldwide anycast network of DNS servers placed strategically around the globe. When a user enters a domain name in a web browser, Route 53’s DNS servers return the matching IP address. Because of this global network, Route 53 provides low-latency, high-performance DNS resolution, so users can access websites and services quickly from anywhere in the world.
 Traffic Routing And Load Balancing: Route 53’s traffic routing capabilities let users set up load balancing and failover configurations for their applications. Using features like DNS-based latency routing and weighted round-robin routing, users can distribute incoming traffic among several endpoints, such as Amazon EC2 instances, Elastic Load Balancers, or external resources.
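The weighted round-robin routing mentioned above can be sketched as weight-proportional endpoint selection (a simplified model; the endpoint names and the 2:1:1 weights are invented):

```python
import random

# Hypothetical endpoints with Route 53-style weights (2:1:1).
endpoints = {"ec2-primary": 2, "ec2-secondary": 1, "external-backup": 1}

def pick_endpoint(rng):
    """Choose an endpoint with probability proportional to its weight."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
counts = {name: 0 for name in endpoints}
for _ in range(10000):
    counts[pick_endpoint(rng)] += 1
print(counts)  # ec2-primary receives roughly half the traffic
```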

10] Explain AWS Lambda (Function as a Service)


AWS Lambda is a powerful serverless computing service that automatically runs code in response to
events, without requiring you to manage the underlying infrastructure. It supports event-driven applications
triggered by events such as HTTP requests, DynamoDB table updates, or state transitions. You simply
upload your code (as a .zip file or container image), and Lambda handles everything from provisioning to
scaling and maintenance. It automatically scales applications based on traffic, handling server management,
auto-scaling, security patching, and monitoring. AWS Lambda is ideal for developers who want to focus on
writing code without worrying about infrastructure management.

What are Lambda Functions?


AWS Lambda functions are serverless compute functions fully managed by AWS, where developers can run their code without worrying about servers. AWS Lambda functions allow you to run code without provisioning or managing servers.
Once you upload the source code to AWS Lambda as a ZIP file, Lambda automatically runs the code without you provisioning servers, and it automatically scales your functions up or down based on demand. Lambda functions are mostly used in event-driven applications, such as processing data added to Amazon S3 buckets or responding to HTTP requests.
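A minimal function of the kind you would upload as a ZIP might look like this; it follows the standard handler(event, context) signature, the event shape mimics an S3 notification, and it is plain Python, so it can be invoked locally:

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; here, summarize S3 'ObjectCreated' events."""
    keys = [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }

# Local invocation with a hand-written event (context is unused here).
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/photo.jpg"}}}]}
print(lambda_handler(fake_event, None))
```

In a real deployment, S3 would construct the event and invoke the handler for you; locally you call it directly with a fabricated event, which is also how unit tests for Lambda code are usually written.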
Use Cases of AWS Lambda Functions
You can trigger the lambda in so many ways some of which are mentioned below.
1. File Processing: AWS Lambda can be triggered by the Simple Storage Service (S3); whenever files are added to an S3 bucket, Lambda-based data processing can be triggered.
2. Web Applications: You can combine web applications with AWS Lambda so that they scale up and down automatically based on incoming traffic.
3. IoT (Internet of Things) applications: You can trigger AWS Lambda based on certain conditions while processing data from devices connected to IoT applications, and analyze the data received from those devices.
4. Stream Processing: Lambda functions can be integrated with Amazon Kinesis to process real-time streaming data for application tracking, log filtering, and so on.
AWS Lambda helps you focus more on your code than on the underlying infrastructure; infrastructure maintenance is taken care of by AWS.
Advantages of AWS Lambda Function
The following are the advantages of AWS Lambda function
1. Zero Server Management: Since AWS Lambda automatically runs the user’s code, there’s no need
for the user to manage the server. Simply write the code and upload it to Lambda.
2. Scalability: AWS Lambda runs code in response to each trigger, so the user’s application is
automatically scaled. The code also runs in parallel processes, each triggered individually, so scaling
is done precisely with the size of the workload.
3. Event-Driven Architecture: An AWS Lambda function can be triggered by events happening in other AWS services; for example, when a file or video is added to an S3 bucket, that event can trigger the Lambda function.
Disadvantages of AWS Lambda Function
The following are the disadvantages of AWS Lambda function:
1. Latency while starting: When AWS Lambda is activated after a long idle gap, it takes some time to initialize the environment required to run the code (a "cold start"), so end users may face latency.
2. Limited control of infrastructure: Because Lambda takes care of the underlying infrastructure on your behalf, you have very limited control over it.
3. Time Limit: AWS Lambda enforces a maximum execution time for functions, currently set to 900 seconds (15 minutes). If your function exceeds this limit, it is forcibly terminated.

11] List different components and services with brief use cases available in AWS.

1] Compute Services
EC2 (Elastic Compute Cloud): Run virtual machines (servers) on the cloud. Great for hosting apps, websites, backend systems.
Lambda: Run code without managing servers. Used for automation, APIs, event-driven functions.
Elastic Beanstalk: Deploy web apps quickly. It auto-manages infrastructure like load balancing and scaling.
ECS / Fargate: Run containers (Docker). ECS is managed, Fargate is serverless. Great for microservices.
Lightsail: Simple servers with preconfigured environments. Great for small apps, websites, and beginners.

2] Storage Services
Service Use Case
S3 (Simple Storage Service): Store and retrieve unlimited files (images, videos, backups). Used for static websites and data lakes.
EBS (Elastic Block Store): Block storage for EC2. Used for OS and application data.
EFS (Elastic File System): Shared file system for multiple EC2 instances. Ideal for scalable apps.
Glacier: Long-term cold storage for archival and backup. Very cheap, but slower to access.
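The S3-versus-Glacier trade-off above (price versus retrieval speed) can be illustrated with a small helper that picks a storage class from expected access frequency. The class names are real S3 storage classes, but the thresholds here are made-up assumptions for illustration, not AWS guidance.

```python
def choose_storage_class(accesses_per_month: int) -> str:
    """Pick an S3 storage class from expected access frequency.

    Thresholds are illustrative assumptions, not AWS recommendations.
    """
    if accesses_per_month >= 30:
        return "STANDARD"        # hot data: frequent reads
    if accesses_per_month >= 1:
        return "STANDARD_IA"     # infrequent access, still fast retrieval
    return "GLACIER"             # archival: cheapest, but slow to restore

print(choose_storage_class(100))  # hot data
print(choose_storage_class(2))    # occasional access
print(choose_storage_class(0))    # cold archive
```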

3] Database Services
Service Use Case
RDS (Relational Database Service): Managed SQL databases like MySQL, PostgreSQL, etc. Ideal for transactional systems.
DynamoDB: NoSQL database. Highly scalable and fast. Great for real-time apps.
Aurora: High-performance SQL database compatible with MySQL/PostgreSQL.
ElastiCache: In-memory caching (Redis/Memcached). Speeds up applications by reducing database load.
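The way ElastiCache reduces database load is usually the cache-aside pattern: read from the cache first, and only hit the database on a miss. A minimal local sketch, with plain dicts standing in for ElastiCache (Redis) and an RDS table (the keys and values are made up):

```python
# In-memory stand-ins: `cache` plays the role of ElastiCache (Redis),
# `database` the role of an RDS table. All names here are illustrative.
cache = {}
database = {"user:1": {"name": "Asha"}, "user:2": {"name": "Ravi"}}

def get_user(key):
    """Cache-aside read: check the cache first, fall back to the database."""
    if key in cache:
        return cache[key]        # cache hit: no database round trip
    value = database[key]        # cache miss: read from the (slow) database
    cache[key] = value           # populate the cache for next time
    return value

print(get_user("user:1"))        # first call misses and fills the cache
print("user:1" in cache)         # second call would now be a hit
```

In production the dict lookups become Redis `GET`/`SET` calls and a SQL query, but the control flow is the same.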

4] Networking & Content Delivery


Service Use Case
VPC (Virtual Private Cloud) Create isolated networks within AWS. Control IPs, subnets, security.
Route 53 DNS service for domain management and routing.
CloudFront CDN (Content Delivery Network). Speeds up content delivery globally.
API Gateway Create and manage secure APIs. Often used with Lambda.
Elastic Load Balancer (ELB) Distributes incoming traffic to multiple EC2 instances.

5] Security, Identity & Access


Service Use Case
IAM (Identity and Access Management) Manage users, roles, and permissions across AWS.
Cognito User sign-up, sign-in, and authentication for apps.
KMS (Key Management Service) Manage encryption keys for securing data.
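IAM permissions are expressed as JSON policy documents. Below is a minimal sketch of a read-only S3 policy built as a Python dict; the `Version` string and action names are real IAM policy-language elements, while the bucket name is a hypothetical example.

```python
import json

# Minimal IAM policy granting read-only access to one (hypothetical) bucket.
policy = {
    "Version": "2012-10-17",  # fixed IAM policy-language version string
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # the bucket itself
                "arn:aws:s3:::example-bucket/*",  # every object in it
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

A document like this would be attached to an IAM user, group, or role to grant exactly these permissions and nothing more.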

6] Monitoring & Analytics


Service Use Case
CloudWatch Monitor resources and applications (logs, metrics, alerts).
CloudTrail Track user activity and API usage for auditing.
Athena Query data in S3 using SQL (serverless).
QuickSight Create dashboards and visualizations from AWS data sources.

7] Machine Learning & AI


Service Use Case
SageMaker Build, train, and deploy ML models. End-to-end ML pipeline support.
Rekognition Image and video analysis (face detection, object recognition).
Polly Text-to-speech. Converts text into spoken audio.
Comprehend NLP (Natural Language Processing). Detect sentiment, entities, etc.
Lex Build chatbots with voice and text interfaces (used in Alexa).

12] What are Serverless and Serverful Computing? How are they implemented on AWS?


A] Serverless Computing
Concept:
 No need to manage servers (no provisioning, patching, or scaling).
 You focus on writing code, not infrastructure.
 Automatically scales up and down based on demand.
 You only pay when your code runs (no idle cost).
Key Features:
 Event-driven architecture
 Automatic scaling
 Cost-effective
 Fast deployment
How it's implemented in AWS:
AWS Service Purpose
AWS Lambda Run code in response to events (e.g., API call, S3 upload, DB change).
API Gateway Front-end for Lambda; exposes APIs over HTTP.
S3 Serverless object storage (no servers needed).
DynamoDB Serverless NoSQL database.
Step Functions Orchestrate serverless workflows.
EventBridge / SNS / SQS Trigger serverless workflows (messaging/eventing).
Fargate (with ECS/EKS) Run containers without managing servers (serverless containers).

Use Case Example: A photo sharing app where:


 User uploads a photo → stored in S3
 Lambda is triggered → resizes the photo
 Result is saved in another S3 bucket
 Metadata stored in DynamoDB
 Notifications sent via SNS
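The photo-sharing pipeline above can be simulated locally with plain functions, one per AWS step. Everything here is a stub: the "resize" just truncates bytes, and dicts/lists stand in for the output bucket, DynamoDB table, and SNS topic.

```python
def resize(photo: bytes) -> bytes:
    """Stand-in for the Lambda resize step (no real image processing)."""
    return photo[: len(photo) // 2]

def handle_upload(key, photo, store, metadata, notices):
    """Simulate the S3 upload -> Lambda -> S3/DynamoDB/SNS flow locally."""
    thumb = resize(photo)                   # Lambda resizes the photo
    store[f"thumbs/{key}"] = thumb          # result saved to the output bucket
    metadata[key] = {"size": len(thumb)}    # metadata row in DynamoDB
    notices.append(f"processed {key}")      # notification sent via SNS

store, metadata, notices = {}, {}, []
handle_upload("cat.jpg", b"0123456789", store, metadata, notices)
print(notices[0])  # -> processed cat.jpg
```

The point of the sketch is the wiring, not the logic: in AWS, each of these assignments becomes a managed-service call, and the S3 upload event (not your own code) is what invokes `handle_upload`.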

B] Serverful (Server-based) Computing


Concept:
 You manage the entire infrastructure: provisioning, patching, scaling, and monitoring.
 More control but more responsibility.
 Ideal for apps that require long-lived processes, custom environments, or legacy dependencies.
Key Features:
 Full control over OS and runtime
 Manual or auto scaling
 Persistent storage and networking control
How it's implemented in AWS:
AWS Service Purpose
EC2 (Elastic Compute Cloud) Launch and manage virtual machines.
Elastic Beanstalk PaaS for deploying applications (still serverful under the hood).
EKS / ECS (non-Fargate) Container orchestration where you manage the EC2 cluster.
RDS Managed SQL DB, but you control instance type, storage, etc.
Auto Scaling Groups Scale EC2 instances based on load.

Use Case Example: A traditional e-commerce website where:


 App is hosted on EC2
 MySQL database runs on RDS
 Load is balanced using ELB
 Scaling is done via Auto Scaling Groups
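Auto Scaling Groups with target tracking grow or shrink the fleet so that an average metric (e.g., CPU) approaches a target. A simplified sketch of that proportional rule, with cooldowns and min/max bounds omitted:

```python
import math

def desired_capacity(current_instances, current_cpu, target_cpu):
    """Target-tracking style scaling decision (simplified).

    Scales the fleet proportionally so average CPU approaches the
    target; real Auto Scaling adds cooldowns and min/max limits.
    """
    return max(1, math.ceil(current_instances * current_cpu / target_cpu))

print(desired_capacity(4, 80.0, 50.0))  # overloaded -> 7 (scale out)
print(desired_capacity(4, 20.0, 50.0))  # underused  -> 2 (scale in)
```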

13] Compare Serverless and Serverful Computing.

Aspect-by-aspect comparison of Serverless Computing vs. Serverful (Server-based) Computing:

 Server Management: Serverless is fully managed by the cloud provider (no infrastructure to maintain); serverful requires you to manage the servers (OS, patches, scaling, availability).
 Provisioning: Serverless provisions automatically, triggered by events; serverful requires manual provisioning of instances, OS, and services.
 Scaling: Serverless auto-scales based on usage; serverful requires you to configure scaling rules or scale manually.
 Cost Model: Serverless is pay-per-use (e.g., per request or function execution time); serverful is pay-per-hour (or second) for provisioned resources, regardless of usage.
 Startup Time: Serverless starts very fast (milliseconds to a few seconds, except for cold starts); serverful is slower (boot-up time for EC2 or container start time).
 Persistence: Serverless is stateless by design (store state externally, e.g., in S3 or DynamoDB); serverful can be stateful (persistent local storage, long-running processes).
 Maintenance: Serverless carries no responsibility for OS or infrastructure maintenance; serverful makes you responsible for OS updates, patching, and security.
 Flexibility: Serverless is limited, constrained to supported runtimes and environments; serverful offers full flexibility to install any software, use any OS, and configure networking.
 Deployment: Serverless allows rapid, function- or microservice-based deployments; serverful deployments are slower (full app stack setup and configuration).
 Use Cases: Serverless suits APIs, automation, chatbots, scheduled tasks, and real-time processing; serverful suits monolithic apps, legacy systems, custom environments, and long-running apps.
 AWS Examples: Serverless uses Lambda, API Gateway, S3, DynamoDB, Fargate (serverless containers), and Step Functions; serverful uses EC2, RDS, Elastic Beanstalk, ECS (on EC2), EKS (with node groups), and Auto Scaling Groups.
