
AWS

S3 --> storage
Lambda --> serverless compute
IAM --> permission and access management
CLOUDWATCH --> alerting, cloud monitoring, and a little orchestration
CLOUDTRAIL --> audit logging of account activity
EVENTBRIDGE --> for orchestration / event routing
EC2 --> virtual servers (compute)
SNS --> for notifications
SQS --> message queuing
EMR --> ETL processing (managed big-data clusters)
GLUE --> ETL processing
KINESIS --> real-time data streaming
RDS --> managed relational databases
ATHENA --> SQL queries directly on S3 data
REDSHIFT --> data warehousing
DynamoDB --> NoSQL database
● AWS S3 (Simple Storage Service)
S3 can be used to store any kind of data, be it CSV, JSON, or unstructured data (photos, videos, backups, logs, ...).
S3 stores every data file as an object; it is an object storage service. Be it a folder or a file, anything stored is an object.
Each object has a unique key, which is the unique path to that object.
Data in S3 is organized into buckets (like directories or root folders) and objects (files). Anything you put under a bucket, folders included, is an object.
Versioning --> S3 supports versioning; you can retrieve and restore every version of an object in your bucket.
Security --> bucket policies, access control lists, server-side encryption for data, and AWS Identity and Access Management (IAM).
Event configuration --> alerting, notifying, or starting an automated process whenever an object is deleted or any other action is taken on it.
Object Lock --> Object Lock helps you protect objects from being deleted or modified for a specified retention period. It is useful for compliance and data governance.
Multipart upload --> for large files, S3 supports multipart upload, allowing you to upload parts of an object concurrently and then combine them into a single object.
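As a rough sketch of what multipart upload involves, the helper below splits a file size into part byte-ranges, and a second function shows the boto3 call that performs multipart upload automatically. The 100 MB part size, file path, and function names are illustrative assumptions; S3's real limits are 5 MB to 5 GB per part, up to 10,000 parts.

```python
# Sketch: split a large object into byte-range parts for concurrent upload.
def part_ranges(total_size, part_size=100 * 1024 * 1024):
    """Return (start, end) byte ranges covering total_size."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + part_size, total_size)
        ranges.append((start, end))
        start = end
    return ranges

def upload_large_file(path, bucket, key):
    # assumes boto3 is installed and AWS credentials are configured;
    # upload_file switches to multipart automatically above the threshold
    import boto3
    from boto3.s3.transfer import TransferConfig
    cfg = TransferConfig(multipart_threshold=100 * 1024 * 1024)
    boto3.client("s3").upload_file(path, bucket, key, Config=cfg)
```

With 100 MB parts, a 10 GB object splits into 103 parts (102 full parts plus one 40 MB remainder).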

S3 Storage Classes

S3 pricing factors
Putting data into S3 is free, but transferring data out of S3 to the internet, or to other services that use the data, is charged.
Management services such as S3 analytics, S3 inventory, and S3 object tagging.
S3 versioning, CloudWatch, and other features.
Requests made on objects (GET, PUT, COPY).
Use cases
Data backup and archiving: S3 is commonly used for data backup and long-term archiving due to its durability and reliability. Organizations can store backup copies of their data to protect against data loss.
Data storage for applications: S3 serves as a data repository for applications, enabling them to store and retrieve files, assets, and other resources. This can include content for websites, mobile apps, and software.
Log and event storage: applications and systems use S3 to store logs and telemetry data. This data can later be analyzed for monitoring, troubleshooting, and compliance purposes.
Backup and disaster recovery: S3's durability and high availability make it a suitable choice for disaster recovery scenarios. Organizations can replicate data across regions for resilience.
Media storage and streaming: S3 is used to store media files, such as videos and audio, that need to be accessed and streamed by users globally.

AWS S3 is global, so why do we need to select a region?

AWS S3 is global, but for faster retrieval of data it is recommended to select the nearest data center to avoid high-latency issues.
Can S3 upload 10 TB of data at one time, given that it allows unlimited data upload?
AWS S3 allows a single object to be at most 5 TB, so a 10 TB upload must be split across multiple objects.
How many buckets can we create in AWS S3?
In S3 we can create up to 100 buckets by default, and we can increase that number by contacting AWS.

Versioning
To enable versioning: select the bucket --> Properties --> Bucket versioning.
Suppose you have uploaded a file index.txt and after a few minutes you upload the same file again with some internal changes.
Without versioning, the new upload replaces the file; the old copy is not kept.

If you enable versioning,
the old file is also kept in S3 and you can access both versions of the file.

In case you enabled versioning, added two versions of the same file, and after a few days suspended versioning, the already present versions are not affected, but versioning will no longer apply to newly uploaded files/objects.
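To see versioning from code, the sketch below picks the current version out of the metadata S3 keeps per object, and shows the boto3 call that lists versions. The sample records mimic a trimmed-down `list_object_versions` response; the bucket and version IDs are made up.

```python
# Sketch: working with S3 object-version metadata.
def latest_version(versions):
    """Return the VersionId marked IsLatest, or None if the list is empty."""
    for v in versions:
        if v.get("IsLatest"):
            return v["VersionId"]
    return None

def list_versions(bucket, key):
    # assumes boto3 is installed and AWS credentials are configured
    import boto3
    resp = boto3.client("s3").list_object_versions(Bucket=bucket, Prefix=key)
    return resp.get("Versions", [])
```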

S3 commands: AWS CLI (Command Line Interface)

The AWS CLI is a powerful tool that allows users to interact with AWS services, including S3, directly from the command line.

boto3 is the AWS SDK for Python: import this package and you can interact with AWS programmatically (other languages, such as Java, have their own SDKs).
Configuration
aws configure --> set up the CLI with your AWS credentials, default region, and desired output format.

Bucket operations
aws s3 ls --> list all buckets
aws s3 mb s3://my-bucket-name --> create a new bucket
aws s3 rb s3://my-bucket-name --> delete a bucket

File and folder operations

aws s3 ls s3://my-bucket-name --> list the contents of a bucket
aws s3 cp localfile.txt s3://my-bucket-name --> copy a file to a bucket
aws s3 cp s3://my-bucket-name/file.txt localfile.txt --> copy a file from a bucket to the local system
aws s3 mv local_file.txt s3://my-bucket-name/ --> move a local file to a bucket (removes the local copy after copying)
aws s3 rm s3://my-bucket-name/file.txt --> delete a file from a bucket
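The same operations can be done programmatically with boto3. Below is a sketch of the `aws s3 cp` equivalent, plus a small helper that splits an `s3://` URI into bucket and key; the bucket and file names are placeholders.

```python
# Sketch: boto3 equivalent of `aws s3 cp localfile.txt s3://bucket/key`.
def parse_s3_uri(uri):
    """Split 's3://bucket/key/parts' into (bucket, key)."""
    assert uri.startswith("s3://")
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

def copy_to_bucket(local_path, uri):
    # assumes boto3 is installed and AWS credentials are configured
    import boto3
    bucket, key = parse_s3_uri(uri)
    boto3.client("s3").upload_file(local_path, bucket, key)
```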

To grant all the permissions needed to work on specific services, or to create communication between two services, attach the right IAM policies.
Under Security Credentials you need to create an access key ID and secret access key to access AWS services through the CLI.
aws --version --> check whether the AWS CLI is installed correctly.
aws configure --> when this command prompts you, enter the access key ID and secret access key.
You will get an access-denied error while accessing S3 until you grant full S3 access (e.g. the AmazonS3FullAccess policy) under Add permissions.

Amazon Resource Name

To identify instances uniquely, look for the ARN under Properties.
To create a bucket:
Go to > Create bucket > enter the bucket name and other details, agree to the terms, and create the bucket.

To create a folder (object) under a bucket:

Select the bucket and double-click > Create folder > enter the name and other details and create the folder.
Event notification --> gives a notification in case of any data change or action in S3.
AWS S3 URL --> the path of the file or folder.

Key for accessing files under S3

● AWS LAMBDA
AWS Lambda is a serverless computing service that lets you run code without provisioning or managing servers. It automatically scales your application by running code in response to events, such as changes to data in Amazon S3 buckets or updates in an Amazon DynamoDB table.
Lambda will not start unless it is triggered.
Lambda can scale up and scale down, similar to microservices.
AWS Lambda can execute services or instances concurrently.
Serverless computing means the user does not take care of the server; it is totally managed by the cloud service provider. All server-related work is done in the backend by the cloud provider.
Features:
Event-driven --> AWS Lambda is designed to use events, like changes to data in an S3 bucket or an update to a DynamoDB table, to trigger execution of code.
Scaling --> Lambda functions scale automatically by running code in response to each trigger. Your trigger can be an uploaded image, a new log file, or a new row in a database.
Front end: the signup page
It has two options, create account and delete account, and you may want to store the information in the form of JSON.
If the user selects create account, the backend web service gets triggered with create account and inserts data into the database.
If the user selects delete account, the backend gets triggered with delete account and deletes the data from the database.

If you use Lambda instead of backend web services, it will trigger a function of Python or Java code and insert the data into the DB.
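The signup backend above could be sketched as a single Lambda handler. The event shape (an "action" and "user" field) and the in-memory dict standing in for the database are made up for illustration; a real handler would write to RDS or DynamoDB.

```python
# Sketch: one Lambda function routing create/delete account requests.
DB = {}  # stands in for a real database

def lambda_handler(event, context):
    action = event.get("action")
    user = event.get("user")
    if action == "create_account":
        DB[user["id"]] = user          # insert into the "database"
        return {"statusCode": 200, "body": "created"}
    if action == "delete_account":
        DB.pop(user["id"], None)       # delete from the "database"
        return {"statusCode": 200, "body": "deleted"}
    return {"statusCode": 400, "body": "unknown action"}
```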

If two users access the same function, AWS can create multiple instances, and Lambda handles them with concurrency.
Languages --> AWS Lambda supports multiple programming languages, including Node.js, Python, Ruby, Java, Go, .NET, and custom runtimes that you can provide.
Stateless --> by default AWS Lambda is stateless, meaning each function execution is independent. If you need to maintain state, use an external service like Amazon RDS or DynamoDB.
Short-lived --> Lambda functions are designed to be short-lived. Initially there was a 5-minute maximum execution time, which has since been extended to 15 minutes.
If your code does not complete execution within that time, it throws an error saying the Lambda has expired and the code has been terminated.
Resource specification --> you can specify the amount of memory for your Lambda function. AWS Lambda allocates CPU power linearly proportional to the amount of memory configured.
Built-in fault tolerance: AWS Lambda maintains compute capacity and infrastructure reliability, including monitoring and logging via Amazon CloudWatch and automatic retries.
Deployment: code can be deployed as a Lambda function via a ZIP or JAR file. AWS also provides a blueprints feature to start off with sample code for common use cases.
Integrated with AWS services: Lambda is integrated with many AWS services, making it a flexible tool.
For instance, you can trigger a Lambda function from changes in S3, updates in DynamoDB, or endpoint requests in API Gateway.
Layers: Lambda layers are a distribution mechanism for libraries, custom runtimes, and other function dependencies. Layers promote code sharing and separation of responsibilities.
Billing: with AWS Lambda, you are billed for the compute time your code is running. You are not charged when your code is not running.
Event source mapping: if a Lambda function is triggered by an event source, AWS Lambda takes care of the reading, retries, and deletion of the event, ensuring that each event is processed in order.
Concurrent executions: AWS Lambda scales functions in parallel. While it manages and scales these automatically, there is a default safety throttle on the number of concurrent executions across all functions in a given region.
If an application is processing n requests and the limit is crossed, you can enable throttling, meaning it stops accepting requests after a specific request count.
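To make the throttling idea concrete, the sketch below shows the limit check conceptually, plus the real `put_function_concurrency` API that caps one function's concurrency; the function name is a placeholder.

```python
# Sketch: concurrency throttling for a Lambda function.
def should_throttle(in_flight, limit):
    """True when a new request would exceed the concurrency limit."""
    return in_flight >= limit

def reserve_concurrency(function_name, max_concurrent):
    # assumes boto3 is installed and AWS credentials are configured;
    # requests beyond max_concurrent are throttled by Lambda
    import boto3
    boto3.client("lambda").put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=max_concurrent,
    )
```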

AWS Lambda flow

As soon as you upload an image to S3 at its original size, it triggers Lambda code which converts the image to a smaller, resized version and uploads it to the required target.
You can select any of the options offered: write the code from scratch, or use sample code and make your appropriate changes.
When you select "Use a blueprint" you get a list of code samples.
You can select the runtime for running the code, and for the execution role you can assign the default AWS role, use an existing IAM role you have created, or create a new role from AWS policy templates.
The role is highly important, as it allows integration with, or access to, the other AWS services needed to run the Lambda.
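Behind every execution role sits a trust policy that lets the Lambda service assume the role; the default role sets this up for you. The document below is the standard trust policy for Lambda, shown as a Python dict.

```python
# Standard IAM trust policy allowing the Lambda service to assume a role.
LAMBDA_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
```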
After creating the function you get the diagram flow of the Lambda.
Add trigger --> upon getting the trigger, the Lambda starts the respective assigned function.
Add destination --> in case you want the Lambda output to be passed to another service, or to trigger another Lambda.
You can observe the function ARN, which is the access point for any created service.

You can check the Monitor tab to see how the Lambda is working in its logs; you can view the logs using another AWS service, CloudWatch (a logging, monitoring, and alerting tool).
Under Configuration you can configure the timeout option, which is a maximum of 15 minutes (your code should complete its run by then).

You can also manage the memory-related settings under that option.
Under Permissions you get the AWS role assigned by default to your Lambda, and when you click on that permission it takes you to IAM.

You can write your code here.

There is no save option; when you have edited the code, click Deploy.
There is no run option either; the function is started by an event trigger.
Click Test and you get the option Configure test event.
The event is JSON in key-value pairs; every detail stored in S3 is passed to Lambda as JSON key-value pairs.
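The sketch below pulls the bucket and object key out of an S3 event record. SAMPLE_EVENT is a trimmed-down version of the JSON S3 actually sends to Lambda; the bucket and key names are made up.

```python
# Sketch: extracting bucket and key from the S3 event JSON.
import urllib.parse

SAMPLE_EVENT = {
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-bucket-name"},
            "object": {"key": "uploads/index.txt"},
        },
    }]
}

def bucket_and_key(event):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # object keys arrive URL-encoded (spaces become '+')
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key
```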

To check the logs of a Lambda run --> View CloudWatch logs > it redirects to a new page.
Under Log streams

you get the log data; when opened, you get detailed operation data with each timestamp.
Example log
Event notification:
In the S3 bucket, under bucket Properties > Event notifications,

you can create an event notification.

You get the options below while creating the event notification.
In case you want to be notified by Lambda of each and every object/file upload or deletion, there is no need to mention a suffix or prefix.

In case you want to restrict your notifications, put the specific file formats under suffix and the folder name under prefix.
Destination --> you can select a Lambda function, SNS topic, or SQS queue as the trigger target.

Then you specify the Lambda function name or Lambda ARN.
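The console builds a notification configuration like the one sketched below; the ARN and prefix/suffix values are placeholders, and the dict shape matches what `s3.put_bucket_notification_configuration(...)` expects.

```python
# Sketch: S3 -> Lambda event notification with optional prefix/suffix filters.
def lambda_notification(lambda_arn, prefix="", suffix=""):
    rules = []
    if prefix:
        rules.append({"Name": "prefix", "Value": prefix})
    if suffix:
        rules.append({"Name": "suffix", "Value": suffix})
    config = {
        "LambdaFunctionArn": lambda_arn,
        "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
    }
    if rules:  # omit Filter to be notified about every object
        config["Filter"] = {"Key": {"FilterRules": rules}}
    return {"LambdaFunctionConfigurations": [config]}
```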


Under Monitor > CloudWatch logs

In this log you can see the data-related operations, which object the operation was performed on, the AWS region, and which AWS service was used.
LAYERS
A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files.
A layer can be created manually by uploading a zip file into which you have packed all the code dependencies.
You can add up to 5 layers to one Lambda function.
To create a custom layer, go to Lambda >> Layers >> Create layer.
You need to upload the zip file and select a runtime for it.
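The same layer creation can be done from code. The sketch below uses the real `publish_layer_version` API; the layer name, zip path, and runtime are placeholders (for a Python layer, dependencies go under a `python/` directory inside the zip).

```python
# Sketch: publishing a custom Lambda layer from a local zip file.
def publish_layer(name, zip_path, runtime="python3.12"):
    # assumes boto3 is installed and AWS credentials are configured
    import boto3
    with open(zip_path, "rb") as f:
        resp = boto3.client("lambda").publish_layer_version(
            LayerName=name,
            Content={"ZipFile": f.read()},
            CompatibleRuntimes=[runtime],
        )
    return resp["LayerVersionArn"]
```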
● SNS
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service provided by AWS. It is designed for distributing notifications to a wide range of recipients. With SNS, you can send messages to individual recipients or to a large number of recipients.
Key features and properties of SNS:
● Pub/sub messaging: SNS follows the publish/subscribe messaging paradigm, allowing users to create topics and then have subscriptions that receive messages or notifications on those topics.
● Multiple protocols: SNS supports multiple protocols, meaning you can deliver messages to
■ HTTP/HTTPS endpoints
■ Email/Email-JSON
■ Short Message Service (SMS)
■ Application (for sending messages to other AWS services and to applications)
■ AWS Lambda
■ Simple Queue Service (SQS)
■ Application endpoints
● Flexibility: you can send a message to an SNS topic, and that single message can be delivered to many recipients across the various supported protocols.
● Durability: SNS messages are stored redundantly across multiple servers and data centers, providing high availability and durability.
● Content filtering: with SNS, you can filter the messages delivered to each subscription, ensuring subscribers only receive the messages of interest to them.
● Access control: integration with AWS IAM allows granular access control to SNS topics. You can specify who can publish or subscribe to a topic.
If you want to use SNS only with particular services, like Lambda and S3, you can configure it accordingly.
● Large message size: for messages that exceed the normal size limit (256 KB), SNS can store the large message in an Amazon S3 bucket and send a pointer to the message in the notification.
● Monitoring: integration with Amazon CloudWatch allows users to monitor metrics related to the SNS service.
● Encryption: supports encryption in transit (using HTTPS endpoints) and at rest using AWS Key Management Service, while messages travel from publisher to subscriber.
● Cost: users pay for what they use. This includes the number of requests, the number of messages delivered, and data transfer. There is no upfront commitment required.
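Publishing and content filtering can be sketched together. The topic ARN and attribute names below are placeholders, and `matches_filter_policy` is a deliberately simplified version of SNS's exact-match filtering (real filter policies also support prefixes, numeric ranges, and anything-but, which this skips).

```python
# Sketch: publish with message attributes + simplified subscription filtering.
def matches_filter_policy(policy, attributes):
    """True if every policy key has an attribute value in its allowed list."""
    return all(attributes.get(k) in allowed for k, allowed in policy.items())

def publish(topic_arn, message, attributes):
    # assumes boto3 is installed and AWS credentials are configured
    import boto3
    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Message=message,
        MessageAttributes={
            k: {"DataType": "String", "StringValue": v}
            for k, v in attributes.items()
        },
    )
```

A subscription whose policy is `{"event_type": ["order_placed"]}` would receive only messages published with that attribute value.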
