AWS Lab Manual
(BCA-CTIS)
Practical -1 Amazon Simple Storage Service (Amazon S3)
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Customers of all
sizes and industries can use Amazon S3 to store and protect any amount of data for a range
of use cases, such as data lakes, websites, mobile applications, backup and restore, archive,
enterprise applications, IoT devices, and big data analytics. Amazon S3 provides
management features so that you can optimize, organize, and configure access to your data
to meet your specific business, organizational, and compliance requirements.
Features of Amazon S3
Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store
mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently
accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier
Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive.
You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes
storage costs by automatically moving your data between four access tiers when your access patterns
change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent
access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data.
For more information, see Using Amazon S3 storage classes. For more information about S3 Glacier
Flexible Retrieval, see the Amazon S3 Glacier Developer Guide.
Storage management
Amazon S3 has storage management features that you can use to manage costs, meet regulatory
requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements.
S3 Lifecycle – Configure a lifecycle policy to manage your objects and store them cost effectively throughout their lifecycle.
You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes.
S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely.
You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to
simply add another layer of protection against object changes and deletions.
S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the
same or different AWS Regions for reduced latency, compliance, security, and other use cases.
S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3
console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function,
and Restore on millions or billions of objects.
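As an illustration of the S3 Lifecycle feature described above, the following minimal sketch uses the AWS SDK for Python (boto3) to apply a lifecycle rule to a bucket. The bucket name, prefix, and day counts are placeholders, not recommendations.

import boto3

s3 = boto3.client("s3")

# Example rule: move objects under logs/ to S3 Standard-IA after 30 days
# and expire them after 365 days (placeholder values).
s3.put_bucket_lifecycle_configuration(
    Bucket="DOC-EXAMPLE-BUCKET",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)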
Access management
Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3
buckets and the objects in them are private. You have access only to the S3 resources that you create. To
grant granular resource permissions that support your specific use case or to audit the permissions of your
Amazon S3 resources, you can use the following features.
S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned
on at the account and bucket level.
AWS Identity and Access Management (IAM) – Create IAM users for your AWS account to manage access to your Amazon
S3 resources. For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has to
an S3 bucket that your AWS account owns.
Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the
objects in them.
Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. As a
general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM policies
for access control instead of ACLs. ACLs are an access control mechanism that predates resource-based policies and IAM.
For more information about when you'd use ACLs instead of resource-based policies or IAM policies, see Access policy
guidelines.
S3 Object Ownership – Disable ACLs and take ownership of every object in your bucket, simplifying access management
for data stored in Amazon S3. You, as the bucket owner, automatically own and have full control over every object in your
bucket, and access control for your data is based on policies.
Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the
intended access to your S3 resources.
To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region.
Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name),
which is the unique identifier for the object within the bucket.
S3 provides features that you can configure to support your specific use case. For example, you can use S3
Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects
that are accidentally deleted or overwritten.
Buckets and the objects in them are private and can be accessed only if you explicitly grant access
permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access
control lists (ACLs), and S3 Access Points to manage access.
Topics
• Buckets
• Objects
• Keys
• S3 Versioning
• Version ID
• Bucket policy
• S3 Access Points
• Regions
Buckets
A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket
and can have up to 100 buckets in your account. To request an increase, visit the Service Quotas Console.
Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in
the DOC-EXAMPLE-BUCKET bucket in the US West (Oregon) Region, then it is addressable using the
URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg. For more information,
see Accessing a Bucket.
When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will
reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names
must follow the bucket naming rules. You can also configure a bucket to use S3 Versioning or other storage
management features.
Buckets also:
⚫ Identify the account responsible for storage and data transfer charges.
⚫ Provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access Points, that you can
use to manage access to your Amazon S3 resources.
An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is
enabled on the bucket). For more information about objects, see Amazon S3 objects overview.
Keys
An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket
has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is
enabled for the bucket) uniquely identify each object. So you can think of Amazon S3 as a basic data map
between "bucket + key + version" and the object itself.
Every object in Amazon S3 can be uniquely addressed through the combination of the web service
endpoint, bucket name, key, and optionally, a version. For example, in the URL https://DOC-EXAMPLE-
BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg, DOC-EXAMPLE-BUCKET is the name of the bucket
and /photos/puppy.jpg is the key.
For more information about object keys, see Creating object key names.
S3 Versioning
You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning,
you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily
recover from both unintended user actions and application failures.
Version ID
When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object
added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have
a version ID of null. If you modify these (or any other) objects with other operations, such
as CopyObject and PutObject, the new objects get a unique version ID.
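The versioning behavior described above can be sketched with boto3 as follows; the bucket and key names are placeholders and this is only an illustrative sketch, not a required procedure.

import boto3

s3 = boto3.client("s3")

# Turn on S3 Versioning for an existing bucket (placeholder name).
s3.put_bucket_versioning(
    Bucket="DOC-EXAMPLE-BUCKET",
    VersioningConfiguration={"Status": "Enabled"},
)

# After versioning is enabled, each PutObject returns a new version ID for the same key.
resp = s3.put_object(Bucket="DOC-EXAMPLE-BUCKET", Key="photos/puppy.jpg", Body=b"new bytes")
print(resp["VersionId"])

# List every version that Amazon S3 keeps for keys under the photos/ prefix.
for v in s3.list_object_versions(Bucket="DOC-EXAMPLE-BUCKET", Prefix="photos/").get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])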
Bucket policy
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to
grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy
with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are
owned by the bucket owner. Bucket policies are limited to 20 KB in size.
Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket
policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests
based on the elements in the policy, including the requester, S3 actions, resources, and aspects or
conditions of the request (for example, the IP address used to make the request). For example, you can
create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while
ensuring that the bucket owner has full control of the uploaded objects. For more information, see Bucket
policy examples.
In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other
values to grant permissions to a subset of objects. For example, you can control access to groups of objects
that begin with a common prefix or end with a given extension, such as .html.
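To make the policy concepts above concrete, here is a hedged boto3 sketch that attaches a bucket policy granting another (placeholder) account read access only to objects whose keys end in .html, using a wildcard in the resource ARN. The account ID and bucket name are examples only.

import json
import boto3

s3 = boto3.client("s3")

# Example statement: allow a placeholder account to read only .html objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowHtmlRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.html",
        }
    ],
}

s3.put_bucket_policy(Bucket="DOC-EXAMPLE-BUCKET", Policy=json.dumps(policy))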
By default, when another AWS account uploads an object to your S3 bucket, that account (the object
writer) owns the object, has access to it, and can grant other users access to it through ACLs. You can use
Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner,
automatically own every object in your bucket. As a result, access control for your data is based on policies,
such as IAM policies, S3 bucket policies, virtual private cloud (VPC) endpoint policies, and AWS
Organizations service control policies (SCPs).
A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that
you disable ACLs except in unusual circumstances where you need to control access for each object
individually. With Object Ownership, you can disable ACLs and rely on policies for access control. When
you disable ACLs, you can easily maintain a bucket with objects uploaded by different AWS accounts. You,
as the bucket owner, own all the objects in the bucket and can manage access to them using policies. For
more information, see Controlling ownership of objects and disabling ACLs for your bucket.
S3 Access Points
Amazon S3 Access Points are named network endpoints with dedicated access policies that describe how
data can be accessed using that endpoint. Access Points simplify managing data access at scale for shared
datasets in Amazon S3. Access Points are named network endpoints attached to buckets that you can use
to perform S3 object operations, such as GetObject and PutObject.
Each access point has its own IAM policy. You can configure Block Public Access settings for each access
point. To restrict Amazon S3 data access to a private network, you can also configure any access point to
accept requests only from a virtual private cloud (VPC).
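A short sketch of the access point workflow described above, using boto3. The account ID, access point name, VPC ID, and Region in the ARN are placeholders; newer SDK versions accept an access point ARN wherever a bucket name is expected.

import boto3

s3control = boto3.client("s3control")
s3 = boto3.client("s3")

# Create an access point attached to a bucket and restricted to a VPC (placeholder values).
s3control.create_access_point(
    AccountId="111122223333",
    Name="example-vpc-ap",
    Bucket="DOC-EXAMPLE-BUCKET",
    VpcConfiguration={"VpcId": "vpc-0abcd1234example"},
)

# Object operations can then target the access point ARN instead of the bucket name.
ap_arn = "arn:aws:s3:us-west-2:111122223333:accesspoint/example-vpc-ap"
obj = s3.get_object(Bucket=ap_arn, Key="photos/puppy.jpg")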
Regions
You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You
might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects
stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another
Region. For example, objects stored in the Europe (Ireland) Region never leave it.
Creating a bucket
To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the
AWS Regions. When you create a bucket, you must choose a bucket name and Region. You
can optionally choose other storage management options for the bucket. After you create a
bucket, you cannot change the bucket name or Region. For information about naming
buckets, see Bucket naming rules.
The AWS account that creates the bucket owns it. You can upload any number of objects to
the bucket. By default, you can create up to 100 buckets in each of your AWS accounts. If
you need more buckets, you can increase your account bucket limit to a maximum of 1,000
buckets by submitting a service limit increase. To learn how to submit a bucket limit
increase, see AWS service quotas in the AWS General Reference. You can store any number
of objects in a bucket.
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable
access control lists (ACLs) and take ownership of every object in your bucket, simplifying
access management for data stored in Amazon S3. By default, when another AWS account
uploads an object to your S3 bucket, that account (the object writer) owns the object, has
access to it, and can grant other users access to it through ACLs. When you create a bucket,
you can apply the bucket owner enforced setting for Object Ownership to change this
default behavior so that ACLs are disabled and you, as the bucket owner, automatically own
every object in your bucket. As a result, access control for your data is based on policies.
For more information, see Controlling ownership of objects and disabling ACLs for your
bucket.
You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a
bucket. For more information about the permissions required to create a bucket,
see CreateBucket in the Amazon Simple Storage Service API Reference.
Using the S3 console
1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
The Create bucket wizard opens.
3. In Bucket name, enter a DNS-compliant name for your bucket.
The bucket name must:
◆ Be unique across all of Amazon S3.
◆ Be between 3 and 63 characters long.
◆ Not contain uppercase characters.
◆ Start with a lowercase letter or number.
After you create the bucket, you cannot change its name. For information about naming
buckets, see Bucket naming rules.
Important
Avoid including sensitive information, such as account number, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region close to you to minimize latency and costs and address regulatory
requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS service
endpoints in the Amazon Web Services General Reference.
5. Under Object Ownership, to disable or enable ACLs and control ownership of objects
uploaded in your bucket, choose one of the following settings:
ACLs disabled
Bucket owner enforced – ACLs are disabled, and the bucket owner automatically owns and
has full control over every object in the bucket. ACLs no longer affect permissions to data in
the S3 bucket. The bucket uses policies to define access control.
To require that all new buckets are created with ACLs disabled by using IAM or AWS
Organizations policies, see Disabling ACLs for all new buckets (bucket owner enforced).
ACLs enabled
Bucket owner preferred – The bucket owner owns and has full control over new objects
that other accounts write to the bucket with the bucket-owner-full-control canned ACL.
If you apply the bucket owner preferred setting, to require all Amazon S3 uploads to
include the bucket-owner-full-control canned ACL, you can add a bucket policy that only
allows object uploads that use this ACL.
Object writer – The AWS account that uploads an object owns the object, has full control
over it, and can grant other users access to it through ACLs.
Note
To apply the Bucket owner enforced setting or the Bucket owner preferred setting, you
must have the following permissions: s3:CreateBucket and s3:PutBucketOwnershipControls.
6. In Bucket settings for Block Public Access, choose the Block Public Access settings that
you want to apply to the bucket.
We recommend that you keep all settings enabled unless you know that you need to turn
off one or more of them for your use case, such as to host a public website. Block Public
Access settings that you enable for the bucket are also enabled for all access points that
you create on the bucket. For more information about blocking public access, see Blocking
public access to your Amazon S3 storage.
7. (Optional) If you want to enable S3 Object Lock, do the following:
a) Choose Advanced settings, and read the message that appears.
Important
You can only enable S3 Object Lock for a bucket when you create it. If you enable Object
Lock for the bucket, you cannot disable it later. Enabling Object Lock also enables
versioning for the bucket. After you enable Object Lock for the bucket, you must configure
the Object Lock default retention and legal hold settings to protect new objects from being
deleted or overwritten. For more information, see Configuring S3 Object Lock using the
console.
b) If you want to enable Object Lock, enter enable in the text box and choose Confirm.
For more information about the S3 Object Lock feature, see Using S3 Object Lock.
Note
To create an Object Lock enabled bucket, you must have the following permissions:
s3:CreateBucket, s3:PutBucketVersioning and s3:PutBucketObjectLockConfiguration.
8. Choose Create bucket.
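The same bucket configuration can also be created programmatically. The following boto3 sketch roughly mirrors the console steps above (Region, Object Ownership, Block Public Access, and Object Lock); the bucket name and Region are placeholders.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Steps 3-5 and 7: create the bucket in a chosen Region, with ACLs disabled
# (bucket owner enforced) and Object Lock enabled at creation time.
s3.create_bucket(
    Bucket="DOC-EXAMPLE-BUCKET",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    ObjectOwnership="BucketOwnerEnforced",
    ObjectLockEnabledForBucket=True,
)

# Step 6: keep all Block Public Access settings turned on for the bucket.
s3.put_public_access_block(
    Bucket="DOC-EXAMPLE-BUCKET",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)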
Practical -2 Amazon CloudFront
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic
web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your
content through a worldwide network of data centers called edge locations. When a user
requests content that you're serving with CloudFront, the request is routed to the edge
location that provides the lowest latency (time delay), so that content is delivered with the
best possible performance.
If the content is already in the edge location with the lowest latency, CloudFront delivers it
immediately.
If the content is not in that edge location, CloudFront retrieves it from an origin that you've
defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for
example, a web server) that you have identified as the source for the definitive version of
your content.
As an example, suppose that you're serving an image from a traditional web server, not
from CloudFront. For example, you might serve an image, sunsetphoto.png, using the
URL http://example.com/sunsetphoto.png.
Your users can easily navigate to this URL and see the image. But they probably don't know
that their request is routed from one network to another—through the complex collection
of interconnected networks that comprise the internet—until the image is found.
CloudFront speeds up the distribution of your content by routing each user request through
the AWS backbone network to the edge location that can best serve your content. Typically,
this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the
AWS network dramatically reduces the number of networks that your users' requests must
pass through, which improves performance. Users get lower latency—the time it takes to
load the first byte of the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also known
as objects) are now held (or cached) in multiple edge locations around the world.
Optionally, you can configure your origin server to add headers to the files to indicate
how long you want the files to stay in the cache in CloudFront edge locations. By default,
each file stays in an edge location for 24 hours before it expires. The minimum expiration
time is 0 seconds; there isn't a maximum expiration time. For more information, see
Managing how long content stays in the cache (expiration).
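As a small illustration of the origin-header approach mentioned above, the sketch below uploads a file to an S3 origin with a Cache-Control header so that CloudFront edge locations keep it for a chosen time. The bucket, key, and max-age value are placeholders.

import boto3

s3 = boto3.client("s3")

# Origin objects can carry Cache-Control metadata that CloudFront honors at the edge.
# Here the file would be cached for 24 hours (86400 seconds); values are examples only.
s3.put_object(
    Bucket="DOC-EXAMPLE-BUCKET",
    Key="images/sunsetphoto.png",
    Body=open("sunsetphoto.png", "rb").read(),
    ContentType="image/png",
    CacheControl="max-age=86400",
)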
Practical -3 AWS Key Management Service (AWS KMS)
AWS KMS integrates with most other AWS services that encrypt your data. AWS KMS also integrates with AWS
CloudTrail to log use of your KMS keys for auditing, regulatory, and compliance needs.
You can use the AWS KMS API to create and manage KMS keys and special features, such as custom key stores, and
use KMS keys in cryptographic operations. For detailed information, see the AWS Key Management Service API
Reference.
⚫ Control access to your KMS keys by using key policies, IAM policies, and grants. AWS KMS supports attribute-
based access control (ABAC). You can also refine policies by using condition keys.
⚫ Create, delete, list, and update aliases, friendly names for your KMS keys. You can also use aliases to control
access to your KMS keys.
⚫ Tag your KMS keys for identification, automation, and cost tracking. You can also use tags to control access to
your KMS keys.
⚫ Enable and disable automatic rotation of the cryptographic material in a KMS key.
⚫ You can use your KMS keys in cryptographic operations. For examples, see Programming the AWS KMS API.
⚫ Encrypt, decrypt, and re-encrypt data with symmetric or asymmetric KMS keys.
⚫ Generate exportable symmetric data keys and asymmetric data key pairs.
⚫ Create KMS keys in your own custom key store backed by an AWS CloudHSM cluster.
⚫ Use hybrid post-quantum TLS to provide forward-looking encryption in transit for the data that you send to
AWS KMS.
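To ground the operations listed above, here is a minimal boto3 sketch that creates a symmetric KMS key, encrypts and decrypts a small payload, and generates an exportable data key. The description, alias, and payload are placeholders.

import boto3

kms = boto3.client("kms")

# Create a symmetric KMS key and give it a friendly alias (placeholder names).
key_id = kms.create_key(Description="Lab demo key")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/lab-demo", TargetKeyId=key_id)

# Encrypt and decrypt a small payload directly with the KMS key.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"hello kms")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

# Generate an exportable symmetric data key for envelope encryption:
# the Plaintext key encrypts data locally; the CiphertextBlob is stored alongside the data.
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")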
By using AWS KMS, you gain more control over access to data you encrypt. You can use the key management and
cryptographic features directly in your applications or through AWS services integrated with AWS KMS. Whether you
write applications for AWS or use AWS services, AWS KMS enables you to maintain control over who can use your
AWS KMS keys and gain access to your encrypted data.
AWS KMS integrates with AWS CloudTrail, a service that delivers log files to your designated Amazon S3 bucket. By
using CloudTrail you can monitor and investigate how and when your KMS keys have been used and who used them.
The AWS Regions in which AWS KMS is supported are listed in AWS Key Management Service Endpoints and Quotas.
If an AWS KMS feature is not supported in an AWS Region that AWS KMS supports, the regional difference is
described in the topic about the feature.
As with other AWS products, using AWS KMS does not require contracts or minimum purchases. For more
information about AWS KMS pricing, see AWS Key Management Service Pricing.
AWS Key Management Service is backed by a service level agreement that defines our service availability policy.
Practical -4 Amazon Elasticsearch Service
Elasticsearch is a distributed search and analytics engine built on Apache Lucene. Since its
release in 2010, Elasticsearch has quickly become the most popular search engine and is
commonly used for log analytics, full-text search, security intelligence, business analytics,
and operational intelligence use cases.
On January 21, 2021, Elastic NV announced that they would change their software licensing
strategy and not release new versions of Elasticsearch and Kibana under the permissive
Apache License, Version 2.0 (ALv2) license. Instead, new versions of the software will be
offered under the Elastic license, with source code available under the Elastic License or
SSPL. These licenses are not open source and do not offer users the same freedoms. To
ensure that the open source community and our customers continue to have a secure,
high-quality, fully open source search and analytics suite, we introduced
the OpenSearch project, a community-driven, ALv2 licensed fork of open source
Elasticsearch and Kibana.
You can send data in the form of JSON documents to Elasticsearch using the API or
ingestion tools such as Logstash and Amazon Kinesis Firehose. Elasticsearch automatically
stores the original document and adds a searchable reference to the document in the
cluster’s index. You can then search and retrieve the document using the Elasticsearch API.
You can also use Kibana, a visualization tool, with Elasticsearch to visualize your data and
build interactive dashboards.
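Because Elasticsearch exposes a plain HTTP interface, the ingestion and search flow described above can be sketched with the Python requests library alone. The endpoint, index name, and credentials below are placeholders for your own cluster or Amazon OpenSearch Service domain.

import requests

endpoint = "https://search-example-domain.us-west-2.es.amazonaws.com"  # placeholder endpoint
auth = ("master-user", "master-password")  # placeholder credentials

# Index a JSON document; the engine stores it and makes it searchable.
doc = {"title": "AWS Lab Manual", "practical": 4, "topic": "Elasticsearch"}
requests.put(f"{endpoint}/manuals/_doc/1", json=doc, auth=auth)

# Full-text search for the document that was just indexed.
resp = requests.get(f"{endpoint}/manuals/_search", params={"q": "title:lab"}, auth=auth)
print(resp.json()["hits"])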
You can run Apache 2.0 licensed versions of Elasticsearch (up to version 7.10.2) and Kibana (up to
version 7.10.2) on-premises, on Amazon EC2, or on Amazon OpenSearch Service (successor to
Amazon Elasticsearch Service). With on-premises or Amazon EC2 deployments, you are
responsible for installing Elasticsearch and other necessary software, provisioning
infrastructure, and managing the cluster. Amazon OpenSearch Service, on the other hand,
is a fully managed service, so you don’t have to worry about time-consuming cluster
management tasks such as hardware provisioning, software patching, failure recovery,
backups, and monitoring.
Elasticsearch benefits
FAST TIME-TO-VALUE
Elasticsearch offers simple REST based APIs, a simple HTTP interface, and uses schema-free
JSON documents, making it easy to get started and quickly build applications for a variety of
use-cases.
HIGH PERFORMANCE
The distributed nature of Elasticsearch enables it to process large volumes of data in
parallel, quickly finding the best matches for your queries.
COMPLEMENTARY TOOLING AND PLUGINS
Elasticsearch comes integrated with Kibana, a popular visualization and reporting tool. It
also offers integration with Beats and Logstash, which enable you to easily transform source
data and load it into your Elasticsearch cluster. You can also use a number of open-source
Elasticsearch plugins such as language analyzers and suggesters to add rich functionality to
your applications.
NEAR REAL-TIME OPERATIONS
Elasticsearch operations such as reading or writing data usually take less than a second to
complete. This lets you use Elasticsearch for near real-time use cases such as application
monitoring and anomaly detection.
EASY APPLICATION DEVELOPMENT
Elasticsearch provides support for various languages including Java, Python, PHP, JavaScript,
Node.js, Ruby, and many more.
Managing and scaling Elasticsearch can be difficult and requires expertise in Elasticsearch
setup and configuration. To make it easy for customers to run open-source Elasticsearch,
AWS offers Amazon OpenSearch Service to perform interactive log analytics, real-time
application monitoring, website search, and more.
Practical -5 Introduction to Amazon DynamoDB
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB lets you offload the
administrative burdens of operating and scaling a distributed database so that you don't
have to worry about hardware provisioning, setup and configuration, replication, software
patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the
operational burden and complexity involved in protecting sensitive data. For more
information, see DynamoDB Encryption at Rest.
With DynamoDB, you can create database tables that can store and retrieve any amount of
data and serve any level of request traffic. You can scale up or scale down your tables'
throughput capacity without downtime or performance degradation. You can use the AWS
Management Console to monitor resource utilization and performance metrics.
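The table workflow described above can be sketched with boto3 as follows; the table name, key schema, and item are placeholders, and on-demand (pay-per-request) mode is used so that no throughput values need to be chosen.

import boto3

dynamodb = boto3.resource("dynamodb")

# Create a table with a simple primary key; on-demand mode avoids capacity planning.
table = dynamodb.create_table(
    TableName="LabStudents",
    KeySchema=[{"AttributeName": "student_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "student_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Write an item, then read it back by its key.
table.put_item(Item={"student_id": "s-001", "name": "Example Student", "course": "BCA-CTIS"})
print(table.get_item(Key={"student_id": "s-001"})["Item"])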
DynamoDB provides on-demand backup capability. It allows you to create full backups of
your tables for long-term retention and archival for regulatory compliance needs. For more
information, see Using On-Demand Backup and Restore for DynamoDB.
You can create on-demand backups and enable point-in-time recovery for your Amazon
DynamoDB tables. Point-in-time recovery helps protect your tables from accidental write or
delete operations. With point-in-time recovery, you can restore a table to any point in time
during the last 35 days. For more information, see Point-in-Time Recovery: How It Works.
DynamoDB allows you to delete expired items from tables automatically to help you reduce
storage usage and the cost of storing data that is no longer relevant. For more information,
see Expiring Items By Using DynamoDB Time to Live (TTL).
High Availability and Durability
DynamoDB automatically spreads the data and traffic for your tables over a sufficient
number of servers to handle your throughput and storage requirements, while maintaining
consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is
automatically replicated across multiple Availability Zones in an AWS Region, providing
built-in high availability and data durability. You can use global tables to keep DynamoDB
tables in sync across AWS Regions. For more information, see Global Tables: Multi-Region
Replication with DynamoDB.
Practical -6 Amazon API Gateway
API Gateway handles all the tasks involved in accepting and processing up to hundreds of
thousands of concurrent API calls, including traffic management, CORS support,
authorization and access control, throttling, monitoring, and API version management. API
Gateway has no minimum fees or startup costs. You pay for the API calls you receive and
the amount of data transferred out and, with the API Gateway tiered pricing model, you
can reduce your cost as your API usage scales.
API Types
RESTful APIs
Build RESTful APIs optimized for serverless workloads and HTTP backends using HTTP APIs.
HTTP APIs are the best choice for building APIs that only require API proxy functionality. If
your APIs require API proxy functionality and API management features in a single solution,
API Gateway also offers REST APIs.
WEBSOCKET APIs
Build real-time two-way communication applications, such as chat apps and streaming
dashboards, with WebSocket APIs. API Gateway maintains a persistent connection to
handle message transfer between your backend service and your clients.
Benefits
Run multiple versions of the same API simultaneously with API Gateway, allowing you to
quickly iterate, test, and release new versions. You pay for calls made to your APIs and data
transfer out and there are no minimum fees or upfront commitments.
Provide end users with the lowest possible latency for API requests and responses by taking
advantage of our global network of edge locations using Amazon CloudFront. Throttle
traffic and authorize API calls to ensure that backend operations withstand traffic spikes
and backend systems are not unnecessarily called.
API Gateway provides a tiered pricing model for API requests. With an API Requests price as
low as $0.90 per million requests at the highest tier, you can decrease your costs as your
API usage increases per region across your AWS accounts.
Easy monitoring
Monitor performance metrics and information on API calls, data latency, and error rates
from the API Gateway dashboard, which allows you to visually monitor calls to your services
using Amazon CloudWatch.
Authorize access to your APIs with AWS Identity and Access Management (IAM) and
Amazon Cognito. If you use OAuth tokens, API Gateway offers native OIDC and OAuth2
support. To support custom authorization requirements, you can execute a Lambda
authorizer from AWS Lambda.
Create RESTful APIs using HTTP APIs or REST APIs. HTTP APIs are the best way to build APIs
for a majority of use cases—they're up to 71% cheaper than REST APIs. If your use case
requires API proxy functionality and management features in a single solution, you can use
REST APIs.
Getting started with API Gateway
In this getting started exercise, you create a serverless API. Serverless APIs let you focus on
your applications, instead of spending time provisioning and managing servers. This
exercise takes less than 20 minutes to complete, and is possible within the AWS Free Tier.
First, you create a Lambda function using the AWS Lambda console. Next, you create an
HTTP API using the API Gateway console. Then, you invoke your API.
When you invoke your HTTP API, API Gateway routes the request to your Lambda function.
Lambda runs the Lambda function and returns a response to API Gateway. API Gateway
then returns a response to you.
[Figure: Architectural overview of the API that you create in this getting started guide. Clients use an
API Gateway HTTP API to invoke a Lambda function. API Gateway returns the Lambda function's
response to clients.]
To complete this exercise, you need an AWS account and an AWS Identity and Access
Management user with console access. For more information, see Prerequisites for getting
started with API Gateway.
Step 1: Create a Lambda function
For this example, you use the default Node.js function from the Lambda console.
The example function returns a 200 response to clients, and the text Hello from Lambda!.
You can modify your Lambda function, as long as the function's response aligns with the
format that API Gateway requires.
The default Lambda function code should look similar to the following:
export const handler = async (event) => {
  // Return an HTTP-style response that API Gateway can pass back to the client.
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};
Step 2: Create an HTTP API
The HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes
requests to your Lambda function, and then returns the function's response to clients.
If you've created an API before, choose Create API, and then choose Build for HTTP API.
Choose Lambda.
Choose Next.
Review the route that API Gateway creates for you, and then choose Next.
Review the stage that API Gateway creates for you, and then choose Next.
Choose Create.
Now you've created an HTTP API with a Lambda integration that's ready to receive requests
from clients.
Step 3: Test your API
Next, you test your API to make sure that it's working. For simplicity, use a web browser to
invoke your API.
After you create your API, the console shows your API's invoke URL
Copy your API's invoke URL, and enter it in a web browser. Append the name of your
Lambda function to your invoke URL to call your Lambda function. By default, the API
Gateway console creates a route with the same name as your Lambda function, my-
function.
Verify your API's response. You should see the text "Hello from Lambda!" in your browser.
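Instead of a browser, the same test can be scripted. The sketch below calls the invoke URL with the Python requests library; the URL shown is a placeholder, so use the invoke URL that the console displays for your API.

import requests

# Placeholder invoke URL; API Gateway appends the route name (my-function by default).
invoke_url = "https://abc123.execute-api.us-west-2.amazonaws.com"

resp = requests.get(f"{invoke_url}/my-function")
print(resp.status_code)  # expected: 200
print(resp.text)         # expected: "Hello from Lambda!"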
Step 4: Clean up
To prevent unnecessary costs, delete the resources that you created as part of this getting
started exercise. The following steps delete your HTTP API, your Lambda function, and
associated resources.
On the APIs page, select an API. Choose Actions, and then choose Delete.
Choose Delete.
On the Functions page, select a function. Choose Actions, and then choose Delete.
Choose Delete.
On the Log groups page, select the function's log group (/aws/lambda/my-function).
Choose Actions, and then choose Delete log group.
Choose Delete.
In the AWS Identity and Access Management console, open the Roles page.
You can automate the creation and cleanup of AWS resources by using AWS
CloudFormation or AWS SAM. For example AWS CloudFormation templates, see example
AWS CloudFormation templates.
Next steps
For this example, you used the AWS Management Console to create a simple HTTP API. The
HTTP API invokes a Lambda function and returns a response to clients.
The following are next steps as you continue to work with API Gateway.
HTTP endpoints
AWS services such as Amazon Simple Queue Service, AWS Step Functions, and Kinesis Data
Streams
To get help with Amazon API Gateway from the community, see the API Gateway
Discussion Forum. When you enter this forum, AWS might require you to sign in.
To get help with API Gateway directly from AWS, see the support options on the AWS
Support page.
Practical -7 Amazon Machine Learning
Make accurate predictions, get deeper insights from your data, reduce operational
overhead, and improve customer experience with AWS machine learning (ML). AWS helps
you at every stage of your ML adoption journey with the most comprehensive set of
artificial intelligence (AI) and ML services, infrastructure, and implementation resources.
Use cases
Explore the key use cases of AI/ML to improve customer experience, optimize business
operations, and accelerate innovation.
Enhance your customer service experience and reduce costs by integrating machine
learning into your contact center. Through intelligent chat and voice bots, voice sentiment
analysis, live-call analytics and agent assist, post-call analytics, and more, personalize every
customer interaction and improve overall customer satisfaction.
ChartSpan, the largest chronic care management service provider in the U.S., decreased
cost by 80% and increased staff utilization by 12%.
Analyze media content and discover new insights
Create new insights from video, audio, images and text by applying machine learning to
better manage and analyze content. Automate key functions of the media workflow to
accelerate the search and discovery, content localization, compliance, monetization, and
more.
SmugMug, a global image and video sharing platform, is able to find and properly flag
unwanted content at scale, enabling a safe and welcoming experience for its community.
Forecast future values and detect anomalies in your business metrics
Accurately forecast sales, financial, and demand data to streamline decision-making.
Automatically identify anomalies in your business metrics and their root cause to stay
ahead of the game.
Domino’s Pizza Enterprises Ltd, the largest pizza chain in Australia, gets orders to customers
faster by predicting what pizzas would be ordered next.
Practical -8 AWS Database Migration Service
Benefits
Simple to use
AWS Database Migration Service is simple to use. There is no need to install any drivers or
applications, and it does not require changes to the source database in most cases. You can
begin a database migration with just a few clicks in the AWS Management Console. Once
the migration has started, DMS manages all the complexities of the migration process
including automatically replicating data changes that occur in the source database during
the migration process. You can also use this service for continuous data replication with the
same simplicity.
Minimal downtime
AWS Database Migration Service helps you migrate your databases to AWS with virtually no
downtime. All data changes to the source database that occur during the migration are
continuously replicated to the target, allowing the source database to be fully operational
during the migration process. After the database migration is complete, the target database
will remain synchronized with the source for as long as you choose, allowing you to
switchover the database at a convenient time.
AWS Database Migration Service can migrate your data to and from most of the widely
used commercial and open source databases. It supports homogeneous migrations such as
Oracle to Oracle, as well as heterogeneous migrations between different database
platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises
databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa,
as well as from one RDS database to another RDS database. It can also move data between
SQL, NoSQL, and text-based targets.
Low cost
AWS Database Migration Service is a low cost service. You only pay for the compute
resources used during the migration process and any additional log storage. Migrating a
terabyte-size database can be done for as little as $3. This applies to both homogeneous
and heterogeneous migrations of any supported databases. This is in stark contrast to
conventional database migration methods that can be very expensive.
On-going replication
You can set up a DMS task for either one-time migration or on-going replication. An on-
going replication task keeps your source and target databases in sync. Once set up, the on-
going replication task will continuously apply source changes to the target with minimal
latency. All DMS features such as data validation and transformations are available for any
replication task.
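A hedged boto3 sketch of the on-going replication setup described above. The endpoint and replication instance ARNs are placeholders that would be created beforehand, and the table mapping simply includes every schema and table.

import json
import boto3

dms = boto3.client("dms")

# Selection rule that includes every schema and table (placeholder rule values).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# An on-going replication task: full load first, then continuous change data capture (CDC).
dms.create_replication_task(
    ReplicationTaskIdentifier="lab-ongoing-replication",
    SourceEndpointArn="arn:aws:dms:us-west-2:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-west-2:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-west-2:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)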
Reliable
The AWS Database Migration Service is highly resilient and self–healing. It continually
monitors source and target databases, network connectivity, and the replication instance.
In case of interruption, it automatically restarts the process and continues the migration
from where it stopped. The Multi-AZ option allows you to have high availability for database
migration and continuous data replication by enabling redundant replication instances.
Use cases
Homogeneous Database Migrations
In homogeneous database migrations, the source and target database engines are the same
or are compatible like Oracle to Amazon RDS for Oracle, MySQL to Amazon Aurora, MySQL
to Amazon RDS for MySQL, or Microsoft SQL Server to Amazon RDS for SQL Server. Since
the schema structure, data types, and database code are compatible between the source
and target databases, this kind of migration is a one-step process. You create a migration
task with connections to the source and target databases, and then start the migration with
the click of a button. AWS Database Migration Service takes care of the rest. The source
database can be located in your own premises outside of AWS, running on an Amazon EC2
instance, or it can be an Amazon RDS database. The target can be a database in Amazon
EC2 or Amazon RDS.
Verizon is a global leader delivering innovative communications and technology solutions.
“Verizon is helping our customers build a better, more connected life. As part of this
journey, we are undergoing a major transformation in our database management approach,
moving away from expensive, legacy commercial database solutions to more efficient and
cost-effective options. Testing of Amazon Aurora PostgreSQL showed better performance
over standard PostgreSQL residing on Amazon EC2 instances, and the AWS Database
Migration Service and Schema Conversion Tool were found effective at identifying areas for
data-conversion that required special attention during migration.” - Shashidhar Sureban,
Associate Director, Database Engineering, Verizon.
Heterogeneous Database Migrations
In heterogeneous database migrations, the source and target database engines are
different, like in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft
SQL Server to MySQL migrations. In this case, the schema structure, data types, and
database code of source and target databases can be quite different, requiring a schema
and code transformation before the data migration starts. That makes heterogeneous
migrations a two-step process. First use the AWS Schema Conversion Tool to convert the
source schema and code to match that of the target database, and then use the AWS
Database Migration Service to migrate data from the source database to the target
database. All the required data type conversions will automatically be done by the AWS
Database Migration Service during the migration. The source database can be located in
your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an
Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
Trimble is a global leader in telematics solutions. They had a significant investment in on-
premises hardware in North America and Europe running Oracle databases. Rather than
refresh the hardware and renew the licenses, they opted to migrate the databases to AWS.
They ran the AWS Schema Conversion Tool to analyze the effort, and then migrated their
complete database to a managed PostgreSQL service on Amazon RDS. "Our projections are
that we will pay about one quarter of what we were paying in our private infrastructure." -
Todd Hofert, Director of Infrastructure Operations, Trimble.
Development and Test
AWS Database Migration Service can be used to migrate data both into and out of the cloud
for development purposes. There are two common scenarios. The first is to deploy
development, test or staging systems on AWS, to take advantage of the cloud’s scalability
and rapid provisioning. This way, developers and testers can use copies of real production
data, and can copy updates back to the on-premises production system. The second
scenario is when development systems are on-premises (often on personal laptops), and
you migrate a current copy of an AWS Cloud production database to these on-premises
systems either once or continuously. This avoids disruption to existing DevOps processes
while ensuring the up-to-date representation of your production system.
Database Consolidation
You can use AWS Database Migration Service to consolidate multiple source databases into
a single target database. This can be done for homogeneous and heterogeneous migrations,
and you can use this feature with all supported database engines. The source databases can
be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it
can be an Amazon RDS database. The source databases can also be spread across different
locations. For example, one source database can be on your own premises outside of
AWS, the second in Amazon EC2, and the third in an Amazon RDS database.
The target can be a database in Amazon EC2 or Amazon RDS.
Continuous Data Replication
You can use AWS Database Migration Service to perform continuous data replication.
Continuous data replication has a multitude of use cases including Disaster Recovery
instance synchronization, geographic database distribution and Dev/Test environment
synchronization. You can use DMS for both homogeneous and heterogeneous data
replications for all supported database engines. The source or destination databases can be
located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can
be an Amazon RDS database. You can replicate data from a single database to one or more
target databases or consolidate and replicate data from multiple databases to one or more
target databases.
Practical -9 AWS IoT
AWS offers Internet of Things (IoT) services and solutions to connect and manage billions of
devices. Collect, store, and analyze IoT data for industrial, consumer, commercial, and
automotive workloads.
Scale, move quickly, and save money, with AWS IoT. From secure device connectivity to
management, storage, and analytics, AWS IoT has the broad and deep services you need to
build complete solutions.
AWS IoT services address every layer of your application and device security. Safeguard
your device data with preventative mechanisms, like encryption and access control, and
consistently audit and monitor your configurations with AWS IoT Device Defender.
Create models in the cloud and deploy them to devices with up to 25x better performance
and less than 1/10th the runtime footprint. AWS brings artificial intelligence (AI), machine
learning (ML), and IoT together to make devices more intelligent.
Build innovative, differentiated solutions on secure, proven, and elastic cloud infrastructure
that scales to billions of devices and trillions of messages. AWS IoT easily integrates with
other AWS services.
AWS IoT services
Device software
FreeRTOS
Deploy an operating system for microcontrollers that makes small, low-power edge devices easy to manage.
Analytics services
Work with IoT data faster to extract value from your data.
Use cases
Develop connected consumer applications for home automation, home security and monitoring, and home networking.
Build commercial IoT applications that solve challenges in infrastructure, health, and the environment.
Transform mobility
Deliver IoT applications that gather, process, analyze, and act on connected vehicle data, without having to manage any
infrastructure.
Practical -10 Amazon Route 53
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.
Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a
variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and
Weighted Round Robin—all of which can be combined with DNS Failover in order to enable
a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 Traffic Flow’s
simple visual editor, you can easily manage how your end-users are routed to your
application’s endpoints—whether in a single AWS region or distributed around the globe.
Amazon Route 53 also offers Domain Name Registration – you can purchase and manage
domain names such as example.com and Amazon Route 53 will automatically configure
DNS settings for your domains.
Benefits
Amazon Route 53 is built using AWS’s highly available and reliable infrastructure. The
distributed nature of our DNS servers helps ensure a consistent ability to route your end
users to your application. Features such as Amazon Route 53 Traffic Flow and routing
control help you improve reliability with easily-configured failover to reroute your users to
an alternate location if your primary application endpoint becomes unavailable. Amazon
Route 53 is designed to provide the level of dependability required by important
applications. Amazon Route 53 is backed by the Amazon Route 53 Service Level Agreement.
Flexible
Amazon Route 53 Traffic Flow routes traffic based on multiple criteria, such as endpoint
health, geographic location, and latency. You can configure multiple traffic policies and
decide which policies are active at any given time. You can create and edit traffic policies
using the simple visual editor in the Route 53 console, AWS SDKs, or the Route 53 API.
Traffic Flow’s versioning feature maintains a history of changes to your traffic policies, so
you can easily roll back to a previous version using the console or API.
Amazon Route 53 is designed to work well with other AWS features and offerings. You can
use Amazon Route 53 to map domain names to your Amazon EC2 instances, Amazon S3
buckets, Amazon CloudFront distributions, and other AWS resources. By using the AWS
Identity and Access Management (IAM) service with Amazon Route 53, you get fine grained
control over who can update your DNS data. You can use Amazon Route 53 to map your
zone apex (example.com versus www.example.com) to your Elastic Load Balancing instance,
Amazon CloudFront distribution, AWS Elastic Beanstalk environment, API Gateway, VPC
endpoint, or Amazon S3 website bucket using a feature called Alias record.
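Record management like the alias mapping described above is also available through the Route 53 API. The boto3 sketch below upserts a simple A record in a hosted zone; the hosted zone ID, record name, and IP address are placeholders.

import boto3

route53 = boto3.client("route53")

# Upsert an A record in a hosted zone (placeholder zone ID, name, and address).
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",
    ChangeBatch={
        "Comment": "lab example record",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "192.0.2.44"}],
                },
            }
        ],
    },
)
# For a zone apex (example.com) pointing to CloudFront, Elastic Load Balancing, or an S3
# website bucket, the ResourceRecordSet would use an AliasTarget block instead of TTL and
# ResourceRecords.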
Simple
With self-service sign-up, Amazon Route 53 can start to answer your DNS queries within
minutes. You can configure your DNS settings with the AWS Management Console or our
easy-to-use API. You can also programmatically integrate the Amazon Route 53 API into
your overall web application. For instance, you can use Amazon Route 53’s API to create a
new DNS record whenever you create a new EC2 instance. Amazon Route 53 Traffic Flow
makes it easy to set up sophisticated routing logic for your applications by using the simple
visual policy editor.
Fast
Using a global anycast network of DNS servers around the world, Amazon Route 53 is
designed to automatically route your users to the optimal location depending on network
conditions. As a result, the service offers low query latency for your end users, as well as
low update latency for your DNS record management needs. Amazon Route 53 Traffic Flow
lets you further improve your customers’ experience by running your application in multiple
locations around the world and using traffic policies to ensure your end users are routed to
the closest healthy endpoint for your application.
Cost-effective
Amazon Route 53 passes on the benefits of AWS’s scale to you. You pay only for the
resources you use, such as the number of queries that the service answers for each of your
domains, hosted zones for managing domains through the service, and optional features
such as traffic policies and health checks, all at a low cost and without minimum usage
commitments or any up-front fees.
Secure
By integrating Amazon Route 53 with AWS Identity and Access Management (IAM), you can
grant unique credentials and manage permissions for every user within your AWS account
and specify who has access to which parts of the Amazon Route 53 service. When you
enable Amazon Route 53 Resolver DNS firewall, you can configure it to inspect outbound
DNS requests against a list of known malicious domains.
Scalable
Route 53 is designed to automatically scale to handle very large query volumes without any
intervention from you.
Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit
Gateway
PRATEEK (121519009)