Amazon
Cloud Computing
Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-
you-go pricing. In computing, a client can be a web browser or desktop application that a
person interacts with to make requests to computer servers. A server can be a service, such
as Amazon Elastic Compute Cloud (Amazon EC2), a type of virtual server.
For example, suppose that a client makes a request for a news article, the score in an online
game, or a funny video. The server evaluates the details of this request and fulfills it by
returning the information to the client.
The three cloud computing deployment models are cloud-based, on-premises, and hybrid.
b. On-premises deployment
• Deploy resources by using virtualization and resource management tools.
• Increase resource utilization by using application management and
virtualization technologies.
For example, you might have applications that run on technology that is fully kept in
your on-premises data center. Though this model is much like legacy IT
infrastructure, its incorporation of application management and virtualization
technologies helps to increase resource utilization.
c. Hybrid deployment
For example, suppose that a company wants to use cloud services that can automate
batch data processing and analytics. However, the company has several legacy
applications that are more suitable on premises and will not be migrated to the cloud.
With a hybrid deployment, the company would be able to keep the legacy
applications on premises while benefiting from the data and analytics services that run
in the cloud.
• Go global in minutes
The global footprint of the AWS Cloud enables you to deploy applications to
customers around the world quickly, while providing them with low latency. This
means that even if you are located in a different part of the world than your
customers, customers are able to access your applications with minimal delays.
Later in this course, you will explore the AWS global infrastructure in greater detail.
You will examine some of the services that you can use to deliver content to
customers around the world.
Chapter 2
Amazon EC2 instance types
When selecting an instance type, consider the specific needs of your workloads and
applications. This might include requirements for compute, memory, or storage
capabilities.
Suppose that you have an application in which the resource needs for compute,
memory, and networking are roughly equivalent. You might consider running it on a
general purpose instance because the application does not require optimization in any
single resource area.
• On-Demand Instances
are ideal for short-term, irregular workloads that cannot be interrupted. No
upfront costs or minimum contracts apply. The instances run continuously until
you stop them, and you pay for only the compute time you use.
Sample use cases for On-Demand Instances include developing and testing
applications and running applications that have unpredictable usage patterns.
On-Demand Instances are not recommended for workloads that last a year or
longer because these workloads can experience greater cost savings using
Reserved Instances.
You can purchase Standard Reserved and Convertible Reserved Instances for a
1-year or 3-year term. You realize greater cost savings with the 3-year option.
You have the option to specify an Availability Zone for your EC2 Reserved
Instances. If you make this specification, you get EC2 capacity reservation. This
ensures that your desired amount of EC2 instances will be available when you
need them.
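As a rough sketch of why workloads that run a year or longer favor Reserved Instances, here is a back-of-the-envelope comparison. The hourly rates below are made-up placeholders, not real AWS prices, which vary by Region and instance type:

```python
# Hypothetical hourly rates -- placeholders, not real AWS prices.
ON_DEMAND_HOURLY = 0.10      # assumed On-Demand rate (USD/hour)
RESERVED_3YR_HOURLY = 0.04   # assumed 3-year Standard Reserved rate

HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate, years=1):
    """Total cost of running one instance continuously for the given term."""
    return hourly_rate * HOURS_PER_YEAR * years

on_demand_3yr = yearly_cost(ON_DEMAND_HOURLY, years=3)
reserved_3yr = yearly_cost(RESERVED_3YR_HOURLY, years=3)
savings = 1 - reserved_3yr / on_demand_3yr

print(f"3-year On-Demand: ${on_demand_3yr:,.2f}")
print(f"3-year Reserved:  ${reserved_3yr:,.2f}")
print(f"Savings:          {savings:.0%}")  # 60% with these assumed rates
```

The exact percentage depends entirely on the assumed rates; the point is that a continuously running workload pays the On-Demand premium every hour of the term.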
• Spot Instances
are ideal for workloads with flexible start and end times, or that can withstand
interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer
cost savings of up to 90% off On-Demand prices, with no minimum contract length
required.
Suppose that you have a background processing job that can start and stop as
needed (such as the data processing job for a customer survey). You want to
start and stop the processing job without affecting the overall operations of your
business. If you make a Spot request and Amazon EC2 capacity is available, your
Spot Instance launches. However, if you make a Spot request and Amazon EC2
capacity is unavailable, the request is not successful until capacity becomes
available. The unavailable capacity might delay the launch of your background
processing job.
• Dedicated Hosts
are physical servers with Amazon EC2 instance capacity that is fully dedicated to
your use.
You can use your existing per-socket, per-core, or per-VM software licenses to
help maintain license compliance. You can purchase On-Demand Dedicated
Hosts and Dedicated Hosts Reservations. Of all the Amazon EC2 options that
were covered, Dedicated Hosts are the most expensive.
Scalability
involves beginning with only the resources you need and designing your architecture
to automatically respond to changing demand by scaling out or in. As a result, you pay
for only the resources you use. You don’t have to worry about a lack of computing
capacity to meet your customers’ needs.
If you wanted the scaling process to happen automatically, which AWS service would
you use? The AWS service that provides this functionality for Amazon EC2 instances
is Amazon EC2 Auto Scaling.
Next, you can set the desired capacity at two Amazon EC2 instances even though your
application needs a minimum of a single Amazon EC2 instance to run.
If you do not specify the desired number of Amazon EC2 instances in an Auto
Scaling group, the desired capacity defaults to your minimum capacity.
The third configuration that you can set in an Auto Scaling group is the maximum capacity.
For example, you might configure the Auto Scaling group to scale out in response to
increased demand, but only to a maximum of four Amazon EC2 instances.
Because Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the
instances you use, when you use them. You now have a cost-effective architecture that
provides the best customer experience while reducing expenses.
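The minimum, desired, and maximum capacity settings described above boil down to clamping the instance count. A sketch, using the one-minimum / four-maximum example from these notes:

```python
def target_capacity(demand, min_capacity=1, max_capacity=4):
    """Clamp the instance count implied by demand to the Auto Scaling
    group's configured bounds (defaults match the example in the notes)."""
    return max(min_capacity, min(demand, max_capacity))

print(target_capacity(0))   # 1 -- never scales in below the minimum
print(target_capacity(3))   # 3 -- follows demand within the bounds
print(target_capacity(10))  # 4 -- scales out only up to the maximum
```

The real service makes this decision from scaling policies and metrics, but the bounds behave exactly like this clamp.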
Elastic Load Balancing
A load balancer is an application that takes in requests and routes them to the instances
that will process them.
Elastic Load Balancing is the AWS service that automatically distributes incoming
application traffic across multiple resources, such as Amazon EC2 instances. This helps to
ensure that no single resource becomes overutilized.
A load balancer acts as a single point of contact for all incoming web traffic to your Auto
Scaling group. This means that as you add or remove Amazon EC2 instances in response to
the amount of incoming traffic, these requests route to the load balancer first. Then, the
requests spread across multiple resources that will handle them. For example, if you have
multiple Amazon EC2 instances, Elastic Load Balancing distributes the workload across the
multiple instances so that no single instance has to carry the bulk of it.
Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they
work together to help ensure that applications running in Amazon EC2 can provide high
performance and availability.
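A toy version of what a load balancer does. This sketch uses simple round-robin routing; Elastic Load Balancing has its own routing algorithms, and the instance IDs here are made up:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: cycles incoming requests across registered instances
    so no single instance carries the bulk of the traffic."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        instance = next(self._cycle)
        return instance, request

lb = RoundRobinBalancer(["i-aaa", "i-bbb", "i-ccc"])
routed = [lb.route(f"req-{n}")[0] for n in range(6)]
print(routed)  # ['i-aaa', 'i-bbb', 'i-ccc', 'i-aaa', 'i-bbb', 'i-ccc']
```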
Messaging & Queuing
The idea of placing messages into a buffer is called messaging and queuing. Just as the
cashier sends orders to the barista, applications send messages to each other to communicate.
If applications communicate directly, like the cashier and barista in the earlier example,
this is called being tightly coupled.
• AWS Lambda
is a service that lets you run code without needing to provision or manage
servers. While using AWS Lambda, you pay only for the compute time that you
consume. Charges apply only when your code is running. You can also run code for
virtually any type of application or backend service, all with zero administration. For
example, a simple Lambda function might automatically resize images uploaded to the
AWS Cloud. In this case, the function triggers when a new image is uploaded.
AWS Lambda is one serverless compute option. Lambda's a service that allows
you to upload your code into what's called a Lambda function. Configure a
trigger and from there, the service waits for the trigger. When the trigger is
detected, the code is automatically run in a managed environment, an
environment you do not need to worry too much about because it is
automatically scalable, highly available and all of the maintenance in the
environment itself is done by AWS. If you have one or 1,000 incoming triggers,
Lambda will scale your function to meet demand. Lambda is designed to run
code that finishes in under 15 minutes, so this isn't for long-running processes like
deep learning. It's more suited for quick processing, like a web backend handling
requests, or a backend expense report processing service where each invocation takes
less than 15 minutes to complete.
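A minimal sketch of what a Lambda function looks like in Python. The event shape follows the S3 upload-notification format, but the object key is invented for illustration, and real code would do the image resizing where the comment is:

```python
def lambda_handler(event, context):
    """Entry point Lambda invokes per trigger; runs in a managed,
    auto-scaling environment, so there are no servers to provision."""
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # A real function would resize each uploaded image here;
    # this sketch just reports what arrived.
    return {"statusCode": 200, "processed": keys}

# Simulate an invocation locally with a hand-built event:
fake_event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(lambda_handler(fake_event, None))
```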
Containers provide you with a standard way to package your application's code and
dependencies into a single object. You can also use containers for processes and workflows
in which there are essential requirements for security, reliability, and scalability.
AWS container services include:
• Amazon Elastic Container Service, otherwise known as ECS.
is a highly scalable, high-performance container management system that enables you
to run and scale containerized applications on AWS. Amazon ECS supports Docker
containers. Docker is a software platform that enables you to
build, test, and deploy applications quickly. AWS supports the use of open-source
Docker Community Edition and subscription-based Docker Enterprise Edition. With
Amazon ECS, you can use API calls to launch and stop Docker-enabled applications.
Both of these services, Amazon ECS and Amazon EKS, are container orchestration tools. But before I get too far here, a
container in this case is a Docker container. Docker is a widely used platform that uses
operating system level virtualization to deliver software in containers. Now a container
is a package for your code where you package up your application, its dependencies as
well as any configurations that it needs to run. These containers run on top of EC2
instances and run in isolation from each other similar to how virtual machines work.
But in this case, the host is an EC2 instance. When you use Docker containers on AWS,
you need processes to start, stop, restart, and monitor containers running across not
just one EC2 instance, but a number of them together which is called a cluster.
• AWS Fargate
is a serverless compute engine for containers. It works with both Amazon ECS and
Amazon EKS.
When using AWS Fargate, you do not need to provision or manage servers. AWS
Fargate manages your server infrastructure for you. You can focus more on
innovating and developing your applications, and you pay only for the resources that
are required to run your containers.
Container Summary:
• If you are trying to host traditional applications and want full access to the underlying
operating system like Linux or Windows, you are going to want to use EC2.
• If you are looking to host short running functions, service-oriented or event driven
applications and you don't want to manage the underlying environment at all, look
into the serverless AWS Lambda.
• If you are looking to run Docker container-based workloads on AWS, you first need
to choose your orchestration tool: Amazon ECS or Amazon EKS.
Chapter 3
Selecting a Region
Regions are geographically isolated areas, where you can access services needed to run your
enterprise
• Compliance with data governance and legal requirements – Depending on your
company and location, you might need to run your data out of specific areas. For
example, if your company requires all of its data to reside within the boundaries of the
UK, you would choose the London Region. Not all companies have location-specific
data regulations, so you might need to focus more on the other three factors.
• Proximity to your customers – Selecting a Region that is close to your customers will
help you to get content to them faster. For example, your company is based in
Washington, DC, and many of your customers live in Singapore. You might consider
running your infrastructure in the Northern Virginia Region to be close to company
headquarters, and run your applications from the Singapore Region.
• Available services within a Region – Sometimes, the closest Region might not have
all the features that you want to offer to customers. AWS is frequently innovating by
creating new services and expanding on features within existing services. However,
making new services available around the world sometimes requires AWS to build out
physical hardware one Region at a time. Suppose that your developers want to build an
application that uses Amazon Braket (AWS quantum computing platform). As of this
course, Amazon Braket is not yet available in every AWS Region around the world, so
your developers would have to run it in one of the Regions that already offers it.
• Pricing – Suppose that you are considering running applications in both the United
States and Brazil. The way Brazil’s tax structure is set up, it might cost 50% more to
run the same workload out of the São Paulo Region compared to the Oregon Region.
You will learn in more detail that several factors determine pricing, but for now know
that the cost of services can vary from Region to Region.
Availability Zone
is a single data center or a group of data centers within a Region. Availability Zones are
located tens of miles apart from each other. This is close enough to have low latency (the
time between when content is requested and when it is received) between Availability
Zones. However, if a
disaster occurs in one part of the Region, they are distant enough to reduce the chance that
multiple Availability Zones are affected.
Edge Location
An edge location is a site that Amazon CloudFront uses to store cached copies of your content
closer to your customers for faster delivery
AWS Outposts
With AWS Outposts, AWS will basically install a fully operational mini Region right inside
your own data center. It's owned and operated by AWS, uses 100% of AWS functionality,
but is isolated within your own building.
API
API stands for application programming interface. What that means is, there are
predetermined ways for you to interact with AWS services. You can invoke or call these
APIs to provision, configure, and manage your AWS resources.
AWS CloudFormation
With AWS CloudFormation, you can treat your infrastructure as code. This means that you
can build an environment by writing lines of code instead of using the AWS Management
Console to individually provision resources. AWS CloudFormation provisions your resources
in a safe, repeatable manner, enabling you to frequently build your infrastructure and
applications without having to perform manual actions. It determines the right operations to
perform when managing your stack and rolls back changes automatically if it detects errors.
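For a concrete picture of infrastructure as code, here is the rough anatomy of a CloudFormation template that declares a single EC2 instance. This is a sketch: the AMI ID is a placeholder, and a real template would typically specify more properties (key pair, security groups, and so on):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch -- one EC2 instance declared as code
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
```

Deploying this template creates a "stack"; deleting the stack removes the resources it created, which is what makes the environment repeatable.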
Chapter 4
Amazon Virtual Private Cloud (VPC)
A networking service that you can use to establish boundaries around your AWS resources is
Amazon Virtual Private Cloud. Amazon VPC enables you to provision an isolated section of
the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you
define. Within a virtual private cloud (VPC), you can organize your resources into subnets. A
subnet is a section of a VPC that can contain resources such as Amazon EC2 instances.
Subnets
are chunks of IP addresses in your VPC that allow you to group resources together. Subnets,
along with networking rules we will cover later, control whether resources are either publicly
or privately available.
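The idea of carving a VPC's IP range into subnets can be sketched with Python's standard ipaddress module. The CIDR ranges below are example values:

```python
import ipaddress

# A VPC is assigned a CIDR block; subnets carve it into smaller chunks.
vpc = ipaddress.ip_network("10.0.0.0/16")        # example VPC range
subnets = list(vpc.subnets(new_prefix=24))[:2]   # first two /24 subnets

public_subnet, private_subnet = subnets
print(public_subnet)                # 10.0.0.0/24
print(private_subnet)               # 10.0.1.0/24
print(public_subnet.num_addresses)  # 256 addresses per /24
```

Whether each subnet is actually public or private is decided by the networking rules attached to it, not by the address range itself.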
Virtual private gateway
To access private resources in a VPC, you can use a virtual private gateway. Here’s an example
of how a virtual private gateway works. You can think of the internet as the road between your
home and the coffee shop. Suppose that you are traveling on this road with a bodyguard to
protect you. You are still using the same road as other customers, but with an extra layer of
protection. The bodyguard is like a virtual private network (VPN) connection that encrypts (or
protects) your internet traffic from all the other requests around it. The virtual private gateway
is the component that allows protected internet traffic to enter into the VPC. Even though your
connection to the coffee shop has extra protection, traffic jams are possible because you’re
using the same road as other customers.
An AWS account’s default network access control list is stateless and allows all inbound
and outbound traffic.
VPC Components
• Private Subnet
Isolate databases containing customers’ personal information
• Virtual Private Gateway
Create VPN Connection between the VPC and the internal corporate network
• Public Subnet
Support the customer-facing website
• AWS Direct Connect
Establish a dedicated connection between the on-premises data center and the VPC
DNS
Suppose that AnyCompany has a website hosted in the AWS Cloud. Customers enter the web
address into their browser, and they are able to access the website. This happens because of
Domain Name System (DNS) resolution. DNS resolution involves a customer DNS resolver
communicating with a company DNS server. You can think of DNS as being the phone book
of the internet. DNS resolution is the process of translating a domain name to an IP address.
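DNS resolution can be sketched as a lookup table. The domain and IP below are reserved example values, and real DNS involves a chain of resolvers and name servers rather than one dictionary:

```python
# Toy resolver: a lookup table standing in for the DNS server chain.
dns_records = {
    "anycompany.example": "192.0.2.10",  # TEST-NET address, not a real host
}

def resolve(domain):
    """Translate a domain name to an IP address, like DNS resolution does."""
    ip = dns_records.get(domain)
    if ip is None:
        raise LookupError(f"NXDOMAIN: {domain}")
    return ip

print(resolve("anycompany.example"))  # 192.0.2.10
```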
Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to
route end users to internet applications hosted in AWS. Amazon Route 53 connects user
requests to infrastructure running in AWS (such as Amazon EC2 instances and load balancers).
It can route users to infrastructure outside of AWS. Another feature of Route 53 is the ability
to manage the DNS records for domain names. You can register new domain names directly in
Route 53. You can also transfer DNS records for existing domain names managed by other
domain registrars. This enables you to manage all of your domain names within a single
location.
Chapter 5
Instance stores
Block-level storage volumes behave like physical hard drives. An instance store
provides temporary block-level storage for an Amazon EC2 instance. An instance
store is disk storage that is physically attached to the host computer for an EC2 instance, and
therefore has the same lifespan as the instance. When the instance is terminated, you lose any
data in the instance store.
Amazon Elastic Block Store (Amazon EBS)
is a service that provides block-level storage volumes that you can use with Amazon EC2
instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS
volume remains available. To create an EBS volume, you define the configuration (such as
volume size and type) and provision it. After you create an EBS volume, it can attach to an
Amazon EC2 instance. Because EBS volumes are for data that needs to persist, it’s important
to back up the data. You can take incremental backups of EBS volumes by creating Amazon
EBS snapshots.
Best for:
• Separate drives from the host computer of an EC2 instance
• Data that requires retention
Object Storage
Object storage treats any file as a complete, discrete object. Now this is great for documents,
and images, and video files that get uploaded and consumed as entire objects, but every time
there's a change to the object, you must re-upload the entire file.
Block Storage
Block storage breaks those files down to small component parts or blocks. This means, for that
80-gigabyte file, when you make an edit to one scene in the film and save that change, the
engine only updates the blocks where those bits live.
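The difference described above, re-uploading the whole object versus rewriting only the changed blocks, can be sketched like this. The block size and file contents are toy values:

```python
BLOCK_SIZE = 4  # bytes per block -- tiny, for illustration only

def changed_blocks(old, new):
    """Return the start offsets of fixed-size blocks that differ
    between two versions of a file."""
    length = max(len(old), len(new))
    return [i for i in range(0, length, BLOCK_SIZE)
            if old[i:i + BLOCK_SIZE] != new[i:i + BLOCK_SIZE]]

old = b"scene-1 scene-2 scene-3 "
new = b"scene-1 SCENE-2 scene-3 "

# Object storage re-uploads everything; block storage writes only the diff.
print(len(new))                  # 24 bytes re-uploaded as a whole object
print(changed_blocks(old, new))  # [8, 12] -- only two 4-byte blocks rewritten
```

Scale the same idea up to an 80-gigabyte video file and the savings from block-level edits become obvious.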
Amazon S3
Amazon Simple Storage Service is a service that provides object-level storage. Amazon S3
stores data as objects in buckets. You can upload any type of file to Amazon S3, such as
images, videos, text files, and so on. For example, you might use Amazon S3 to store backup
files, media files for a website, or archived documents. Amazon S3 offers unlimited storage
space. The maximum file size for an object in Amazon S3 is 5 TB. When you upload a file to
Amazon S3, you can set permissions to control visibility and access to it. You can also use
the Amazon S3 versioning feature to track changes to your objects over time.
Amazon S3 storage classes
With Amazon S3, you pay only for what you use. You can choose from a range of storage
classes to select a fit for your business and cost needs. When selecting an Amazon S3
storage class, consider these two factors: how often you plan to retrieve your data, and
how available you need your data to be.
• S3 Standard – Designed for frequently accessed data. Stores data in a minimum of
three Availability Zones. Amazon S3 Standard provides high availability for objects,
which makes it a good choice for a wide range of use cases, such as websites, content
distribution, and data analytics. Amazon S3 Standard has a higher cost than other
storage classes intended for infrequently accessed data and archival storage.
• S3 Standard-Infrequent Access (S3 Standard-IA) – Ideal for data that is accessed
infrequently but requires high availability when needed. Similar to Amazon S3
Standard, and likewise stores data in a minimum of three Availability Zones, but with
a lower storage price and a higher retrieval price.
• S3 One Zone-Infrequent Access (S3 One Zone-IA) – Stores data in a single
Availability Zone, unlike S3 Standard and S3 Standard-IA, which store data in a
minimum of three. Has a lower storage price than Amazon S3 Standard-IA. This makes
it a good storage class to consider if the following conditions apply: you want to save
costs on storage, and you can easily reproduce your data in the event of an Availability
Zone failure.
• S3 Intelligent-Tiering – Ideal for data with unknown or changing access patterns.
Requires a small monthly monitoring and automation fee per object. In the S3
Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you
haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves
it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent
access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.
• S3 Glacier Instant Retrieval – Works well for archived data that requires immediate
access. Can retrieve objects within a few milliseconds, with the same performance as
S3 Standard. When you decide between the options for archival storage, consider how
quickly you must retrieve the archived objects.
• S3 Glacier Flexible Retrieval – Low-cost storage class that is ideal for data archiving.
Able to retrieve objects within a few minutes to hours (from 1 minute to 12 hours). For
example, you might use this storage class to store archived customer records or older
photos and video files.
• S3 Glacier Deep Archive – Lowest-cost object storage class, ideal for archiving. Able
to retrieve objects within 12 to 48 hours. S3 Glacier Deep Archive supports long-term
retention and digital preservation for data that might be accessed once or twice in a
year. All objects from this storage class are replicated and stored across at least three
geographically dispersed Availability Zones.
• S3 Outposts – Creates S3 buckets on AWS Outposts, making it easier to retrieve,
store, and access data on AWS Outposts. Amazon S3 on Outposts delivers object
storage to your on-premises AWS Outposts environment, and is designed to store data
durably and redundantly across multiple devices and servers on your Outposts. It works
well for workloads with local data residency requirements that must satisfy demanding
performance needs by keeping data close to on-premises applications.
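A rough sketch of the two-factor decision above (retrieval frequency and required retrieval speed). The thresholds are illustrative only, not official AWS guidance, and a real choice also weighs cost, availability, and durability:

```python
def suggest_storage_class(accesses_per_month, max_retrieval_hours):
    """Toy chooser based on the two factors from the notes:
    how often data is retrieved and how quickly it must be available.
    Thresholds are made up for illustration."""
    if accesses_per_month >= 1:
        return "S3 Standard"
    if max_retrieval_hours < 1:
        return "S3 Glacier Instant Retrieval"
    if max_retrieval_hours <= 12:
        return "S3 Glacier Flexible Retrieval"
    return "S3 Glacier Deep Archive"

print(suggest_storage_class(30, 0))   # S3 Standard
print(suggest_storage_class(0, 48))   # S3 Glacier Deep Archive
```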
EBS and S3
• EBS: If you're making a bunch of micro edits, using EBS (Elastic Block Store) is the
perfect use case.
• S3: If you were using S3, every time you saved the changes, the system would have to
upload all 80 gigabytes, the whole thing, every time.
EFS
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud
services and on-premises
resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can
scale on demand to petabytes without disrupting applications.
Relational databases
In a relational database, data is stored in a way that relates it to other pieces of data. An example
of a relational database might be the coffee shop’s inventory management system. Each record
in the database would include data for a single item, such as product name, size, price, and so
on. Relational databases use structured query language (SQL) to store and query data. This
approach allows data to be stored in an easily understandable, consistent, and scalable way.
For example, the coffee shop owners can write a SQL query to identify all the customers whose
most frequently purchased drink is a medium latte.
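The coffee shop query can be sketched with SQLite, a relational database that ships with Python. The table, customers, and data are made up, and the query is simplified to counting medium lattes per customer:

```python
import sqlite3

# In-memory relational database sketch of the coffee shop example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, drink TEXT, size TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("Ana", "latte", "medium"),
     ("Ana", "latte", "medium"),
     ("Ben", "espresso", "small"),
     ("Ana", "mocha", "large")],
)

# How many medium lattes has each customer purchased?
rows = conn.execute("""
    SELECT customer, COUNT(*) AS n
    FROM orders
    WHERE drink = 'latte' AND size = 'medium'
    GROUP BY customer
    ORDER BY n DESC
""").fetchall()
print(rows)  # [('Ana', 2)]
```

Each row relates to the others through shared columns, which is exactly what "relational" means here.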
Amazon Relational Database Service (Amazon RDS)
is a service that enables you to run relational databases in the AWS Cloud. Amazon RDS is a
managed service that automates tasks such as hardware provisioning, database setup, patching,
and backups. With these capabilities, you can spend less time completing administrative tasks
and more time using data to innovate your applications. You can integrate Amazon RDS with
other services to fulfill your business and operational needs, such as using AWS Lambda to
query your database from a serverless application.
Nonrelational databases
In a nonrelational database, you create tables. A table is a place where you can store and query
data. Nonrelational databases are sometimes referred to as “NoSQL databases” because they
use structures other than rows and columns to organize data. One type of structural approach
for nonrelational databases is key-value pairs. With key-value pairs, data is organized into
items (keys), and items have attributes (values). You can think of attributes as being different
features of your data.
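Key-value pairs can be sketched with a plain dictionary. Note that the two items carry different attributes, which a relational table would not normally allow; the customer data is invented:

```python
# Key-value sketch: each item (key) carries its own attributes (values),
# and items in the same table need not share the same attributes.
table = {
    "customer#001": {"name": "Ana", "favorite_drink": "latte"},
    "customer#002": {"name": "Ben", "loyalty_points": 120},  # different attrs
}

item = table["customer#001"]
print(item["favorite_drink"])  # latte
```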
Amazon DynamoDB
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond
performance at any scale. DynamoDB is serverless, which means that you do not have to
provision, patch, or manage servers.
The shared responsibility model
• Customers: Security in the cloud – Customers are responsible for the security
of everything that they create and put in the AWS Cloud. When using AWS
services, you, the customer, maintain complete control over your content. You
are responsible for managing security requirements for your content, including
which content you choose to store on AWS, which AWS services you use, and
who has access to that content. You also control how access rights are granted,
managed, and revoked. The security steps that you take will depend on factors
such as the services that you use, the complexity of your systems, and your
company’s specific operational and security needs. Steps include selecting,
configuring, and patching the operating systems that will run on Amazon EC2
instances, configuring security groups, and managing user accounts.
• AWS: Security of the cloud – AWS is responsible for security of the cloud.
AWS operates, manages, and controls the components at all layers of
infrastructure. This includes areas such as the host operating system, the
virtualization layer, and even the physical security of the data centers from
which services operate. AWS is responsible for protecting the global
infrastructure that runs all of the services offered in the AWS Cloud. This
infrastructure includes AWS Regions, Availability Zones, and edge locations.
AWS manages the security of the cloud, specifically the physical infrastructure
that hosts your resources, which includes:
• Physical security of data centers
• Hardware and software infrastructure
• Network infrastructure
• Virtualization infrastructure
Although you cannot visit AWS data centers to see this protection
firsthand, AWS provides several reports from third-party auditors. These
auditors have verified its compliance with a variety of computer security
standards and regulations.
AWS Identity and Access Management (IAM)
AWS Identity and Access Management (IAM) enables you to manage
access to AWS services and resources securely. IAM gives you the flexibility to configure
access based on your company’s specific operational and security needs. You do this by using
a combination of IAM features, which are explored in detail in this lesson:
• IAM users, groups, and roles
• IAM policies
• Multi-factor authentication
Best practice: Do not use the root user for everyday tasks. Instead, use the root user to create
your first IAM user and assign it permissions to create other users. Then, continue to create
other IAM users, and access those identities for performing regular tasks throughout AWS.
Only use the root user when you need to perform a limited number of tasks that are only
available to the root user. Examples of these tasks include changing your root user email
address and changing your AWS support plan. For more information, see “Tasks that require
root user credentials” in the AWS Account Management Reference Guide.
IAM users
An IAM user is an identity that you create in AWS. It represents the person or application that
interacts with AWS services and resources. It consists of a name and credentials.
IAM policies
An IAM policy is a document that allows or denies permissions to AWS services and resources.
IAM policies enable you to customize users’ levels of access to resources. For example, you
can allow users to access all of the Amazon S3 buckets within your AWS account, or only a
specific bucket.
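For a concrete picture, here is the general shape of an IAM policy document that allows read access to a single S3 bucket. The bucket name is hypothetical and the action list is just one example of how narrowly access can be scoped:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attaching this policy to a user, group, or role grants exactly these actions on exactly these resources and nothing else.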
IAM groups
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all
users in the group are granted permissions specified by the policy.
IAM roles
In the coffee shop, an employee rotates to different workstations throughout the day.
Depending on the staffing of the coffee shop, this employee might perform several duties: work
at the cash register, update the inventory system, process online orders, and so on. When the
employee needs to switch to a different task, they give up their access to one workstation and
gain access to the next workstation. The employee can easily switch between workstations, but
at any given point in time, they can have access to only a single workstation. This same concept
exists in AWS with IAM roles. An IAM role is an identity that you can assume to gain
temporary access to permissions. Before an IAM user, application, or service can assume an
IAM role, they must be granted permissions to switch to the role. When someone assumes an
IAM role, they abandon all previous permissions that they had under a previous role and
assume the permissions of the new role.
Multi-factor authentication
Have you ever signed in to a website that required you to provide multiple pieces of information
to verify your identity? You might have needed to provide your password and then a second
form of authentication, such as a random code sent to your phone. This is an example of
multi-factor authentication. In IAM, multi-factor authentication (MFA) provides
an extra layer of security for your AWS account.
AWS Organizations
Suppose that your company has multiple AWS accounts. You can use AWS Organizations to
consolidate and manage multiple AWS accounts within a central location. When you create an
organization, AWS Organizations automatically creates a root, which is the parent container
for all the accounts in your organization. In AWS Organizations, you can centrally control
permissions for the accounts in your organization by using service control policies
(SCPs). SCPs enable you to place restrictions on the AWS services,
resources, and individual API actions that users and roles in each account can access.
Organizational units
In AWS Organizations, you can group accounts into organizational units (OUs) to make it
easier to manage accounts with similar business or security requirements. When you apply a
policy to an OU, all the accounts in the OU automatically inherit the permissions specified in
the policy. By organizing separate accounts into OUs, you can more easily isolate workloads
or applications that have specific security requirements. For instance, if your company has
accounts that can access only the AWS services that meet certain regulatory requirements, you
can put these accounts into one OU. Then, you can attach a policy to the OU that blocks access
to all other AWS services that do not meet the regulatory requirements.
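An SCP for that kind of OU could take the shape sketched below: a blanket Deny on every action outside an approved service list. The approved services here are hypothetical examples, not from the source:

```python
import json

# Sketch of a service control policy (SCP) that denies all actions outside
# an approved set of services. The approved list is a made-up example.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            # Deny + NotAction: deny everything EXCEPT the listed actions.
            "NotAction": ["s3:*", "dynamodb:*", "kms:*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached to the OU, this restricts every account in it, regardless of the IAM policies inside those accounts.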
AWS Artifact
Depending on your company’s industry, you may need to uphold specific standards. An audit
or inspection will ensure that the company has met those standards. AWS Artifact is a
service that provides on-demand access to AWS security and compliance reports
and select online agreements. AWS Artifact consists of two main sections: AWS Artifact
Agreements and AWS Artifact Reports.
• AWS Artifact Agreements – Suppose that your company needs to sign an agreement
with AWS regarding your use of certain types of information throughout AWS services.
You can do this through AWS Artifact Agreements. In AWS Artifact Agreements, you
can review, accept, and manage agreements for an individual account and for all your
accounts in AWS Organizations. Different types of agreements are offered to address
the needs of customers who are subject to specific regulations, such as the Health
Insurance Portability and Accountability Act (HIPAA).
• AWS Artifact Reports – Next, suppose that a member of your company’s
development team is building an application and needs more information about their
responsibility for complying with certain regulatory standards. You can advise them to
access this information in AWS Artifact Reports. AWS Artifact Reports provide
compliance reports from third-party auditors. These auditors have tested and verified
that AWS is compliant with a variety of global, regional, and industry-specific security
standards and regulations. AWS Artifact Reports remain up to date with the latest
reports released. You can provide the AWS audit artifacts to your auditors or regulators
as evidence of AWS security controls.
AWS Compliance
DoS
denial-of-service attack: a deliberate attempt to make a website or application unavailable
to users, typically by flooding it with excessive requests from a single source
DDoS
distributed denial-of-service attack: a DoS attack launched from many sources at once,
which makes the attack traffic much harder to identify and block
AWS Shield
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides
two levels of protection: Standard and Advanced. To learn more about AWS Shield, expand
each of the following two categories.
• AWS Shield Standard – AWS Shield Standard automatically protects all AWS
customers at no cost. It protects your AWS resources from the most common,
frequently occurring types of DDoS attacks. As network traffic comes into your
applications, AWS Shield Standard uses a variety of analysis techniques to detect
malicious traffic in real time and automatically mitigates it.
• AWS Shield Advanced – AWS Shield Advanced is a paid service that provides
detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS
attacks. It also integrates with other services such as Amazon CloudFront, Amazon
Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield
with AWS WAF by writing custom rules to mitigate complex DDoS attacks.
Encryption
encryption, which is securing a message or data in a way that can only be accessed by
authorized parties. Non-authorized parties are therefore less likely to be able to access the
message. Or not able to access it at all.
• Encryption at rest protects data while it is idle: stored, not moving. For example,
server-side encryption at rest is enabled on all DynamoDB table data, which helps
prevent unauthorized access. DynamoDB's encryption at rest also integrates with AWS
KMS (Key Management Service) for managing the encryption key that is used to encrypt
your tables.
• Encryption in transit protects data while it travels between two points, say A and B,
where A is an AWS service and B is a client accessing the service, or even another AWS
service. For example, suppose you have an Amazon Redshift instance running and want to
connect to it with a SQL client. You can use Secure Sockets Layer (SSL) connections to
encrypt the data, and you can use service certificates to validate and authorize the
client. This means that data is protected when passing between Redshift and the client.
The same functionality exists in numerous other AWS services, such as SQS, S3, and RDS.
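From a Python client's side, encryption in transit with certificate validation looks like the sketch below. It only builds the client-side TLS settings a SQL or HTTPS client would use; it does not contact any service:

```python
import ssl

# Default client context: TLS with certificate validation and hostname
# checking enabled -- the settings a client would use to encrypt traffic
# to a service endpoint and verify its certificate. No connection is made.
context = ssl.create_default_context()

print(context.check_hostname)                     # hostname verification on
print(context.verify_mode == ssl.CERT_REQUIRED)   # certificate required
```

Validating the server certificate is what lets the client confirm it is talking to the real endpoint before any data is sent.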
Amazon Inspector
To perform automated security assessments, they decide to use Amazon Inspector. Amazon
Inspector helps to improve the security and compliance of applications by running automated
security assessments. It checks applications for security vulnerabilities and deviations from
security best practices, such as open access to Amazon EC2 instances and installations of
vulnerable software versions. After Amazon Inspector has performed an assessment, it
provides you with a list of security findings. The list is prioritized by severity level and
includes a detailed description of each security issue and a recommendation for how to fix
it. However,
AWS does not guarantee that following the provided recommendations resolves every potential
security issue. Under the shared responsibility model, customers are responsible for the security
of their applications, processes, and tools that run on AWS services.
Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS
infrastructure and resources. It identifies threats by continuously monitoring the network
activity and account behavior within your AWS environment.
Chapter 7
Monitoring
Observing systems, collecting metrics, and then using data to make decisions
Metrics
Variables tied to your resources
Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics
and configure alarm actions based on data from those metrics. CloudWatch uses metrics to
represent the data points for your resources. AWS services send metrics to
CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show
how performance has changed over time.
CloudWatch alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of
your metric has gone above or below a predefined threshold. For example, suppose that your
company’s developers use Amazon EC2 instances for application development or testing
purposes. If the developers occasionally forget to stop the instances, the instances will continue
to run and incur charges. In this scenario, you could create a CloudWatch alarm that
automatically stops an Amazon EC2 instance when the CPU utilization percentage has
remained below a certain threshold for a specified period. When configuring the alarm, you
can specify to receive a notification whenever this alarm is triggered.
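The alarm condition in the developer scenario above can be sketched as a small evaluation function. This is the decision logic only, not the CloudWatch API; the 5% threshold and three-period window are example values:

```python
# Toy evaluation of the alarm logic described above: trigger when CPU
# utilization stays below a threshold for a full window of consecutive
# datapoints (threshold and window are example values).
def should_stop_instance(cpu_samples, threshold=5.0, periods=3):
    if len(cpu_samples) < periods:
        return False
    return all(sample < threshold for sample in cpu_samples[-periods:])

# An idle instance: the last three datapoints are all under 5% CPU.
print(should_stop_instance([42.0, 3.1, 2.4, 1.9]))   # → True
# A busy instance never satisfies the full window.
print(should_stop_instance([3.0, 60.0, 2.0]))        # → False
```

Requiring a full window of low datapoints, rather than a single one, prevents the alarm from firing on a momentary dip.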
MTTR
mean time to resolution
TCO
total cost of ownership
AWS CloudTrail
AWS CloudTrail records API calls for your account. The recorded information includes the
identity of the API caller, the time of the API call, the source IP address of the API caller, and
more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that
someone has left behind them. Recall that you can use API calls to provision, manage, and
configure your AWS resources. With CloudTrail, you can view a complete history of user
activity and API calls for your applications and resources. Events are typically updated in
CloudTrail within 15 minutes after an API call. You can filter events by specifying the time
and date that an API call occurred, the user who requested the action, the type of resource that
was involved in the API call, and more.
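The filtering described above, by user, time, and so on, amounts to selecting from a log of event records. A toy version over simplified records (field names are illustrative, not CloudTrail's actual schema):

```python
from datetime import datetime, timedelta

# Toy filter over CloudTrail-style event records (simplified field names):
# select events by the user who made the call and by how recent it is.
now = datetime(2024, 1, 1, 12, 0)
events = [
    {"user": "dev-alice", "event_name": "RunInstances",
     "time": now - timedelta(minutes=5)},
    {"user": "dev-bob", "event_name": "DeleteBucket",
     "time": now - timedelta(hours=3)},
]

recent_by_alice = [
    e for e in events
    if e["user"] == "dev-alice" and now - e["time"] <= timedelta(hours=1)
]
print([e["event_name"] for e in recent_by_alice])  # → ['RunInstances']
```

Real CloudTrail queries work the same way conceptually: narrow a complete API-call history down to the caller, time window, or resource type of interest.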
CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows
CloudTrail to automatically detect unusual API activities in your AWS account. For example,
CloudTrail Insights might detect that a higher number of Amazon EC2 instances than usual
have recently launched in your account. You can then review the full event details to determine
which actions you need to take next.
AWS Pricing
• Pay for what you use. – For each service, you pay for exactly the amount of resources
that you actually use, without requiring long-term contracts or complex licensing.
• Pay less when you reserve. – Some services offer reservation options that provide a
significant discount compared to On-Demand Instance pricing. For example, suppose
that your company is using Amazon EC2 instances for a workload that needs to run
continuously. You might choose to run this workload on Amazon EC2 Instance Savings
Plans, because the plan allows you to save up to 72% over the equivalent On-Demand
Instance capacity.
• Pay less with volume-based discounts when you use more. – Some services offer
tiered pricing, so the per-unit cost is incrementally lower with increased usage. For
example, the more Amazon S3 storage space you use, the less you pay for it per GB.
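Tiered pricing can be sketched as a small calculator. The per-GB prices and tier boundaries below are made-up example numbers, not actual Amazon S3 rates:

```python
# Illustrative tiered-pricing calculation; prices and tier sizes are
# hypothetical example values, not real S3 rates.
TIERS = [
    (50_000, 0.023),        # first 50 TB (in GB) at $0.023/GB
    (450_000, 0.022),       # next 450 TB at $0.022/GB
    (float("inf"), 0.021),  # everything beyond that at $0.021/GB
]

def monthly_storage_cost(gb):
    cost, remaining = 0.0, gb
    for tier_size, price in TIERS:
        used = min(remaining, tier_size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

# 60,000 GB: 50,000 GB at the first rate plus 10,000 GB at the next.
print(monthly_storage_cost(60_000))  # → 1370.0
```

The marginal price drops as usage crosses each boundary, which is exactly the "pay less when you use more" behavior described above.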
Consolidated billing
In an earlier module, you learned about AWS Organizations, a service that enables you to
manage multiple AWS accounts from a central location. AWS Organizations also provides the
option for consolidated billing. The consolidated billing feature of AWS
Organizations enables you to receive a single bill for all AWS accounts in your organization.
By consolidating, you can easily track the combined costs of all the linked accounts in your
organization. The default maximum number of accounts allowed for an organization is 4, but
you can contact AWS Support to increase your quota, if needed. On your monthly bill, you can
review itemized charges incurred by each account. This enables you to have greater
transparency into your organization’s accounts while still maintaining the convenience of
receiving a single monthly bill. Another benefit of consolidated billing is the ability to share
bulk discount pricing, Savings Plans, and Reserved Instances across the accounts in your
organization. For instance, one account might not have enough monthly usage to qualify for
discount pricing. However, when multiple accounts are combined, their aggregated usage may
result in a benefit that applies across all accounts in the organization. Combine usage across
accounts to receive volume pricing discounts.
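The aggregation benefit can be illustrated numerically. The 10 TB discount threshold and the per-account usage figures below are hypothetical:

```python
# Toy illustration of consolidated billing: usage is summed across linked
# accounts BEFORE the volume-discount threshold is applied. The threshold
# and per-account usage numbers are hypothetical.
accounts_gb = {"dev": 4_000, "test": 3_500, "prod": 5_000}

def qualifies_for_discount(usage_gb, threshold_gb=10_000):
    return usage_gb >= threshold_gb

# No single account reaches the threshold on its own...
print(any(qualifies_for_discount(gb) for gb in accounts_gb.values()))  # → False
# ...but the combined usage of the organization does.
print(qualifies_for_discount(sum(accounts_gb.values())))               # → True
```

This is the "one account might not have enough monthly usage" case from the paragraph above: separately none qualify, but the aggregated 12,500 GB does.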
AWS Budgets
In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance
reservations. The information in AWS Budgets updates three times a day. This helps you to
accurately determine how close your usage is to your budgeted amounts or to the AWS Free
Tier limits. In AWS Budgets, you can also set custom alerts when your usage exceeds (or is
forecasted to exceed) the budgeted amount.
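The alert condition, notify when actual or forecasted spend passes the budget, reduces to a simple comparison. The dollar amounts here are example values:

```python
# Sketch of the AWS Budgets alert condition: fire when actual spend OR
# forecasted spend exceeds the budgeted amount (example values).
def budget_alert(actual, forecast, budget):
    return actual > budget or forecast > budget

# Actual spend is still under budget, but the forecast already exceeds it.
print(budget_alert(actual=80.0, forecast=120.0, budget=100.0))  # → True
print(budget_alert(actual=40.0, forecast=90.0, budget=100.0))   # → False
```

Alerting on the forecast as well as the actual spend is what gives you warning before the budget is actually blown.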
AWS Cost Explorer
AWS Cost Explorer is a tool that lets you visualize, understand, and manage your AWS costs
and usage over time. AWS Cost Explorer includes a default report of the costs and usage for
your top five cost-accruing AWS services. You can apply custom filters and groups to analyze
your data. For example, you can view resource usage at the hourly level.
AWS Support
AWS offers four different Support plans to help you troubleshoot issues, lower costs, and
efficiently use AWS services. You can choose from the following Support plans to meet your
company’s needs:
• Basic
• Developer – Customers in the Developer Support plan have access to features such as:
o Best practice guidance
o Client-side diagnostic tools
o Building-block architecture support, which consists of guidance for how to use
AWS offerings, features, and services together
For example, suppose that your company is exploring AWS services. You’ve heard
about a few different AWS services. However, you’re unsure of how to potentially use
them together to build applications that can address your company’s needs. In this
scenario, the building-block architecture support that is included with the Developer
Support plan could help you to identify opportunities for combining specific services
and features.
• Business - Customers with a Business Support plan have access to additional features,
including:
o Use-case guidance to identify AWS offerings, features, and services that can
best support your specific needs
o All AWS Trusted Advisor checks
o Limited support for third-party software, such as common operating systems
and application stack components
Suppose that your company has the Business Support plan and wants to install a
common third-party operating system onto your Amazon EC2 instances. You could
contact AWS Support for assistance with installing, configuring, and troubleshooting
the operating system. For advanced topics such as optimizing performance, using
custom scripts, or resolving security issues, you might need to contact the third-party
software provider directly. The Business Support plan is the lowest-cost plan that
includes all AWS Trusted Advisor checks.
• Enterprise On-Ramp - In November 2021, AWS opened enrollment into AWS
Enterprise On-Ramp Support plan. In addition to all the features included in the Basic,
Developer, and Business Support plans, customers with an Enterprise On-Ramp
Support plan have access to:
o A pool of Technical Account Managers to provide proactive guidance and
coordinate access to programs and AWS experts
o A Cost Optimization workshop (one per year)
o A Concierge support team for billing and account assistance
o Tools to monitor costs and performance through Trusted Advisor and Health
API/Dashboard
Enterprise On-Ramp Support plan also provides access to a specific set of proactive
support services, which are provided by a pool of Technical Account Managers.
o Consultative review and architecture guidance (one per year)
o Infrastructure Event Management support (one per year)
o Support automation workflows
o 30 minutes or less response time for business-critical issues
• Enterprise – In addition to all features included in the Basic, Developer, Business, and
Enterprise On-Ramp support plans, customers with Enterprise Support have access to:
o A designated Technical Account Manager to provide proactive guidance and
coordinate access to programs and AWS experts
o A Concierge support team for billing and account assistance
o Operations Reviews and tools to monitor health
o Training and Game Days to drive innovation
o Tools to monitor costs and performance through Trusted Advisor and Health
API/Dashboard
The Enterprise plan also provides full access to proactive services, which are provided
by a designated Technical Account Manager:
o Consultative review and architecture guidance
o Infrastructure Event Management support
o Cost Optimization Workshop and tools
o Support automation workflows
o 15 minutes or less response time for business-critical issues
Developer, Business, Enterprise On-Ramp, and Enterprise
Support
The Developer, Business, Enterprise On-Ramp, and Enterprise Support plans include all the
benefits of Basic Support, in addition to the ability to open an unrestricted number of technical
support cases. These Support plans have pay-by-the-month pricing and require no long-term
contracts. In general, for pricing, the Developer plan has the lowest cost, the Business and
Enterprise On-Ramp plans are in the middle, and the Enterprise plan has the highest cost.
Basic Support
Basic Support is free for all AWS customers. It includes access to whitepapers, documentation,
and support communities. With Basic Support, you can also contact AWS for billing questions
and service limit increases. With Basic Support, you have access to a limited selection of AWS
Trusted Advisor checks. Additionally, you can use the AWS Personal Health Dashboard.
AWS Marketplace
AWS Marketplace is a digital catalog that includes thousands of software listings from
independent software vendors. You can use AWS Marketplace to find, test, and buy software
that runs on AWS. For each listing in AWS Marketplace, you can access detailed information
on pricing options, available support, and reviews from other AWS customers.
Chapter 9
Amazon SageMaker
Traditional machine learning (ML) development is complex, expensive, time consuming, and
error prone. AWS offers Amazon SageMaker to remove the difficult work from the process
and empower you to build, train, and deploy ML models quickly. You can use ML to analyze
data, solve complex problems, and predict outcomes before they happen.
Artificial intelligence
AWS offers a variety of services powered by artificial intelligence (AI). For example, you can
perform the following tasks:
o Get code recommendations while writing code and identify security issues in your code
with Amazon CodeWhisperer.
o Convert speech to text with Amazon Transcribe.
o Discover patterns in text with Amazon Comprehend.
o Identify potentially fraudulent online activities with Amazon Fraud Detector.
o Build voice and text chatbots with Amazon Lex.
Chapter 10