AWS Cloud Practitioner Essentials Resume
Instance families :
- General Purpose
- Compute Optimized
- Memory Optimized
- Storage Optimized
- Accelerated Computing
General purpose instances provide a balance of compute, memory, and networking resources.
Compute optimized instances are better suited for batch processing workloads than general
purpose instances.
Memory optimized instances are ideal for workloads that process large datasets in memory,
such as high-performance databases.
Storage optimized instances are designed for workloads that require high, sequential read and write
access to large datasets on local storage. The question does not specify the size of the data that will be
processed. Batch processing involves processing data in groups. A compute optimized instance is
ideal for this type of workload, which benefits from a high-performance processor.
To distribute requests across the pool of Amazon EC2 instances in an Auto Scaling group, we use Elastic Load Balancing.
Tightly coupled architecture = components (for example, a frontend and a backend) communicate
directly. If one component is in a failure state, the other components are affected.
Loosely coupled architecture = a single failure won't cause cascading failures in the others. This is
because the component in the failure state is isolated. In this design, a message queue sits in
the middle of the communication between components, queueing the requests/messages from one
to another.
Amazon SQS queues : where messages are placed until they are processed
Amazon SNS is a broadcaster that sends messages or notifications to services and even to end users.
Endpoints that can receive messages from Amazon SNS include SQS queues, AWS Lambda functions,
and HTTP or HTTPS webhooks. End users can subscribe using mobile push, SMS, or email.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several
other options.
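The decoupling idea behind an SQS queue can be sketched with a plain in-memory queue. This is only a conceptual stand-in (the function names and the order payload are made up), not real SQS usage:

```python
from collections import deque

# Hypothetical in-memory stand-in for an SQS queue: the frontend enqueues
# orders and keeps working; the backend processes them whenever it is
# healthy, so a backend outage never blocks the frontend.
order_queue = deque()

def frontend_place_order(order):
    order_queue.append(order)      # fire-and-forget: no direct call to the backend
    return "order accepted"

def backend_process_next():
    if not order_queue:
        return None                # nothing waiting in the queue
    order = order_queue.popleft()  # messages wait here until processed
    return f"processed {order}"

print(frontend_place_order("latte"))   # succeeds even if the backend is down
print(backend_process_next())
```

The point is that the producer never waits on the consumer, which is exactly what makes the architecture loosely coupled.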
AWS Lambda is a service that lets you run code without needing to provision or manage servers.
Compute options :
- Amazon EC2
- AWS Lambda
Container management :
- Amazon ECS
- Amazon EKS
Serverless computing : your code runs on servers without you needing to provision or manage
those servers.
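A Lambda function is just a handler you hand to AWS; Lambda invokes it with an event and a context. Below is a minimal sketch of that shape, simulated locally (the event field `name` is an assumption for illustration, not a real AWS event format):

```python
# Minimal sketch of an AWS Lambda-style handler. Lambda calls a function
# you define with (event, context); you never provision the server it runs on.
def handler(event, context):
    # 'event' carries the trigger payload; 'name' is an assumed field here
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local simulation of an invocation (no AWS account needed):
print(handler({"name": "cloud"}, None))
```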
Module 3 : Global Infrastructure and Reliability
There are four factors to consider when choosing a Region : compliance with data governance and
legal requirements, proximity to your customers, available services within the Region, and pricing.
Availability Zones (AZs) are located tens of miles apart from each other.
Amazon CloudFront
AWS Outposts lets you run AWS services locally, inside your own building (your own data
center).
AWS Outposts is a service that enables you to run infrastructure in a hybrid cloud approach.
Key points :
To interact with the services in the AWS Global Infrastructure, we use an API.
The AWS Management Console is useful for :
1. Test environments
2. Viewing AWS bills
3. Viewing monitoring
4. Working with non-technical resources
AWS Command Line Interface (AWS CLI) : make API calls using the terminal on your machine
AWS SDKs : Interact with AWS resources through various programming languages
SDKs enable you to use AWS services with your existing applications or create entirely new
applications that will run on AWS.
To help you get started with using SDKs, AWS provides documentation and sample code for each
supported programming language. Supported programming languages include C++, Java, .NET, and
more.
AWS Elastic Beanstalk helps you focus on your business application, not the infrastructure :
- Adjust capacity
- Load balancing
- Automatic scaling
- Application health monitoring
AWS CloudFormation
AWS CloudFormation is an IaC (infrastructure as code) tool used to define a wide variety of AWS
resources using JSON or YAML text-based documents, called CloudFormation templates.
AWS CloudFormation support isn't limited to EC2-based solutions; it also supports :
- Storage
- Databases
- Analytics
- Machine Learning, more
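A CloudFormation template is just a structured document. The sketch below builds a minimal JSON-shaped template in Python; the bucket resource name is illustrative only, and this is the general template anatomy rather than a deployable example:

```python
import json

# A minimal, hypothetical CloudFormation template expressed as JSON.
# The same infrastructure definition can be version-controlled and
# deployed repeatedly; the resource name "MyBucket" is made up.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-bucket"}
        }
    }
}

print(json.dumps(template, indent=2))
```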
Module 4 : Networking
Amazon Virtual Private Cloud (VPC)
AWS Direct Connect : connect directly from your data center to AWS.
AWS Direct Connect is a dedicated physical line to your AWS VPC. The private connection that AWS Direct
Connect provides helps you reduce network costs and increase the amount of bandwidth that can
travel through your network.
Virtual Private Gateway, for VPN connection from your office to the AWS
AWS has a wide range of tools to cover every layer of security :
- Network hardening
- Application security
- User identity
- Authentication and authorization
- Distributed denial of service prevention
- Data integrity
- Encryption, etc
Network ACL : each incoming packet is checked, and if it matches an allow rule, it is let in. The same
check is done for outgoing packets (stateless filtering).
By default, your account’s default network ACL allows all inbound and outbound traffic, but you can
modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is
denied until you add rules to specify which traffic should be allowed. Additionally, all network ACLs
have an explicit deny rule. This rule ensures that if a packet doesn’t match any of the other rules on
the list, the packet is denied.
Security groups : by default, deny all inbound traffic and accept only the specific traffic you have
allowed, while permitting all outbound traffic. Security groups are stateful: return traffic for an
allowed connection is automatically allowed.
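The stateless-versus-stateful contrast can be sketched with two toy packet filters. This is heavily simplified (rules match on port only, and all rule sets are made up), but it shows the behavioral difference:

```python
# Stateless network ACL sketch: every packet, in or out, is checked
# against the rules; no match means implicit deny.
nacl_inbound_allow = {443}
nacl_outbound_allow = {443}

def nacl_allows(port, outbound=False):
    rules = nacl_outbound_allow if outbound else nacl_inbound_allow
    return port in rules

# Stateful security group sketch: inbound denied unless listed,
# all outbound allowed by default.
sg_inbound_allow = {443}
established = set()          # stateful: remembers connections it let in

def sg_allows_inbound(port):
    if port in sg_inbound_allow:
        established.add(port)
        return True
    return False

def sg_allows_outbound(port):
    return True              # security groups allow all outbound traffic

print(nacl_allows(443), nacl_allows(22))        # allowed vs implicitly denied
print(sg_allows_inbound(443), sg_allows_outbound(12345))
```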
Amazon Route 53 routing policies :
- Latency-based routing
- Geolocation DNS
- Geoproximity routing
- Weighted round robin
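Weighted round robin, the last policy above, can be sketched as weighted random selection: endpoints with higher weights receive proportionally more requests. The endpoint names and weights below are made up for illustration:

```python
import random

# Sketch of weighted round-robin routing: "server-a" has weight 3 and
# "server-b" weight 1, so server-a should get roughly 75% of traffic.
endpoints = {"server-a": 3, "server-b": 1}

def pick_endpoint(rng=random):
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

counts = {n: 0 for n in endpoints}
for _ in range(1000):
    counts[pick_endpoint()] += 1
print(counts)   # roughly 750 vs 250
```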
Module 5 : Storage and Databases
Instance Store and Amazon EBS
Instance stores are block-level storage volumes that behave like physical hard drives. An instance
store is physically attached to the host; if the instance stops or terminates, the data in the instance
store is lost.
EBS, Elastic Block Store, is a service that provides block-level storage volumes that you can use with
Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the
attached EBS volume remains available.
An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies
all the data. For subsequent backups, only the blocks of data that have changed since the most
recent snapshot are saved.
Incremental backups are different from full backups, in which all the data in a storage volume is copied
each time a backup occurs. The full backup includes data that has not changed since the most recent
backup.
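The incremental idea can be sketched in a few lines: the first snapshot copies every block, later snapshots copy only blocks changed since the previous one. Block numbers and contents are illustrative:

```python
# EBS-snapshot-style incremental backup sketch.
volume = {0: "aaa", 1: "bbb", 2: "ccc"}   # block id -> block data

last_seen = {}
def take_snapshot():
    # copy only blocks that differ from the previous snapshot
    changed = {blk: data for blk, data in volume.items()
               if last_seen.get(blk) != data}
    last_seen.update(volume)
    return changed

first = take_snapshot()        # full copy: all 3 blocks
volume[1] = "BBB"              # modify one block
second = take_snapshot()       # incremental: only block 1
print(len(first), len(second)) # -> 3 1
```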
Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB.
With Amazon S3, you pay only for what you use. You can choose from a range of storage classes to
select a fit for your business and cost needs. When selecting an Amazon S3 storage class, consider
these two factors:
- S3 Standard
o Designed for frequently accessed data
o Stores data in a minimum of three Availability Zones
S3 Standard provides high availability for objects. This makes it a good choice for a
wide range of use cases, such as websites, content distribution, and data analytics.
S3 Standard has a higher cost than other storage classes intended for infrequently
accessed data and archival storage.
- S3 Standard-Infrequent Access (S3 Standard-IA)
o Ideal for infrequently accessed data
o Similar to S3 Standard but has a lower storage price and higher retrieval price
S3 Standard-IA is ideal for data infrequently accessed but requires high availability
when needed. Both S3 Standard and S3 Standard-IA store data in a minimum of
three Availability Zones. S3 Standard-IA provides the same level of availability as S3
Standard but with a lower storage price and a higher retrieval price.
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
o Stores data in a single Availability Zone
o Has a lower storage price than S3 Standard-IA
Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of
three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This
makes it a good storage class to consider if the following conditions apply:
You want to save costs on storage.
You can easily reproduce your data in the event of an Availability Zone
failure.
- S3 Intelligent-Tiering
o Ideal for data with unknown or changing access patterns
o Requires a small monthly monitoring and automation fee per object
In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access
patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3
automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an
object in the infrequent access tier, Amazon S3 automatically moves it to the
frequent access tier, S3 Standard.
- S3 Glacier
o Low-cost storage designed for data archiving
o Able to retrieve objects within a few minutes to hours
S3 Glacier is ideal for data that must be archived for long periods, such as records
retained for compliance. You can retrieve objects stored in S3 Glacier within a few
minutes to a few hours.
- S3 Glacier Deep Archive
o Lowest-cost object storage class, ideal for archiving
o Able to retrieve objects within 12 hours
S3 Glacier Deep Archive is ideal for data that might be accessed only once or twice a
year. It has the lowest storage cost of the S3 storage classes, with a default retrieval
time of 12 hours.
Amazon EBS vs Amazon S3
Amazon EBS
- Sizes up to 16 TB
- Survive termination of their EC2 instance
- Solid state by default
- HDD options
Amazon S3
- Unlimited storage
- Individual objects up to 5TBs
- Write once/read many
- 99.999999999% (11 nines) durability
- Web enabled (every object has a URL)
- Regionally distributed
- Offers cost savings
- Serverless
If you have a photo gallery website, use S3 for the many photo files.
If you have an 80 GB video file, use EBS. The video file is broken down into blocks, small component parts.
With EBS, every new change saves just the recently updated blocks, because of block storage.
If you are doing complex read, write, and change functions, EBS is the knockout winner.
Multiple instances can access the data in the EFS at the same time. Compared to block storage and
object storage, file storage is ideal for use cases in which a large number of services and resources
need to access the same data at the same time.
Amazon EBS volumes live in a single Availability Zone; Amazon EFS is a regional file system that
instances across multiple AZs can access.
Amazon RDS (Relational Database Service) is a managed service that automates tasks such as :
- Automated patching
- Backups
- Redundancy
- Failover
- Disaster recovery
You can integrate Amazon RDS with other services to fulfill your business and operational needs,
such as using AWS Lambda to query your database from a serverless application.
In a nonrelational database, you also create tables. A table is a place where you can store and query data.
Nonrelational databases are sometimes referred to as “NoSQL databases” because they use
structures other than rows and columns to organize data. One type of structural approach for
nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys),
and items have attributes (values). You can think of attributes as being different features of your
data.
In a key-value database, you can add or remove attributes from items in the table at any time.
Additionally, not every item in the table has to have the same attributes.
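The key-value model above can be sketched with a plain dictionary of items: each key maps to a set of attributes, and items are free to differ. All names and values below are made up:

```python
# DynamoDB-style key-value table sketch: key -> item (dict of attributes).
employees = {}

employees["emp-1"] = {"name": "Mary", "phone": "555-0100"}
employees["emp-2"] = {"name": "John", "email": "john@example.com", "dept": "IT"}

# Attributes can be added or removed per item at any time,
# and items need not share the same attributes:
employees["emp-1"]["email"] = "mary@example.com"
del employees["emp-2"]["dept"]

print(employees["emp-1"])
```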
Amazon RDS vs Amazon DynamoDB
Amazon DynamoDB :
- Key-value
- Massive throughput capabilities
- PB (petabyte) size potential
- Granular API access
RDS use case : You have sales supply chain management system that you have to analyze for weak
spots. Using RDS because you need complex relational joins.
DynamoDB use case : you have an employee contact list: names, phone numbers, emails, employee
IDs. This is all single-table territory. You could use a relational database for this, but the things that
make relational databases great, all of that complex functionality, create overhead, lag, and
expense if you're not actually using them. This is where a non-relational database, DynamoDB, delivers
the knockout punch. By eliminating all that overhead, DynamoDB lets you build powerful,
incredibly fast databases where you don't need complex join functionality.
Amazon Redshift
Data warehousing as a service for big data analytics. It offers the ability to collect data from many
sources and helps you to understand relationships and trends across your data.
AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, nonrelational databases, and other types of data stores.
From :
- EC2
- On Premises
- RDS
To :
- EC2
- On Premises
- RDS
Heterogeneous migrations : when the source and target databases are of different types, the schema
is first converted by the AWS Schema Conversion Tool to match the target database, and then DMS
migrates the data.
- Development and test database migrations (ex: copy a production db to a dev/test db)
- Database consolidation (ex: consolidate several dbs into one central db)
- Continuous database replication (ex: continuous db replication for disaster recovery)
Amazon DocumentDB is great for content management systems, catalogs, and user profiles.
Amazon Neptune to build and run applications that work with highly connected datasets, such as
recommendation engines, fraud detection, and knowledge graphs.
Amazon Managed Blockchain
Amazon Managed Blockchain is a service that you can use to create and manage blockchain
networks with open-source frameworks.
Blockchain is a distributed ledger system that lets multiple parties run transactions and share data
without a central authority.
Amazon QLDB (Quantum Ledger Database) is a ledger database service. You can use Amazon QLDB
to review a complete history of all the changes that have been made to your application data.
Database accelerators
Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve
the read times of common requests. It supports two types of data stores: Redis and Memcached.
Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps improve
response times from single-digit milliseconds to microseconds.
Module 6 : Security
Shared Responsibility Model
Principle of least privilege : A user is granted access only to what they need
IAM Groups. You can attach policy/permission to the group, so all members of the group will have
the permissions.
AWS IAM : By default, when you create a new IAM user in AWS, it has no permissions associated
with it.
- Root user
- Users
- Groups
- Policies
- Roles
- Identity Federation
If you have your own identities from your company, you can federate those users into AWS using
role-based access. Use one login for both your corporate systems and AWS.
MFA especially for root user.
- Associated permissions
- Allow or deny specific actions
- Assumed for temporary amounts of time
- No username and password
- Access to temporary permissions
- Grant access to AWS resources (users, external identities, applications, other AWS services)
AWS IAM Roles : What if a coffee shop employee hasn’t switched jobs permanently, but instead,
rotates to different workstations throughout the day? This employee can get the access they need
through IAM roles.
When an identity assumes a role, it abandons all of the previous permissions that it has and it
assumes the permissions of that role.
You can actually avoid creating IAM users for every person in your organization by federating users
into your account. This means that they could use their regular corporate credentials to log into AWS
by mapping their corporate identities to IAM roles.
AWS Organizations
- Centralized management
- Consolidated billing (for bulk discounts)
- Implement hierarchical groupings of accounts (grouping to Organizational Units (OU))
- AWS service and API actions access control (using SCPs (Service Control Policies))
SCP : Specify the maximum permissions for member accounts in the organization. In essence, with
SCPs you can restrict which AWS services, resources, and individual API actions, the users and roles
in each member account can access.
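One way to picture "SCPs specify maximum permissions" is as a set intersection: a user's effective permissions are what the SCP allows AND what their IAM policy grants. This is a conceptual sketch with made-up action names, not the full AWS policy evaluation logic:

```python
# SCP-as-a-cap sketch: effective permissions are the intersection of
# what the SCP allows and what the IAM policy grants.
scp_allowed = {"s3:GetObject", "s3:PutObject", "ec2:DescribeInstances"}
iam_granted = {"s3:GetObject", "ec2:RunInstances"}

effective = scp_allowed & iam_granted
print(effective)   # ec2:RunInstances is granted by IAM but blocked by the SCP
```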
Organizational Unit
Group accounts into organizational units (OUs) to make it easier to manage accounts with similar
business or security requirements. When you apply a policy to an OU, all the accounts in the OU
automatically inherit the permissions specified in the policy.
By organizing separate accounts into OUs, you can more easily isolate workloads or applications that
have specific security requirements. For instance, if your company has accounts that can access only
the AWS services that meet certain regulatory requirements, you can put these accounts into one
OU. Then, you can attach a policy to the OU that blocks access to all other AWS services that do not
meet the regulatory requirements.
Compliance
AWS Artifact, is a service that provides on-demand access to AWS security and compliance reports
and select online agreements. AWS Artifact consists of two main sections :
- AWS Artifact Agreements
Suppose that your company needs to sign an agreement with AWS regarding your use of
certain types of information throughout AWS services. You can do this through AWS Artifact
Agreements.
In AWS Artifact Agreements, you can review, accept, and manage agreements for an
individual account and for all your accounts in AWS Organizations. Different types of
agreements are offered to address the needs of customers who are subject to specific
regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
- AWS Artifact Reports
Suppose that a member of your company’s development team is building an application and
needs more information about their responsibility for complying with certain regulatory
standards. You can advise them to access this information in AWS Artifact Reports.
AWS Artifact Reports provide compliance reports from third-party auditors. These auditors
have tested and verified that AWS is compliant with a variety of global, regional, and
industry-specific security standards and regulations. AWS Artifact Reports remains up to
date with the latest reports released. You can provide the AWS audit artifacts to your
auditors or regulators as evidence of AWS security controls.
In the Customer Compliance Center, you can read customer compliance stories to discover how
companies in regulated industries have solved various compliance, governance, and audit
challenges.
You can also access compliance whitepapers and documentation on a range of topics.
Additionally, the Customer Compliance Center includes an auditor learning path. This learning path
is designed for individuals in auditing, compliance, and legal roles who want to learn more about
how their internal operations can demonstrate compliance using the AWS Cloud.
UDP flood (low-level network attack) : a bad actor sends requests to a service (for example, a weather
service) with a spoofed return address, our address. The service sends its response data to us, and our
server gets flooded with data it never asked for. Overwhelming traffic.
HTTP Level Attack, some attacks are much more sophisticated, which look like normal customers
asking for normal things like complicated product searches over and over and over, all coming from
an army of zombified bot machines. They ask for so much attention that regular customers can't get
in.
Slowloris Attack
Imagine standing in line at the coffee shop, when someone in front of you takes seven minutes to
order their whatever it is they're ordering, and you don't get to order until they finish and get out of
your way. Well, Slowloris attack is the exact same thing. Instead of a normal connection, I would like
to place an order, the attacker pretends to have a terribly slow connection. You get the picture.
Meanwhile, your production servers are standing there waiting for the customer to finish their
request so they can dash off and return the result. But until they get the entire packet, they can't
move on to the next thread, the next customer. A few Slowloris attackers can exhaust the capacity of
your entire front end with almost no effort at all.
AWS WAF : a web application firewall that reads the signatures of bad actors.
AWS WAF lets you monitor network requests that come into your web applications. It works in a
similar way to block or allow traffic, but it does this by using a web access control list (ACL) to
protect your AWS resources. AWS WAF is like .htaccess for security, blocking access based on a list.
AWS Shield :
- Standard
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your
AWS resources from the most common, frequently occurring types of DDoS attacks.
As network traffic comes into your applications, AWS Shield Standard uses a variety of
analysis techniques to detect malicious traffic in real time and automatically mitigates it.
- Advanced
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the
ability to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and
Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing
custom rules to mitigate complex DDoS attacks.
Additional Services :
Encryption at rest : for example, when data is idle, the data in an Amazon DynamoDB table is
encrypted. This integrates with AWS KMS (Key Management Service).
AWS KMS
AWS KMS enables you to perform encryption operations through the use of cryptographic keys. A
cryptographic key is a random string of digits used for locking (encrypting) and unlocking
(decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can
also control the use of keys across a wide range of services and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need for your keys. For
example, you can specify which IAM users and roles are able to manage keys. Alternatively, you can
temporarily disable keys so that they are no longer in use by anyone. Your keys never leave AWS
KMS, and you are always in control of them.
Amazon Inspector
Amazon Inspector runs automated security assessments against your applications (ex: on EC2
instances). It checks applications for security vulnerabilities and deviations from security best
practices, such as open access to Amazon EC2 instances and installations of vulnerable software
versions.
Use case :
Suppose that the developers at the coffee shop are developing and testing a new ordering
application. They want to make sure that they are designing the application in accordance with
security best practices. However, they have several other applications to develop, so they cannot
spend much time conducting manual assessments. To perform automated security assessments,
they decide to use Amazon Inspector.
After Amazon Inspector has performed an assessment, it provides you with a list of security findings.
The list is prioritized by severity level and includes a detailed description of each security issue and a
recommendation for how to fix it. However, AWS does not guarantee that following the provided
recommendations resolves every potential security issue. Under the shared responsibility model,
customers are responsible for the security of their applications, processes, and tools that run on
AWS services.
Amazon GuardDuty
Is a service that provides intelligent threat detection for your AWS infrastructure and resources. It
identifies threats by continuously monitoring the network activity and account behavior within your
AWS environment.
GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs and
DNS logs. You can also configure AWS Lambda functions to take remediation steps automatically in
response to GuardDuty’s security findings.
Module 7 : Monitoring and Analytics
Monitoring
Monitoring, observing systems, collecting metrics, and then using data to make decisions
Amazon CloudWatch
Monitoring AWS Infrastructure in real-time. CloudWatch is a web service that enables you to
monitor and manage various metrics and configure alarm actions based on data from those metrics.
Amazon CloudWatch Alarm, for alerting you and trigger the action. You can create custom metrics
for your needs. Also integrated with Amazon SNS, to alert you via SMS. With CloudWatch, you can
create alarms that automatically perform actions if the value of your metric has gone above or below
a predefined threshold.
For example, suppose that your company’s developers use Amazon EC2 instances for application
development or testing purposes. If the developers occasionally forget to stop the instances, the
instances will continue to run and incur charges.
In this scenario, you could create a CloudWatch alarm that automatically stops an Amazon EC2
instance when the CPU utilization percentage has remained below a certain threshold for a specified
period. When configuring the alarm, you can specify to receive a notification whenever this alarm is
triggered.
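The alarm logic in the example above can be sketched as a simple threshold check over a window of metric samples. The threshold and sample values are made up for illustration:

```python
# CloudWatch-alarm-style sketch: stop an instance if CPU utilization
# stays below a threshold for every datapoint in the evaluation period.
def should_stop(cpu_samples, threshold=5.0):
    return all(s < threshold for s in cpu_samples)

idle = [1.2, 0.8, 2.5]      # forgotten dev instance: consistently idle
busy = [1.2, 40.0, 2.5]     # real work happening: one sample breaches
print(should_stop(idle))    # -> True (alarm fires, instance stopped)
print(should_stop(busy))    # -> False
```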
AWS CloudTrail
The Comprehensive API auditing tool. Every request gets logged in the CloudTrail Engine (for every
AWS services).
AWS CloudTrail records API calls for your account. The recorded information includes the identity of
the API caller, the time of the API call, the source IP address of the API caller, and more. You can
think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind
them.
Recall that you can use API calls to provision, manage, and configure your AWS resources. With
CloudTrail, you can view a complete history of user activity and API calls for your applications and
resources. Events are typically updated in CloudTrail within 15 minutes after an API call.
Example: AWS CloudTrail event
Suppose that the coffee shop owner is browsing through the AWS Identity and Access Management
(IAM) section of the AWS Management Console. They discover that a new IAM user named Mary
was created, but they do not know who, when, or which method created the user.
In the CloudTrail Event History section, the owner applies a filter to display only the events for the
“CreateUser” API action in IAM. The owner locates the event for the API call that created an IAM
user for Mary. This event record provides complete details about what occurred:
On January 1, 2020 at 9:00 AM, IAM user John created a new IAM user (Mary) through the AWS
Management Console.
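Filtering event history by API action, as the owner does above, looks roughly like this. The event records are simplified stand-ins for real CloudTrail events (the field names loosely mirror CloudTrail's, but the shape here is an assumption for illustration):

```python
# Sketch of filtering CloudTrail-style event records by API action.
events = [
    {"eventName": "CreateUser", "userIdentity": "John",
     "eventTime": "2020-01-01T09:00:00Z", "requestParameters": {"userName": "Mary"}},
    {"eventName": "RunInstances", "userIdentity": "Ana",
     "eventTime": "2020-01-02T10:00:00Z", "requestParameters": {}},
]

# Keep only events for the "CreateUser" API action, like the console filter:
create_user_events = [e for e in events if e["eventName"] == "CreateUser"]
for e in create_user_events:
    print(f'{e["userIdentity"]} created {e["requestParameters"]["userName"]} at {e["eventTime"]}')
```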
AWS Trusted Advisor, an automated advisor, is a web service that inspects your AWS environment
and provides real-time recommendations in accordance with AWS best practices, in five categories :
- Cost optimization
- Performance
- Security
- Fault tolerance
- Service limits
- The green check indicates the number of items for which it detected no problems
- The orange triangle represents the number of recommended investigations
- The red circle represents the number of recommended actions
Module 8 : Pricing and Support
AWS Free Tier
- Always free
These offers do not expire and are available to all AWS customers.
For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of
compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
- 12 months free
These offers are free for 12 months following your initial sign-up date to AWS.
Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly
hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.
- Trials
Short-term free trial offers start from the date you activate a particular service. The length of
each trial might vary by number of days or the amount of usage in the service.
For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that
enables you to run virtual private servers) offers 750 free hours of usage over a 30-day
period.
AWS Pricing Calculator lets you explore AWS services, organize them into groups, and calculate an
estimated price for your use cases.
AWS Pricing Examples
Amazon S3
Storage - You pay for only the storage that you use. You are charged the rate to store objects in your
Amazon S3 buckets based on your objects’ sizes, storage classes, and how long you have stored each
object during the month.
Requests and data retrievals - You pay for requests made to your Amazon S3 objects and buckets.
For example, suppose that you are storing photo files in Amazon S3 buckets and hosting them on a
website. Every time a visitor requests the website that includes these photo files, this counts
towards requests you must pay for.
Data transfer - There is no cost to transfer data between different Amazon S3 buckets or from
Amazon S3 to other services within the same AWS Region. However, you pay for data that you
transfer into and out of Amazon S3, with a few exceptions. There is no cost for data transferred into
Amazon S3 from the internet or out to Amazon CloudFront. There is also no cost for data transferred
out to an Amazon EC2 instance in the same AWS Region as the Amazon S3 bucket.
Management and replication - You pay for the storage management features that you have enabled
on your account’s Amazon S3 buckets. These features include Amazon S3 inventory, analytics, and
object tagging.
Example :
The AWS account in this example has used Amazon S3 in two Regions: Northern Virginia and
Ohio. For each Region, charges are itemized based on the factors described above (storage, requests
and data retrievals, data transfer, and management).
All the usage for Amazon S3 in this example is under the AWS Free Tier limits, so the account
owner would not have to pay for any Amazon S3 usage in this month.
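A back-of-the-envelope estimate combining the pricing factors above might look like this. The rates are made-up placeholders, not real AWS prices; real rates vary by Region and storage class:

```python
# Hypothetical S3 monthly cost estimate: storage + request charges.
storage_gb = 50
rate_per_gb = 0.023            # assumed $/GB-month (placeholder, not a real rate)
requests = 100_000
rate_per_1k_requests = 0.0004  # assumed $/1,000 requests (placeholder)

storage_cost = storage_gb * rate_per_gb
request_cost = (requests / 1000) * rate_per_1k_requests
total = storage_cost + request_cost
print(f"${total:.2f} for the month")
```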
AWS Billing Dashboard
Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and
analyze and control your costs.
- Compare your current month-to-date balance with the previous month, and get a forecast
of the next month based on current usage.
- View month-to-date spend by service.
- View Free Tier usage by service.
- Access Cost Explorer and create budgets.
- Purchase and manage Savings Plans.
- Publish AWS Cost and Usage Reports.
Consolidated billing :
The default maximum number of accounts allowed for an organization is 4, but you can contact AWS
Support to increase your quota, if needed.
Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and
Reserved Instances across the accounts in your organization. For instance, one account might not
have enough monthly usage to qualify for discount pricing. However, when multiple accounts are
combined, their aggregated usage may result in a benefit that applies across all accounts in the
organization.
AWS Budgets
You can create budgets to plan your service usage, service costs, and instance reservations.
The information in AWS Budgets updates three times a day. You can also set custom alerts for when
your usage exceeds (or is forecasted to exceed) the budgeted amount.
AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs
and usage over time.
AWS Support plans
Basic Support : free for all AWS customers. Includes 24/7 customer service, documentation,
whitepapers, support forums, a limited selection of AWS Trusted Advisor checks, and the AWS
Personal Health Dashboard.
Developer Support :
Customers in the Developer Support plan have access to features such as :
- Basic Support
- Email access to customer support
For example, suppose that your company is exploring AWS services. You’ve heard about a few
different AWS services. However, you’re unsure of how to potentially use them together to build
applications that can address your company’s needs. In this scenario, the building-block
architecture support that is included with the Developer Support plan could help you to identify
opportunities for combining specific services and features.
Business Support :
Customers with a Business Support plan have access to additional features, including:
- Use-case guidance to identify AWS offerings, features, and services that can best support
your specific needs
- All AWS Trusted Advisor checks
- Limited support for third-party software, such as common operating systems and application
stack components
Suppose that your company has the Business Support plan and wants to install a common third-
party operating system onto your Amazon EC2 instances. You could contact AWS Support for
assistance with installing, configuring, and troubleshooting the operating system. For advanced
topics such as optimizing performance, using custom scripts, or resolving security issues, you
may need to contact the third-party software provider directly.
Enterprise Support :
In addition to all the features included in the Basic, Developer, and Business Support plans,
customers with an Enterprise Support plan have access to features such as a designated Technical
Account Manager (TAM).
The TAM provides guidance, architectural reviews, and ongoing communication with your company
as you plan, deploy, and optimize your applications.
Your TAM provides expertise across the full range of AWS services. They help you design solutions
that efficiently use multiple services together through an integrated approach.
For example, suppose that you are interested in developing an application that uses several AWS
services together. Your TAM could provide insights into how best to use the services together,
while aligning with the specific needs that your company hopes to address through the new
application.
TAMs follow best practices from the AWS Well-Architected Framework, built around these pillars:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimization
Developer, Business, and Enterprise Support plans have pay-by-the-month pricing and require no
long-term contracts.
AWS Marketplace
AWS Marketplace is a digital catalog that includes thousands of software listings from independent
software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.
For each listing in AWS Marketplace, you can access detailed information on pricing options,
available support, and reviews from other AWS customers.
You can also explore software solutions by industry and use case. For example, suppose that your
company is in the healthcare industry. In AWS Marketplace, you can review use cases that software
helps you to address, such as implementing solutions to protect patient records or using machine
learning models to analyze a patient’s medical history and predict possible health risks.
Most vendors in the marketplace also offer on-demand pay-as-you-go options. Many vendors even
offer free trials or Quick Start plans to help you experiment and learn about their offerings.
AWS Marketplace categories include:
- Business Applications
- Data & Analytics
- DevOps
- Infrastructure Software
- Internet of Things
- Machine Learning
- Migration
- Security
AWS Cloud Adoption Framework (AWS CAF)
The AWS Cloud Adoption Framework helps you manage the process of migrating from on-premises (or
another cloud) to AWS by providing advice that enables a quick and smooth migration.
Business Capabilities :
- Business
- People
- Governance
Technical Capabilities :
- Platform
- Security
- Operations
AWS CAF Action Plan : Helps guide your organization through cloud migration
Business Perspective :
The Business Perspective ensures that IT aligns with business needs and that IT investments link to
key business results.
Use the Business Perspective to create a strong business case for cloud adoption and prioritize cloud
adoption initiatives. Ensure that your business strategies and goals align with your IT strategies and
goals.
Common roles in the Business Perspective include:
- Business managers
- Finance managers
- Budget owners
- Strategy stakeholders
People Perspective :
The People Perspective supports development of an organization-wide change management strategy for
successful cloud adoption.
Use the People Perspective to evaluate organizational structures and roles, new skill and process
requirements, and to identify gaps. This helps prioritize training, staffing, and organizational
changes.
Common roles in the People Perspective include:
- Human resources
- Staffing
- People managers
Governance Perspective :
The Governance Perspective focuses on the skills and processes to align IT strategy with business
strategy. This ensures that you maximize the business value and minimize risks.
Use the Governance Perspective to understand how to update the staff skills and processes
necessary to ensure business governance in the cloud. Manage and measure cloud investments to
evaluate business outcomes.
Platform Perspective :
The Platform Perspective includes principles and patterns for implementing new solutions on the
cloud, and migrating on-premises workloads to the cloud.
Use a variety of architectural models to understand and communicate the structure of IT systems
and their relationships. Describe the architecture of the target state environment in detail.
Security Perspective :
The Security Perspective ensures that the organization meets security objectives for visibility,
auditability, control, and agility.
Use the AWS CAF to structure the selection and implementation of security controls that meet the
organization’s needs.
Operations Perspective :
The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the
level agreed upon with your business stakeholders.
Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with and
support the operations of the business. The AWS CAF helps these stakeholders define current
operating procedures and identify the process changes and training needed to implement successful
cloud adoption.
Common roles in the Operations Perspective include:
- IT operations managers
- IT support managers
6 Strategies for Migration :
- Rehosting
Rehosting, also known as “lift-and-shift,” involves moving applications to AWS without changes.
In the scenario of a large legacy migration, in which the company is looking to implement its
migration and scale quickly to meet a business case, the majority of applications are
rehosted.
- Replatforming
Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud
optimizations to realize a tangible benefit. Optimization is achieved without changing the core
architecture of the application. This is what I did in my last freelance job.
- Refactoring/re-architecting
Refactoring (also known as re-architecting) involves reimagining how an application is
architected and developed by using cloud-native features. Refactoring is driven by a strong
business need to add features, scale, or performance that would otherwise be difficult to
achieve in the application’s existing environment.
- Repurchasing
Repurchasing involves moving from a traditional license to a software-as-a-service model.
For example, a business might choose to implement the repurchasing strategy by migrating
from a customer relationship management (CRM) system to Salesforce.com.
- Retaining
Retaining consists of keeping applications that are critical for the business in the source
environment. This might include applications that require major refactoring before they can
be migrated, or work that can be postponed until a later time.
- Retiring
Retiring is the process of removing applications that are no longer needed.
The AWS Snow Family is a collection of physical devices that help physically transport up to
exabytes of data into and out of AWS.
- AWS Snowcone
AWS Snowcone is a small, rugged, and secure edge computing and data transfer device.
It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage.
- AWS Snowball
2 types of devices :
o Snowball Edge Storage Optimized, well suited for large-scale data migrations and
recurring transfer workflows, in addition to local computing with higher capacity
needs.
Storage : 80 TB of hard disk drive (HDD) capacity for block volumes and
Amazon S3 compatible object storage, and 1 TB of SATA solid state drive
(SSD) for block volumes.
Compute : 40 vCPUs, and 80 GiB of memory to support Amazon EC2 sbe1
instances (equivalent to C5).
o Snowball Edge Compute Optimized, provides powerful computing resources for use
cases such as machine learning, full motion video analysis, analytics, and local
computing stacks.
Storage : 42-TB usable HDD capacity for Amazon S3 compatible object
storage or Amazon EBS compatible block volumes and 7.68 TB of usable
NVMe SSD capacity for Amazon EBS compatible block volumes.
Compute : 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100
GPU. Devices run Amazon EC2 sbe-c and sbe-g instances, which are
equivalent to C5, M5a, G3, and P3 instances.
- AWS Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move large amounts of
data to AWS. You can transfer up to 100 petabytes of data per Snowmobile, a 45-foot long
ruggedized shipping container, pulled by a semi-trailer truck.
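A back-of-the-envelope calculation shows why physical transfer wins at this scale. Assuming an ideal, fully saturated network link with no protocol overhead (an optimistic assumption):

```python
# How long would it take to move 100 PB over a network link instead of
# shipping it on a Snowmobile?
def transfer_days(petabytes, link_gbps):
    """Days needed to push the given data volume over an ideal link."""
    bits = petabytes * 1e15 * 8          # total data in bits
    seconds = bits / (link_gbps * 1e9)   # time at the given line rate
    return seconds / 86400               # seconds per day

# 100 PB over a 10 Gbps link takes roughly 926 days (over two and a half
# years), which is why trucks still beat the network at exabyte scale.
days = transfer_days(100, 10)
```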
When examining how to use AWS services, it is important to focus on the desired outcomes. You are
properly equipped to drive innovation in the cloud if you can clearly articulate your current
state, your desired state, and the problems you are trying to solve.
Amazon SageMaker : Quickly build, train, and deploy machine learning models at scale.
AWS DeepRacer : A chance for your developers to experiment with reinforcement learning. An
autonomous 1/18 scale race car that you can use to test reinforcement learning models
Amazon Textract : Extracting text and data from documents to make them more usable for your
enterprise instead of them just being locked away in a repository.
Amazon Augmented AI (A2I) : Provides built-in human review workflows for machine learning
predictions, so any business can add human oversight to ML without needing to build review
systems or have PhD-level expertise in-house.
AWS Ground Station : On-demand access to satellite ground stations; you pay only for the antenna
time you actually use.
Path to Cloud Journey :
- Serverless applications
Serverless refers to applications that don’t require you to provision, maintain, or administer
servers. You don’t need to worry about fault tolerance or availability. AWS handles these
capabilities for you.
AWS Lambda is an example of a service that you can use to run serverless applications. If
you design your architecture to trigger Lambda functions to run your code, you can bypass
the need to manage a fleet of servers.
- Artificial Intelligence
You can perform the following tasks:
o Convert speech to text with Amazon Transcribe.
o Discover patterns in text with Amazon Comprehend.
o Identify potentially fraudulent online activities with Amazon Fraud Detector.
o Build voice and text chatbots with Amazon Lex.
- Machine Learning
Traditional machine learning (ML) development is complex, expensive, time consuming, and
error prone. AWS offers Amazon SageMaker to remove the difficult work from the process
and empower you to build, train, and deploy ML models quickly.
You can use ML to analyze data, solve complex problems, and predict outcomes before they
happen.
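The serverless pattern described above can be sketched as a minimal Lambda handler. Lambda invokes this function with the triggering event; there are no servers to provision or manage. The event shape here (a "name" key) is an invented example, not a fixed AWS format:

```python
# Minimal AWS Lambda handler. In Lambda, "lambda_handler" is a common
# default entry-point name; the runtime supplies event and context.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation for testing (Lambda supplies event/context at runtime):
response = lambda_handler({"name": "AWS"}, None)
```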
Module 10 : The Cloud Journey
AWS Well-Architected Framework
This is a tool you can use to evaluate the architectures you build for excellence in a few
different categories. It helps you understand how to design and operate reliable, secure,
efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently
measure your architecture against best practices and design principles and identify areas for
improvement.
- Operational Excellence
Focuses on running and monitoring systems to deliver business value, and continually improving
the supporting processes and procedures: for example, automating changes with deployment
pipelines, or responding to triggered events.
Design principles for operational excellence in the cloud include performing operations as
code, annotating documentation, anticipating failure, and frequently making small,
reversible changes.
- Security
The ability to protect information, systems, and assets while delivering business value
through risk assessments and mitigation strategies. Checking integrity of data and, for
example, protecting systems by using encryption.
When considering the security of your architecture, apply these best practices:
o Automate security best practices when possible.
o Apply security at all layers.
o Protect data in transit and at rest.
- Reliability
Focuses on recovery planning, such as recovery from an Amazon DynamoDB disruption or an EC2
node failure, and on how you handle change to meet business and customer demand.
The ability of a system to do the following:
o Recover from infrastructure or service disruptions
o Dynamically acquire computing resources to meet demand
o Mitigate disruptions such as misconfigurations or transient network issues
- Performance Efficiency
Entails using IT and computing resources efficiently: for example, selecting the right Amazon
EC2 instance type based on workload and memory requirements, and making informed decisions to
maintain efficiency as business needs evolve.
The ability to use computing resources efficiently to meet system requirements and to
maintain that efficiency as demand changes and technologies evolve.
Evaluating the performance efficiency of your architecture includes experimenting more
often, using serverless architectures, and designing systems to be able to go global in
minutes.
- Cost Optimization
The ability to run systems to deliver business value at the lowest price point, optimizing over
the full cost. This means controlling where money is spent: for example, checking whether you
have overestimated your EC2 server size, then lowering cost by choosing a more cost-effective
size.
Six Advantages of Cloud Computing :
1. Trade upfront expense for variable expense. Instead of paying fixed costs for physical space,
hardware, staff for racking and stacking, and the overhead of running a data center, you pay
only for what you use.
2. Benefit from massive economies of scale. Achieve a lower variable cost than you could running
a data center on your own.
3. Stop guessing capacity. Guessing your capacity upfront can be problematic if you over- or
underestimate; on AWS, you scale as demand changes.
4. Increase speed and agility. You can experiment on AWS quickly and at low cost.
5. Stop spending money running and maintaining data centers.
6. Go global in minutes.
Glossary
Access Keys are used for programmatic access to AWS, but not for controlling S3 bucket access. You
must provide your AWS access keys to make programmatic calls to AWS or to use the AWS
Command Line Interface or AWS Tools for PowerShell.
Identities are the IAM resource objects that are used to identify and group users. You can attach
a policy to an IAM identity. These include users, groups, and roles.
A Principal is a person or application that uses the AWS account root user, an IAM user, or an IAM
role to sign in and make requests to AWS.
Entities are the IAM resource objects that AWS uses for authentication. These include IAM users,
federated users, and assumed IAM roles.
Resource Groups : You can use resource groups to organize your AWS resources. Resource groups
make it easier to manage and automate tasks on large numbers of resources at one time.
Amazon Macie is a fully managed data security and data privacy service that uses machine learning
and pattern matching to discover and protect your sensitive data in AWS.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those
built using a microservices architecture. With X-Ray, you can understand how your application and
its underlying services are performing to identify and troubleshoot the root cause of performance
issues and errors. X-Ray provides an end-to-end view of requests as they travel through your
application, and shows a map of your application’s underlying components.
Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and
operates at both the request level and connection level. Classic Load Balancer is intended for
applications that were built within the EC2-Classic network.
Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic and provides
advanced request routing targeted at the delivery of modern application architectures, including
microservices and containers.
Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User
Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is
required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets
within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests
per second while maintaining ultra-low latencies.
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of
your managed instances. A managed instance is any EC2 instance or on-premises machine in your
hybrid environment that has been configured for Systems Manager.
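A Run Command request can be sketched as the parameters you would pass to boto3's `ssm.send_command`. The instance ID and shell command below are illustrative; `AWS-RunShellScript` is the standard SSM document for running Linux shell commands:

```python
# Sketch of a Systems Manager Run Command request: run shell commands
# remotely on managed instances, without SSH.
def build_run_command(instance_ids, commands):
    """Return send_command-style parameters for the given instances."""
    return {
        "InstanceIds": instance_ids,
        "DocumentName": "AWS-RunShellScript",   # built-in SSM document
        "Parameters": {"commands": commands},
    }

req = build_run_command(["i-0123456789abcdef0"], ["uptime"])
# boto3.client("ssm").send_command(**req)
```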
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a
variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises
servers. CodeDeploy does not house git repositories.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your
release pipelines for fast and reliable application and infrastructure updates. CodePipeline
automates the build, test, and deploy phases of your release process every time there is a code
change, based on the release model you define. This enables you to rapidly and reliably deliver
features and updates.
AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories.
It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.
CodeCommit eliminates the need to operate your own source control system or worry about scaling
its infrastructure. You can use CodeCommit to securely store anything from source code to binaries,
and it works seamlessly with your existing Git tools.
AWS CloudFormation simplifies provisioning and management on AWS. You can create templates
for the service or application architectures you want and have AWS CloudFormation use those
templates for quick and reliable provisioning of the services or applications (called “stacks”). You can
also easily update or replicate the stacks as needed.
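A minimal template and the call that would provision it can be sketched as follows. The bucket resource and stack name are invented for illustration; real templates usually live in versioned YAML or JSON files:

```python
import json

# Minimal CloudFormation template: one S3 bucket, no parameters.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NotesBucket": {              # logical ID, referenced within the stack
            "Type": "AWS::S3::Bucket"
        }
    },
}

# Parameters for creating the stack from this template.
stack_request = {
    "StackName": "demo-stack",
    "TemplateBody": json.dumps(template),
}
# boto3.client("cloudformation").create_stack(**stack_request)
```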
S3 Lifecycle Policy : You can add rules in an S3 Lifecycle configuration to tell Amazon S3 to
transition objects to another Amazon S3 storage class. For example:
When you know that objects are infrequently accessed, you might transition them to the S3
Standard-IA storage class.
You might want to archive objects that you don't need to access in real time to the S3 Glacier
storage class.
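The two example transitions above can be sketched as a single Lifecycle configuration, shaped like the one boto3's `s3.put_bucket_lifecycle_configuration` accepts. The 30- and 90-day thresholds and the bucket name are illustrative choices:

```python
# Sketch of an S3 Lifecycle configuration: move objects to Standard-IA
# after 30 days, then archive them to Glacier after 90 days.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```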
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot
Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot
Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized
workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development
workloads. The key phrase in this question is, “It is alright if there are interruptions in the
application.” If the application cannot accept interruptions (for example, a process that needs
to run uninterrupted from start to finish), then Spot Instances are eliminated and On-Demand
would be the better option.
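Launching Spot capacity can be sketched via the parameters of EC2's `run_instances`. The instance type and count are illustrative; the key difference from an On-Demand launch is the `InstanceMarketOptions` block:

```python
# Sketch of launching Spot Instances for interruption-tolerant work.
def build_spot_launch(instance_type, count):
    """Return run_instances-style parameters requesting the spot market."""
    return {
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        # Request the spot market instead of On-Demand capacity:
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": {"SpotInstanceType": "one-time"},
        },
    }

req = build_spot_launch("c5.large", 2)
# boto3.client("ec2").run_instances(ImageId="ami-...", **req)
```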
You can use AWS Cost and Usage Reports (AWS CUR) to publish your AWS billing reports to an
Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that
break down your costs by the hour or day, by product or product resource, or by tags that you define
yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.
You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice
Calc, or access them from an application using the Amazon S3 API.
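Since the report lands as CSV in your bucket, the per-product cost breakdown can be sketched in a few lines of parsing. The column names below follow the CUR naming convention (`lineItem/ProductCode`, `lineItem/UnblendedCost`), but the sample rows are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Sum unblended cost per product from a Cost and Usage Report CSV
# (in practice, downloaded from the report's S3 bucket).
def cost_by_product(csv_text):
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["lineItem/ProductCode"]] += float(row["lineItem/UnblendedCost"])
    return dict(totals)

sample = (
    "lineItem/ProductCode,lineItem/UnblendedCost\n"
    "AmazonEC2,1.50\n"
    "AmazonS3,0.25\n"
    "AmazonEC2,2.50\n"
)
totals = cost_by_product(sample)  # {'AmazonEC2': 4.0, 'AmazonS3': 0.25}
```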
AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is
experiencing events that may impact you. While the Service Health Dashboard displays the general
status of AWS services, Personal Health Dashboard gives you a personalized view into the
performance and availability of the AWS services underlying your AWS resources.
The latest Amazon DynamoDB update added support for JSON data, making it easy to store JSON
documents in a DynamoDB table while preserving their complex and possibly nested shape. Now,
the AWS SDK for .NET has added native JSON support, so you can use raw JSON data when working
with DynamoDB. This is especially helpful if your application needs to consume or produce JSON (for
instance, if your application is talking to a client-side component that uses JSON to send and receive
data), as you no longer need to manually parse or compose this data.
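With the boto3 resource interface for DynamoDB, a plain nested Python dict maps directly onto DynamoDB's document types (maps and lists), so no manual parsing is needed. The table name, key schema, and attributes below are invented for illustration:

```python
# Sketch of storing a nested JSON document in a DynamoDB item.
item = {
    "customer_id": "c-1001",             # partition key (assumed schema)
    "profile": {                          # nested document, stored as a map
        "name": "Ana",
        "addresses": [                    # list of maps, preserved as-is
            {"city": "Lisbon", "primary": True},
            {"city": "Porto", "primary": False},
        ],
    },
}
# boto3.resource("dynamodb").Table("Customers").put_item(Item=item)
```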
Amazon Rekognition makes it easy to add image and video analysis to your applications using
proven, highly scalable, deep learning technology that requires no machine learning expertise to use.