AWS Services

The document provides an overview of various AWS services, detailing their functionalities and benefits. It covers services such as APIs, AWS Cost Explorer, Amazon Aurora, and Amazon RDS, among others, explaining their roles in data management, cost analysis, and cloud computing. Additionally, it highlights features like scalability, high availability, and security within these services.


Green – Services you should know in depth

AWS Service – What it Does in Detail


APIs – Application Programming Interface
APIs are mechanisms that enable two software components to communicate with each other using a set of definitions and protocols. For example, the weather bureau's software system contains daily weather data. The weather app on your phone "talks" to this system via APIs and shows you daily weather updates on your phone.
In the context of APIs, the word Application refers to any software with a distinct function. Interface can be thought of as a contract of service between two applications: the contract defines how the two communicate with each other using requests and responses, and the API documentation describes how developers are to structure those requests and responses.
API architecture is usually explained in terms of client and server: the app sending the request is called the client, and the app sending the response is called the server. In the weather example, the bureau's weather database is the server and the mobile app is the client.
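As a concrete sketch of this client/server contract, the Python snippet below plays the role of the weather app. The endpoint URL, the city parameter, and the response shape are hypothetical stand-ins for whatever the bureau's API documentation actually defines.

```python
# Hypothetical client: asks a weather bureau's API (the server) for
# daily weather. URL, parameter names, and response fields are assumed
# for illustration only.
import requests

response = requests.get(
    "https://api.example-weather-bureau.org/v1/daily",  # the server
    params={"city": "London"},  # a request structured per the API docs
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the server's response, e.g. {"high": 18, "low": 9}
```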
Cost Explorer
AWS Cost Explorer is a tool that enables you to view and analyse your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports.
You can view data for up to the last 12 months, forecast how much you're likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs.
You can view your costs and usage using the Cost Explorer user interface free of charge. You can also access your data programmatically using the Cost Explorer API; each paginated API request incurs a charge of $0.01. You can't disable Cost Explorer after you enable it.
In addition, Cost Explorer provides preconfigured views that display at-a-glance information about your cost trends and give you a head start on customizing views that suit your needs.
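A minimal sketch of the programmatic route, calling the Cost Explorer API through boto3 (service name "ce"); the date range is a placeholder, and remember each paginated request is billed at $0.01:

```python
import boto3

ce = boto3.client("ce")
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
for period in result["ResultsByTime"]:
    print(period["TimePeriod"]["Start"],
          period["Total"]["UnblendedCost"]["Amount"])
```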
AWS Cost and Usage Reports (AWS CUR)
The AWS Cost and Usage Reports (AWS CUR) contain the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour, day, or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.
AWS Cost and Usage Reports tracks your AWS usage and provides estimated charges associated with your account.
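A sketch of the "access them from an application" option: reading a CUR CSV straight from the S3 bucket with boto3. The bucket name, key, and the two CUR column names are assumptions for illustration; adjust them to your own report layout.

```python
import csv
import io

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-billing-reports",    # your CUR bucket
                    Key="cur/monthly-report.csv")   # placeholder key
reader = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))
for row in reader:
    # Column names follow the usual CUR layout; adjust to your report.
    print(row["lineItem/ProductCode"], row["lineItem/UnblendedCost"])
```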


Yellow – Services you should know quite well: what each one is, how it's deployed, and the benefits
AWS Service – What it Does in some Detail
Amazon EC2
AWS Lambda
Amazon Aurora
A fully managed (serverless) relational database engine that is compatible with MySQL and PostgreSQL. It was designed from the ground up by Amazon to be used within AWS and the cloud.
It is part of Amazon RDS and delivers up to 5 times the throughput of standard MySQL databases.
It is made up of clusters, and it automates and standardises database clustering and replication, which are typically among the most challenging aspects of database configuration and administration.

Features of Amazon Aurora:
 High performance and scalability
 High availability and durability
 Multiple levels of security
 Compatible with MySQL and PostgreSQL
 Fully managed

When an Aurora instance is created, a database (DB) cluster is created. An Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances.
An Aurora cluster volume is a virtual DB storage volume that spans multiple AZs; each AZ has a copy of the DB cluster data. Aurora storage is allocated in 10GB segments called protection groups. An Aurora DB cluster is made up of 2 types of DB instances: a primary DB instance and Aurora Replicas.
A primary DB instance supports READ + WRITE operations and performs all the data modifications to the cluster volume. Each Aurora DB cluster has one primary instance.
An Aurora Replica supports only READ operations. Each Aurora DB cluster can have up to 15 Aurora Replicas in addition to the primary instance. Multiple replicas distribute the read workload, and if they are located in separate AZs, they increase the high availability and resilience of the database.
S3 is used for nearly continuous backups, and up to 15 Read Replicas can be used to ensure data is not lost (and to improve read performance). Aurora is also designed for near-instant crash recovery if your primary DB instance becomes unhealthy.

It is a distributed, fault-tolerant, self-healing storage system that automatically scales when needed.
It combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
A database instance is created in the background that you cannot see from the Management Console (MC).
The cluster volume (storage) is spread across 3 AZs, with 2 copies of the database in each AZ.
There is a primary DB (with read + write permissions) and a standby DB (used only when the primary DB fails – automatic failover); this is called a Multi-AZ deployment.
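The cluster-plus-instances layout above can be sketched with boto3: create the DB cluster, then a writer instance, then a replica. All identifiers, the password, and the instance class are placeholders.

```python
import boto3

rds = boto3.client("rds")

# The cluster owns the shared cluster volume that spans the AZs.
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder
)
# The first instance in the cluster becomes the primary (READ + WRITE).
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-writer",
    DBClusterIdentifier="demo-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
# Further instances join as Aurora Replicas (READ only, up to 15).
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-replica-1",
    DBClusterIdentifier="demo-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```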
Amazon Relational Database Service (RDS)
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
There are 15 database engines in AWS, but there are 6 within RDS:
 MySQL
 PostgreSQL
 MariaDB
 Oracle Database
 Microsoft SQL Server
 Amazon Aurora (a high-performance, MySQL- and PostgreSQL-compatible database engine)
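For comparison with the Aurora sketch above, a minimal sketch of provisioning one of the non-Aurora engines (MySQL here); all names and sizes are placeholder values.

```python
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",
    Engine="mysql",                         # any of the six RDS engines
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder
)
```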

AWS Database Migration Service (AWS DMS)
AWS Auto Scaling
AWS CloudFormation
AWS CloudTrail
Amazon CloudWatch
AWS Config
Amazon Route 53
Amazon VPC
Virtual Private Cloud. Each default VPC spans all Availability Zones (AZs) in that region.
You can create up to 4 more custom VPCs, each of which spans all AZs in a region.
The default VPC's IP address range is always 172.31.0.0/16 (CIDR block), which allows roughly 65,000 possible host IPs, and a default VPC exists in every region.
The default VPC has a default public subnet in each AZ; every subnet created in the default VPC is public.
Every default VPC has a default Security Group (SG), a default route table, and subnets with auto-assign public IP already enabled, so any EC2 instance created within these subnets also gets a public IP address.
An internet gateway and a route to it are also created for the public subnets.

Custom VPCs also span ALL AZs.

SUBNETS
 Sit inside a VPC
 Sit inside an Availability Zone and cannot span AZs
 Subnets HAVE to sit inside an AZ
 All subnets (public and private) MUST have an IP address range (CIDR block) that falls within the CIDR block range of the VPC's IP address.

For a subnet to be public, two things must be present: a route table and an internet gateway. The IGW must be attached to the VPC that the subnet is in, and there must be a route from the public subnet to the internet gateway.
If these 2 things are missing, then a subnet is private.
A route to 0.0.0.0/0 covers everything outside the VPC. It tells AWS that anything that isn't local (on the local root account) – anything you aren't talking to in the subnet or VPC, e.g., an EC2 instance, or anything that is OUTSIDE the VPC (even another VPC or subnet) – MUST go through the IGW (Internet Gateway) that is attached to the VPC.
A subnet is a smaller subset of the main IP address range, which makes the network faster. You can configure rules so that EC2 instances remain isolated and don't communicate with EC2 instances in different subnets. This avoids broadcasting to all devices within all of the subnets.
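A sketch of the "two things that make a subnet public" rule, wired up with boto3; the CIDR ranges and AZ name are example values.

```python
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    CidrBlock="10.0.1.0/24",        # must fall within the VPC's CIDR
    AvailabilityZone="us-east-1a",  # a subnet sits inside exactly one AZ
)["Subnet"]

# Thing 1: an internet gateway attached to the subnet's VPC.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])

# Thing 2: a route sending non-local traffic (0.0.0.0/0) to the IGW.
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"],
                          SubnetId=subnet["SubnetId"])
```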
AWS Identity and Access Management
(IAM)
Amazon Elastic Block Store (Amazon EBS)
A block storage system offered by AWS. It is best used for storing persistent data and provides highly available block storage volumes for use with Amazon EC2 instances.
Each EBS volume is automatically replicated behind the scenes within the SAME Availability Zone – protecting data from component failure and allowing for high availability and durability – and you can scale usage (change the size of the volume) up or down within minutes, all while only paying for the resources that you provision.

Recall that persistent storage is any data storage device that retains data after power to that device is shut off. It's also referred to as non-volatile storage.
EBS volumes offer the consistent and low-latency performance that you need to run your workloads.
You can use Amazon EBS to create individual storage volumes and attach them to an EC2 instance. Amazon EBS offers block-level storage with volumes automatically replicated within their Availability Zone. This arrangement provides durable, detachable, block-level storage (such as an external hard drive) for your EC2 instances.
The volumes are directly attached to the instances. Thus, they provide low latency between where the data is stored and where it might be used on the instance. For this reason, they can be used to run a database with an EC2 instance.
EBS volumes can also be used to back up your instances into Amazon Machine Images (AMIs), which are stored in Amazon Simple Storage Service (Amazon S3) and can be reused to create new EC2 instances later.

Uses include:
• Boot volumes and storage for EC2
instances
• Data storage with a file system
• Database hosts
• Enterprise applications

To provide an even higher level of data durability, you can use Amazon EBS to create point-in-time snapshots of your volumes. You can also re-create a new volume from a snapshot at any time. You can share snapshots, or even copy snapshots to different AWS Regions for even greater disaster recovery (DR) protection (through copying AMIs). For example, you can encrypt and share your snapshots from Virginia, US to Tokyo, Japan.

You can also get encrypted Amazon EBS volumes at no additional cost. The encryption occurs on the Amazon EC2 side, and the data that moves between the EC2 instance and the EBS volume inside AWS data centres is encrypted in transit.
EBS volumes can increase capacity and change to different types. You can change from a hard disk drive (HDD) to a solid state drive (SSD), or increase from a 50-GB volume to a 16-TB volume. You can do this resizing operation dynamically, without stopping the instances.

Snapshots
Amazon EBS provides the ability to back up
snapshots of your data to Amazon S3 for
durable recovery. If you choose Amazon EBS
snapshots, the added cost is calculated per
GB-month of data stored.

Data transfer
Consider the amount of data that is transferred
out of your application.
Inbound data transfer is free, and outbound
data transfer charges are tiered.

EBS Costs
Volume storage for all EBS volume types is
charged by the amount that you
provision in GB per month, until you release
the storage.

IOPS (Input/output operations per second)
General Purpose (SSD) – Charged by the amount you provision in GB per month until storage is released. Input/output operations per second (IOPS) is a way to measure the performance of storage devices. A higher IOPS means that a storage device can handle more input and output (that is, write and read) operations.
For Amazon EBS, I/O is included in the price of General Purpose (SSD) volumes, while for Amazon EBS Magnetic volumes, I/O is charged by the number of requests that you make to your volume. With Provisioned IOPS (SSD) volumes, you are also charged by the amount you provision in IOPS (multiplied by the percentage of days that you provision for the month).
Magnetic (standard) – Charged by the number of requests to the volume.
Provisioned IOPS (SSD) – Charged by the amount you provision in IOPS (by percentage of day or month that is used).

EBS Volume Types
SSDs:
Provisioned IOPS SSD (io1) volumes
General Purpose SSD (gp2) volumes
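A sketch of the EBS lifecycle described above: create a gp2 volume, attach it, snapshot it, then grow it without stopping the instance. The AZ and instance ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

vol = ec2.create_volume(AvailabilityZone="us-east-1a",  # same AZ as the instance
                        Size=50, VolumeType="gp2")
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",     # placeholder instance
                  Device="/dev/sdf")

# Point-in-time snapshot, stored in S3 and billed per GB-month.
ec2.create_snapshot(VolumeId=vol["VolumeId"], Description="nightly backup")

# Dynamic resize without stopping the instance: 50 GB -> 200 GB.
ec2.modify_volume(VolumeId=vol["VolumeId"], Size=200)
```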

AWS Instance Store
Instance stores provide temporary (ephemeral) block-level storage for your instance. This storage is located on disks that are physically attached to the host computer.
Amazon Data Lifecycle Manager (Amazon DLM)
Automates the creation, retention and deletion of snapshots. Tags are used to identify which volumes are being backed up. Rules are defined that point to TAGS (or triggers) and take snapshots of certain EBS volumes on a schedule – at certain times and days.
You can do this via the CLI, which avoids having to use the Management Console.
These rules/policies are created in JSON format (text files).
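A sketch of such a policy created through the SDK rather than the console; the tag key/value and the role ARN are placeholders, and the PolicyDetails structure is the JSON document the text mentions.

```python
import boto3

dlm = boto3.client("dlm")
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DlmRole",  # placeholder
    Description="Daily snapshots of tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        # The tag that marks which volumes get backed up.
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                           "Times": ["03:00"]},  # when to snapshot
            "RetainRule": {"Count": 7},          # keep the last 7
        }],
    },
)
```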
Amazon S3
Amazon S3 can be used to store and retrieve any amount of data (objects), at any time, from anywhere on the web.
Amazon S3 Glacier
Amazon S3 Glacier is a secure, durable and extremely low-cost Amazon S3 storage class for data archiving and long-term backup.
Data model concepts
Archive: Any object, such as a photo, video, file or document, that you store in Amazon S3 Glacier.
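A sketch of both ideas with boto3: a normal S3 upload, and an archive written straight to the S3 Glacier storage class. Bucket, key, and file names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Ordinary object in S3's default storage class.
s3.put_object(Bucket="my-bucket", Key="notes.txt", Body=b"hello")

# Archive: same API, but stored in the low-cost Glacier storage class.
with open("photos-2020.zip", "rb") as archive:
    s3.put_object(Bucket="my-bucket", Key="archive/photos-2020.zip",
                  Body=archive, StorageClass="GLACIER")
```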
Blue – Services you need to know at a basic level, mainly to eliminate them from exam questions as not the answer.
Purple – Same as above; just research them in basic detail.

AWS Service – What it Does Briefly


AWS Marketplace
AWS Professional Services
AWS Personal Health Dashboard
AWS Service Catalog
AWS Service Health Dashboard
Service quotas
AWS software development kits
(SDKs)
AWS Support Centre
AWS Support tiers
Virtual private networks (VPNs)
Amazon Kinesis
Amazon Simple Queue Service
(Amazon SQS)
AWS Batch
Amazon Lightsail
Amazon Lightsail is the easiest way to get started with Amazon Web Services (AWS) for developers who need to build websites or web applications. It includes everything you need to launch your project quickly – instances (virtual private servers), container services, managed databases, content delivery network (CDN) distributions, load balancers, SSD-based block storage, static IP addresses, DNS management of registered domains, and resource snapshots (backups) – for a low, predictable monthly price.

https://www.youtube.com/watch?v=taMlabDBO58
Amazon WorkSpaces
Amazon Elastic Kubernetes Service
(Amazon EKS)
AWS Fargate
Amazon ElastiCache
AWS Budgets
AWS Secrets Manager
AWS WorkDocs
AWS Step Functions – Serverless Service
A collection of microservices that are loosely coupled in one way or another and are individual tasks working together. It creates a workflow, for example input data with tasks to achieve an objective, i.e., Task 1 (action 1) = Lambda function > Task 2 (action 2) = Lambda function to create an EC2 instance > Task 3 = output to an S3 bucket.
The workflow is sometimes called a State Machine, and moving from one step to the next is called a "state transition". A branching step in the middle of a workflow is called a "Choice" state.
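The three-task example above can be sketched in the Amazon States Language and registered as a state machine; the Lambda function ARNs and the IAM role ARN are placeholders.

```python
import json

import boto3

# The workflow (state machine): Task 1 -> Task 2 -> Task 3, as above.
definition = {
    "StartAt": "Task1",
    "States": {
        "Task1": {"Type": "Task",
                  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:task1",
                  "Next": "Task2"},
        "Task2": {"Type": "Task",
                  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:create-ec2",
                  "Next": "Task3"},
        "Task3": {"Type": "Task",
                  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:write-to-s3",
                  "End": True},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="demo-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)
```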
AWS Service – What it Does Briefly
AWS Certificate Manager (ACM)
Use AWS Certificate Manager (ACM) to provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and your internal connected resources. ACM handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications. ACM removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

Use cases:
 Protect and Secure your website – Provision and
manage certificates so you can securely terminate traffic
to your website or application.
 Protect your internal resources - Secure
communication between connected resources on private
networks, such as servers, mobile and IoT devices, and
applications.
 Improve uptime - Maintain SSL/TLS certificates,
including certificate renewals, with automated certificate
management.

You can provide certificates for your integrated AWS services either by issuing them directly with ACM or by importing third-party certificates into the ACM management system. ACM certificates can secure singular domain names, multiple specific domain names, wildcard domains, or combinations of these. ACM wildcard certificates can protect an unlimited number of subdomains. You can also export ACM certificates signed by AWS Private CA for use anywhere in your internal PKI (Public Key Infrastructure).
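A sketch of provisioning a public certificate with the SDK, including a wildcard entry to cover all subdomains; the domain is a placeholder.

```python
import boto3

acm = boto3.client("acm")
response = acm.request_certificate(
    DomainName="example.com",                   # placeholder domain
    SubjectAlternativeNames=["*.example.com"],  # wildcard: all subdomains
    ValidationMethod="DNS",                     # prove ownership via DNS
)
print(response["CertificateArn"])
```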
AWS CloudHSM
AWS CloudHSM combines the benefits of the AWS cloud with
the security of hardware security modules (HSMs).
A hardware security module (HSM) is a computing device that
processes cryptographic operations and provides secure storage
for cryptographic keys. With AWS CloudHSM, you have
complete control over high availability HSMs that are in the AWS
Cloud, have low-latency access, and a secure root of trust that
automates HSM management (including backups, provisioning,
configuration, and maintenance).
Amazon Cognito
Amazon Cognito is an identity platform for web and mobile apps.
It’s a user directory, an authentication server, and an
authorization service for OAuth 2.0 access tokens and AWS
credentials. With Amazon Cognito, you can authenticate and
authorize users from the built-in user directory, from your
enterprise directory, and from consumer identity providers like
Google and Facebook.

Two components make up Amazon Cognito: user pools and identity pools. They operate independently or in tandem, based on the access needs of your users.


Amazon Detective
Amazon Detective helps you analyze, investigate, and quickly identify the root cause of security findings or suspicious
activities. Detective automatically collects log data from your
AWS resources. It then uses machine learning, statistical
analysis, and graph theory to generate visualizations that help
you to conduct faster and more efficient security investigations.
The Detective prebuilt data aggregations, summaries, and
context help you to quickly analyse and determine the nature
and extent of possible security issues.
With Detective, you can access up to a year of historical event
data. This data is available through a set of visualizations that
show changes in the type and volume of activity over a selected
time window. Detective links these changes to GuardDuty
findings.
Detective automatically extracts time-based events such as login
attempts, API calls, and network traffic from AWS CloudTrail and
Amazon VPC flow logs. It also ingests findings detected by
GuardDuty.

From those events, Detective uses machine learning and visualization to create a unified, interactive view of your resource
behaviours and the interactions between them over time. You
can explore this behaviour graph to examine disparate actions
such as failed logon attempts or suspicious API calls.

You can also see how these actions affect resources such as
AWS accounts and Amazon EC2 instances.

You can adjust the behaviour graph's scope and timeline for a
variety of tasks:
 Rapidly investigate any activity that falls outside the
norm.
 Identify patterns that may indicate a security issue.
 Understand all of the resources affected by a finding.

Detective tailored visualizations provide a baseline for and summarize the account information. These findings can help
answer questions such as "Is this an unusual API call for this
role?" Or "Is this spike in traffic from this instance expected?"
With Detective, you don't have to organize any data or develop,
configure, or tune your own queries and algorithms. There are
no upfront costs and you pay only for the events analysed, with
no additional software to deploy or other feeds to subscribe to.
Amazon GuardDuty
Amazon GuardDuty is a security monitoring service that
analyses and processes foundational data sources e.g., AWS
CloudTrail management events, AWS CloudTrail event logs,
VPC flow logs (from Amazon EC2 instances), and DNS logs.
It also processes additional data sources, e.g., Kubernetes audit logs, RDS login
activity, S3 logs, EBS volumes, Runtime monitoring, and
Lambda network activity logs. It uses threat intelligence feeds,
such as lists of malicious IP addresses and domains, and
machine learning to identify unexpected, potentially
unauthorized, and malicious activity within your AWS
environment.

This can include issues like escalation of privileges, use of exposed credentials, or communication with malicious IP
addresses, domains, presence of malware on your Amazon EC2
instances and container workloads, or discovery of unusual
patterns of login events on your database. For example,
GuardDuty can detect compromised EC2 instances and
container workloads serving malware, or mining bitcoin. It also
monitors AWS account access behaviour for signs of
compromise, such as unauthorized infrastructure deployments,
like instances deployed in a Region that hasn't been used
before, or unusual API calls like a password policy change to
reduce password strength.

GuardDuty informs you of the status of your AWS environment by producing security findings that you can view in the
GuardDuty console or through Amazon EventBridge.
GuardDuty also provides support for you to export your findings
to an Amazon Simple Storage Service (S3) bucket, and integrate
with other services such as AWS Security Hub and Detective.
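Alongside the console and EventBridge, findings can also be pulled with the SDK; a minimal sketch, assuming a GuardDuty detector already exists in the account:

```python
import boto3

gd = boto3.client("guardduty")
detector_id = gd.list_detectors()["DetectorIds"][0]  # assumes one detector
finding_ids = gd.list_findings(DetectorId=detector_id)["FindingIds"]
findings = gd.get_findings(DetectorId=detector_id,
                           FindingIds=finding_ids)["Findings"]
for finding in findings:
    print(finding["Severity"], finding["Type"], finding["Title"])
```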
Amazon Inspector
Amazon Inspector is a vulnerability management service that
continuously scans your AWS workloads for software
vulnerabilities and unintended network exposure. Amazon
Inspector automatically discovers and scans running Amazon
EC2 instances, container images in Amazon Elastic Container
Registry (Amazon ECR), and AWS Lambda functions for known
software vulnerabilities and unintended network exposure.

Amazon Inspector creates a finding when it discovers a software vulnerability or network configuration issue. A finding describes
the vulnerability, identifies the affected resource, rates the
severity of the vulnerability, and provides remediation guidance.
You can analyse findings using the Amazon Inspector console,
or view and process your findings through other AWS services.
Amazon Macie
Amazon Macie is a data security service that discovers sensitive
data by using machine learning and pattern matching, provides
visibility into data security risks, and enables automated
protection against those risks.

To help you manage the security posture of your organization's Amazon Simple Storage Service (Amazon S3) data estate,
Macie provides you with an inventory of your S3 buckets, and
automatically evaluates and monitors the buckets for security
and access control. If Macie detects a potential issue with the
security or privacy of your data, such as a bucket that becomes
publicly accessible, Macie generates a finding for you to review
and remediate as necessary.

Macie also automates discovery and reporting of sensitive data to provide you with a better understanding of the data that your
organization stores in Amazon S3. To detect sensitive data, you
can use built-in criteria and techniques that Macie provides,
custom criteria that you define, or a combination of the two. If
Macie detects sensitive data in an S3 object, Macie generates a
finding to notify you of the sensitive data that Macie found.

In addition to findings, Macie provides statistics and other data that offer insight into the security posture of your Amazon S3
data, and where sensitive data might reside in your data estate.
The statistics and data can guide your decisions to perform
deeper investigations of specific S3 buckets and objects. You
can review and analyse findings, statistics, and other data by
using the Amazon Macie console or the Amazon Macie API. You
can also leverage Macie integration with Amazon EventBridge
and AWS Security Hub to monitor, process, and remediate
findings by using other services, applications, and systems.
AWS Shield
You can use AWS WAF (Web Application Firewall) web access
control lists (web ACLs) to help minimize the effects of a
Distributed Denial of Service (DDoS) attack. For additional
protection against DDoS attacks, AWS also provides AWS
Shield Standard and AWS Shield Advanced. AWS Shield
Standard is automatically included at no extra cost beyond what
you already pay for AWS WAF and your other AWS services.

AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load
Balancing load balancers, CloudFront distributions, Route 53
hosted zones, and AWS Global Accelerator standard
accelerators. AWS Shield Advanced incurs additional charges.
Shield Advanced options and features include automatic
application layer DDoS mitigation, advanced event visibility, and
dedicated support from the Shield Response Team (SRT). If you
own high visibility websites or are otherwise prone to frequent
DDoS attacks, consider purchasing the additional protections
that Shield Advanced provides.
AWS WAF (Web Application Firewall)
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected
web application resources. You can protect the following
resource types:

 Amazon CloudFront distribution
 Amazon API Gateway REST API
 Application Load Balancer
 AWS AppSync GraphQL API
 Amazon Cognito user pool
 AWS App Runner service
 AWS Verified Access instance

AWS WAF lets you control access to your content. Based on conditions that you specify, such as the IP addresses that
requests originate from or the values of query strings, your
protected resource responds to requests either with the
requested content, with an HTTP 403 status code (Forbidden),
or with a custom response.

At the simplest level, AWS WAF lets you choose one of the
following behaviours:

 Allow all requests except the ones that you specify –
This is useful when you want Amazon CloudFront, Amazon
API Gateway, Application Load Balancer, AWS AppSync,
Amazon Cognito, AWS App Runner, or AWS Verified
Access to serve content for a public website, but you also
want to block requests from attackers.

 Block all requests except the ones that you specify –
This is useful when you want to serve content for a restricted
website whose users are readily identifiable by properties in
web requests, such as the IP addresses that they use to
browse to the website.

 Count requests that match your criteria – You can use
the Count action to track your web traffic without modifying
how you handle it. You can use this for general monitoring
and also to test your new web request handling rules. When
you want to allow or block requests based on new properties
in the web requests, you can first configure AWS WAF to
count the requests that match those properties. This lets you
confirm your new configuration settings before you switch
your rules to allow or block matching requests.

 Run CAPTCHA or challenge checks against requests
that match your criteria – You can implement CAPTCHA
and silent challenge controls against requests to help reduce
bot traffic to your protected resources.
AWS License Manager
AWS License Manager is a service that makes it easier for you
to manage your software licenses from software vendors (for
example, Microsoft, SAP, Oracle, and IBM) centrally across
AWS and your on-premises environments. This provides control
and visibility into the usage of your licenses, enabling you to limit
licensing overages and reduce the risk of non-compliance and
misreporting.

As you build out your cloud infrastructure on AWS, you can save
costs by using Bring Your Own License model (BYOL)
opportunities. That is, you can re-purpose your existing license
inventory for use with your cloud resources.
License Manager reduces the risk of licensing overages and
penalties with inventory tracking that is tied directly into AWS
services. With rule-based controls on the consumption of
licenses, administrators can set hard or soft limits on new and
existing cloud deployments. Based on these limits, License
Manager helps stop non-compliant server usage before it
happens.

License Manager's built-in dashboards provide ongoing visibility
into license usage and assistance with vendor audits.
License Manager supports tracking any software that is licensed
based on virtual cores (vCPUs), physical cores, sockets, or
number of machines. This includes a variety of software
products from Microsoft, IBM, SAP, Oracle, and other vendors.
With AWS License Manager, you can centrally track licenses
and enforce limits across multiple Regions, by maintaining a
count of all the checked out entitlements. License Manager also
tracks the end-user identity and the underlying resource
identifier, if available, associated with each check out, along with
the check-out time. This time-series data can be tracked to the
ISV through CloudWatch metrics and events. ISVs can use this
data for analytics, auditing, and other similar purposes.
AWS License Manager is integrated with AWS
Marketplace and AWS Data Exchange, and with the following
AWS services: AWS Identity and Access Management
(IAM), AWS Organizations, Service Quotas, AWS
CloudFormation, AWS resource tagging, and AWS X-Ray.

With License Manager, a license administrator can distribute,
activate, and track software licenses across accounts and
throughout an organisation.
Amazon Connect
Amazon Connect is an omnichannel cloud contact centre: a
contact centre that provides a unified experience across multiple
channels, such as voice, chat, and tasks.
You can set up a contact centre in a few steps, add agents who
are located anywhere, and start engaging with your customers.
You can create personalized experiences for your customers
using omnichannel communications.
For example, you can dynamically offer chat and voice contact,
based on such factors as customer preference and estimated
wait times. Agents, meanwhile, conveniently handle all
customers from just one interface. For example, they can chat
with customers, and create or respond to tasks as they are
routed to them.
Amazon Connect is an open platform that you can integrate with
other enterprise applications, such as Salesforce. You can use
Amazon Connect with other AWS services to provide innovative
new experiences for your customers.

 You use the same routing profiles, queues, flows,
metrics, and reports for all channels.

 Managers monitor all channels from one dashboard.

 Agents handle all customers from just one interface. If a
customer interaction starts with chat and moves to
voice, the agent handling the voice call has the complete
chat transcript so context is preserved.
AWS CodeBuild
AWS CodeBuild is a fully managed build service in the cloud.
CodeBuild compiles your source code, runs unit tests, and
produces artifacts that are ready to deploy. CodeBuild eliminates
the need to provision, manage, and scale your own build
servers. It provides pre-packaged build environments for popular
programming languages and build tools such as Apache Maven,
Gradle, and more. You can also customize build environments in
CodeBuild to use your own build tools. CodeBuild scales
automatically to meet peak build requests.

CodeBuild provides these benefits:
 Fully managed – CodeBuild eliminates the need to set
up, patch, update, and manage your own build servers.
 On demand – CodeBuild scales on demand to meet
your build needs. You pay only for the number of build
minutes you consume.
 Out of the box – CodeBuild provides preconfigured
build environments for the most popular programming
languages. All you need to do is point to your build script
to start your first build.
How to run CodeBuild
You can use the AWS CodeBuild or AWS CodePipeline
console to run CodeBuild. You can also automate the
running of CodeBuild by using the AWS Command Line
Interface (AWS CLI) or the AWS SDKs.
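A minimal sketch of the SDK route, kicking off a build of an existing project (the project name is a placeholder):

```python
import boto3

codebuild = boto3.client("codebuild")
build = codebuild.start_build(projectName="demo-project")  # placeholder name
print(build["build"]["id"])  # build identifier, e.g. "demo-project:<uuid>"
```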
AWS CodeCommit
AWS CodeCommit is a version control service hosted by
Amazon Web Services that you can use to privately store and
manage assets (such as documents, source code, and binary
files) in the cloud.

CodeCommit is a secure, highly scalable, managed source
control service that hosts private Git repositories. CodeCommit
eliminates the need for you to manage your own source control
system or worry about scaling its infrastructure. You can use
CodeCommit to store anything from code to binaries. It supports
the standard functionality of Git, so it works seamlessly with your
existing Git-based tools.

With CodeCommit, you can:

 Benefit from a fully managed service hosted by
AWS. CodeCommit provides high service availability
and durability and eliminates the administrative
overhead of managing your own hardware and software.
There is no hardware to provision and scale and no
server software to install, configure, and update.
 Store your code securely. CodeCommit repositories
are encrypted at rest as well as in transit.
 Work collaboratively on code. CodeCommit
repositories support pull requests, where users can
review and comment on each other's code changes
before merging them to branches; notifications that
automatically send emails to users about pull requests
and comments; and more.
 Easily scale your version control projects.
CodeCommit repositories can scale up to meet your
development needs. The service can handle repositories
with large numbers of files or branches, large file sizes,
and lengthy revision histories.
 Store anything, anytime. CodeCommit has no limit on
the size of your repositories or on the file types you can
store.
 Integrate with other AWS and third-party services.
CodeCommit keeps your repositories close to your other
production resources in the AWS Cloud, which helps
increase the speed and frequency of your development
lifecycle. It is integrated with IAM and can be used with
other AWS services and in parallel with other
repositories.
 Easily migrate files from other remote repositories.
You can migrate to CodeCommit from any Git-based
repository.
 Use the Git tools you already know. CodeCommit
supports Git commands as well as its own AWS CLI
commands and APIs.
AWS CodeDeploy
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances,
serverless Lambda functions, or Amazon ECS services.

You can deploy a nearly unlimited variety of application content,
including:

 Code
 Serverless AWS Lambda functions
 Web and configuration files
 Executables
 Packages
 Scripts
 Multimedia files

CodeDeploy can deploy application content that runs on a server
and is stored in Amazon S3 buckets, GitHub repositories, or
Bitbucket repositories. CodeDeploy can also deploy a serverless
Lambda function. You do not need to make changes to your
existing code before you can use CodeDeploy.

CodeDeploy makes it easier for you to:

 Rapidly release new features.
 Update AWS Lambda function versions.
 Avoid downtime during application deployment.
 Handle the complexity of updating your applications,
without many of the risks associated with error-prone
manual deployments.

The service scales with your infrastructure so you can easily
deploy to one instance or thousands.

CodeDeploy works with various systems for configuration
management, source control, continuous integration, continuous
delivery, and continuous deployment.

The CodeDeploy console also provides a way to quickly search
for your resources, such as repositories, build projects,
deployment applications, and pipelines.
Choose Go to resource or press the / key, and then type the
name of the resource. Any matches appear in the list. Searches
are case insensitive. You only see resources that you have
permissions to view.
AWS CodePipeline
AWS CodePipeline is a continuous delivery service you can use
to model, visualize, and automate the steps required to release
your software. You can quickly model and configure the different
stages of a software release process. CodePipeline automates
the steps required to release your software changes
continuously.
AWS CodeStar
AWS CodeStar is a cloud-based service for creating, managing,
and working with software development projects on AWS. You
can quickly develop, build, and deploy applications on AWS with
an AWS CodeStar project. An AWS CodeStar project creates
and integrates AWS services for your project development
toolchain. Depending on your choice of AWS CodeStar project
template, that toolchain might include source control, build,
deployment, virtual servers or serverless resources, and more.
AWS CodeStar also manages the permissions required for
project users (called team members). By adding users as team
members to an AWS CodeStar project, project owners can
quickly and simply grant each team member role-appropriate
access to a project and its resources.

You can use AWS CodeStar to help you set up your application
development in the cloud and manage your development from a
single, centralized dashboard. Specifically, you can:

 Start new software projects on AWS in minutes
using templates for web applications, web services,
and more: AWS CodeStar includes project templates
for various project types and programming languages.
Because AWS CodeStar takes care of the setup, all of
your project resources are configured to work together.
 Manage project access for your team: AWS CodeStar
provides a central console where you can assign project
team members the roles they need to access tools and
resources. These permissions are applied automatically
across all AWS services used in your project, so you
don't need to create or manage complex IAM policies.
 Visualize, operate, and collaborate on your projects
in one place: AWS CodeStar includes a project
dashboard that provides an overall view of the project,
its toolchain, and important events. You can monitor the
latest project activity, like recent code commits, and
track the status of your code changes, build results, and
deployments, all from the same webpage. You can
monitor what's going on in the project from a single
dashboard and drill into problems to investigate.
 Iterate quickly with all the tools you need: AWS
CodeStar includes an integrated development toolchain
for your project. Team members push code, and
changes are automatically deployed. Integration with
issue tracking allows team members to keep track of
what needs to be done next. You and your team can
work together more quickly and efficiently across all
phases of code delivery.
Amazon QuickSight
Amazon QuickSight is a cloud-scale business intelligence (BI)
service that you can use to deliver easy-to-understand insights
to the people who you work with, wherever they are.
Amazon QuickSight connects to your data in the cloud and
combines data from many different sources. In a single data
dashboard, QuickSight can include AWS data, third-party data,
big data, spreadsheet data, SaaS data, B2B data, and more. As
a fully managed cloud-based service, Amazon QuickSight
provides enterprise-grade security, global availability, and built-in
redundancy. It also provides the user-management tools that
you need to scale from 10 users to 10,000, all with no
infrastructure to deploy or manage.

QuickSight gives decision-makers the opportunity to explore and
interpret information in an interactive visual environment. They
have secure access to dashboards from any device on your
network and from mobile devices.
AWS Glue (AWS Glue Studio, which is a GUI)
AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use it for analytics, machine learning, and application development. It also includes additional productivity and data ops tooling for authoring, running jobs, and implementing business workflows.

With AWS Glue, you can discover and connect to more than 70
diverse data sources and manage your data in a centralized
data catalogue. You can visually create, run, and monitor
extract, transform, and load (ETL) pipelines to load data into
your data lakes. Also, you can immediately search and query
catalogued data using Amazon Athena, Amazon EMR, and
Amazon Redshift Spectrum.
AWS Glue consolidates major data integration capabilities into a
single service. These include data discovery, modern ETL,
cleansing, transforming, and centralized cataloguing. It's also
serverless, which means there's no infrastructure to manage.
With flexible support for all workloads like ETL, ELT, and
streaming in one service, AWS Glue supports users across
various workloads and types of users.

Also, AWS Glue makes it easy to integrate data across your
architecture. It integrates with AWS analytics services and
Amazon S3 data lakes. AWS Glue has integration interfaces and
job-authoring tools that are easy to use for all users, from
developers to business users, with tailored solutions for varied
technical skill sets.
With the ability to scale on demand, AWS Glue helps you focus
on high-value activities that maximize the value of your data. It
scales for any data size, and supports all data types and schema
variances. To increase agility and optimize costs, AWS Glue
provides built-in high availability and pay-as-you-go billing.
AWS DMS
AWS Database Migration Service (AWS DMS) is a cloud service
that makes it possible to migrate relational databases, data
warehouses, NoSQL databases, and other types of data stores.
You can use AWS DMS to migrate your data into the AWS
Cloud or between combinations of cloud and on-premises
setups.

With AWS DMS, you can discover your source data stores,
convert your source schemas, and migrate your data.
 To discover your source data infrastructure, you can use
DMS Fleet Advisor. This service collects data from your
on-premises database and analytic servers, and builds
an inventory of servers, databases, and schemas that
you can migrate to the AWS Cloud.

 To migrate to a different database engine, you can use
DMS Schema Conversion. This service automatically
assesses and converts your source schemas to a new
target engine. Alternatively, you can download the AWS
Schema Conversion Tool (AWS SCT) to your local PC
to convert your source schemas.

 After you convert your source schemas and apply the
converted code to your target database, you can use
AWS DMS to migrate your data. You can perform one-
time migrations or replicate ongoing changes to keep
sources and targets in sync. Because AWS DMS is a
part of the AWS Cloud, you get the cost efficiency,
speed to market, security, and flexibility that AWS
services offer.

At a basic level, AWS DMS is a server in the AWS Cloud that
runs replication software. You create a source and target
connection to tell AWS DMS where to extract data from and
where to load it. Next, you schedule a task that runs on this
server to move your data. AWS DMS creates the tables and
associated primary keys if they don't exist on the target. You can
create the target tables yourself if you prefer. Or you can use
AWS Schema Conversion Tool (AWS SCT) to create some or all
of the target tables, indexes, views, triggers, and so on.
[Diagram: the AWS DMS replication process]

Amazon EMR (Elastic MapReduce)
A managed cluster platform that simplifies running big data frameworks, e.g., Apache Hadoop and Apache Spark, on AWS to process and analyse vast amounts of data.
Using these frameworks and related open-source projects, you
can process data for analytics purposes and business
intelligence workloads. Amazon EMR also lets you transform
and move large amounts of data into and out of other AWS data
stores and databases, such as Amazon Simple Storage Service
(Amazon S3) and Amazon DynamoDB.

The central component of Amazon EMR is the cluster. A cluster
is a collection of Amazon Elastic Compute Cloud (Amazon EC2)
instances. Each instance in the cluster is called a node. Each
node has a role within the cluster, referred to as the node type.
Amazon Neptune
Amazon Neptune is a fast, reliable, fully managed graph
database service that makes it easy to build and run applications
that work with highly connected datasets. The core of Neptune is
a purpose-built, high-performance graph database engine.
This engine is optimized for storing billions of relationships and
querying the graph with milliseconds latency.

Neptune supports the popular property-graph query languages
Apache TinkerPop Gremlin and Neo4j's openCypher, and the
W3C's RDF query language, SPARQL. This enables you to build
queries that efficiently navigate highly connected datasets.
Neptune powers graph use cases such as recommendation
engines, fraud detection, knowledge graphs, drug discovery, and
network security.

Neptune is highly available, with read replicas, point-in-time
recovery, continuous backup to Amazon S3, and replication
across Availability Zones. Neptune provides data security
features, with support for encryption at rest and in transit.
Neptune is fully managed, so you no longer need to worry about
database management tasks like hardware provisioning,
software patching, setup, configuration, or backups.
