Amazon Relational Database Service - User Guide
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What is Amazon RDS? ........................................................................................................................ 1
Overview ................................................................................................................................... 1
Amazon EC2 and on-premises databases ............................................................................... 1
Amazon RDS and Amazon EC2 ............................................................................................ 2
Amazon RDS Custom for Oracle and Microsoft SQL Server ...................................................... 3
Amazon RDS on AWS Outposts ............................................................................................ 3
DB instances .............................................................................................................................. 3
DB engines ........................................................................................................................ 4
DB instance classes ............................................................................................................ 4
DB instance storage ............................................................................................................ 4
Amazon Virtual Private Cloud (Amazon VPC) ......................................................................... 5
AWS Regions and Availability Zones ............................................................................................. 5
Security .................................................................................................................................... 5
Amazon RDS monitoring ............................................................................................................. 5
How to work with Amazon RDS ................................................................................................... 5
AWS Management Console .................................................................................................. 6
Command line interface ...................................................................................................... 6
Amazon RDS APIs .............................................................................................................. 6
How you are charged for Amazon RDS ......................................................................................... 6
What's next? .............................................................................................................................. 6
Getting started .................................................................................................................. 6
Topics specific to database engines ...................................................................................... 6
Amazon RDS shared responsibility model ...................................................................................... 8
DB instances .............................................................................................................................. 9
DB instance classes ................................................................................................................... 11
DB instance class types ..................................................................................................... 11
Supported DB engines ...................................................................................................... 14
Determining DB instance class support in AWS Regions ......................................................... 68
Changing your DB instance class ........................................................................................ 71
Configuring the processor for RDS for Oracle ....................................................................... 71
Hardware specifications ..................................................................................................... 87
DB instance storage ................................................................................................................ 101
Storage types ................................................................................................................. 101
General Purpose SSD storage ........................................................................................... 102
Provisioned IOPS storage ................................................................................................ 104
Comparing SSD storage types .......................................................................................... 106
Magnetic storage ............................................................................................................ 107
Monitoring storage performance ...................................................................................... 107
Factors that affect storage performance ............................................................................ 108
Regions, Availability Zones, and Local Zones .............................................................................. 110
AWS Regions .................................................................................................................. 111
Availability Zones ........................................................................................................... 113
Local Zones ................................................................................................................... 114
Supported Amazon RDS features by Region and engine .............................................................. 116
Table conventions ........................................................................................................... 116
Feature quick reference ................................................................................................... 116
Blue/Green Deployments ................................................................................................. 118
Cross-Region automated backups ..................................................................................... 118
Cross-Region read replicas ............................................................................................... 119
Database activity streams ................................................................................................ 121
Dual-stack mode ............................................................................................................ 125
Export snapshots to S3 ................................................................................................... 133
IAM database authentication ............................................................................................ 138
Kerberos authentication .................................................................................................. 141
Overview
If you are new to AWS products and services, begin learning more with the following resources:
Topics
• Amazon EC2 and on-premises databases (p. 1)
• Amazon RDS and Amazon EC2 (p. 2)
• Amazon RDS Custom for Oracle and Microsoft SQL Server (p. 3)
• Amazon RDS on AWS Outposts (p. 3)
When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together.
With Amazon EC2, these are split apart so that you can scale them independently. If you need more CPU,
fewer IOPS, or more storage, you can easily allocate them.
For a relational database in an on-premises server, you assume full responsibility for the server,
operating system, and software. For a database on an Amazon EC2 instance, AWS manages the layers
below the operating system. In this way, Amazon EC2 eliminates some of the burden of managing an on-
premises database server.
In the following table, you can find a comparison of the management models for on-premises databases
and Amazon EC2.
Amazon RDS and Amazon EC2
Amazon EC2 isn't a fully managed service. Thus, when you run a database on Amazon EC2, you're
more prone to user errors. For example, when you update the operating system or database software
manually, you might accidentally cause application downtime. You might spend hours checking every
change to identify and fix an issue.
In the following table, you can find a comparison of the management models in Amazon EC2 and
Amazon RDS.
Amazon RDS Custom for Oracle and Microsoft SQL Server
Amazon RDS provides the following specific advantages over database deployments that aren't fully
managed:
• You can use the database products you are already familiar with: MariaDB, Microsoft SQL Server,
MySQL, Oracle, and PostgreSQL.
• Amazon RDS manages backups, software patching, automatic failure detection, and recovery.
• You can turn on automated backups, or manually create your own backup snapshots. You can use
these backups to restore a database. The Amazon RDS restore process works reliably and efficiently.
• You can get high availability with a primary instance and a synchronous secondary instance that you
can fail over to when problems occur. You can also use read replicas to increase read scaling.
• In addition to the security in your database package, you can help control who can access your RDS
databases. To do so, you can use AWS Identity and Access Management (IAM) to define users and
permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC).
You can use the control capabilities of RDS Custom to access and customize the database environment
and operating system for legacy and packaged business applications. Meanwhile, Amazon RDS
automates database administration tasks and operations.
In this deployment model, you can install applications and change configuration settings to suit your
applications. At the same time, you can offload database administration tasks such as provisioning,
scaling, upgrading, and backup to AWS. You can take advantage of the database management benefits
of Amazon RDS, with more control and flexibility.
For Oracle Database and Microsoft SQL Server, RDS Custom combines the automation of Amazon RDS
with the flexibility of Amazon EC2. For more information on RDS Custom, see Working with Amazon RDS
Custom (p. 978).
With the shared responsibility model of RDS Custom, you get more control than in Amazon RDS, but also
more responsibility. For more information, see Shared responsibility model in RDS Custom (p. 979).
DB instances
A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon
RDS is the DB instance.
Your DB instance can contain one or more user-created databases. You can access your DB instance by
using the same tools and applications that you use with a standalone database instance. You can create
and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or
the AWS Management Console.
DB engines
A DB engine is the specific relational database software that runs on your DB instance. Amazon RDS
currently supports the following engines:
• MariaDB
• Microsoft SQL Server
• MySQL
• Oracle
• PostgreSQL
Each DB engine has its own supported features, and each version of a DB engine can include specific
features. Support for Amazon RDS features varies across AWS Regions and specific versions of each DB
engine. To check feature support in different engine versions and Regions, see Supported features in
Amazon RDS by AWS Region and DB engine (p. 116).
Additionally, each DB engine has a set of parameters in a DB parameter group that control the behavior
of the databases that it manages.
DB instance classes
A DB instance class determines the computation and memory capacity of a DB instance. A DB instance
class consists of both the DB instance type and the size. Each instance type offers different compute,
memory, and storage capabilities. For example, db.m6g is a general-purpose DB instance type powered
by AWS Graviton2 processors. Within the db.m6g instance type, db.m6g.2xlarge is a DB instance class.
You can select the DB instance that best meets your needs. If your needs change over time, you can
change DB instances. For information, see DB instance classes (p. 11).
Note
For pricing information on DB instance classes, see the Pricing section of the Amazon RDS
product page.
DB instance storage
Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. DB
instance storage comes in the following types:
The storage types differ in performance characteristics and price. You can tailor your storage
performance and cost to the needs of your database.
Each DB instance has minimum and maximum storage requirements depending on the storage type and
the database engine it supports. It's important to have sufficient storage so that your databases have
room to grow. Also, sufficient storage makes sure that features for the DB engine have room to write
content or log entries. For more information, see Amazon RDS DB instance storage (p. 101).
Amazon Virtual Private Cloud (Amazon VPC)
Amazon RDS uses Network Time Protocol (NTP) to synchronize the time on DB instances.
Each AWS Region contains multiple distinct locations called Availability Zones, or AZs. Each Availability
Zone is engineered to be isolated from failures in other Availability Zones. Each is engineered to provide
inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region. By
launching instances in separate Availability Zones, you can protect your applications from the failure of a
single location. For more information, see Regions, Availability Zones, and Local Zones (p. 110).
You can run your DB instance in several Availability Zones, an option called a Multi-AZ deployment.
When you choose this option, Amazon automatically provisions and maintains one or more secondary
standby DB instances in a different Availability Zone. Your primary DB instance is replicated across
Availability Zones to each secondary DB instance. This approach helps provide data redundancy and
failover support, eliminate I/O freezes, and minimize latency spikes during system backups. In a Multi-
AZ DB clusters deployment, the secondary DB instances can also serve read traffic. For more information,
see Configuring and managing a Multi-AZ deployment (p. 492).
Security
A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or
Amazon EC2 instances that you specify.
For more information about security groups, see Security in Amazon RDS (p. 2565).
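The IP-range rule described above amounts to a CIDR membership check. The following sketch uses Python's standard `ipaddress` module to show the logic; the CIDR block is a made-up example value, not a recommended rule, and this helper is illustrative only — actual enforcement happens in the security group itself, not in client code.

```python
import ipaddress

# A hypothetical inbound rule allowing one corporate IP range.
# 203.0.113.0/24 is a documentation-reserved example block.
allowed = ipaddress.ip_network("203.0.113.0/24")

def is_client_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside the allowed range."""
    return ipaddress.ip_address(client_ip) in allowed

print(is_client_allowed("203.0.113.25"))   # True
print(is_client_allowed("198.51.100.7"))   # False
```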
For application development, we recommend that you use one of the AWS Software Development Kits
(SDKs). The AWS SDKs handle low-level details such as authentication, retry logic, and error handling, so
that you can focus on your application logic. AWS SDKs are available for a wide variety of languages. For
more information, see Tools for Amazon Web Services.
AWS also provides libraries, sample code, tutorials, and other resources to help you get started more
easily. For more information, see Sample code & libraries.
For Amazon RDS pricing information, see the Amazon RDS product page.
What's next?
The preceding section introduced you to the basic infrastructure components that RDS offers. What
should you do next?
Getting started
Create a DB instance using instructions in Getting started with Amazon RDS (p. 180).
Topics specific to database engines
Amazon RDS shared responsibility model
DB instances
You can have up to 40 Amazon RDS DB instances, with the following limitations:
• 10 for each SQL Server edition (Enterprise, Standard, Web, and Express) under the "license-included"
model
• 10 for Oracle under the "license-included" model
• 40 for MySQL, MariaDB, or PostgreSQL
• 40 for Oracle under the "bring-your-own-license" (BYOL) licensing model
Note
If your application requires more DB instances, you can request additional DB instances by using
this form.
Each DB instance has a DB instance identifier. This customer-supplied name uniquely identifies the DB
instance when interacting with the Amazon RDS API and AWS CLI commands. The DB instance identifier
must be unique for that customer in an AWS Region.
The DB instance identifier forms part of the DNS hostname allocated to your instance by RDS.
For example, if you specify db1 as the DB instance identifier, then RDS automatically
allocates a DNS endpoint for your instance. An example endpoint is
db1.abcdefghijkl.us-east-1.rds.amazonaws.com, where db1 is your instance ID.
• If you rename your DB instance, the endpoint is different but the fixed identifier is the same.
For example, if you rename db1 to renamed-db1, the new instance endpoint is
renamed-db1.abcdefghijkl.us-east-1.rds.amazonaws.com.
• If you delete and re-create a DB instance with the same DB instance identifier, the endpoint is the
same.
• If you use the same account to create a DB instance in a different Region, the internally
generated identifier is different because the Region is different, as in
db2.mnopqrstuvwx.us-west-1.rds.amazonaws.com.
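The hostname layout in the examples above can be sketched as a simple string template. This is an illustrative helper, not an AWS API: the account suffix (shown as abcdefghijkl above) is assigned internally by RDS, so the function only demonstrates how the labels combine.

```python
def rds_endpoint(instance_id: str, account_suffix: str, region: str) -> str:
    """Build an endpoint hostname in the shape shown above.

    'account_suffix' stands in for the internally generated identifier;
    the real value is assigned by RDS and cannot be chosen by the caller.
    """
    return f"{instance_id}.{account_suffix}.{region}.rds.amazonaws.com"

# Renaming the instance changes only the first label:
print(rds_endpoint("db1", "abcdefghijkl", "us-east-1"))
print(rds_endpoint("renamed-db1", "abcdefghijkl", "us-east-1"))
```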
Each DB instance supports a database engine. Amazon RDS currently supports MySQL, MariaDB,
PostgreSQL, Oracle, Microsoft SQL Server, and Amazon Aurora database engines.
When creating a DB instance, some database engines require that a database name be specified. A DB
instance can host multiple databases, or a single Oracle database with multiple schemas. The database
name value depends on the database engine:
• For the MySQL and MariaDB database engines, the database name is the name of a database hosted
in your DB instance. Databases hosted by the same DB instance must have a unique name within that
instance.
• For the Oracle database engine, database name is used to set the value of ORACLE_SID, which must be
supplied when connecting to the Oracle RDS instance.
• For the Microsoft SQL Server database engine, database name is not a supported parameter.
• For the PostgreSQL database engine, the database name is the name of a database hosted in your DB
instance. A database name is not required when creating a DB instance. Databases hosted by the same
DB instance must have a unique name within that instance.
Amazon RDS creates a master user account for your DB instance as part of the creation process. This
master user has permissions to create databases and to perform create, delete, select, update, and insert
operations on tables the master user creates. You must set the master user password when you create
a DB instance, but you can change it at any time using the AWS CLI, Amazon RDS API operations, or the
AWS Management Console. You can also change the master user password and manage users using
standard SQL commands.
Note
This guide covers non-Aurora Amazon RDS database engines. For information about using
Amazon Aurora, see the Amazon Aurora User Guide.
DB instance classes
The DB instance class determines the computation and memory capacity of an Amazon RDS DB instance.
The DB instance class that you need depends on your processing power and memory requirements.
A DB instance class consists of both the DB instance class type and the size. For example, db.r6g is a
memory-optimized DB instance class type powered by AWS Graviton2 processors. Within the db.r6g
instance class type, db.r6g.2xlarge is a DB instance class. The size of this class is 2xlarge.
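The db.type.size naming convention described above can be illustrated with a short sketch. This parser is a hypothetical helper, not part of any AWS SDK, and it handles only the basic three-part form (extended RDS for Oracle class names with more components would need additional handling):

```python
def parse_instance_class(instance_class: str) -> dict:
    """Split a DB instance class such as 'db.r6g.2xlarge' into its parts.

    Illustrative only: the 'db.<type>.<size>' layout is taken from the
    naming convention described above.
    """
    prefix, class_type, size = instance_class.split(".")
    if prefix != "db":
        raise ValueError(f"not a DB instance class: {instance_class!r}")
    return {"type": f"{prefix}.{class_type}", "size": size}

print(parse_instance_class("db.r6g.2xlarge"))
# {'type': 'db.r6g', 'size': '2xlarge'}
```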
For more information about instance class pricing, see Amazon RDS pricing.
Topics
• DB instance class types (p. 11)
• Supported DB engines for DB instance classes (p. 14)
• Determining DB instance class support in AWS Regions (p. 68)
• Changing your DB instance class (p. 71)
• Configuring the processor for a DB instance class in RDS for Oracle (p. 71)
• Hardware specifications for DB instance classes (p. 87)
For more information about Amazon EC2 instance types, see Instance types in the Amazon EC2
documentation.
DB instance class types
• db.m7g – General-purpose DB instance classes powered by AWS Graviton3 processors. These instance
classes deliver balanced compute, memory, and networking for a broad range of general-purpose
workloads.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton3
processors. To do so, complete the same steps as with any other DB instance modification.
• db.m6g – General-purpose DB instance classes powered by AWS Graviton2 processors. These instances
deliver balanced compute, memory, and networking for a broad range of general-purpose workloads.
The db.m6gd instance classes have local NVMe-based SSD block-level storage for applications that
need high-speed, low latency local storage.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.m6i – General-purpose DB instance classes powered by 3rd Generation Intel Xeon Scalable
processors. These instances are SAP Certified and ideal for workloads such as backend servers
supporting enterprise applications, gaming servers, caching fleets, and application development
environments. The db.m6id instance classes offer up to 7.6 TB of local NVMe-based SSD storage,
whereas db.m6i offers EBS-only storage.
• db.m5 – General-purpose DB instance classes that provide a balance of compute, memory, and
network resources, and are a good choice for many applications. The db.m5d instance class offers
NVMe-based SSD storage that is physically connected to the host server. The db.m5 instance classes
provide more computing capacity than the previous db.m4 instance classes. They are powered by the
AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor.
• db.m4 – General-purpose DB instance classes that provide more computing capacity than the previous
db.m3 instance classes.
For the RDS for Oracle DB engines, Amazon RDS no longer supports db.m4 DB instance classes. If you
had previously created RDS for Oracle db.m4 DB instances, Amazon RDS automatically upgrades those
DB instances to equivalent db.m5 DB instance classes.
• db.m3 – General-purpose DB instance classes that provide more computing capacity than the previous
db.m1 instance classes.
For the RDS for MariaDB, RDS for MySQL, and RDS for PostgreSQL DB engines, Amazon RDS has
started the end-of-life process for db.m3 DB instance classes using the following schedule, which
includes upgrade recommendations. For all RDS DB instances that use db.m3 DB instance classes, we
recommend that you upgrade to a db.m5 DB instance class as soon as possible.
• db.z1d – Instance classes optimized for memory-intensive applications. These instance classes offer
both high compute capacity and a high memory footprint. High frequency z1d instances deliver a
sustained all-core frequency of up to 4.0 GHz.
• db.x2g – Instance classes optimized for memory-intensive applications and powered by AWS
Graviton2 processors. These instance classes offer low cost per GiB of memory.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.x2i – Instance classes optimized for memory-intensive applications. The db.x2iedn and db.x2idn
classes are powered by third-generation Intel Xeon Scalable processors (Ice Lake). They include up
to 3.8 TB of local NVMe SSD storage, up to 100 Gbps of networking bandwidth, and up to 4 TiB
(db.x2iedn) or 2 TiB (db.x2idn) of memory. The db.x2iezn class is powered by second-generation Intel
Xeon Scalable processors (Cascade Lake) with an all-core turbo frequency of up to 4.5 GHz and up to
1.5 TiB of memory.
• db.x1 – Instance classes optimized for memory-intensive applications. These instance classes offer one
of the lowest price per GiB of RAM among the DB instance classes and up to 1,952 GiB of DRAM-based
instance memory. The db.x1e type offers up to 3,904 GiB of DRAM-based instance memory.
• db.r7g – Instance classes powered by AWS Graviton3 processors. These instance classes are ideal for
running memory-intensive workloads in open-source databases such as MySQL and PostgreSQL.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton3
processors. To do so, complete the same steps as with any other DB instance modification.
• db.r6g – Instance classes powered by AWS Graviton2 processors. These instance classes are ideal for
running memory-intensive workloads in open-source databases such as MySQL and PostgreSQL. The
db.r6gd type offers local NVMe-based SSD block-level storage for applications that need high-speed,
low latency local storage.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.r6i – Instance classes powered by 3rd Generation Intel Xeon Scalable processors. These instances
are SAP-Certified and are an ideal fit for memory-intensive workloads in open-source databases such
as MySQL and PostgreSQL. The db.r6id instance class type has a memory-to-vCPU ratio of 8:1 and a
maximum memory of 1 TiB. The db.r6id instance class type offers up to 7.6 TB of local NVMe-based
SSD storage, whereas the db.r6i class type offers EBS-only storage.
• db.r5b – Instance classes that are memory-optimized for throughput-intensive applications. Powered
by the AWS Nitro System, db.r5b instances deliver up to 60 Gbps bandwidth and 260,000 IOPS of EBS
performance. This is the fastest block storage performance on EC2.
• db.r5d – Instance classes that are optimized for low latency, very high random I/O performance, and
high sequential read throughput.
• db.r5 – Instance classes optimized for memory-intensive applications. These instance classes offer
improved networking performance. They are powered by the AWS Nitro System, a combination of
dedicated hardware and lightweight hypervisor.
• db.r4 – Instance classes that provide improved networking over previous db.r3 instance classes.
For the RDS for Oracle DB engines, Amazon RDS has started the end-of-life process for db.r4 DB
instance classes using the following schedule, which includes upgrade recommendations. For RDS
for Oracle DB instances that use db.r4 instance classes, we recommend that you upgrade to a db.r5
instance class as soon as possible.
Amazon RDS started automatic upgrades of RDS for Oracle DB instances that use db.r4 DB instance
classes to equivalent db.r5 DB instance classes on April 17, 2023.
For the RDS for MariaDB, RDS for MySQL, and RDS for PostgreSQL DB engines, Amazon RDS has
started the end-of-life process for db.r3 DB instance classes using the following schedule, which
includes upgrade recommendations. For all RDS DB instances that use db.r3 DB instance classes, we
recommend that you upgrade to a db.r5 DB instance class as soon as possible.
• db.t4g – General-purpose instance classes powered by Arm-based AWS Graviton2 processors. These
instance classes deliver better price performance than previous burstable-performance DB instance
classes for a broad set of burstable general-purpose workloads. Amazon RDS db.t4g instances are
configured for Unlimited mode. This means that they can burst beyond the baseline over a 24-hour
window for an additional charge.
You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.t3 – Instance classes that provide a baseline performance level, with the ability to burst to full
CPU usage. The db.t3 instances are configured for Unlimited mode. These instance classes provide
more computing capacity than the previous db.t2 instance classes. They are powered by the AWS Nitro
System, a combination of dedicated hardware and lightweight hypervisor.
• db.t2 – Instance classes that provide a baseline performance level, with the ability to burst to full CPU
usage. We recommend using these instance classes only for development and test servers, or other
non-production servers.
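The "additional charge" behavior of Unlimited mode described above amounts to measuring how far average CPU use exceeds the baseline over a 24-hour window. The sketch below shows that arithmetic; the 20% baseline is an assumed placeholder, since actual baselines differ per instance class and are published in the AWS documentation:

```python
def surplus_vcpu_hours(avg_cpu_percent: float, vcpus: int,
                       baseline_percent: float = 20.0,
                       window_hours: int = 24) -> float:
    """vCPU-hours consumed above baseline over the window (0.0 if below).

    baseline_percent is a placeholder; look up the real baseline for
    your instance class before using figures like this for budgeting.
    """
    excess = max(0.0, avg_cpu_percent - baseline_percent) / 100.0
    return excess * vcpus * window_hours

# 35% average CPU on a 2-vCPU instance with the assumed 20% baseline
# yields roughly 7.2 surplus vCPU-hours over the 24-hour window:
print(surplus_vcpu_hours(35.0, 2))
```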
Note
The DB instance classes that use the AWS Nitro System (db.m5, db.r5, db.t3) are throttled on
combined read plus write workload.
For DB instance class hardware specifications, see Hardware specifications for DB instance
classes (p. 87).
Microsoft SQL Server
DB instance class support varies according to the version and edition of SQL Server. For
instance class support by version and edition, see DB instance class support for Microsoft SQL
Server (p. 1358).
Oracle
DB instance class support varies according to the Oracle Database version and edition. RDS for
Oracle supports additional memory-optimized instance classes. These classes have names of the
form db.r5.instance_size.tpcthreads_per_core.memratio. For the vCPU count and memory
allocation for each optimized class, see Supported RDS for Oracle instance classes (p. 1797).
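The db.r5.instance_size.tpcthreads_per_core.memratio pattern described above can be pulled apart mechanically. This sketch is illustrative only: the example name follows the documented shape but is a hypothetical value, and actual vCPU and memory figures must come from the Supported RDS for Oracle instance classes tables, not from this helper.

```python
def parse_oracle_optimized_class(name: str) -> dict:
    """Parse the db.r5.<size>.tpc<n>.mem<m>x pattern described above.

    Example input 'db.r5.2xlarge.tpc1.mem2x' is a hypothetical name
    matching the documented format.
    """
    prefix, family, size, tpc, mem = name.split(".")
    if (prefix, family) != ("db", "r5"):
        raise ValueError(f"unexpected class family: {name!r}")
    return {
        "size": size,
        "threads_per_core": int(tpc[len("tpc"):]),
        "memory_ratio": mem[len("mem"):],  # e.g. '2x'
    }

print(parse_oracle_optimized_class("db.r5.2xlarge.tpc1.mem2x"))
```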
RDS Custom
For information about the DB instance classes supported in RDS Custom, see DB instance class
support for RDS Custom for Oracle (p. 999) and DB instance class support for RDS Custom for SQL
Server (p. 1089).
Supported DB engines
In the following table, you can find details about supported Amazon RDS DB instance classes for each
Amazon RDS DB engine. The cell for each engine contains one of the following values:
• Yes – The instance class is supported for all versions of the DB engine.
• No – The instance class isn't supported.
• specific-versions – The instance class is supported only for the specified database versions of the DB engine.
Amazon RDS periodically deprecates major and minor versions. For information about current
supported versions, see topics for the individual DB engines: MariaDB versions (p. 1265), Microsoft
SQL Server versions (p. 1362), MySQL versions (p. 1627), Oracle versions (p. 1789), and PostgreSQL
versions (p. 2154).
db.m6id – general-purpose instance classes powered by 3rd generation Intel Xeon Scalable processors
db.x2idn – memory-optimized instance classes powered by 3rd generation Intel Xeon Scalable processors
db.x2iedn – memory-optimized instance classes with local NVMe-based SSDs, powered by 3rd generation Intel Xeon Scalable processors
db.x2iezn – memory-optimized instance classes powered by 2nd generation Intel Xeon Scalable processors
Instance class       MariaDB   Microsoft SQL Server   MySQL   Oracle                                            PostgreSQL
db.x2iezn.12xlarge   No        No                     No      Enterprise Edition only                           No
db.x2iezn.8xlarge    No        No                     No      Enterprise Edition only                           No
db.x2iezn.6xlarge    No        No                     No      Enterprise Edition only                           No
db.x2iezn.4xlarge    No        No                     No      Enterprise Edition and Standard Edition 2 (SE2)   No
db.x2iezn.2xlarge    No        No                     No      Enterprise Edition and Standard Edition 2 (SE2)   No
db.r6id – memory-optimized instance classes powered by 3rd generation Intel Xeon Scalable processors
db.r5b – memory-optimized instance classes preconfigured for high memory, storage, and I/O
Instance class              MariaDB   Microsoft SQL Server   MySQL   Oracle   PostgreSQL
db.r5b.8xlarge.tpc2.mem3x   No        No                     No      Yes      No
db.r5b.6xlarge.tpc2.mem4x   No        No                     No      Yes      No
db.r5b.4xlarge.tpc2.mem4x   No        No                     No      Yes      No
db.r5b.4xlarge.tpc2.mem3x   No        No                     No      Yes      No
db.r5b.4xlarge.tpc2.mem2x   No        No                     No      Yes      No
db.r5b.2xlarge.tpc2.mem8x   No        No                     No      Yes      No
db.r5b.2xlarge.tpc2.mem4x   No        No                     No      Yes      No
db.r5b.2xlarge.tpc1.mem2x   No        No                     No      Yes      No
db.r5b.xlarge.tpc2.mem4x    No        No                     No      Yes      No
db.r5b.xlarge.tpc2.mem2x    No        No                     No      Yes      No
db.r5b.large.tpc1.mem2x     No        No                     No      Yes      No
db.r5 – memory-optimized instance classes preconfigured for high memory, storage, and I/O
Instance class              MariaDB   Microsoft SQL Server   MySQL   Oracle   PostgreSQL
db.r5.12xlarge.tpc2.mem2x   No        No                     No      Yes      No
db.r5.8xlarge.tpc2.mem3x    No        No                     No      Yes      No
db.r5.6xlarge.tpc2.mem4x    No        No                     No      Yes      No
db.r5.4xlarge.tpc2.mem4x    No        No                     No      Yes      No
db.r5.4xlarge.tpc2.mem3x    No        No                     No      Yes      No
db.r5.4xlarge.tpc2.mem2x    No        No                     No      Yes      No
db.r5.2xlarge.tpc2.mem8x    No        No                     No      Yes      No
db.r5.2xlarge.tpc2.mem4x    No        No                     No      Yes      No
db.r5.2xlarge.tpc1.mem2x    No        No                     No      Yes      No
db.r5.xlarge.tpc2.mem4x     No        No                     No      Yes      No
db.r5.xlarge.tpc2.mem2x     No        No                     No      Yes      No
db.r5.large.tpc1.mem2x      No        No                     No      Yes      No
Instance class   MariaDB                                           Microsoft SQL Server   MySQL                Oracle       PostgreSQL
db.r4.16xlarge   All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
db.r4.8xlarge    All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
db.r4.4xlarge    All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
db.r4.2xlarge    All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
db.r4.xlarge     All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
db.r4.large      All MariaDB 10.6, 10.5, 10.4, and 10.3 versions   Yes                    All MySQL 8.0, 5.7   Deprecated   Lower than PostgreSQL 13
Determining DB instance class support in AWS Regions
Contents
• Using the Amazon RDS pricing page to determine DB instance class support in AWS
Regions (p. 68)
• Using the AWS CLI to determine DB instance class support in AWS Regions (p. 69)
• Listing the DB instance classes that are supported by a specific DB engine version in an AWS
Region (p. 69)
• Listing the DB engine versions that support a specific DB instance class in an AWS
Region (p. 70)
To use the pricing page to determine the DB instance classes supported by each engine in a
Region
1. Go to the Amazon RDS pricing page.
2. Choose a DB engine.
3. On the pricing page for the DB engine, choose On-Demand DB Instances or Reserved DB Instances.
4. To see the DB instance classes available in an AWS Region, choose the AWS Region in Region.
Other choices might be available for some DB engines, such as Single-AZ Deployment or Multi-AZ
Deployment.
In the AWS CLI examples that follow, the --engine option takes a DB engine name such as sqlserver-ex,
sqlserver-web, or oracle-se2.
For information about AWS Region names, see AWS Regions (p. 111).
The following examples demonstrate how to determine DB instance class support in an AWS Region
using the describe-orderable-db-instance-options AWS CLI command.
Note
To limit the output, these examples show results only for the General Purpose SSD (gp2) storage
type. If necessary, you can change the storage type to General Purpose SSD (gp3), Provisioned
IOPS (io1), or magnetic (standard) in the commands.
Topics
• Listing the DB instance classes that are supported by a specific DB engine version in an AWS
Region (p. 69)
• Listing the DB engine versions that support a specific DB instance class in an AWS Region (p. 70)
Listing the DB instance classes that are supported by a specific DB engine version in an AWS Region
For example, the following command lists the supported DB instance classes for version 13.6 of the RDS
for PostgreSQL DB engine in US East (N. Virginia).
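A command along these lines produces that list (a sketch, not necessarily the guide's exact command; the JMESPath --query filter shown is one way to restrict output to the gp2 storage type mentioned in the note above):

```shell
aws rds describe-orderable-db-instance-options --engine postgres --engine-version 13.6 \
    --query 'OrderableDBInstanceOptions[?StorageType==`gp2`].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}' \
    --output table \
    --region us-east-1
```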
Listing the DB engine versions that support a specific DB instance class in an AWS Region
For example, the following command lists the DB engine versions of the RDS for PostgreSQL DB engine
that support the db.r5.large DB instance class in US East (N. Virginia).
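A sketch of such a command (the --query filter limiting output to gp2 is illustrative, matching the note above):

```shell
aws rds describe-orderable-db-instance-options --engine postgres --db-instance-class db.r5.large \
    --query 'OrderableDBInstanceOptions[?StorageType==`gp2`].{EngineVersion:EngineVersion,StorageType:StorageType}' \
    --output table \
    --region us-east-1
```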
Configuring the processor for RDS for Oracle
Topics
• Overview of configuring the processor (p. 71)
• DB instance classes that support processor configuration (p. 72)
• Setting the CPU cores and threads per CPU core for a DB instance class (p. 80)
• Number of CPU cores – You can customize the number of CPU cores for the DB instance. You might do
this to potentially optimize the licensing costs of your software with a DB instance that has sufficient
amounts of RAM for memory-intensive workloads but fewer CPU cores.
• Threads per core – You can disable Intel Hyper-Threading Technology by specifying a single thread
per CPU core. You might do this for certain workloads, such as high-performance computing (HPC)
workloads.
You can control the number of CPU cores and threads for each core separately. You can set one or both
in a request. After a setting is associated with a DB instance, the setting persists until you change it.
The processor settings for a DB instance are associated with snapshots of the DB instance. When a
snapshot is restored, the restored DB instance uses the processor feature settings that were in effect
when the snapshot was taken.
If you modify the DB instance class for a DB instance with nondefault processor settings, either specify
default processor settings or explicitly specify processor settings at modification. This requirement
ensures that you are aware of the third-party licensing costs that might be incurred when you modify the
DB instance.
There is no additional or reduced charge for specifying processor features on an RDS for Oracle DB
instance. You're charged the same as for DB instances that are launched with default CPU configurations.
You can configure the processor only when all of the following conditions are met:
• You're configuring an RDS for Oracle DB instance. For information about the DB instance classes
supported by different Oracle Database editions, see RDS for Oracle instance classes (p. 1796).
• Your DB instance is using the Bring Your Own License (BYOL) licensing option of RDS for Oracle. For
more information about Oracle licensing options, see RDS for Oracle licensing options (p. 1793).
• Your DB instance doesn't belong to one of the db.r5 or db.r5b instance classes
that have predefined processor configurations. These instance classes have names
in the form db.r5.instance_size.tpcthreads_per_core.memratio or
db.r5b.instance_size.tpcthreads_per_core.memratio. For example, db.r5b.xlarge.tpc2.mem4x
is preconfigured with 2 threads per core (tpc2) and 4x as much memory as the standard db.r5b.xlarge
instance class. You can't configure the processor features of these optimized instance classes. For more
information, see Supported RDS for Oracle instance classes (p. 1797).
In the following table, you can find the DB instance classes that support setting a number of CPU cores
and CPU threads per core. You can also find the default value and the valid values for the number of CPU
cores and CPU threads per core for each DB instance class.
DB instance class      Default vCPUs   Default CPU cores   Default threads per core   Valid number of CPU cores   Valid number of threads per core
db.m6i.large           2               1                   2                          1   1, 2
db.m6i.xlarge          4               2                   2                          2   1, 2
db.m6i.2xlarge         8               4                   2                          2, 4   1, 2
db.m6i.4xlarge         16              8                   2                          2, 4, 6, 8   1, 2
db.m6i.8xlarge         32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.m6i.12xlarge        48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.m6i.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.m6i.24xlarge        96              48                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48   1, 2
db.m5.large            2               1                   2                          1   1, 2
db.m5.xlarge           4               2                   2                          2   1, 2
db.m5.2xlarge          8               4                   2                          2, 4   1, 2
db.m5.4xlarge          16              8                   2                          2, 4, 6, 8   1, 2
db.m5.8xlarge          32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.m5.12xlarge         48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.m5.16xlarge         64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.m5d.large           2               1                   2                          1   1, 2
db.m5d.xlarge          4               2                   2                          2   1, 2
db.m5d.2xlarge         8               4                   2                          2, 4   1, 2
db.m5d.4xlarge         16              8                   2                          2, 4, 6, 8   1, 2
db.m5d.8xlarge         32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.m5d.12xlarge        48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.m5d.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.m4.10xlarge         40              20                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20   1, 2
db.m4.16xlarge         64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r6i.large           2               1                   2                          1   1, 2
db.r6i.xlarge          4               2                   2                          1, 2   1, 2
db.r6i.2xlarge         8               4                   2                          2, 4   1, 2
db.r6i.4xlarge         16              8                   2                          2, 4, 6, 8   1, 2
db.r6i.8xlarge         32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.r6i.12xlarge        48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.r6i.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r6i.24xlarge        96              48                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48   1, 2
db.r5.large            2               1                   2                          1   1, 2
db.r5.xlarge           4               2                   2                          2   1, 2
db.r5.2xlarge          8               4                   2                          2, 4   1, 2
db.r5.4xlarge          16              8                   2                          2, 4, 6, 8   1, 2
db.r5.8xlarge          32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.r5.12xlarge         48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.r5.16xlarge         64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r5b.large           2               1                   2                          1   1, 2
db.r5b.xlarge          4               2                   2                          2   1, 2
db.r5b.2xlarge         8               4                   2                          2, 4   1, 2
db.r5b.4xlarge         16              8                   2                          2, 4, 6, 8   1, 2
db.r5b.8xlarge         32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.r5b.12xlarge        48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.r5b.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r5d.large           2               1                   2                          1   1, 2
db.r5d.xlarge          4               2                   2                          2   1, 2
db.r5d.2xlarge         8               4                   2                          2, 4   1, 2
db.r5d.4xlarge         16              8                   2                          2, 4, 6, 8   1, 2
db.r5d.8xlarge         32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.r5d.12xlarge        48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.r5d.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r4.large            2               1                   2                          1   1, 2
db.r4.xlarge           4               2                   2                          1, 2   1, 2
db.r4.2xlarge          8               4                   2                          1, 2, 3, 4   1, 2
db.r4.4xlarge          16              8                   2                          1, 2, 3, 4, 5, 6, 7, 8   1, 2
db.r4.8xlarge          32              16                  2                          1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16   1, 2
db.r4.16xlarge         64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.r3.large            2               1                   2                          1   1, 2
db.r3.xlarge           4               2                   2                          1, 2   1, 2
db.r3.2xlarge          8               4                   2                          1, 2, 3, 4   1, 2
db.r3.4xlarge          16              8                   2                          1, 2, 3, 4, 5, 6, 7, 8   1, 2
db.r3.8xlarge          32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.x2idn.16xlarge      64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.x2idn.24xlarge      96              48                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48   1, 2
db.x2iedn.xlarge       4               2                   2                          1, 2   1, 2
db.x2iedn.2xlarge      8               4                   2                          2, 4   1, 2
db.x2iedn.4xlarge      16              8                   2                          2, 4, 6, 8   1, 2
db.x2iedn.8xlarge      32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.x2iedn.16xlarge     64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.x2iedn.24xlarge     96              48                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48   1, 2
db.x2iezn.2xlarge      8               4                   2                          2, 4   1, 2
db.x2iezn.4xlarge      16              8                   2                          2, 4, 6, 8   1, 2
db.x2iezn.6xlarge      24              12                  2                          2, 4, 6, 8, 10, 12   1, 2
db.x2iezn.8xlarge      32              16                  2                          2, 4, 6, 8, 10, 12, 14, 16   1, 2
db.x2iezn.12xlarge     48              24                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24   1, 2
db.x1.16xlarge         64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.x1e.xlarge          4               2                   2                          1, 2   1, 2
db.x1e.2xlarge         8               4                   2                          1, 2, 3, 4   1, 2
db.x1e.4xlarge         16              8                   2                          1, 2, 3, 4, 5, 6, 7, 8   1, 2
db.x1e.8xlarge         32              16                  2                          1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16   1, 2
db.x1e.16xlarge        64              32                  2                          2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32   1, 2
db.z1d.large           2               1                   2                          1   1, 2
db.z1d.xlarge          4               2                   2                          2   1, 2
db.z1d.2xlarge         8               4                   2                          2, 4   1, 2
db.z1d.3xlarge         12              6                   2                          2, 4, 6   1, 2
db.z1d.6xlarge         24              12                  2                          2, 4, 6, 8, 10, 12   1, 2
Note
You can use AWS CloudTrail to monitor and audit changes to the processor configuration of
Amazon RDS for Oracle DB instances. For more information about using CloudTrail, see
Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940).
Setting the CPU cores and threads per CPU core for a DB
instance class
You can configure the number of CPU cores and threads per core for the DB instance class when you
perform the following operations:
• Creating a DB instance
• Modifying a DB instance
• Restoring a DB instance from a snapshot
• Restoring a DB instance to a point in time
Note
When you modify a DB instance to configure the number of CPU cores or threads per core, there
is a brief DB instance outage.
You can set the CPU cores and the threads per CPU core for a DB instance class using the AWS
Management Console, the AWS CLI, or the RDS API.
Console
When you are creating, modifying, or restoring a DB instance, you set the DB instance class in the
AWS Management Console. The Instance specifications section shows options for the processor. The
following image shows the processor features options.
Set the following options to the appropriate values for your DB instance class under Processor features:
• Core count – Set the number of CPU cores using this option. The value must be equal to or less than
the maximum number of CPU cores for the DB instance class.
• Threads per core – Specify 2 to enable multiple threads per core, or specify 1 to disable multiple
threads per core.
When you modify or restore a DB instance, you can also set the CPU cores and the threads per CPU core
to the defaults for the instance class.
When you view the details for a DB instance in the console, you can view the processor information for
its DB instance class on the Configuration tab. The following image shows a DB instance class with one
CPU core and multiple threads per core enabled.
For Oracle DB instances, the processor information only appears for Bring Your Own License (BYOL) DB
instances.
AWS CLI
You can set the processor features for a DB instance when you run one of the following AWS CLI
commands:
• create-db-instance
• modify-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time
To configure the processor of a DB instance class for a DB instance by using the AWS CLI, include the
--processor-features option in the command. Specify the number of CPU cores with the coreCount
feature name, and specify whether multiple threads per core are enabled with the threadsPerCore
feature name.
Examples
• Setting the number of CPU cores for a DB instance (p. 83)
• Setting the number of CPU cores and disabling multiple threads for a DB instance (p. 83)
• Viewing the valid processor values for a DB instance class (p. 84)
• Returning to default processor settings for a DB instance (p. 85)
• Returning to the default number of CPU cores for a DB instance (p. 85)
• Returning to the default number of threads per core for a DB instance (p. 86)
Setting the number of CPU cores for a DB instance
Example
The following example modifies mydbinstance by setting the number of CPU cores to 4. The changes
are applied immediately by using --apply-immediately. If you want to apply the changes during the
next scheduled maintenance window, omit the --apply-immediately option.
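Sketched as an AWS CLI call (the Name=...,Value=... pair form is the documented shorthand for --processor-features; mydbinstance is the identifier from the text):

```shell
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=4" \
    --apply-immediately
```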
Setting the number of CPU cores and disabling multiple threads for a DB instance
Example
The following example modifies mydbinstance by setting the number of CPU cores to 4 and disabling
multiple threads per core. The changes are applied immediately by using --apply-immediately. If
you want to apply the changes during the next scheduled maintenance window, omit the
--apply-immediately option.
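A sketch of such a command: coreCount and threadsPerCore are passed as separate Name/Value pairs, with threadsPerCore set to 1 to disable multiple threads per core:

```shell
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=4" "Name=threadsPerCore,Value=1" \
    --apply-immediately
```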
Viewing the valid processor values for a DB instance class
Example
You can view the valid processor values for a particular DB instance class by running the
describe-orderable-db-instance-options command and specifying the instance class for the
--db-instance-class option. For example, the output for the following command shows the processor
options for the db.r3.large instance class.
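A sketch of that command; the engine and Region values here are inferred from the Engine and AvailabilityZones fields in the output that follows:

```shell
aws rds describe-orderable-db-instance-options --engine oracle-ee \
    --db-instance-class db.r3.large \
    --region us-west-2
```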
{
    "SupportsIops": true,
    "MaxIopsPerGib": 50.0,
    "LicenseModel": "bring-your-own-license",
    "DBInstanceClass": "db.r3.large",
    "SupportsIAMDatabaseAuthentication": false,
    "MinStorageSize": 100,
    "AvailabilityZones": [
        { "Name": "us-west-2a" },
        { "Name": "us-west-2b" },
        { "Name": "us-west-2c" }
    ],
    "EngineVersion": "12.1.0.2.v2",
    "MaxStorageSize": 32768,
    "MinIopsPerGib": 1.0,
    "MaxIopsPerDbInstance": 40000,
    "ReadReplicaCapable": false,
    "AvailableProcessorFeatures": [
        {
            "Name": "coreCount",
            "DefaultValue": "1",
            "AllowedValues": "1"
        },
        {
            "Name": "threadsPerCore",
            "DefaultValue": "2",
            "AllowedValues": "1,2"
        }
    ],
    "SupportsEnhancedMonitoring": true,
    "SupportsPerformanceInsights": false,
    "MinIopsPerDbInstance": 1000,
    "StorageType": "io1",
    "Vpc": false,
    "SupportsStorageEncryption": true,
    "Engine": "oracle-ee",
    "MultiAZCapable": true
}
In addition, you can run the following commands for DB instance class processor information:
In the output of the preceding commands, the values for the processor features are not null only if the
following conditions are met:
If the preceding conditions aren't met, you can get the instance type using describe-db-instances. You
can get the processor information for this instance type by running the EC2 operation describe-instance-
types.
Returning to default processor settings for a DB instance
Example
The following example modifies mydbinstance by returning its DB instance class to the default
processor values for it. The changes are applied immediately by using --apply-immediately. If
you want to apply the changes during the next scheduled maintenance window, omit the
--apply-immediately option.
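A sketch of such a command, using the documented --use-default-processor-features flag of modify-db-instance:

```shell
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --use-default-processor-features \
    --apply-immediately
```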
Returning to the default number of CPU cores for a DB instance
Example
The following example modifies mydbinstance by returning its DB instance class to the default number
of CPU cores for it. The threads per core setting isn't changed. The changes are applied immediately
by using --apply-immediately. If you want to apply the changes during the next scheduled
maintenance window, omit the --apply-immediately option.
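A sketch of such a command; the special value DEFAULT shown here is an assumption about how the CLI expresses a per-feature default, so verify against the modify-db-instance reference:

```shell
# "Value=DEFAULT" (assumed sentinel) resets only coreCount; threadsPerCore is untouched.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=DEFAULT" \
    --apply-immediately
```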
Returning to the default number of threads per core for a DB instance
Example
The following example modifies mydbinstance by returning its DB instance class to the default number
of threads per core for it. The number of CPU cores setting isn't changed. The changes are applied
immediately by using --apply-immediately. If you want to apply the changes during the next
scheduled maintenance window, omit the --apply-immediately option.
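A sketch of such a command; as above, the DEFAULT value is an assumption about the CLI's per-feature reset syntax:

```shell
# "Value=DEFAULT" (assumed sentinel) resets only threadsPerCore; coreCount is untouched.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=threadsPerCore,Value=DEFAULT" \
    --apply-immediately
```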
RDS API
You can set the processor features for a DB instance when you call one of the following Amazon RDS API
operations:
• CreateDBInstance
• ModifyDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime
To configure the processor features of a DB instance class for a DB instance by using the Amazon RDS
API, include the ProcessorFeatures parameter in the call.
Hardware specifications
Specify the number of CPU cores with the coreCount feature name, and specify whether multiple
threads per core are enabled with the threadsPerCore feature name.
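For example, the parameter is a list of Name/Value pairs, along these lines (a sketch of the request fragment, not a complete API call):

```json
"ProcessorFeatures": [
    { "Name": "coreCount", "Value": "4" },
    { "Name": "threadsPerCore", "Value": "1" }
]
```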
You can view the valid processor values for a particular DB instance class by running the
DescribeOrderableDBInstanceOptions operation and specifying the instance class for the
DBInstanceClass parameter. You can also use the following operations:
In the output of the preceding operations, the values for the processor features are not null only if the
following conditions are met:
If the preceding conditions aren't met, you can get the instance type using DescribeDBInstances.
You can get the processor information for this instance type by running the EC2 operation
DescribeInstanceTypes.
vCPU
The number of virtual central processing units (CPUs). A virtual CPU is a unit of capacity that you can
use to compare DB instance classes. Instead of purchasing or leasing a particular processor to use for
several months or years, you are renting capacity by the hour. Our goal is to make a consistent and
specific amount of CPU capacity available, within the limits of the actual underlying hardware.
ECU
The relative measure of the integer processing power of an Amazon EC2 instance. To make it easy
for developers to compare CPU capacity between different instance classes, we have defined an
Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed
in terms of these EC2 Compute Units. One ECU currently provides CPU capacity equivalent to a 1.0–
1.2 GHz 2007 Opteron or 2007 Xeon processor.
Memory (GiB)
The RAM, in gibibytes, allocated to the DB instance. There is often a consistent ratio between
memory and vCPU. As an example, take the db.r4 instance class, which has a memory to vCPU ratio
similar to the db.r5 instance class. However, for most use cases the db.r5 instance class provides
better, more consistent performance than the db.r4 instance class.
EBS-optimized
The DB instance uses an optimized configuration stack and provides additional, dedicated capacity
for I/O. This optimization provides the best performance by minimizing contention between I/O and
other traffic from your instance. For more information about Amazon EBS–optimized instances, see
Amazon EBS–Optimized instances in the Amazon EC2 User Guide for Linux Instances.
EBS-optimized instances have a baseline and maximum IOPS rate. The maximum IOPS rate is
enforced at the DB instance level. A set of EBS volumes that combine to have an IOPS rate that is
higher than the maximum can't exceed the instance-level threshold. For example, if the maximum
IOPS for a particular DB instance class is 40,000, and you attach four 64,000 IOPS EBS volumes, the
maximum IOPS is 40,000 rather than 256,000. For the IOPS maximum specific to each EC2 instance
type, see Supported instance types in the Amazon EC2 User Guide for Linux Instances.
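The cap described above is just the lower of the two limits; as a sketch, using the numbers from the example in the text:

```shell
# Instance-level IOPS cap: effective IOPS is the lower of the
# combined EBS volume IOPS and the DB instance class maximum.
instance_max_iops=40000                  # example maximum for the DB instance class
combined_volume_iops=$(( 4 * 64000 ))    # four 64,000 IOPS EBS volumes = 256,000
if [ "$combined_volume_iops" -gt "$instance_max_iops" ]; then
    effective_iops=$instance_max_iops
else
    effective_iops=$combined_volume_iops
fi
echo "$effective_iops"                   # prints 40000
```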
Max. EBS bandwidth (Mbps)
The maximum EBS bandwidth in megabits per second. Divide by 8 to get the expected throughput in
megabytes per second.
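For example, with a hypothetical maximum EBS bandwidth of 4,750 Mbps (the figure is illustrative, not taken from the table):

```shell
# Megabits per second divided by 8 gives megabytes per second.
max_ebs_mbps=4750              # hypothetical value for illustration
echo $(( max_ebs_mbps / 8 ))   # prints 593 (integer division of 593.75)
```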
Important
General Purpose SSD (gp2) volumes for Amazon RDS DB instances have a throughput limit
of 250 MiB/s in most cases. However, the throughput limit can vary depending on volume
size. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for
Linux Instances.
Network bandwidth
The network speed of the DB instance class, in gigabits per second.
In the following table, you can find hardware details about the Amazon RDS DB instance classes.
For information about Amazon RDS DB engine support for each DB instance class, see Supported DB
engines for DB instance classes (p. 14).
Instance class   vCPU   ECU   Memory (GiB)   Instance storage (GiB)   Max. EBS bandwidth (Mbps)   Network bandwidth (Gbps)
db.r5b – Oracle memory-optimized instance classes preconfigured for high memory, storage, and I/O
db.r5 – Oracle memory-optimized instance classes preconfigured for high memory, storage, and I/O
* These DB instance classes can support maximum performance for 30 minutes at least once every 24
hours. For more information on baseline performance of the underlying EC2 instance types, see Amazon
EBS-optimized instances in the Amazon EC2 User Guide for Linux Instances.
** The r3.8xlarge DB instance class doesn't have dedicated EBS bandwidth and therefore doesn't offer
EBS optimization. For this instance class, network traffic and Amazon EBS traffic share the same 10-
gigabit network interface.
DB instance storage
In some cases, your database workload might not be able to achieve 100 percent of the IOPS that you
have provisioned. For more information, see Factors that affect storage performance (p. 108).
For more information about instance storage pricing, see Amazon RDS pricing.
• General Purpose SSD – General Purpose SSD volumes offer cost-effective storage that is ideal for a
broad range of workloads running on medium-sized DB instances. General Purpose storage is best
suited for development and testing environments.
For more information about General Purpose SSD storage, including the storage size ranges, see
General Purpose SSD storage (p. 102).
• Provisioned IOPS SSD – Provisioned IOPS storage is designed to meet the needs of I/O-intensive
workloads, particularly database workloads, that require low I/O latency and consistent I/O
throughput. Provisioned IOPS storage is best suited for production environments.
For more information about Provisioned IOPS storage, including the storage size ranges, see
Provisioned IOPS SSD storage (p. 104).
• Magnetic – Amazon RDS also supports magnetic storage for backward compatibility. We recommend
that you use General Purpose SSD or Provisioned IOPS SSD for any new storage needs. The maximum
amount of storage allowed for DB instances on magnetic storage is less than that of the other storage
types. For more information, see Magnetic storage (p. 107).
When you select General Purpose SSD or Provisioned IOPS SSD, Amazon RDS automatically stripes your
storage across multiple volumes to enhance performance, depending on the engine selected and the
amount of storage requested.
General Purpose SSD storage
When you modify a General Purpose SSD or Provisioned IOPS SSD volume, it goes through a sequence of
states. While the volume is in the optimizing state, your volume performance is in between the source
and target configuration specifications. Transitional volume performance will be no less than the lowest
of the two specifications. For more information on volume modifications, see Monitor the progress of
volume modifications in the Amazon EC2 User Guide.
Important
When you modify an instance’s storage so that it goes from one volume to four volumes, or
when you modify an instance using magnetic storage, Amazon RDS does not use the Elastic
Volumes feature. Instead, Amazon RDS provisions new volumes and transparently moves the
data from the old volume to the new volumes. This operation consumes a significant amount of
IOPS and throughput of both the old and new volumes. Depending on the size of the volume
and the amount of database workload present during the modification, this operation can
consume a high amount of IOPS, significantly increase I/O latency, and take several hours to
complete, while the RDS instance remains in the Modifying state.
Amazon RDS offers two types of General Purpose SSD storage: gp2 storage (p. 102) and gp3
storage (p. 103).
gp2 storage
When your applications don't need high storage performance, you can use General Purpose SSD gp2
storage. Baseline I/O performance for gp2 storage is 3 IOPS for each GiB, with a minimum of 100
IOPS. This relationship means that larger volumes have better performance. For example, baseline
performance for one 100-GiB volume is 300 IOPS. Baseline performance for one 1,000 GiB volume is
3,000 IOPS. Maximum baseline performance for one gp2 volume (5334 GiB and greater) is 16,000 IOPS.
Individual gp2 volumes below 1,000 GiB in size also have the ability to burst to 3,000 IOPS for extended
periods of time. Volume I/O credit balance determines burst performance. For more information about
volume I/O credits, see I/O credits and burst performance in the Amazon EC2 User Guide. For a more
detailed description of how baseline performance and I/O credit balance affect performance, see the
post Understanding burst vs. baseline performance with Amazon RDS and gp2 on the AWS Database
Blog.
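The 3-IOPS-per-GiB relationship, the 100-IOPS floor, and the 16,000-IOPS cap can be sketched as a small helper (a hypothetical function for illustration, not part of any AWS SDK):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a single gp2 volume: 3 IOPS per GiB,
    with a floor of 100 IOPS and a cap of 16,000 IOPS
    (the cap is reached at 5,334 GiB)."""
    return max(100, min(3 * size_gib, 16_000))

print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(1_000))  # 3000
print(gp2_baseline_iops(5_334))  # 16000
```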
Many workloads never deplete the burst balance. However, some workloads can exhaust the 3,000
IOPS burst storage credit balance, so you should plan your storage capacity to meet the needs of your
workloads.
For gp2 volumes larger than 1,000 GiB, the baseline performance is greater than the burst performance.
For such volumes, burst is irrelevant because the baseline performance is better than the 3,000 IOPS
burst performance. However, for DB instances of certain engines and sizes, storage is striped across four
volumes providing four times the baseline throughput, and four times the burst IOPS of a single volume.
Storage performance for gp2 volumes on Amazon RDS DB engines, including the threshold, is shown in
the following table.
DB engine | RDS storage size | Range of baseline IOPS | Range of baseline throughput | Burst IOPS
MariaDB, MySQL, and PostgreSQL | Between 400 and 1,335 GiB | 1,200–4,005 IOPS | 500–1,000 MiB/s | 12,000
SQL Server | Between 334 and 999 GiB | 1,002–2,997 IOPS | 250 MiB/s | 3,000
* The baseline performance of the volume exceeds the maximum burst performance.
gp3 storage
By using General Purpose SSD gp3 storage volumes, you can customize storage performance
independently of storage capacity. Storage performance is the combination of I/O operations per second
(IOPS) and how fast the storage volume can perform reads and writes (storage throughput). On gp3
storage volumes, Amazon RDS provides a baseline storage performance of 3,000 IOPS and 125 MiB/s.
For every RDS DB engine except RDS for SQL Server, when the storage size for gp3 volumes reaches a
certain threshold, the baseline storage performance increases to 12,000 IOPS and 500 MiB/s. This is
because of volume striping, where the storage uses four volumes instead of one. RDS for SQL Server
doesn't support volume striping, and therefore doesn't have a threshold value.
Note
General Purpose SSD gp3 storage is supported on Single-AZ and Multi-AZ DB instances, but
isn't supported on Multi-AZ DB clusters. For more information, see Configuring and managing a
Multi-AZ deployment (p. 492) and Multi-AZ DB cluster deployments (p. 499).
Storage performance for gp3 volumes on Amazon RDS DB engines, including the threshold, is shown in
the following table.
DB engine | Storage size | Baseline storage performance | Range of Provisioned IOPS | Range of storage throughput
MariaDB, MySQL, and PostgreSQL | Less than 400 GiB | 3,000 IOPS / 125 MiB/s | N/A | N/A
MariaDB, MySQL, and PostgreSQL | 400 GiB and higher | 12,000 IOPS / 500 MiB/s | 12,000–64,000 IOPS | 500–4,000 MiB/s
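The striping threshold above can be sketched as a small helper. This is a hypothetical function for illustration, and it only covers the engines shown in the table (MariaDB, MySQL, and PostgreSQL, with the 400-GiB threshold) plus the no-threshold behavior of RDS for SQL Server:

```python
def gp3_baseline(storage_gib: int, engine: str) -> tuple[int, int]:
    """Return (baseline IOPS, baseline MiB/s) for a gp3 volume.

    MariaDB, MySQL, and PostgreSQL jump to 12,000 IOPS / 500 MiB/s
    at 400 GiB because of volume striping; RDS for SQL Server doesn't
    support striping, so it stays at the 3,000 IOPS / 125 MiB/s baseline.
    """
    striped_engines = {"mariadb", "mysql", "postgres"}
    if engine in striped_engines and storage_gib >= 400:
        return (12_000, 500)
    return (3_000, 125)

print(gp3_baseline(399, "mysql"))  # (3000, 125)
print(gp3_baseline(400, "mysql"))  # (12000, 500)
```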
For every DB engine except RDS for SQL Server, you can provision additional IOPS and storage
throughput when storage size is at or above the threshold value. For RDS for SQL Server, you can
provision additional IOPS and storage throughput for any available storage size. For all DB engines, you
pay for only the additional provisioned storage performance. For more information, see Amazon RDS
pricing.
Although the added Provisioned IOPS and storage throughput aren't dependent on the storage size, they
are related to each other. When you raise the IOPS above 32,000 for MariaDB and MySQL, the storage
throughput value automatically increases from 500 MiB/s. For example, when you set the IOPS to 40,000
on RDS for MySQL, the storage throughput must be at least 625 MiB/s. The automatic increase doesn't
happen for Oracle, PostgreSQL, and SQL Server DB instances.
Storage performance values for gp3 volumes on RDS have the following constraints:
• The maximum ratio of storage throughput to IOPS is 0.25 for all supported DB engines.
• The minimum ratio of IOPS to allocated storage (in GiB) is 0.5 on RDS for SQL Server. There is no
minimum ratio for the other supported DB engines.
• The maximum ratio of IOPS to allocated storage is 500 for all supported DB engines.
• If you're using storage autoscaling, the same ratios between IOPS and maximum storage threshold (in
GiB) also apply.
For more information on storage autoscaling, see Managing capacity automatically with Amazon RDS
storage autoscaling (p. 480).
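The ratio constraints above can be checked with a small validator. This is a hypothetical helper for illustration (the function name and engine labels are not AWS APIs):

```python
def validate_gp3(iops: int, throughput_mibps: int, storage_gib: int,
                 engine: str) -> list[str]:
    """Check a gp3 configuration against the stated ratio constraints.
    Returns a list of violated constraints (empty means the config passes)."""
    problems = []
    # Maximum ratio of storage throughput (MiB/s) to IOPS is 0.25.
    if throughput_mibps > 0.25 * iops:
        problems.append("throughput:IOPS ratio exceeds 0.25")
    # Maximum ratio of IOPS to allocated storage (GiB) is 500.
    if iops > 500 * storage_gib:
        problems.append("IOPS:storage ratio exceeds 500")
    # RDS for SQL Server also has a minimum IOPS:storage ratio of 0.5.
    if engine == "sqlserver" and iops < 0.5 * storage_gib:
        problems.append("IOPS:storage ratio below 0.5 (SQL Server)")
    return problems

# The 40,000 IOPS / 625 MiB/s example from the text passes the checks.
print(validate_gp3(iops=40_000, throughput_mibps=625,
                   storage_gib=400, engine="mysql"))  # []
```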
Provisioned IOPS storage
When you create a DB instance, you specify the IOPS rate and the size of the volume. Amazon RDS
provides that IOPS rate for the DB instance until you change it.
io1 storage
For I/O-intensive workloads, you can use Provisioned IOPS SSD io1 storage and achieve up to 256,000
I/O operations per second (IOPS). The throughput of io1 volumes varies based on the amount of IOPS
provisioned per volume and on the size of the I/O operations being executed. For more information about
throughput of io1 volumes, see Provisioned IOPS volumes in the Amazon EC2 User Guide.
The following table shows the range of Provisioned IOPS and maximum throughput for each
database engine and storage size range.
DB engine | Storage size | Range of Provisioned IOPS | Maximum throughput
MariaDB, MySQL, and PostgreSQL | Between 100 and 399 GiB | 1,000–19,950 IOPS | 500 MiB/s
MariaDB, MySQL, and PostgreSQL | Between 400 and 65,536 GiB | 1,000–256,000 IOPS | 4,000 MiB/s
Note
For SQL Server, the maximum 64,000 IOPS is guaranteed only on Nitro-based instances that are
on the m5*, m6i, r5*, r6i, and z1d instance types. Other instance types guarantee performance
up to 32,000 IOPS.
For Oracle, you can provision the maximum 256,000 IOPS only on the r5b instance type.
The IOPS and storage size ranges have the following constraints:
• The ratio of IOPS to allocated storage (in GiB) must be from 1–50 on RDS for SQL Server, and 0.5–50
on other RDS DB engines.
• If you're using storage autoscaling, the same ratios between IOPS and maximum storage threshold (in
GiB) also apply.
For more information on storage autoscaling, see Managing capacity automatically with Amazon RDS
storage autoscaling (p. 480).
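The io1 ratio rule can be expressed as a one-line check (a hypothetical helper, shown for illustration only):

```python
def io1_ratio_ok(iops: int, storage_gib: int, engine: str) -> bool:
    """IOPS:storage (GiB) ratio must be 1-50 on RDS for SQL Server
    and 0.5-50 on the other RDS DB engines."""
    ratio = iops / storage_gib
    low = 1.0 if engine == "sqlserver" else 0.5
    return low <= ratio <= 50.0

print(io1_ratio_ok(20_000, 400, "mysql"))      # True  (ratio is exactly 50)
print(io1_ratio_ok(20_001, 400, "sqlserver"))  # False (ratio just over 50)
```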
You can also use Provisioned IOPS SSD storage with read replicas for MySQL, MariaDB or PostgreSQL.
The type of storage for a read replica is independent of that on the primary DB instance. For example,
you might use General Purpose SSD for read replicas with a primary DB instance that uses Provisioned
IOPS SSD storage to reduce costs. However, your read replica's performance in this case might differ
from that of a configuration where both the primary DB instance and the read replicas use Provisioned
IOPS SSD storage.
Provisioned IOPS SSD storage provides a way to reserve I/O capacity by specifying IOPS. However, as
with any other system capacity attribute, its maximum throughput under load is constrained by the
resource that is consumed first. That resource might be network bandwidth, CPU, memory, or database
internal resources.
For more information about getting the most out of your Provisioned IOPS volumes, see Amazon EBS
volume performance.
Comparing SSD storage types

Characteristic | Provisioned IOPS (io1) | General Purpose (gp3) | General Purpose (gp2)
Volume size | 100 GiB–64 TiB (16 TiB on RDS for SQL Server) | 20 GiB–64 TiB (16 TiB on RDS for SQL Server) | 20 GiB–64 TiB (16 TiB on RDS for SQL Server)
Maximum IOPS | 256,000 (64,000 on RDS for SQL Server) | 64,000 (16,000 on RDS for SQL Server) | 64,000 (16,000 on RDS for SQL Server)
Maximum throughput | Scales based on Provisioned IOPS up to 4,000 MB/s | Provision additional throughput up to 4,000 MB/s (1,000 MB/s on RDS for SQL Server) | 1,000 MB/s (250 MB/s on RDS for SQL Server)

Note
You can't provision IOPS directly on gp2 storage. IOPS varies with the allocated storage size.
Magnetic storage
Amazon RDS also supports magnetic storage for backward compatibility. We recommend that you
use General Purpose SSD or Provisioned IOPS SSD for any new storage needs. The following are some
limitations for magnetic storage:
• Doesn't allow you to scale storage when using the SQL Server database engine.
• Doesn't support storage autoscaling.
• Doesn't support elastic volumes.
• Limited to a maximum size of 3 TiB.
• Limited to a maximum of 1,000 IOPS.
The following metrics are useful for monitoring storage for your DB instance:
• IOPS – The number of I/O operations completed each second. This metric is reported as the average
IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on 1-minute
intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to
tens of thousands per second.
• Latency – The elapsed time between the submission of an I/O request and its completion. This metric
is reported as the average latency for a given time interval. Amazon RDS reports read and write
latency separately at 1-minute intervals. Typical values for latency are in milliseconds (ms).
• Throughput – The number of bytes each second that are transferred to or from disk. This metric is
reported as the average throughput for a given time interval. Amazon RDS reports read and write
throughput separately on 1-minute intervals using units of megabytes per second (MB/s). Typical
values for throughput range from zero to the I/O channel's maximum bandwidth.
• Queue Depth – The number of I/O requests in the queue waiting to be serviced. These are I/O
requests that have been submitted by the application but have not been sent to the device because
the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of
latency and service time (not available as a metric). This metric is reported as the average queue depth
for a given time interval. Amazon RDS reports queue depth in 1-minute intervals. Typical values for
queue depth range from zero to several hundred.
Measured IOPS values are independent of the size of the individual I/O operation. This means that when
you measure I/O performance, make sure to look at the throughput of the instance, not simply the
number of I/O operations.
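For example, the same IOPS figure implies very different bandwidth depending on the I/O size (a sketch with illustrative numbers):

```python
def throughput_mib_s(iops: int, io_size_kib: int) -> float:
    """Bandwidth implied by an IOPS figure at a given I/O size.
    1 MiB = 1,024 KiB, so throughput = IOPS * I/O size (KiB) / 1,024."""
    return iops * io_size_kib / 1024

# The same 4,000 IOPS delivers 8x the bandwidth at 128 KiB vs 16 KiB I/Os.
print(throughput_mib_s(4_000, 16))   # 62.5
print(throughput_mib_s(4_000, 128))  # 500.0
```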
System activities
The following system-related activities consume I/O capacity and might reduce DB instance performance
while in progress:
Database workload
In some cases, your database or application design results in concurrency issues, locking, or other forms
of database contention. In these cases, you might not be able to use all the provisioned bandwidth
directly. In addition, you might encounter the following workload-related situations:
In some cases, there isn't a system resource that is at or near a limit, and adding threads doesn't increase
the database transaction rate. In such cases, the bottleneck is most likely contention in the database.
The most common forms are row lock and index page lock contention, but there are many other
possibilities. If this is your situation, seek the advice of a database performance tuning expert.
DB instance class
To get the most performance out of your Amazon RDS DB instance, choose a current generation instance
type with enough bandwidth to support your storage type. For example, you can choose Amazon EBS–
optimized instances and instances with 10-gigabit network connectivity.
Important
Depending on the instance class you're using, you might see lower IOPS performance than
the maximum that you can provision with RDS. For specific information on IOPS performance
for DB instance classes, see Amazon EBS–optimized instances in the Amazon EC2 User Guide.
We recommend that you determine the maximum IOPS for the instance class before setting a
Provisioned IOPS value for your DB instance.
We encourage you to use the latest generation of instances to get the best performance. Previous
generation DB instances can also have lower maximum storage.
Some older 32-bit file systems might have lower storage capacities. To determine the storage capacity of
your DB instance, you can use the describe-valid-db-instance-modifications AWS CLI command.
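For example, you might invoke the command as follows (the instance identifier is a placeholder; the command and its --db-instance-identifier option are part of the standard AWS CLI, and the call requires configured AWS credentials):

```shell
aws rds describe-valid-db-instance-modifications \
    --db-instance-identifier mydbinstance
```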
The following list shows the maximum storage that most DB instance classes can scale to for each
database engine:
• MariaDB – 64 TiB
• Microsoft SQL Server – 16 TiB
• MySQL – 64 TiB
• Oracle – 64 TiB
• PostgreSQL – 64 TiB
The following table shows some exceptions for maximum storage (in TiB). All RDS for Microsoft SQL
Server DB instances have a maximum storage of 16 TiB, so there are no entries for SQL Server.
Instance class | MariaDB | MySQL | Oracle | PostgreSQL
db.t4g.medium | 16 | 16 | N/A | 32
db.t4g.small | 16 | 16 | N/A | 16
db.t4g.micro | 6 | 6 | N/A | 6
db.t3.medium | 16 | 16 | 32 | 32
db.t3.small | 16 | 16 | 32 | 16
db.t3.micro | 6 | 6 | 32 | 6
db.t2.medium | 32 | 32 | N/A | 32
db.t2.small | 16 | 16 | N/A | 16
db.t2.micro | 6 | 6 | N/A | 6
For more details about all instance classes supported, see Previous generation DB instances.
Regions, Availability Zones, and Local Zones
By using Local Zones, you can place resources, such as compute and storage, in multiple locations closer
to your users. Amazon RDS enables you to place resources, such as DB instances, and data in multiple
locations. Resources aren't replicated across AWS Regions unless you do so specifically.
Amazon operates state-of-the-art, highly available data centers. Although rare, failures can occur that
affect the availability of DB instances that are in the same location. If you host all your DB instances in
one location that is affected by such a failure, none of your DB instances will be available.
It is important to remember that each AWS Region is completely independent. Any Amazon RDS activity
you initiate (for example, creating database instances or listing available database instances) runs only in
your current default AWS Region. The default AWS Region can be changed in the console, or by setting
the AWS_DEFAULT_REGION environment variable. Or it can be overridden by using the --region
parameter with the AWS Command Line Interface (AWS CLI). For more information, see Configuring the
AWS Command Line Interface, specifically the sections about environment variables and command line
options.
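For example, the Region can be set either way (us-west-2 and eu-west-1 here are only placeholder Regions, and the commands require configured AWS credentials):

```shell
# Set a default Region for every AWS CLI command in this shell session.
export AWS_DEFAULT_REGION=us-west-2
aws rds describe-db-instances

# Override the default Region for a single command.
aws rds describe-db-instances --region eu-west-1
```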
Amazon RDS supports special AWS Regions called AWS GovCloud (US). These are designed to allow
US government agencies and customers to move more sensitive workloads into the cloud. The AWS
GovCloud (US) Regions address the US government's specific regulatory and compliance requirements.
For more information, see What is AWS GovCloud (US)?
To create or work with an Amazon RDS DB instance in a specific AWS Region, use the corresponding
regional service endpoint.
AWS Regions
Each AWS Region is designed to be isolated from the other AWS Regions. This design achieves the
greatest possible fault tolerance and stability.
When you view your resources, you see only the resources that are tied to the AWS Region that you
specified. This is because AWS Regions are isolated from each other, and we don't automatically replicate
resources across AWS Regions.
Region availability
The following table shows the AWS Regions where Amazon RDS is currently available and the endpoint
for each Region.
Region | Endpoint | Protocol
US East (Ohio) | rds.us-east-2.api.aws | HTTPS
US East (Ohio) | rds-fips.us-east-2.amazonaws.com | HTTPS
US East (N. Virginia) | rds-fips.us-east-1.amazonaws.com | HTTPS
US East (N. Virginia) | rds.us-east-1.api.aws | HTTPS
US West (N. California) | rds-fips.us-west-1.amazonaws.com | HTTPS
US West (N. California) | rds-fips.us-west-1.api.aws | HTTPS
US West (Oregon) | rds.us-west-2.api.aws | HTTPS
US West (Oregon) | rds-fips.us-west-2.api.aws | HTTPS
Canada (Central) | rds-fips.ca-central-1.api.aws | HTTPS
Canada (Central) | rds-fips.ca-central-1.amazonaws.com | HTTPS
If you do not explicitly specify an endpoint, the US West (Oregon) endpoint is the default.
When you work with a DB instance using the AWS CLI or API operations, make sure that you specify its
regional endpoint.
Availability Zones
When you create a DB instance, you can choose an Availability Zone or have Amazon RDS choose one for
you randomly. An Availability Zone is represented by an AWS Region code followed by a letter identifier
(for example, us-east-1a).
Use the describe-availability-zones Amazon EC2 command as follows to describe the Availability Zones
within the specified Region that are enabled for your account.
For example, to describe the Availability Zones within the US East (N. Virginia) Region (us-east-1) that are
enabled for your account, run the following command:
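A minimal form of that command, assuming the AWS CLI is installed and configured, might look like the following (the --query filter is optional and just trims the output to the zone names):

```shell
aws ec2 describe-availability-zones --region us-east-1 \
    --query 'AvailabilityZones[].ZoneName' --output text
```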
You can't choose the Availability Zones for the primary and secondary DB instances in a Multi-AZ DB
deployment. Amazon RDS chooses them for you randomly. For more information about Multi-AZ
deployments, see Configuring and managing a Multi-AZ deployment (p. 492).
Note
Random selection of Availability Zones by RDS doesn't guarantee an even distribution of DB
instances among Availability Zones within a single account or DB subnet group. You can request
a specific AZ when you create or modify a Single-AZ instance, and you can use more-specific DB
subnet groups for Multi-AZ instances. For more information, see Creating an Amazon RDS DB
instance (p. 300) and Modifying an Amazon RDS DB instance (p. 401).
Local Zones
A Local Zone is an extension of an AWS Region that is geographically close to your users. You can extend
any VPC from the parent AWS Region into Local Zones. To do so, create a new subnet and assign it to the
AWS Local Zone. When you create a subnet in a Local Zone, your VPC is extended to that Local Zone. The
subnet in the Local Zone operates the same as other subnets in your VPC.
When you create a DB instance, you can choose a subnet in a Local Zone. Local Zones have their own
connections to the internet and support AWS Direct Connect. Thus, resources created in a Local Zone can
serve local users with very low-latency communications. For more information, see AWS Local Zones.
A Local Zone is represented by an AWS Region code followed by an identifier that indicates the location
(for example, us-west-2-lax-1a).
Note
A Local Zone can't be included in a Multi-AZ deployment.
To use a Local Zone, take the following steps:
1. Enable the Local Zone in Amazon EC2.
For more information, see Enabling Local Zones in the Amazon EC2 User Guide for Linux Instances.
2. Create a subnet in the Local Zone.
For more information, see Creating a subnet in your VPC in the Amazon VPC User Guide.
3. Create a DB subnet group in the Local Zone.
When you create a DB subnet group, choose the Availability Zone group for the Local Zone.
For more information, see Creating an Amazon RDS DB instance (p. 300).
Important
Currently, the only AWS Local Zone where Amazon RDS is available is Los Angeles in the US
West (Oregon) Region.
Supported Amazon RDS features by Region and engine
Amazon RDS features are different from engine-native features and options. For more information on
engine-native features and options, see Engine-native features (p. 162).
Topics
• Table conventions (p. 116)
• Feature quick reference (p. 116)
• Blue/Green Deployments (p. 118)
• Cross-Region automated backups (p. 118)
• Cross-Region read replicas (p. 119)
• Database activity streams (p. 121)
• Dual-stack mode (p. 125)
• Export snapshots to S3 (p. 133)
• IAM database authentication (p. 138)
• Kerberos authentication (p. 141)
• Multi-AZ DB clusters (p. 147)
• Performance Insights (p. 150)
• RDS Custom (p. 151)
• Amazon RDS Proxy (p. 155)
• Secrets Manager integration (p. 161)
• Engine-native features (p. 162)
Table conventions
The tables in the feature sections use these patterns to specify version numbers and level of availability:
Feature quick reference
Feature | RDS for MariaDB | RDS for MySQL | RDS for Oracle | RDS for PostgreSQL | RDS for SQL Server
Cross-Region automated backups | – | – | Available (p. 119) | Available (p. 119) | Available (p. 119)
Cross-Region read replicas | Available (p. 119) | Available (p. 119) | Available (p. 120) | Available (p. 120) | Available (p. 120)
Dual-stack mode | Available (p. 125) | Available (p. 127) | Available (p. 128) | Available (p. 129) | Available (p. 131)
Export snapshots to Amazon S3 | Available (p. 133) | Available (p. 135) | – | Available (p. 136) | –
AWS Identity and Access Management (IAM) database authentication | Available (p. 138) | Available (p. 140) | – | Available (p. 140) | –
Kerberos authentication | – | Available (p. 141) | Available (p. 142) | Available (p. 143) | Available (p. 145)
Performance Insights | Available (p. 150) | Available (p. 150) | Available (p. 150) | Available (p. 150) | Available (p. 150)
RDS Proxy | Available (p. 155) | Available (p. 157) | – | Available (p. 158) | Available (p. 160)
Secrets Manager integration | Available (p. 161) | Available (p. 161) | Available (p. 161) | Available (p. 161) | Available (p. 161)
Blue/Green Deployments
A blue/green deployment copies a production database environment to a separate, synchronized
staging environment. By using Amazon RDS Blue/Green Deployments, you can make changes to the
database in the staging environment without affecting the production environment. For example, you
can upgrade the major or minor DB engine version, change database parameters, or make schema
changes in the staging environment. When you are ready, you can promote the staging environment to
be the new production database environment. For more information, see Using Amazon RDS Blue/Green
Deployments for database updates (p. 566).
The Blue/Green Deployments feature isn't supported with the following engines:
The Blue/Green Deployments feature is supported in all AWS Regions except Israel (Tel Aviv).
Cross-Region automated backups

For more detailed information on limitations for source and destination backup Regions, see Replicating
automated backups to another AWS Region (p. 602).
Topics
• Backup replication with RDS for MariaDB (p. 119)
• Backup replication with RDS for MySQL (p. 119)
• Backup replication with RDS for Oracle (p. 119)
• Backup replication with RDS for PostgreSQL (p. 119)
Cross-Region read replicas
Topics
• Cross-Region read replicas with RDS for MariaDB (p. 119)
• Cross-Region read replicas with RDS for MySQL (p. 119)
• Cross-Region read replicas with RDS for Oracle (p. 120)
• Cross-Region read replicas with RDS for PostgreSQL (p. 120)
• Cross-Region read replicas with RDS for SQL Server (p. 120)
• For RDS for Oracle 21c, cross-Region read replicas aren't available.
• For RDS for Oracle 19c, cross-Region read replicas are available for instances of Oracle Database 19c
that aren't container database (CDB) instances.
• For RDS for Oracle 12c, cross-Region read replicas are available for Oracle Enterprise Edition (EE) of
Oracle Database 12c Release 1 (12.1) using 12.1.0.2.v10 and higher 12c releases.
For more information on additional requirements for cross-Region read replicas with RDS for Oracle, see
Requirements and considerations for RDS for Oracle replicas (p. 1974).
Cross-Region read replicas with RDS for SQL Server are available for the following versions using
Microsoft SQL Server Enterprise Edition:
Database activity streams
Topics
• Database activity streams with RDS for Oracle (p. 121)
• Database activity streams with RDS for SQL Server (p. 124)
For more information on additional requirements for database activity streams with RDS for Oracle, see
Overview of Database Activity Streams (p. 944).
Database activity streams with RDS for Oracle aren't available in the China (Beijing), China (Ningxia), or
Europe (Zurich) Regions.
For more information on additional requirements for database activity streams with RDS for SQL Server,
see Overview of Database Activity Streams (p. 944).
Region | RDS for SQL Server 2019 | RDS for SQL Server 2017 | RDS for SQL Server 2016 | RDS for SQL Server 2014
US East (Ohio) | All available versions | All available versions | All available versions | –
US East (N. Virginia) | All available versions | All available versions | All available versions | –
US West (N. California) | All available versions | All available versions | All available versions | –
US West (Oregon) | All available versions | All available versions | All available versions | –
Africa (Cape Town) | All available versions | All available versions | All available versions | –
Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | –
Asia Pacific (Hyderabad) | All available versions | All available versions | All available versions | –
Asia Pacific (Jakarta) | All available versions | All available versions | All available versions | –
Asia Pacific (Melbourne) | – | – | – | –
Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | –
Asia Pacific (Osaka) | All available versions | All available versions | All available versions | –
Asia Pacific (Seoul) | All available versions | All available versions | All available versions | –
Asia Pacific (Singapore) | All available versions | All available versions | All available versions | –
Asia Pacific (Sydney) | All available versions | All available versions | All available versions | –
Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | –
Canada (Central) | All available versions | All available versions | All available versions | –
China (Beijing) | – | – | – | –
China (Ningxia) | – | – | – | –
Europe (Frankfurt) | All available versions | All available versions | All available versions | –
Europe (Ireland) | All available versions | All available versions | All available versions | –
Europe (London) | All available versions | All available versions | All available versions | –
Europe (Milan) | All available versions | All available versions | All available versions | –
Europe (Paris) | All available versions | All available versions | All available versions | –
Europe (Spain) | All available versions | All available versions | All available versions | –
Europe (Stockholm) | All available versions | All available versions | All available versions | –
Europe (Zurich) | – | – | – | –
Middle East (Bahrain) | All available versions | All available versions | All available versions | –
Middle East (UAE) | All available versions | All available versions | All available versions | –
South America (São Paulo) | All available versions | All available versions | All available versions | –
Dual-stack mode
By using dual-stack mode in RDS, resources can communicate with a DB instance over Internet Protocol
version 4 (IPv4), Internet Protocol version 6 (IPv6), or both. For more information, see Dual-stack
mode (p. 2691).
Topics
• Dual-stack mode with RDS for MariaDB (p. 125)
• Dual-stack mode with RDS for MySQL (p. 127)
• Dual-stack mode with RDS for Oracle (p. 128)
• Dual-stack mode with RDS for PostgreSQL (p. 129)
• Dual-stack mode with RDS for SQL Server (p. 131)
Dual-stack mode with RDS for MariaDB

| Region | RDS for MariaDB 10.11 | RDS for MariaDB 10.6 | RDS for MariaDB 10.5 | RDS for MariaDB 10.4 | RDS for MariaDB 10.3 |
|---|---|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hyderabad) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Jakarta) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Melbourne) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Spain) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Stockholm) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Zurich) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Middle East (Bahrain) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Middle East (UAE) | All available versions | All available versions | All available versions | All available versions | All available versions |
| South America (São Paulo) | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-West) | All available versions | All available versions | All available versions | All available versions | All available versions |
Dual-stack mode with RDS for MySQL

| Region | RDS for MySQL 8.0 | RDS for MySQL 5.7 | RDS for MySQL 5.6 |
|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions |
| Asia Pacific (Jakarta) | All available versions | All available versions | All available versions |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions |
| Europe (Stockholm) | All available versions | All available versions | All available versions |
| Middle East (Bahrain) | All available versions | All available versions | All available versions |
| South America (São Paulo) | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | All available versions | All available versions | All available versions |
| AWS GovCloud (US-West) | All available versions | All available versions | All available versions |
Dual-stack mode with RDS for Oracle

| Region | RDS for Oracle 21c | RDS for Oracle 19c | RDS for Oracle 12c |
|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions |
| Asia Pacific (Jakarta) | All available versions | All available versions | All available versions |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions |
| Europe (Spain) | – | – | – |
| Europe (Stockholm) | All available versions | All available versions | All available versions |
| Europe (Zurich) | – | – | – |
| Middle East (Bahrain) | All available versions | All available versions | All available versions |
| South America (São Paulo) | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | All available versions | All available versions | All available versions |
| AWS GovCloud (US-West) | All available versions | All available versions | All available versions |
Dual-stack mode with RDS for PostgreSQL

| Region | RDS for PostgreSQL 15 | RDS for PostgreSQL 14 | RDS for PostgreSQL 13 | RDS for PostgreSQL 12 | RDS for PostgreSQL 11 | RDS for PostgreSQL 10 |
|---|---|---|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hyderabad) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Melbourne) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Jakarta) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Spain) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Stockholm) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Zurich) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Israel (Tel Aviv) | – | – | – | – | – | – |
| Middle East (Bahrain) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Middle East (UAE) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| South America (São Paulo) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-West) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
Dual-stack mode with RDS for SQL Server

| Region | RDS for SQL Server 2019 | RDS for SQL Server 2017 | RDS for SQL Server 2016 | RDS for SQL Server 2014 |
|---|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions | – |
| US East (N. Virginia) | All available versions | All available versions | All available versions | – |
| US West (N. California) | All available versions | All available versions | All available versions | – |
| US West (Oregon) | All available versions | All available versions | All available versions | – |
| Africa (Cape Town) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Hyderabad) | – | – | – | – |
| Asia Pacific (Jakarta) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Melbourne) | – | – | – | – |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions | – |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | – |
| Canada (Central) | All available versions | All available versions | All available versions | – |
| China (Beijing) | All available versions | All available versions | All available versions | – |
| China (Ningxia) | All available versions | All available versions | All available versions | – |
| Europe (Frankfurt) | All available versions | All available versions | All available versions | – |
| Europe (Ireland) | All available versions | All available versions | All available versions | – |
| Europe (London) | All available versions | All available versions | All available versions | – |
| Europe (Milan) | All available versions | All available versions | All available versions | – |
| Europe (Paris) | All available versions | All available versions | All available versions | – |
| Europe (Spain) | – | – | – | – |
| Europe (Stockholm) | All available versions | All available versions | All available versions | – |
| Europe (Zurich) | – | – | – | – |
| Middle East (Bahrain) | All available versions | All available versions | All available versions | – |
| South America (São Paulo) | All available versions | All available versions | All available versions | – |
| AWS GovCloud (US-East) | All available versions | All available versions | All available versions | – |
| AWS GovCloud (US-West) | All available versions | All available versions | All available versions | – |
Export snapshots to S3
You can export RDS DB snapshot data to an Amazon S3 bucket. All types of DB snapshots can be exported, including manual snapshots, automated system snapshots, and snapshots created by AWS Backup. After the data is exported, you can analyze it directly with tools such as Amazon Athena or Amazon Redshift Spectrum. For more information, see Exporting DB snapshot data to Amazon S3 (p. 642).
Topics
• Export snapshots to S3 with RDS for MariaDB (p. 133)
• Export snapshots to S3 with RDS for MySQL (p. 135)
• Export snapshots to S3 with RDS for PostgreSQL (p. 136)
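An export is started with the StartExportTask API. The following AWS CLI sketch uses placeholder identifiers and ARNs throughout; exporting requires an IAM role that RDS can assume to write to the bucket, and a KMS key for encrypting the exported data:

```shell
# Hedged sketch: export a manual DB snapshot to an S3 bucket.
# Every identifier and ARN below is a placeholder.
aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn arn:aws:iam::123456789012:role/my-export-role \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/my-key-id
```

The exported data lands in the bucket in Apache Parquet format, which is what allows Athena or Redshift Spectrum to query it in place.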
Export snapshots to S3 with RDS for MariaDB

| Region | RDS for MariaDB 10.11 | RDS for MariaDB 10.6 | RDS for MariaDB 10.5 | RDS for MariaDB 10.4 | RDS for MariaDB 10.3 |
|---|---|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hyderabad) | – | – | – | – | – |
| Asia Pacific (Jakarta) | – | – | – | – | – |
| Asia Pacific (Melbourne) | – | – | – | – | – |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Spain) | – | – | – | – | – |
| Europe (Stockholm) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Zurich) | – | – | – | – | – |
| Middle East (Bahrain) | All available versions | All available versions | All available versions | All available versions | All available versions |
| Middle East (UAE) | – | – | – | – | – |
| South America (São Paulo) | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | – | – | – | – | – |
| AWS GovCloud (US-West) | – | – | – | – | – |
Export snapshots to S3 with RDS for MySQL

| Region | RDS for MySQL 8.0 | RDS for MySQL 5.7 |
|---|---|---|
| … | | |
| Asia Pacific (Hong Kong) | All available versions | All available versions |
| … | | |
| Europe (Spain) | – | – |
| Europe (Zurich) | – | – |
| … | | |
| South America (São Paulo) | All available versions | All available versions |
Export snapshots to S3 with RDS for PostgreSQL

| Region | RDS for PostgreSQL 15 | RDS for PostgreSQL 14 | RDS for PostgreSQL 13 | RDS for PostgreSQL 12 | RDS for PostgreSQL 11 | RDS for PostgreSQL 10 |
|---|---|---|---|---|---|---|
| US East (Ohio) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US East (N. Virginia) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (N. California) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| US West (Oregon) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Africa (Cape Town) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Hyderabad) | – | – | – | – | – | – |
| Asia Pacific (Jakarta) | – | – | – | – | – | – |
| Asia Pacific (Melbourne) | – | – | – | – | – | – |
| Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Osaka) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Seoul) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Singapore) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Sydney) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Canada (Central) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Beijing) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| China (Ningxia) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Frankfurt) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Ireland) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (London) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Milan) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Paris) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Spain) | – | – | – | – | – | – |
| Europe (Stockholm) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Europe (Zurich) | – | – | – | – | – | – |
| Israel (Tel Aviv) | – | – | – | – | – | – |
| Middle East (Bahrain) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| Middle East (UAE) | – | – | – | – | – | – |
| South America (São Paulo) | All available versions | All available versions | All available versions | All available versions | All available versions | All available versions |
| AWS GovCloud (US-East) | – | – | – | – | – | – |
| AWS GovCloud (US-West) | – | – | – | – | – | – |

IAM database authentication
Topics
• IAM database authentication with RDS for MariaDB (p. 138)
• IAM database authentication with RDS for MySQL (p. 140)
• IAM database authentication with RDS for PostgreSQL (p. 140)
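With IAM database authentication enabled on an instance, you connect using a short-lived authentication token instead of a password. A hedged sketch follows; the endpoint, port, and database user are placeholders, and `iam_user` is assumed to be a database user created for IAM authentication:

```shell
# Generate a temporary authentication token (valid for 15 minutes)
# and use it as the password when connecting. The endpoint and user
# below are placeholders.
TOKEN=$(aws rds generate-db-auth-token \
    --hostname mydb.123456789012.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --username iam_user \
    --region us-east-1)

# MySQL/MariaDB example; the connection must use SSL/TLS, and the
# client must send the token in cleartext over that encrypted channel.
mysql --host=mydb.123456789012.us-east-1.rds.amazonaws.com \
    --user=iam_user \
    --password="$TOKEN" \
    --enable-cleartext-plugin \
    --ssl-ca=global-bundle.pem
```

The same token-generation step applies to PostgreSQL; only the client invocation differs.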
IAM database authentication with RDS for MariaDB

| Region | RDS for MariaDB 10.11 | RDS for MariaDB 10.6 | RDS for MariaDB 10.5 | RDS for MariaDB 10.4 | RDS for MariaDB 10.3 |
|---|---|---|---|---|---|
| … | | | | | |
| Asia Pacific (Hyderabad) | – | – | – | – | – |
| Asia Pacific (Melbourne) | – | – | – | – | – |
| … | | | | | |
| Europe (Spain) | – | – | – | – | – |
| Europe (Zurich) | – | – | – | – | – |
| Middle East (UAE) | – | – | – | – | – |
Kerberos authentication
With Kerberos authentication in Amazon RDS, you can support external authentication of database users through Kerberos and Microsoft Active Directory. This provides the benefits of single sign-on and centralized authentication of database users. For more information, see Kerberos authentication (p. 2567).
Topics
• Kerberos authentication with RDS for MySQL (p. 141)
• Kerberos authentication with RDS for Oracle (p. 142)
• Kerberos authentication with RDS for PostgreSQL (p. 143)
• Kerberos authentication with RDS for SQL Server (p. 145)
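Kerberos authentication is enabled by associating the DB instance with an AWS Managed Microsoft AD directory when you create or modify it. A hedged AWS CLI sketch with placeholder identifiers (`d-1234567890` stands in for a real directory ID, and the named role is assumed to grant RDS access to the directory):

```shell
# Hedged sketch: enable Kerberos authentication on an existing
# instance by joining it to a Managed Microsoft AD directory.
# The instance identifier, directory ID, and role name are
# placeholders.
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --domain d-1234567890 \
    --domain-iam-role-name rds-directoryservice-access-role \
    --apply-immediately
```

The same `--domain` and `--domain-iam-role-name` options can be passed to `create-db-instance` to join the directory at creation time.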
Kerberos authentication with RDS for MySQL

| Region | RDS for MySQL 8.0 | RDS for MySQL 5.7 | RDS for MySQL 5.6 |
|---|---|---|---|
| … | | | |
| Europe (Milan) | – | – | – |
| Europe (Paris) | – | – | – |
| Europe (Spain) | – | – | – |
| Europe (Zurich) | – | – | – |
| South America (São Paulo) | All versions | All versions | All versions |
Kerberos authentication with RDS for Oracle

| Region | RDS for Oracle 21c | RDS for Oracle 19c | RDS for Oracle 12c |
|---|---|---|---|
| … | | | |
| China (Beijing) | – | – | – |
| China (Ningxia) | – | – | – |
| Europe (Milan) | – | – | – |
| Europe (Paris) | – | – | – |
| Europe (Spain) | – | – | – |
| Europe (Zurich) | – | – | – |
| South America (São Paulo) | All versions | All versions | All versions |
Kerberos authentication with RDS for PostgreSQL

| Region | RDS for PostgreSQL 15 | RDS for PostgreSQL 14 | RDS for PostgreSQL 13 | RDS for PostgreSQL 12 | RDS for PostgreSQL 11 | RDS for PostgreSQL 10 |
|---|---|---|---|---|---|---|
| US East (Ohio) | All versions | All versions | All versions | All versions | All versions | All versions |
| US East (N. Virginia) | All versions | All versions | All versions | All versions | All versions | All versions |
| US West (N. California) | All versions | All versions | All versions | All versions | All versions | All versions |
| US West (Oregon) | All versions | All versions | All versions | All versions | All versions | All versions |
| Africa (Cape Town) | – | – | – | – | – | – |
| Asia Pacific (Hong Kong) | – | – | – | – | – | – |
| Asia Pacific (Hyderabad) | – | – | – | – | – | – |
| Asia Pacific (Jakarta) | – | – | – | – | – | – |
| Asia Pacific (Melbourne) | – | – | – | – | – | – |
| Asia Pacific (Mumbai) | All versions | All versions | All versions | All versions | All versions | All versions |
| Asia Pacific (Osaka) | – | – | – | – | – | – |
| Asia Pacific (Seoul) | All versions | All versions | All versions | All versions | All versions | All versions |
| Asia Pacific (Singapore) | All versions | All versions | All versions | All versions | All versions | All versions |
| Asia Pacific (Sydney) | All versions | All versions | All versions | All versions | All versions | All versions |
| Asia Pacific (Tokyo) | All versions | All versions | All versions | All versions | All versions | All versions |
| Canada (Central) | All versions | All versions | All versions | All versions | All versions | All versions |
| China (Beijing) | All versions | All versions | All versions | All versions | All versions | All versions |
| China (Ningxia) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (Frankfurt) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (Ireland) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (London) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (Milan) | – | – | – | – | – | – |
| Europe (Paris) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (Spain) | – | – | – | – | – | – |
| Europe (Stockholm) | All versions | All versions | All versions | All versions | All versions | All versions |
| Europe (Zurich) | – | – | – | – | – | – |
| Israel (Tel Aviv) | – | – | – | – | – | – |
| Middle East (Bahrain) | – | – | – | – | – | – |
| Middle East (UAE) | – | – | – | – | – | – |
| South America (São Paulo) | All versions | All versions | All versions | All versions | All versions | All versions |
| AWS GovCloud (US-East) | – | – | – | – | – | – |
| AWS GovCloud (US-West) | – | – | – | – | – | – |
Region RDS for SQL Server 2019 RDS for SQL Server 2017 RDS for SQL Server 2016 RDS for SQL Server 2014
US East (Ohio) All versions All versions All versions All versions
US East (N. Virginia) All versions All versions All versions All versions
US West (N. California) All versions All versions All versions All versions
US West (Oregon) All versions All versions All versions All versions
Africa (Cape Town) All versions All versions All versions All versions
Asia Pacific (Hong Kong) All versions All versions All versions All versions
Asia Pacific (Hyderabad) All versions All versions All versions All versions
Asia Pacific (Jakarta) All versions All versions All versions All versions
Asia Pacific (Melbourne) All versions All versions All versions All versions
Asia Pacific (Mumbai) All versions All versions All versions All versions
Asia Pacific (Osaka) All versions All versions All versions All versions
Asia Pacific (Seoul) All versions All versions All versions All versions
Asia Pacific (Singapore) All versions All versions All versions All versions
Asia Pacific (Sydney) All versions All versions All versions All versions
Asia Pacific (Tokyo) All versions All versions All versions All versions
Canada (Central) All versions All versions All versions All versions
China (Beijing) All versions All versions All versions All versions
China (Ningxia) All versions All versions All versions All versions
Europe (Frankfurt) All versions All versions All versions All versions
Europe (Ireland) All versions All versions All versions All versions
Europe (London) All versions All versions All versions All versions
Europe (Milan) All versions All versions All versions All versions
Europe (Paris) All versions All versions All versions All versions
Europe (Spain) All versions All versions All versions All versions
Europe (Stockholm) All versions All versions All versions All versions
Europe (Zurich) All versions All versions All versions All versions
Middle East (Bahrain) All versions All versions All versions All versions
Middle East (UAE) All versions All versions All versions All versions
South America (São Paulo) All versions All versions All versions All versions
AWS GovCloud (US-East) All versions All versions All versions All versions
AWS GovCloud (US-West) All versions All versions All versions All versions
Multi-AZ DB clusters
A Multi-AZ DB cluster deployment is a high availability deployment mode of Amazon RDS with two readable standby DB instances. A Multi-AZ DB cluster has a writer DB instance and two reader DB instances in three separate Availability Zones in the same AWS Region. Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower write latency compared to Multi-AZ DB instance deployments. For more information, see Multi-AZ DB cluster deployments (p. 499).
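The architecture above (one writer plus two readers across three Availability Zones) is provisioned as a single cluster. As a sketch, a Multi-AZ DB cluster can be created with the AWS CLI `create-db-cluster` command; the identifier, engine version, and instance class below are illustrative assumptions, and Multi-AZ DB clusters require a cluster instance class plus Provisioned IOPS (io1) or gp3 storage:

```shell
# Create a Multi-AZ DB cluster (one writer, two readable standbys).
# Identifier, engine version, and sizes are placeholders.
aws rds create-db-cluster \
    --db-cluster-identifier my-multi-az-cluster \
    --engine mysql \
    --engine-version 8.0.32 \
    --db-cluster-instance-class db.m6gd.large \
    --master-username admin \
    --manage-master-user-password \
    --storage-type io1 \
    --iops 1000 \
    --allocated-storage 100
```

RDS places the writer and both readers in separate Availability Zones automatically; you don't assign the zones yourself.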
Topics
• Multi-AZ DB clusters with RDS for MySQL (p. 147)
• Multi-AZ DB clusters with RDS for PostgreSQL (p. 148)
Europe (Spain) –
Europe (Zurich) –
You can also list the available versions in a Region for the db.r5d.large DB instance class by running the
following AWS CLI command.
You can change the DB instance class to show the available engine versions for it.
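The command itself did not survive extraction here. Based on the AWS CLI reference for `describe-orderable-db-instance-options`, a plausible reconstruction is the following (the engine, Region, and `SupportsClusters` query filter are assumptions):

```shell
# Linux, macOS, or Unix: list RDS for MySQL engine versions that
# support Multi-AZ DB clusters for the db.r5d.large instance class.
# us-east-1 is an example Region.
aws rds describe-orderable-db-instance-options \
    --engine mysql \
    --db-instance-class db.r5d.large \
    --region us-east-1 \
    --query 'OrderableDBInstanceOptions[?SupportsClusters==`true`].[EngineVersion]' \
    --output text
```

For Windows, replace the backslash (\) line continuations with carets (^).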
Region RDS for PostgreSQL 15 RDS for PostgreSQL 14 RDS for PostgreSQL 13
US East (Ohio) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
US East (N. Virginia) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
US West (Oregon) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Africa (Cape Town) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Hong Kong) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Jakarta) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Mumbai) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Osaka) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Seoul) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Singapore) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Sydney) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Asia Pacific (Tokyo) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Canada (Central) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
China (Beijing) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
China (Ningxia) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Frankfurt) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Ireland) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (London) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Milan) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Paris) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Spain) – – –
Europe (Stockholm) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
Europe (Zurich) – – –
Middle East (Bahrain) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
South America (São Paulo) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version 13.7 and higher
You can also list the available versions in a Region for the db.r5d.large DB instance class by running the
following AWS CLI command.
You can change the DB instance class to show the available engine versions for it.
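The command referenced above is likewise missing from this extraction; based on the AWS CLI `describe-orderable-db-instance-options` reference, for RDS for PostgreSQL it would presumably take this form (Region and query filter are assumptions):

```shell
# List RDS for PostgreSQL engine versions that support Multi-AZ DB
# clusters for the db.r5d.large instance class in an example Region.
aws rds describe-orderable-db-instance-options \
    --engine postgres \
    --db-instance-class db.r5d.large \
    --region us-east-1 \
    --query 'OrderableDBInstanceOptions[?SupportsClusters==`true`].[EngineVersion]' \
    --output text
```

On Windows, replace the backslash (\) line continuations with carets (^).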
Performance Insights
Performance Insights in Amazon RDS expands on existing Amazon RDS monitoring features to illustrate
and help you analyze your database performance. With the Performance Insights dashboard, you can
visualize the database load on your Amazon RDS DB instance. You can also filter the load by waits, SQL
statements, hosts, or users. For more information, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720).
Performance Insights is available for all RDS DB engines and all versions.
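Performance Insights data can also be retrieved programmatically. A sketch using the AWS CLI `pi get-resource-metrics` command follows; the resource identifier and time range are placeholders, and grouping by wait event mirrors the dashboard's "filter the load by waits" view:

```shell
# Average database load over one hour, grouped by wait event.
# db-EXAMPLERESOURCEID stands in for the instance's DbiResourceId.
aws pi get-resource-metrics \
    --service-type RDS \
    --identifier db-EXAMPLERESOURCEID \
    --metric-queries '[{"Metric":"db.load.avg","GroupBy":{"Group":"db.wait_event"}}]' \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z \
    --period-in-seconds 60
```

The same call can group by `db.sql`, `db.host`, or `db.user` to match the other dashboard filters.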
For the region, DB engine, and instance class support information for Performance Insights features, see
Amazon RDS DB engine, Region, and instance class support for Performance Insights features (p. 725).
RDS Custom
Amazon RDS Custom automates database administration tasks and operations while letting you, as a database administrator, access and customize your database environment and operating system. With RDS Custom, you can make the customizations that legacy, custom, and packaged applications require. For more information, see Working with Amazon RDS Custom (p. 978).
Topics
• RDS Custom for Oracle (p. 151)
• RDS Custom for SQL Server (p. 153)
Region | RDS for Oracle 19c | RDS for Oracle 18c | RDS for Oracle 12c
US East (Ohio) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
US East (N. Virginia) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
US West (N. California) | – | – | –
US West (Oregon) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Melbourne) | – | – | –
Asia Pacific (Mumbai) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Osaka) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Seoul) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Singapore) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Sydney) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Asia Pacific (Tokyo) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Canada (Central) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
China (Beijing) | – | – | –
China (Ningxia) | – | – | –
Europe (Frankfurt) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Europe (Ireland) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Europe (London) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
Europe (Milan) | – | – | –
Europe (Paris) | – | – | –
Europe (Stockholm) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
South America (São Paulo) | 19c with the January 2021 or higher RU/RUR | 18c with the January 2021 or higher RU/RUR | 12.1 and 12.2 with the January 2021 or higher RU/RUR
• If you use an RPEV, it includes the default Amazon Machine Image (AMI) and SQL Server installation. If
you customize or modify the operating system (OS), your changes might not persist during patching,
snapshot restore, or automatic recovery.
• If you use a CEV, you choose your own AMI with either pre-installed Microsoft SQL Server or SQL Server that you install using your own media. When you use an AWS provided CEV, you choose the latest Amazon EC2 image (AMI) provided by AWS, which has the cumulative update (CU) supported by RDS Custom for SQL Server. With a CEV, you can customize both the OS and the SQL Server configuration to meet your enterprise needs.
The following AWS Regions and DB engine versions are available for RDS Custom for SQL Server. The
engine version support depends on whether you're using RDS Custom for SQL Server with an RPEV, AWS
provided CEV, or customer-provided CEV.
US East (N. Virginia) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Asia Pacific (Mumbai) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Asia Pacific (Seoul) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Asia Pacific (Singapore) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Asia Pacific (Sydney) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Asia Pacific (Tokyo) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
China (Beijing) | – | – | –
China (Ningxia) | – | – | –
Europe (Milan) | – | – | –
Europe (Paris) | – | – | –
Europe (Spain) | – | – | –
Europe (Zurich) | – | – | –
South America (São Paulo) | Enterprise, Standard, or Web SQL Server 2019 with CU8, CU17, CU18, CU20 | Enterprise, Standard, or Web SQL Server 2019 with CU17, CU18, CU20 | Enterprise or Standard SQL Server 2019 with CU17, CU18, CU20
Amazon RDS Proxy
Topics
• RDS Proxy with RDS for MariaDB (p. 155)
• RDS Proxy with RDS for MySQL (p. 157)
• RDS Proxy with RDS for PostgreSQL (p. 158)
• RDS Proxy with RDS for SQL Server (p. 160)
Region RDS for MariaDB 10.11 RDS for MariaDB 10.6 RDS for MariaDB 10.5 RDS for MariaDB 10.4 RDS for MariaDB 10.3
US East (Ohio) All available versions All available versions All available versions All available versions All available versions
US East (N. Virginia) All available versions All available versions All available versions All available versions All available versions
US West (N. California) All available versions All available versions All available versions All available versions All available versions
US West (Oregon) All available versions All available versions All available versions All available versions All available versions
Africa (Cape Town) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Hong Kong) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Hyderabad) – – – – –
Asia Pacific (Jakarta) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Melbourne) – – – – –
Asia Pacific (Mumbai) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Osaka) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Seoul) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Singapore) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Sydney) All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Tokyo) All available versions All available versions All available versions All available versions All available versions
Canada (Central) All available versions All available versions All available versions All available versions All available versions
China (Beijing) All available versions All available versions All available versions All available versions All available versions
China (Ningxia) All available versions All available versions All available versions All available versions All available versions
Europe (Frankfurt) All available versions All available versions All available versions All available versions All available versions
Europe (Ireland) All available versions All available versions All available versions All available versions All available versions
Europe (London) All available versions All available versions All available versions All available versions All available versions
Europe (Milan) All available versions All available versions All available versions All available versions All available versions
Europe (Paris) All available versions All available versions All available versions All available versions All available versions
Europe (Spain) – – – – –
Europe (Stockholm) All available versions All available versions All available versions All available versions All available versions
Middle East (Bahrain) All available versions All available versions All available versions All available versions All available versions
Middle East (UAE) – – – – –
South America (São Paulo) All available versions All available versions All available versions All available versions All available versions
AWS GovCloud (US-East) – – – – –
AWS GovCloud (US-West) – – – – –
Asia Pacific (Hong Kong) All available versions All available versions
Europe (Spain) – –
Europe (Zurich) – –
South America (São Paulo) All available versions All available versions
Region RDS for PostgreSQL 15 RDS for PostgreSQL 14 RDS for PostgreSQL 13 RDS for PostgreSQL 12 RDS for PostgreSQL 11 RDS for PostgreSQL 10
US East (Ohio) All available versions All available versions All available versions All available versions All available versions All available versions
US East (N. Virginia) All available versions All available versions All available versions All available versions All available versions All available versions
US West (N. California) All available versions All available versions All available versions All available versions All available versions All available versions
US West (Oregon) All available versions All available versions All available versions All available versions All available versions All available versions
Africa (Cape Town) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Hong Kong) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Hyderabad) – – – – – –
Asia Pacific (Jakarta) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Melbourne) – – – – – –
Asia Pacific (Mumbai) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Osaka) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Seoul) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Singapore) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Sydney) All available versions All available versions All available versions All available versions All available versions All available versions
Asia Pacific (Tokyo) All available versions All available versions All available versions All available versions All available versions All available versions
Canada (Central) All available versions All available versions All available versions All available versions All available versions All available versions
China (Beijing) All available versions All available versions All available versions All available versions All available versions All available versions
China (Ningxia) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Frankfurt) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Ireland) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (London) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Milan) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Paris) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Spain) – – – – – –
Europe (Stockholm) All available versions All available versions All available versions All available versions All available versions All available versions
Europe (Zurich) – – – – – –
Israel (Tel Aviv) – – – – – –
Middle East (Bahrain) All available versions All available versions All available versions All available versions All available versions All available versions
Middle East (UAE) – – – – – –
South America (São Paulo) All available versions All available versions All available versions All available versions All available versions All available versions
AWS GovCloud (US-East) – – – – – –
AWS GovCloud (US-West) – – – – – –
Region RDS for SQL Server 2019 RDS for SQL Server 2017 RDS for SQL Server 2016 RDS for SQL Server 2014
US East (Ohio) All available versions All available versions All available versions All available versions
US East (N. Virginia) All available versions All available versions All available versions All available versions
US West (N. California) All available versions All available versions All available versions All available versions
US West (Oregon) All available versions All available versions All available versions All available versions
Africa (Cape Town) All available versions All available versions All available versions All available versions
Asia Pacific (Hong Kong) All available versions All available versions All available versions All available versions
Asia Pacific (Hyderabad) – – – –
Asia Pacific (Jakarta) All available versions All available versions All available versions All available versions
Asia Pacific (Melbourne) – – – –
Asia Pacific (Mumbai) All available versions All available versions All available versions All available versions
Asia Pacific (Osaka) All available versions All available versions All available versions All available versions
Asia Pacific (Seoul) All available versions All available versions All available versions All available versions
Asia Pacific (Singapore) All available versions All available versions All available versions All available versions
Asia Pacific (Sydney) All available versions All available versions All available versions All available versions
Asia Pacific (Tokyo) All available versions All available versions All available versions All available versions
Canada (Central) All available versions All available versions All available versions All available versions
China (Beijing) All available versions All available versions All available versions All available versions
China (Ningxia) All available versions All available versions All available versions All available versions
Europe (Frankfurt) All available versions All available versions All available versions All available versions
Europe (Ireland) All available versions All available versions All available versions All available versions
Europe (London) All available versions All available versions All available versions All available versions
Europe (Milan) All available versions All available versions All available versions All available versions
Europe (Paris) All available versions All available versions All available versions All available versions
Europe (Spain) – – – –
Europe (Stockholm) All available versions All available versions All available versions All available versions
Europe (Zurich) – – – –
Middle East (Bahrain) All available versions All available versions All available versions All available versions
South America (São Paulo) All available versions All available versions All available versions All available versions
Secrets Manager integration
You can specify that Amazon RDS manages the master user password in Secrets Manager for an Amazon
RDS DB instance or Multi-AZ DB cluster. RDS generates the password, stores it in Secrets Manager,
and rotates it regularly. For more information, see Password management with Amazon RDS and AWS
Secrets Manager (p. 2568).
Secrets Manager integration is supported for all RDS DB engines and all versions.
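For example, RDS management of the master user password can be requested when you create an instance. A sketch with the AWS CLI follows; the instance identifier, engine, class, and storage values are placeholders, and `--manage-master-user-password` is the flag that turns the feature on:

```shell
# Create a DB instance whose master password is generated, stored,
# and rotated by RDS in AWS Secrets Manager (no --master-user-password).
aws rds create-db-instance \
    --db-instance-identifier my-instance \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --manage-master-user-password
```

Because RDS generates the password, you don't supply or rotate one yourself.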
Secrets Manager integration is supported in all AWS Regions except the following:
Engine-native features
Amazon RDS database engines also support many of the most common engine-native features and functionality. These features are different from the Amazon RDS-native features listed on this page. Some engine-native features might have limited support or restricted privileges.
DB instance billing for Amazon RDS
Amazon RDS instances are billed based on the following components:
• DB instance hours (per hour) – Based on the DB instance class of the DB instance (for example,
db.t2.small or db.m4.large). Pricing is listed on a per-hour basis, but bills are calculated down to the
second and show times in decimal form. RDS usage is billed in 1-second increments, with a minimum
of 10 minutes. For more information, see DB instance classes (p. 11).
• Storage (per GiB per month) – Storage capacity that you have provisioned to your DB instance. If you
scale your provisioned storage capacity within the month, your bill is prorated. For more information,
see Amazon RDS DB instance storage (p. 101).
• Input/output (I/O) requests (per 1 million requests) – Total number of storage I/O requests that you
have made in a billing cycle, for Amazon RDS magnetic storage only.
• Provisioned IOPS (per IOPS per month) – Provisioned IOPS rate, regardless of IOPS consumed, for Amazon RDS Provisioned IOPS (SSD) and General Purpose (SSD) gp3 storage. Provisioned storage for EBS volumes is billed in 1-second increments, with a minimum of 10 minutes.
• Backup storage (per GiB per month) – Backup storage is the storage that is associated with automated
database backups and any active database snapshots that you have taken. Increasing your backup
retention period or taking additional database snapshots increases the backup storage consumed by
your database. Per second billing doesn't apply to backup storage (metered in GB-month).
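The per-second billing rule above (1-second increments with a 10-minute minimum) can be sketched as a small calculation. This is an illustrative helper, not AWS billing code, and the hourly rate used in the example is made up:

```python
# Sketch of RDS per-second instance-hours billing.
MIN_BILLABLE_SECONDS = 600  # the 10-minute minimum


def billable_seconds(run_seconds: int) -> int:
    """Seconds actually billed for one continuous run of an instance."""
    return max(run_seconds, MIN_BILLABLE_SECONDS)


def instance_cost(run_seconds: int, hourly_rate: float) -> float:
    """Bill in 1-second increments against a per-hour list price."""
    return billable_seconds(run_seconds) * hourly_rate / 3600


# A 4-minute run is billed as 10 minutes; a 90-minute run is billed as-is.
print(round(instance_cost(240, 0.072), 4))   # 0.012 (10 min at $0.072/hr)
print(round(instance_cost(5400, 0.072), 4))  # 0.108 (90 min at $0.072/hr)
```

The same minimum applies again after a billable configuration change, such as scaling compute or storage.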
Amazon RDS provides the following purchasing options to enable you to optimize your costs based on
your needs:
• On-Demand instances – Pay by the hour for the DB instance hours that you use. Pricing is listed on a per-hour basis, but bills are calculated down to the second and show times in decimal form. RDS usage is billed in 1-second increments, with a minimum of 10 minutes.
• Reserved instances – Reserve a DB instance for a one-year or three-year term and get a significant
discount compared to the on-demand DB instance pricing. With Reserved Instance usage, you can
launch, delete, start, or stop multiple instances within an hour and get the Reserved Instance benefit
for all of the instances.
For Amazon RDS pricing information, see the Amazon RDS pricing page.
Topics
• On-Demand DB instances for Amazon RDS (p. 164)
• Reserved DB instances for Amazon RDS (p. 165)
On-Demand DB instances for Amazon RDS
Billing starts for a DB instance as soon as the DB instance is available. Pricing is listed on a per-hour basis, but bills are calculated down to the second and show times in decimal form. Amazon RDS usage is billed in 1-second increments, with a minimum of 10 minutes. After a billable configuration change, such as scaling compute or storage capacity, you're charged a 10-minute minimum. Billing continues until the DB instance terminates, which occurs when you delete the DB instance or when the DB instance fails.
If you no longer want to be charged for your DB instance, you must stop or delete it to avoid being billed
for additional DB instance hours. For more information about the DB instance states for which you are
billed, see Viewing Amazon RDS DB instance status (p. 684).
Stopped DB instances
While your DB instance is stopped, you're charged for provisioned storage, including Provisioned IOPS.
You are also charged for backup storage, including storage for manual snapshots and automated
backups within your specified retention window. You aren't charged for DB instance hours.
Multi-AZ DB instances
If you specify that your DB instance should be a Multi-AZ deployment, you're billed according to the
Multi-AZ pricing posted on the Amazon RDS pricing page.
Reserved DB instances for Amazon RDS
The general process for working with reserved DB instances is as follows: first, get information about available reserved DB instance offerings; next, purchase a reserved DB instance offering; and finally, get information about your existing reserved DB instances.
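In the AWS CLI, those three steps map to the following commands; the instance class, term, and offering ID below are placeholders:

```shell
# 1. Find available reserved DB instance offerings for a class and term.
aws rds describe-reserved-db-instances-offerings \
    --db-instance-class db.r5.large \
    --duration 31536000

# 2. Purchase one, using an offering ID returned by the previous call.
aws rds purchase-reserved-db-instances-offering \
    --reserved-db-instances-offering-id <offering-id>

# 3. Inspect your existing reserved DB instances.
aws rds describe-reserved-db-instances
```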
The new DB instance that you create must have the same specifications as the reserved DB instance for
the following:
• AWS Region
• DB engine
• DB instance type
• Edition (for RDS for Oracle and RDS for SQL Server)
• License type (license-included or bring-your-own-license)
• Deployment model (Single-AZ or Multi-AZ)
If the specifications of the new DB instance match an existing reserved DB instance for your account, you
are billed at the discounted rate offered for the reserved DB instance. Otherwise, the DB instance is billed
at an on-demand rate.
You can modify a DB instance that you're using as a reserved DB instance. If the modification is within
the specifications of the reserved DB instance, part or all of the discount still applies to the modified
DB instance. If the modification is outside the specifications, such as changing the instance class, the
discount no longer applies. For more information, see Size-flexible reserved DB instances (p. 166).
Topics
• Offering types (p. 165)
• Size-flexible reserved DB instances (p. 166)
• Reserved DB instance billing example (p. 168)
• Reserved DB instances for a Multi-AZ DB cluster (p. 168)
• Deleting a reserved DB instance (p. 169)
For more information about reserved DB instances, including pricing, see Amazon RDS reserved
instances.
Offering types
Reserved DB instances are available in three offering types (No Upfront, Partial Upfront, and All Upfront) that let you optimize your Amazon RDS costs based on your expected usage.
No Upfront
This option provides access to a reserved DB instance without requiring an upfront payment. Your No Upfront reserved DB instance is billed at a discounted hourly rate for every hour within the term, regardless of usage. This option is only available as a one-year reservation.
Partial Upfront
This option requires part of the cost of the reserved DB instance to be paid upfront. The remaining
hours in the term are billed at a discounted hourly rate, regardless of usage. This option replaces
the previous Heavy Utilization option.
All Upfront
Full payment is made at the start of the term, with no other costs incurred for the remainder of the
term regardless of the number of hours used.
If you are using consolidated billing, all the accounts in the organization are treated as one account. This
means that all accounts in the organization can receive the hourly cost benefit of reserved DB instances
that are purchased by any other account. For more information about consolidated billing, see Amazon
RDS reserved DB instances in the AWS Billing and Cost Management User Guide.
If you have a DB instance, and you need to scale it to larger capacity, your reserved DB instance is
automatically applied to your scaled DB instance. That is, your reserved DB instances are automatically
applied across all DB instance class sizes. Size-flexible reserved DB instances are available for DB
instances with the same AWS Region and database engine. Size-flexible reserved DB instances can only
scale in their instance class type. For example, a reserved DB instance for a db.r5.large can apply to a
db.r5.xlarge, but not to a db.r6g.large, because db.r5 and db.r6g are different instance class types.
Reserved DB instance benefits also apply for both Multi-AZ and Single-AZ configurations. Flexibility
means that you can move freely between configurations within the same DB instance class type. For
example, you can move from a Single-AZ deployment running on one large DB instance (four normalized
units per hour) to a Multi-AZ deployment running on two small DB instances (2*2 = 4 normalized units
per hour).
Size-flexible reserved DB instances are available for the following Amazon RDS database engines:
• MariaDB
• MySQL
• Oracle, Bring Your Own License
• PostgreSQL
For details about using size-flexible reserved instances with Aurora, see Reserved DB instances for
Aurora.
You can compare usage for different reserved DB instance sizes by using normalized units per hour. For
example, one hour of usage on two db.r3.large DB instances is equivalent to eight normalized units per
hour, the same as eight hours of usage on one db.r3.small DB instance. The following table shows the
number of normalized units per hour for each DB instance size.
Instance size   Single-AZ   Multi-AZ   Multi-AZ DB cluster
small           1           2          3
medium          2           4          6
large           4           8          12
xlarge          8           16         24
2xlarge         16          32         48
4xlarge         32          64         96
6xlarge         48          96         144
For example, suppose that you purchase a db.t2.medium reserved DB instance, and you have two
running db.t2.small DB instances in your account in the same AWS Region. In this case, the billing
benefit is applied in full to both instances.
Alternatively, if you have one db.t2.large instance running in your account in the same AWS Region,
the billing benefit is applied to 50 percent of the usage of the DB instance.
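The coverage arithmetic in these examples follows directly from the normalized units. A sketch, using the unit counts from the table above for the db.t2 example (nothing here calls AWS):

```shell
# Normalized units (Single-AZ) from the table above: small = 1, medium = 2, large = 4
reserved_units=2   # one db.t2.medium reserved DB instance
running_units=4    # one running db.t2.large DB instance

# Percentage of the running instance's usage billed at the reserved rate
coverage=$(( 100 * reserved_units / running_units ))
echo "${coverage}% of usage is billed at the reserved rate"
```

With a db.t2.medium reservation and a running db.t2.large, this prints 50 percent, matching the example.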
The following billing example uses these sample resources and prices:
• An RDS for MySQL reserved Single-AZ db.r5.large DB instance class in US East (N. Virginia) with the No
Upfront option at a cost of $0.12 per hour for the instance, or $90 per month
• 400 GiB of General Purpose SSD (gp2) storage at a cost of $0.115 per GiB per month, or $45.60 per
month
• 600 GiB of backup storage at $0.095 per GiB per month, or $19 per month (400 GiB free)
For the reserved DB instance, add all of these charges ($90 + $45.60 + $19) for a total cost of $154.60
per month.
If you choose to use an on-demand DB instance instead of a reserved DB instance, an RDS for MySQL
Single-AZ db.r5.large DB instance class in US East (N. Virginia) costs $0.1386 per hour, or $101.18 per
month. So, for an on-demand DB instance, add all of these charges ($101.18 + $45.60 + $19) for a total
cost of $165.78 per month. You save a little over $11 per month by using the reserved DB instance.
Note
The prices in this example are sample prices and might not match actual prices. For Amazon RDS
pricing information, see Amazon RDS pricing.
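The monthly totals in this example can be checked with a quick calculation (sample prices from the example above, not current AWS rates):

```shell
awk 'BEGIN {
  instance_reserved = 90.00    # No Upfront reserved db.r5.large, per month
  instance_ondemand = 101.18   # on-demand db.r5.large, per month
  storage = 45.60              # 400 GiB of gp2 storage
  backup  = 19.00              # 200 GiB of billable backup storage

  reserved = instance_reserved + storage + backup
  ondemand = instance_ondemand + storage + backup
  printf "reserved: $%.2f  on-demand: $%.2f  monthly savings: $%.2f\n",
         reserved, ondemand, ondemand - reserved
}'
```

This prints totals of $154.60 and $165.78, for a monthly savings of $11.18.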
To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster, you can do either of the
following:
• Reserve three Single-AZ DB instances that are the same size as the instances in the cluster.
• Reserve one Multi-AZ DB instance and one Single-AZ DB instance that are the same size as the DB
instances in the cluster.
For example, suppose that you have one cluster consisting of three db.m6gd.large DB instances.
In this case, you can either purchase three db.m6gd.large Single-AZ reserved DB instances, or one
db.m6gd.large Multi-AZ reserved DB instance and one db.m6gd.large Single-AZ reserved DB instance.
Either of these options provides the maximum reserved instance discount for the Multi-AZ DB cluster.
Alternatively, you can use size-flexible DB instances and purchase a larger DB instance to cover smaller DB
instances in one or more clusters. For example, if you have two clusters with six total db.m6gd.large DB
instances, you can purchase three db.m6gd.xlarge Single-AZ reserved DB instances. Doing so reserves all six
DB instances in the two clusters. For more information, see Size-flexible reserved DB instances (p. 166).
You might reserve DB instances that are the same size as the DB instances in the cluster, but reserve
fewer DB instances than the total number of DB instances in the cluster. However, if you do so,
the cluster is only partially reserved. For example, suppose that you have one cluster with three
db.m6gd.large DB instances, and you purchase one db.m6gd.large Multi-AZ reserved DB instance.
In this case, the cluster is only partially reserved, because only two of the three instances in the
cluster are covered by reserved DB instances. The remaining DB instance is charged at the on-demand
db.m6gd.large hourly rate.
For more information about Multi-AZ DB clusters, see Multi-AZ DB cluster deployments (p. 499).
You're billed for the upfront costs regardless of whether you use the resources.
If you delete a DB instance that is covered by a reserved DB instance discount, you can launch another DB
instance with compatible specifications. In this case, you continue to get the discounted rate during the
reservation term (one or three years).
Console
You can use the AWS Management Console to work with reserved DB instances as shown in the following
procedures.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Reserved instances.
3. Choose Purchase Reserved DB Instance.
4. For Product description, choose the DB engine and licensing type.
5. For DB instance class, choose the DB instance class.
6. For Deployment Option, choose whether you want a Single-AZ or Multi-AZ DB instance
deployment.
Note
To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster deployment,
either purchase three Single-AZ reserved DB instances, or one Multi-AZ and one Single-AZ
reserved DB instance. For more information, see Reserved DB instances for a Multi-AZ DB
cluster (p. 168).
7. For Term, choose the length of time to reserve the DB instance.
8. For Offering type, choose the offering type.
After you select the offering type, you can see the pricing information.
Important
Choose Cancel to avoid purchasing the reserved DB instance and incurring any charges.
After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering as shown in the following procedure.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Reserved instances.
3. Choose Purchase reserved DB instance.
4. For Product description, choose the DB engine and licensing type.
5. For DB instance class, choose the DB instance class.
6. For Multi-AZ deployment, choose whether you want a Single-AZ or Multi-AZ DB instance
deployment.
Note
To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster deployment,
either purchase three Single-AZ reserved DB instances, or one Multi-AZ and one Single-AZ
reserved DB instance. For more information, see Reserved DB instances for a Multi-AZ DB
cluster (p. 168).
7. For Term, choose the length of time you want the DB instance reserved.
8. For Offering type, choose the offering type.
After you choose the offering type, you can see the pricing information.
9. (Optional) You can assign your own identifier to the reserved DB instances that you purchase to help
you track them. For Reserved Id, type an identifier for your reserved DB instance.
10. Choose Submit.
Your reserved DB instance is purchased, then displayed in the Reserved instances list.
After you have purchased reserved DB instances, you can get information about your reserved DB
instances as shown in the following procedure.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Reserved instances.
The reserved DB instances for your account appear. To see detailed information about a particular
reserved DB instance, choose that instance in the list. You can then see detailed information about
that instance in the detail pane at the bottom of the console.
AWS CLI
You can use the AWS CLI to work with reserved DB instances as shown in the following examples.
To get information about available reserved DB instance offerings, call the AWS CLI command
describe-reserved-db-instances-offerings.
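As a sketch, the call can be narrowed with standard filter options such as --product-description; the filter values below are illustrative:

```shell
# List available reserved DB instance offerings for MySQL, Single-AZ only
aws rds describe-reserved-db-instances-offerings \
    --product-description mysql \
    --no-multi-az
```

Running the command without filters returns all offerings available in the current AWS Region.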
After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering.
The following example purchases the reserved DB instance offering with ID 649fd0c8-cf6d-47a0-
bfa6-060f8e75e95f, and assigns the identifier of MyReservation.
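For Linux, macOS, or Unix, the purchase command takes roughly this form, with the offering ID and identifier from the example above:

```shell
aws rds purchase-reserved-db-instances-offering \
    --reserved-db-instances-offering-id 649fd0c8-cf6d-47a0-bfa6-060f8e75e95f \
    --reserved-db-instance-id MyReservation
```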
For Windows:
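On Windows, the same command uses caret line continuations:

```shell
aws rds purchase-reserved-db-instances-offering ^
    --reserved-db-instances-offering-id 649fd0c8-cf6d-47a0-bfa6-060f8e75e95f ^
    --reserved-db-instance-id MyReservation
```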
After you have purchased reserved DB instances, you can get information about your reserved DB
instances.
To get information about reserved DB instances for your AWS account, call the AWS CLI command
describe-reserved-db-instances, as shown in the following example.
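A minimal form of the call; the optional identifier filter shown uses the sample reservation name from the purchase example:

```shell
# List all reserved DB instances for the account, or a single reservation by ID
aws rds describe-reserved-db-instances \
    --reserved-db-instance-id MyReservation
```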
RDS API
You can use the RDS API to work with reserved DB instances:
• To get information about available reserved DB instance offerings, call the Amazon RDS API operation
DescribeReservedDBInstancesOfferings.
• After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering. Call the PurchaseReservedDBInstancesOffering RDS API
operation with the following parameters:
• ReservedDBInstancesOfferingId – The ID of the offering that you want to purchase.
• ReservedDBInstanceId – An identifier that you can assign to the reserved DB instances
that you purchase to help track them.
• After you have purchased reserved DB instances, you can get information about your reserved DB
instances. Call the DescribeReservedDBInstances RDS API operation.
Your reserved DB instances and their hourly charges for the current month are shown under Amazon
Relational Database Service for Database Engine Reserved Instances.
The reserved DB instance in this example was purchased All Upfront, so there are no hourly charges.
6. Choose the Cost Explorer (bar graph) icon next to the Reserved Instances heading.
The Cost Explorer displays the Monthly EC2 running hours costs and usage graph.
7. Clear the Usage Type Group filter to the right of the graph.
8. Choose the time period and time unit for which you want to examine usage costs.
The following example shows usage costs for on-demand and reserved DB instances for the year to
date by month.
The reserved DB instance costs from January through June 2021 are monthly charges for a Partial
Upfront instance, while the cost in August 2021 is a one-time charge for an All Upfront instance.
The reserved instance discount for the Partial Upfront instance expired in June 2021, but the DB
instance wasn't deleted. After the expiration date, it was simply charged at the on-demand rate.
Sign up for an AWS account
Topics
• Sign up for an AWS account (p. 174)
• Create an administrative user (p. 174)
• Grant programmatic access (p. 175)
• Determine requirements (p. 176)
• Provide access to your DB instance in your VPC by creating a security group (p. 177)
If you already have an AWS account, know your Amazon RDS requirements, and prefer to use the
defaults for IAM and VPC security groups, skip ahead to Getting started with Amazon RDS (p. 180).
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access
to all AWS services and resources in the account. As a security best practice, assign administrative
access to an administrative user, and use only the root user to perform tasks that require root user
access.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.
1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering
your AWS account email address. On the next page, enter your password.
For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide.
For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM
User Guide.
• For your daily administrative tasks, grant administrative access to an administrative user in AWS IAM
Identity Center (successor to AWS Single Sign-On).
For instructions, see Getting started in the AWS IAM Identity Center (successor to AWS Single Sign-On)
User Guide.
• To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email
address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the
AWS Sign-In User Guide.
Determine requirements
The basic building block of Amazon RDS is the DB instance. In a DB instance, you create your databases.
A DB instance provides a network address called an endpoint. Your applications use this endpoint to
connect to your DB instance. When you create a DB instance, you specify details like storage, memory,
database engine and version, network configuration, security, and maintenance periods. You control
network access to a DB instance through a security group.
Before you create a DB instance and a security group, you must know your DB instance and network
needs. Here are some important things to consider:
• Resource requirements – What are the memory and processor requirements for your application or
service? You use these settings to help you determine what DB instance class to use. For specifications
about DB instance classes, see DB instance classes (p. 11).
• VPC, subnet, and security group – Your DB instance will most likely be in a virtual private cloud
(VPC). To connect to your DB instance, you need to set up security group rules. These rules are set up
differently depending on what kind of VPC you use and how you use it. For example, you can use a
default VPC or a user-defined VPC.
The following list describes the rules for each VPC option:
• Default VPC – If your AWS account has a default VPC in the current AWS Region, that VPC is
configured to support DB instances. If you specify the default VPC when you create the DB instance,
do the following:
• Make sure to create a VPC security group that authorizes connections from the application or
service to the Amazon RDS DB instance. Use the Security Group option on the VPC console or
the AWS CLI to create VPC security groups. For information, see Step 3: Create a VPC security
group (p. 2700).
• Specify the default DB subnet group. If this is the first DB instance you have created in this AWS
Region, Amazon RDS creates the default DB subnet group when it creates the DB instance.
• User-defined VPC – If you want to specify a user-defined VPC when you create a DB instance, be
aware of the following:
• Make sure to create a VPC security group that authorizes connections from the application or
service to the Amazon RDS DB instance. Use the Security Group option on the VPC console or
the AWS CLI to create VPC security groups. For information, see Step 3: Create a VPC security
group (p. 2700).
• The VPC must meet certain requirements in order to host DB instances, such as having at least two
subnets, each in a separate Availability Zone. For information, see Amazon VPC VPCs and Amazon
RDS (p. 2688).
• Make sure to specify a DB subnet group that defines which subnets in that VPC can be used by the
DB instance. For information, see the DB subnet group section in Working with a DB instance in a
VPC (p. 2689).
• High availability – Do you need failover support? On Amazon RDS, a Multi-AZ deployment creates
a primary DB instance and a secondary standby DB instance in another Availability Zone for failover
support. We recommend Multi-AZ deployments for production workloads to maintain high availability.
For development and test purposes, you can use a deployment that isn't Multi-AZ. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492).
• IAM policies – Does your AWS account have policies that grant the permissions needed to perform
Amazon RDS operations? If you are connecting to AWS using IAM credentials, your IAM account must
have IAM policies that grant the permissions required to perform Amazon RDS operations. For more
information, see Identity and access management for Amazon RDS (p. 2606).
• Open ports – What TCP/IP port does your database listen on? The firewalls at some companies might
block connections to the default port for your database engine. If your company firewall blocks the
default port, choose another port for the new DB instance. You can change the port later by modifying
the DB instance.
• AWS Region – What AWS Region do you want your database in? Having your database in close
proximity to your application or web service can reduce network latency. For more information, see
Regions, Availability Zones, and Local Zones (p. 110).
• DB disk subsystem – What are your storage requirements? Amazon RDS provides three storage types:
• General Purpose (SSD)
• Provisioned IOPS (PIOPS)
• Magnetic (also known as standard storage)
For more information on Amazon RDS storage, see Amazon RDS DB instance storage (p. 101).
When you have the information you need to create the security group and the DB instance, continue to
the next step.
Before you can connect to your DB instance, you must add rules to a security group that enable you
to connect. Use your network and configuration information to create rules to allow access to your DB
instance.
For example, suppose that you have an application that accesses a database on your DB instance in a
VPC. In this case, you must add a custom TCP rule that specifies the port range and IP addresses that
your application uses to access the database. If you have an application on an Amazon EC2 instance, you
can use the security group that you set up for the Amazon EC2 instance.
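Such a rule can also be added with the AWS CLI. In this sketch, the security group ID and CIDR range are placeholders; adjust the port for your engine (3306 for MySQL/MariaDB, 5432 for PostgreSQL, 1433 for SQL Server, 1521 for Oracle):

```shell
# Allow inbound MariaDB/MySQL traffic from an application's address range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.0/24
```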
Tip
You can set up network connectivity between an Amazon EC2 instance and a DB instance
automatically when you create the DB instance. For more information, see Configure automatic
network connectivity with an EC2 instance (p. 300).
For information about common scenarios for accessing a DB instance, see Scenarios for accessing a DB
instance in a VPC (p. 2701).
1. Sign in to the AWS Management Console and open the Amazon VPC console at https://
console.aws.amazon.com/vpc.
Note
Make sure you are in the VPC console, not the RDS console.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region where you want
to create your VPC security group and DB instance. In the list of Amazon VPC resources for that AWS
Region, you should see at least one VPC and several subnets. If you don't, you don't have a default
VPC in that AWS Region.
3. In the navigation pane, choose Security Groups.
4. Choose Create security group.
You can use the VPC security group that you just created as the security group for your DB instance when
you create it.
Note
If you use a default VPC, a default subnet group spanning all of the VPC's subnets is created
for you. When you create a DB instance, you can select the default VPC and use default for DB
Subnet Group.
After you have completed the setup requirements, you can create a DB instance using your requirements
and security group. To do so, follow the instructions in Creating an Amazon RDS DB instance (p. 300).
For information about getting started by creating a DB instance that uses a specific DB engine, see the
relevant documentation in the following table.
Microsoft SQL Server Creating and connecting to a Microsoft SQL Server DB instance (p. 194)
Note
If you can't connect to a DB instance after you create it, see the troubleshooting information in
Can't connect to Amazon RDS DB instance (p. 2727).
Creating a DB instance and connecting to a database on a DB instance is slightly different for each of the
DB engines. Choose one of the following DB engines that you want to use for detailed information on
creating and connecting to the DB instance. After you have created and connected to your DB instance,
there are instructions to help you delete the DB instance.
Topics
• Creating and connecting to a MariaDB DB instance (p. 181)
• Creating and connecting to a Microsoft SQL Server DB instance (p. 194)
• Creating and connecting to a MySQL DB instance (p. 209)
• Creating and connecting to an Oracle DB instance (p. 222)
• Creating and connecting to a PostgreSQL DB instance (p. 235)
• Tutorial: Create a web server and an Amazon RDS DB instance (p. 249)
• Tutorial: Using a Lambda function to access an Amazon RDS database (p. 273)
Creating and connecting to a MariaDB DB instance
After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
The following diagram shows the configuration when the tutorial is complete.
This tutorial uses Easy create to create a DB instance running MariaDB with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.
When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).
Topics
• Prerequisites (p. 182)
• Step 1: Create an EC2 instance (p. 182)
• Step 2: Create a MariaDB DB instance (p. 185)
• Step 3: Connect to a MariaDB DB instance (p. 190)
• Step 4: Delete the EC2 instance and DB instance (p. 193)
• (Optional) Connect your DB instance to a Lambda function (p. 193)
Prerequisites
Before you begin, complete the steps in the following sections:
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.
You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.
In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.
6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:
8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
Step 2: Create a MariaDB DB instance
In this example, you use Easy create to create a DB instance running the MariaDB database engine with a
db.t3.micro DB instance class.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.
The Create database page should look similar to the following image.
9. To use an automatically generated master password for the DB instance, select Auto generate a
password.
To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.
Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.
You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.
• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.
To view the master username and password for the DB instance, choose View credential details.
You can use the username and password that appears to connect to the DB instance as the master
user.
Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new MariaDB DB instance to show its details.
When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
1. Find the endpoint (DNS name) and port number for your DB instance.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the MariaDB DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also note the port number. You need
both the endpoint and the port number to connect to the DB instance.
2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.
We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:
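A typical form of the SSH command; the key pair path and DNS name are placeholders taken from your EC2 instance's details:

```shell
# Substitute your key pair file and the instance's public DNS name
ssh -i /path/key-pair-name.pem ec2-user@ec2-instance-public-dns-name
```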
3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
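On Amazon Linux 2023, this is typically:

```shell
sudo dnf update -y
```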
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.
To install the MariaDB command-line client on Amazon Linux 2023, run the following command:
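A likely form of the install command; mariadb105 is the package name that Amazon Linux 2023 currently provides (verify with dnf search mariadb):

```shell
sudo dnf install -y mariadb105
```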
5. Connect to the MariaDB DB instance. For example, enter the following command. This action lets
you connect to the MariaDB DB instance using the MySQL client.
Substitute the DB instance endpoint (DNS name) for endpoint, and substitute the master username
that you used for admin. Provide the master password that you used when prompted for a
password.
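The connection command takes this form, with endpoint and admin as the placeholders described above:

```shell
mysql -h endpoint -P 3306 -u admin -p
```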
After you enter the password for the user, you should see output similar to the following.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
For more information about connecting to a MariaDB DB instance, see Connecting to a DB instance
running the MariaDB database engine (p. 1269). If you can't connect to your DB instance, see Can't
connect to Amazon RDS DB instance (p. 2727).
For security, it is a best practice to use encrypted connections. Only use an unencrypted MariaDB
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting from the MySQL command-line
client with SSL/TLS (encrypted) (p. 1276).
6. Run SQL commands.
For example, the following SQL command shows the current date and time:
SELECT CURRENT_TIMESTAMP;
Step 4: Delete the EC2 instance and DB instance
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.
For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
Creating and connecting to a
Microsoft SQL Server DB instance
After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
The following diagram shows the configuration when the tutorial is complete.
This tutorial uses Easy create to create a DB instance running Microsoft SQL Server with the AWS
Management Console. With Easy create, you specify only the DB engine type, DB instance size, and DB
instance identifier. Easy create uses the default settings for the other configuration options. The DB
instance created by Easy create is private.
When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).
Topics
• Prerequisites (p. 195)
• Step 1: Create an EC2 instance (p. 195)
• Step 2: Create a SQL Server DB instance (p. 199)
• Step 3: Connect to your SQL Server DB instance (p. 204)
Prerequisites
Before you begin, complete the steps in the following sections:
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region you used for the
database previously.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.
Step 1: Create an EC2 instance
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Windows Instances.
e. For Firewall (security groups) in Network settings, choose Allow RDP traffic from to connect
to the EC2 instance.
You can choose My IP if the displayed IP address is correct for RDP connections. Otherwise,
you can determine the IP address to use to connect to EC2 instances in your VPC using RDP. To
determine your public IP address, in a different browser window or tab, you can use the service
at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.
In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for RDP access, you make it possible for all IP addresses to access
your public EC2 instances using RDP. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using RDP.
Step 2: Create a SQL Server DB instance
6. Choose the EC2 instance identifier to open the list of EC2 instances.
7. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
In this example, you use Easy create to create a DB instance running the SQL Server database engine
with a db.t2.micro DB instance class.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.
The Create database page should look similar to the following image.
9. For Master username, enter a name for the master user, or keep the default name.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.
Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.
11. To use an automatically generated master password for the DB instance, select the Auto generate a
password box.
To enter your master password, clear the Auto generate a password box, and then enter the same
password in Master password and Confirm password.
12. Open View default settings for Easy create.
You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.
• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
13. Choose Create database.
To view the master username and password for the DB instance, choose View credential details.
You can use the username and password that appears to connect to the DB instance as the master
user.
Step 3: Connect to your SQL Server DB instance
Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
14. In the Databases list, choose the name of the new SQL Server DB instance to show its details.
When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
1. Find the endpoint (DNS name) and port number for your DB instance.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the SQL Server DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both the
endpoint and the port number to connect to the DB instance.
2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your
Microsoft Windows instance in the Amazon EC2 User Guide for Windows Instances.
3. Install the SQL Server Management Studio (SSMS) client from Microsoft.
Step 4: Explore your sample DB instance
To download a standalone version of SSMS to your EC2 instance, see Download SQL Server
Management Studio (SSMS) in the Microsoft documentation.
4. Start SSMS. In the Connect to Server dialog box, for Server name, enter your DB instance endpoint and port number, separated by a comma, for example:
database-test1.0123456789012.us-west-2.rds.amazonaws.com,1433
For Authentication, choose SQL Server Authentication, and then enter the master username and password for your DB instance.
After a few moments, SSMS connects to your DB instance. For security, it is a best practice to use
encrypted connections. Only use an unencrypted SQL Server connection when the client and server
are in the same VPC and the network is trusted. For information about using encrypted connections,
see Using SSL with a Microsoft SQL Server DB instance (p. 1456)
For more information about connecting to a Microsoft SQL Server DB instance, see Connecting to a DB
instance running the Microsoft SQL Server database engine (p. 1380).
For information about connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).
1. Your SQL Server DB instance comes with SQL Server's standard built-in system databases (master,
model, msdb, and tempdb). You can explore the system databases in SSMS by expanding Databases, and then System Databases, in Object Explorer.
Your SQL Server DB instance also comes with a database named rdsadmin. Amazon RDS uses this
database to store the objects that it uses to manage your database. The rdsadmin database also
includes stored procedures that you can run to perform advanced tasks.
2. Start creating your own databases and running queries against your DB instance and databases as
usual. To run a test query against your sample DB instance, do the following:
a. In SSMS, on the File menu, point to New and then choose Query with Current Connection.
b. Enter the following SQL query:
select @@VERSION
c. Run the query. SSMS returns the SQL Server version of your Amazon RDS DB instance.
Step 5: Delete the EC2 instance and DB instance
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.
For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Windows Instances.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
Creating and connecting to a MySQL DB instance
After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
The following diagram shows the configuration when the tutorial is complete.
This tutorial uses Easy create to create a DB instance running MySQL with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.
When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).
Topics
• Prerequisites (p. 210)
• Step 1: Create an EC2 instance (p. 210)
• Step 2: Create a MySQL DB instance (p. 213)
• Step 3: Connect to a MySQL DB instance (p. 218)
• Step 4: Delete the EC2 instance and DB instance (p. 221)
• (Optional) Connect your DB instance to a Lambda function (p. 221)
Prerequisites
Before you begin, complete the steps in the following sections:
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.
Step 1: Create an EC2 instance
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.
You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.
In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.
Step 2: Create a MySQL DB instance
6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the Public IPv4 DNS value, which you need when you connect using SSH.
8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
In this example, you use Easy create to create a DB instance running the MySQL database engine with a
db.t3.micro DB instance class.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region you used for the EC2
instance previously.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.
The Create database page should look similar to the following image.
9. To use an automatically generated master password for the DB instance, select Auto generate a
password.
To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.
Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.
11. Open View default settings for Easy create.
You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.
• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.
To view the master username and password for the DB instance, choose View credential details.
You can use the username and password that appears to connect to the DB instance as the master
user.
Step 3: Connect to a MySQL DB instance
Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new MySQL DB instance to show its details.
When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
1. Find the endpoint (DNS name) and port number for your DB instance.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the MySQL DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need
both the endpoint and the port number to connect to the DB instance.
2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.
We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:
3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.
4. To install the mysql command-line client from MariaDB on Amazon Linux 2023, run the following
command:
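Steps 3 and 4 can be sketched as follows, assuming dnf on Amazon Linux 2023 and that the mariadb105 package provides the MariaDB-built mysql client; the commands are echoed for review before you run them on your instance.

```shell
# Update the instance software; -y accepts all prompts (omit it to
# review updates first).
UPDATE_CMD="sudo dnf update -y"

# mariadb105 is assumed to be the Amazon Linux 2023 package providing
# the MariaDB-built command-line client.
INSTALL_CMD="sudo dnf install -y mariadb105"

echo "${UPDATE_CMD}"
echo "${INSTALL_CMD}"
```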
5. Connect to the MySQL DB instance. For example, enter the following command. This action lets you
connect to the MySQL DB instance using the MySQL client.
Substitute the DB instance endpoint (DNS name) for endpoint, and substitute the master username
that you used for admin. Provide the master password that you used when prompted for a
password.
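A sketch of the connection command, using a hypothetical endpoint and the default admin master username; replace the endpoint with the one you copied from the Connectivity & security tab.

```shell
# Hypothetical endpoint -- copy yours from the Connectivity & security tab.
ENDPOINT=database-test1.0123456789012.us-west-2.rds.amazonaws.com
PORT=3306
MASTER_USER=admin

# -p with no value makes the client prompt for the master password.
CONNECT_CMD="mysql -h ${ENDPOINT} -P ${PORT} -u ${MASTER_USER} -p"
echo "${CONNECT_CMD}"
```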
After you enter the password for the user, you should see output similar to the following.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
For more information about connecting to a MySQL DB instance, see Connecting to a DB instance
running the MySQL database engine (p. 1630). If you can't connect to your DB instance, see Can't
connect to Amazon RDS DB instance (p. 2727).
For security, it is a best practice to use encrypted connections. Only use an unencrypted MySQL
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting from the MySQL command-line
client with SSL/TLS (encrypted) (p. 1640).
6. Run SQL commands.
For example, the following SQL command shows the current date and time:
SELECT CURRENT_TIMESTAMP;
Step 4: Delete the EC2 instance and DB instance
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.
For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
Creating and connecting to an Oracle DB instance
After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
The following diagram shows the configuration when the tutorial is complete.
This tutorial uses Easy create to create a DB instance running Oracle with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.
When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).
Topics
• Prerequisites (p. 223)
• Step 1: Create an EC2 instance (p. 223)
• Step 2: Create an Oracle DB instance (p. 226)
• Step 3: Connect your SQL client to an Oracle DB instance (p. 231)
• Step 4: Delete the EC2 instance and DB instance (p. 234)
• (Optional) Connect your DB instance to a Lambda function (p. 234)
Prerequisites
Before you begin, complete the steps in the following sections:
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.
Step 1: Create an EC2 instance
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.
You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.
In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.
Step 2: Create an Oracle DB instance
6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the Public IPv4 DNS value, which you need when you connect using SSH.
8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
In this example, you use Easy create to create a DB instance running the Oracle database engine with a
db.m5.large DB instance class.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.
The Create database page should look similar to the following image.
9. To use an automatically generated master password for the DB instance, select Auto generate a
password.
To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.
Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.
11. Open View default settings for Easy create.
You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.
• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.
To view the master username and password for the DB instance, choose View credential details.
You can use the username and password that appears to connect to the DB instance as the master
user.
Step 3: Connect your SQL client to an Oracle DB instance
Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new Oracle DB instance to show its details.
When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
1. Find the endpoint (DNS name) and port number for your DB instance.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the Oracle DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need
both the endpoint and the port number to connect to the DB instance.
2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.
We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:
3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
so, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.
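A sketch of the update command on Amazon Linux 2023, assuming dnf is the package manager (on Amazon Linux 2, substitute yum); it is echoed here so you can review it before running it.

```shell
# -y accepts all prompts; omit it to examine the updates first.
UPDATE_CMD="sudo dnf update -y"
echo "${UPDATE_CMD}"
```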
5. Obtain the download URLs for the Oracle Instant Client Basic and SQL*Plus .rpm packages, for example:
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
6. In your SSH session, run the wget command to download the .rpm files from the links that you
obtained in the previous step. The following example downloads the .rpm files for Oracle Database
version 21.9:
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
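The downloaded packages then need to be installed before you start SQL*Plus. A sketch of that install command, assuming the version 21.9 file names from the previous step and dnf as the package manager, echoed for review:

```shell
# Install the downloaded Instant Client Basic and SQL*Plus packages.
# File names assume the version 21.9 downloads from the previous step.
INSTALL_CMD="sudo dnf install -y oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm"
echo "${INSTALL_CMD}"
```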
8. Start SQL*Plus and connect to the Oracle DB instance. For example, enter the following command.
sqlplus admin@oracle-db-instance-endpoint:1521/DATABASE
After you enter the password for the user, you should see output similar to the following.
Enter password:
Last Successful login time: Wed Mar 01 2023 16:30:52 +00:00
Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.18.0.0.0
SQL>
For more information about connecting to an RDS for Oracle DB instance, see Connecting to your
RDS for Oracle DB instance (p. 1806). If you can't connect to your DB instance, see Can't connect to
Amazon RDS DB instance (p. 2727).
For security, it is a best practice to use encrypted connections. Only use an unencrypted
Oracle connection when the client and server are in the same VPC and the network is
trusted. For information about using encrypted connections, see Securing Oracle DB instance
connections (p. 1816).
9. Run SQL commands.
For example, the following SQL command shows the current date:
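The command itself is not shown above. On Oracle, the current date is typically queried like this:

```sql
-- Query the current date from Oracle's DUAL dummy table
SELECT SYSDATE FROM DUAL;
```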
Step 4: Delete the EC2 instance and DB instance
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.
For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
Creating and connecting to a PostgreSQL DB instance
After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
The following diagram shows the configuration when the tutorial is complete.
This tutorial uses Easy create to create a DB instance running PostgreSQL with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.
When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).
Topics
• Prerequisites (p. 236)
• Step 1: Create an EC2 instance (p. 236)
• Step 2: Create a PostgreSQL DB instance (p. 240)
• Step 3: Connect to a PostgreSQL DB instance (p. 245)
Prerequisites
Before you begin, complete the steps in the following sections:
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.
Step 1: Create an EC2 instance
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.
You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.
In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.
Step 2: Create a PostgreSQL DB instance
6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:
8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
In this example, you use Easy create to create a DB instance running the PostgreSQL database engine
with a db.t3.micro DB instance class.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.
The Create database page should look similar to the following image.
9. To use an automatically generated master password for the DB instance, select Auto generate a
password.
To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.
Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.
You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.
• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.
To view the master username and password for the DB instance, choose View credential details.
You can use the username and password that appears to connect to the DB instance as the master
user.
Step 3: Connect to a PostgreSQL DB instance
Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new PostgreSQL DB instance to show its details.
When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
1. Find the endpoint (DNS name) and port number for your DB instance.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the PostgreSQL DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also note the port number. You need
both the endpoint and the port number to connect to the DB instance.
2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.
We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:
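The command format referenced above is not shown. A typical SSH invocation looks like the following (substitute your own key file path and the instance's public DNS name):

```shell
# -i points at the private key file for the key pair chosen at launch;
# ec2-user is the default user on Amazon Linux AMIs
ssh -i /path/to/key-pair-name.pem ec2-user@ec2-instance-public-dns-name
```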
3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.
4. To install the psql command-line client from PostgreSQL on Amazon Linux 2023, run the following
command:
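The commands for steps 3 and 4 are not shown above. On Amazon Linux 2023 they are typically the following (the psql package name is an assumption; adjust the client version to match your DB instance):

```shell
# Step 3: apply the latest bug fixes and security updates (-y skips confirmation)
sudo dnf update -y

# Step 4: install the psql command-line client
sudo dnf install -y postgresql15
```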
5. Connect to the PostgreSQL DB instance. For example, enter the following command at a command
prompt on a client computer. This action lets you connect to the PostgreSQL DB instance using the
psql client.
In the command, substitute the DB instance endpoint (DNS name) for endpoint, substitute the
name of the database that you want to connect to for the --dbname value postgres, and
substitute your master username for postgres. When prompted for a password, enter the master
password that you chose.
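A typical psql invocation matching the placeholders described above is the following sketch (substitute your own endpoint, database name, and master username):

```shell
# Connect to a PostgreSQL DB instance; psql prompts for the master password
psql --host=endpoint --port=5432 --dbname=postgres --username=postgres
```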
After you enter the password for the user, you should see output similar to the following:
postgres=>
For security, it is a best practice to use encrypted connections. Only use an unencrypted PostgreSQL
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting to a PostgreSQL DB instance over
SSL (p. 2174).
6. Run SQL commands.
For example, the following SQL command shows the current date and time:
SELECT CURRENT_TIMESTAMP;
Step 4: Delete the EC2 instance and DB instance
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.
For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
Tutorial: Create a web server and an Amazon RDS DB instance
In the tutorial that follows, you create an EC2 instance that uses the default VPC, subnets, and security
group for your AWS account. This tutorial shows you how to create the DB instance and automatically set
up connectivity with the EC2 instance that you created. The tutorial then shows you how to install the
web server on the EC2 instance. You connect your web server to your DB instance in the VPC using the
DB instance endpoint.
The following diagram shows the configuration when the tutorial is complete.
Launch an EC2 instance
Note
After you complete the tutorial, there is a public and private subnet in each Availability Zone
in your VPC. This tutorial uses the default VPC for your AWS account and automatically sets up
connectivity between your EC2 instance and DB instance. If you would rather configure a new
VPC for this scenario instead, complete the tasks in Tutorial: Create a VPC for use with a DB
instance (IPv4 only) (p. 2706).
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region where you want
to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown following.
For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. Under Network settings, set these values and keep the other values as their defaults:
• For Allow SSH traffic from, choose the source of SSH connections to the EC2 instance.
You can choose My IP if the displayed IP address is correct for SSH connections.
Otherwise, you can determine the IP address to use to connect to EC2 instances in your VPC
using Secure Shell (SSH). To determine your public IP address, in a different browser window
or tab, you can use the service at https://checkip.amazonaws.com. An example of an IP
address is 203.0.113.25/32.
In many cases, you might connect through an internet service provider (ISP) or from behind
your firewall without a static IP address. If so, make sure to determine the range of IP
addresses used by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to
access your public instances using SSH. This approach is acceptable for a short time
in a test environment, but it's unsafe for production environments. In production,
authorize only a specific IP address or range of addresses to access your instances
using SSH.
6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:
8. Wait until Instance state for your instance is Running before continuing.
9. Complete Create an Amazon RDS DB instance (p. 255).
Create a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, check the AWS Region. It should be
the same as the one where you created your EC2 instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. On the Create database page, choose Standard create.
6. For Engine options, choose MySQL.
7. For Templates, choose Free tier.
13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database name. Keep
the default settings for the other options.
15. To create your MySQL DB instance, choose Create database.
Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the DB instance
name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.
Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance (p. 264).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, check the AWS Region. It should be
the same as the one where you created your EC2 instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. On the Create database page, choose Standard create.
6. For Engine options, choose PostgreSQL.
7. For Templates, choose Free tier.
13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database name. Keep
the default settings for the other options.
15. To create your PostgreSQL DB instance, choose Create database.
Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the DB instance
name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.
Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance (p. 264).
Install a web server
To connect to your EC2 instance and install the Apache web server with PHP
1. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.
We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:
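The command format itself is not shown above; a typical invocation (your key file path and public DNS name will differ) is:

```shell
# Connect as the default Amazon Linux user with the key pair chosen at launch
ssh -i /path/to/key-pair-name.pem ec2-user@ec2-instance-public-dns-name
```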
2. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.
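The update command referenced in step 2 is not shown above. On Amazon Linux 2023 it is typically:

```shell
# Apply the latest bug fixes and security updates; -y skips the confirmation prompt
sudo dnf update -y
```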
3. After the updates complete, install the Apache web server, PHP, and MariaDB or PostgreSQL
software using the following commands. This command installs multiple software packages and
related dependencies at the same time.
MySQL
PostgreSQL
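The install commands under the MySQL and PostgreSQL tabs are not shown above. On Amazon Linux 2023 they are typically the following (package names are assumptions based on the AL2023 repositories):

```shell
# MySQL variant: Apache, PHP, the PHP MySQL driver, and the MariaDB client
sudo dnf install -y httpd php php-mysqli mariadb105

# PostgreSQL variant: Apache, PHP, the PHP PostgreSQL driver, and the psql client
sudo dnf install -y httpd php php-pgsql postgresql15
```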
If you receive an error, your instance probably wasn't launched with an Amazon Linux 2023 AMI. You
might be using the Amazon Linux 2 AMI instead. You can view your version of Amazon Linux using
the following command.
cat /etc/system-release
You can test that your web server is properly installed and started. To do this, enter the public
Domain Name System (DNS) name of your EC2 instance in the address bar of a web browser, for
example: http://ec2-42-8-168-21.us-west-1.compute.amazonaws.com. If your web server
is running, then you see the Apache test page.
If you don't see the Apache test page, check your inbound rules for the VPC security group that you
created in Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706). Make sure that
your inbound rules include one allowing HTTP (port 80) access for the IP address to connect to the
web server.
Note
The Apache test page appears only when there is no content in the document root
directory, /var/www/html. After you add content to the document root directory, your
content appears at the public DNS address of your EC2 instance instead of the Apache
test page.
5. Configure the web server to start with each system boot using the systemctl command.
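The systemctl commands referenced in this step are not shown above; typically they are:

```shell
# Start the web server now, and enable it to start on every system boot
sudo systemctl start httpd
sudo systemctl enable httpd
```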
To allow ec2-user to manage files in the default root directory for your Apache web server, modify the
ownership and permissions of the /var/www directory. There are many ways to accomplish this task.
In this tutorial, you add ec2-user to the apache group, give the apache group ownership of the
/var/www directory, and assign write permissions to the group.
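Step 1 of this procedure, adding ec2-user to the apache group, is missing above. It is typically:

```shell
# Add ec2-user to the apache group; the change takes effect at the next login
sudo usermod -a -G apache ec2-user
```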
2. Log out to refresh your permissions and include the new apache group.
exit
3. Log back in again and verify that the apache group exists with the groups command.
groups
4. Change the group ownership of the /var/www directory and its contents to the apache group.
5. Change the directory permissions of /var/www and its subdirectories to add group write
permissions and set the group ID on subdirectories created in the future.
6. Recursively change the permissions for files in the /var/www directory and its subdirectories to add
group write permissions.
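The commands for steps 4 through 6 are not shown above. A typical sequence matching those descriptions, assuming the standard Apache layout, is:

```shell
# Step 4: give the apache group ownership of /var/www and its contents
sudo chown -R ec2-user:apache /var/www

# Step 5: add group write permission and set the setgid bit on directories
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;

# Step 6: add group write permission to existing files
find /var/www -type f -exec sudo chmod 0664 {} \;
```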
Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the
Apache document root. This makes it possible for you to add content, such as a static website or a PHP
application.
Note
A web server running the HTTP protocol provides no transport security for the data that it sends
or receives. When you connect to an HTTP server using a web browser, much information is
visible to eavesdroppers anywhere along the network pathway. This information includes the
URLs that you visit, the content of web pages that you receive, and the contents (including
passwords) of any HTML forms.
The best practice for securing your web server is to install support for HTTPS (HTTP Secure).
This protocol protects your data with SSL/TLS encryption. For more information, see Tutorial:
Configure SSL/TLS with the Amazon Linux AMI in the Amazon EC2 User Guide.
To add content to the Apache web server that connects to your DB instance
1. While still connected to your EC2 instance, change the directory to /var/www and create a new
subdirectory named inc.
cd /var/www
mkdir inc
cd inc
2. Create a new file in the inc directory named dbinfo.inc, and then edit the file by calling nano (or
the editor of your choice).
>dbinfo.inc
nano dbinfo.inc
3. Add the following contents to the dbinfo.inc file. Here, db_instance_endpoint is your DB
instance endpoint, without the port, for your DB instance.
Note
We recommend placing the user name and password information in a folder that isn't part
of the document root for your web server. Doing this reduces the possibility of your security
information being exposed.
Make sure to change master password to a suitable password in your application.
<?php
define('DB_SERVER', 'db_instance_endpoint');
define('DB_USERNAME', 'tutorial_user');
define('DB_PASSWORD', 'master password');
define('DB_DATABASE', 'sample');
?>
4. Save and close the dbinfo.inc file. If you are using nano, save and close the file by using Ctrl+S
and Ctrl+X.
5. Change the directory to /var/www/html.
cd /var/www/html
6. Create a new file in the html directory named SamplePage.php, and then edit the file by calling
nano (or the editor of your choice).
>SamplePage.php
nano SamplePage.php
MySQL
if (strlen($employee_name) || strlen($employee_address)) {
AddEmployee($connection, $employee_name, $employee_address);
}
?>
<td>ID</td>
<td>NAME</td>
<td>ADDRESS</td>
</tr>
<?php
while($query_data = mysqli_fetch_row($result)) {
echo "<tr>";
echo "<td>",$query_data[0], "</td>",
"<td>",$query_data[1], "</td>",
"<td>",$query_data[2], "</td>";
echo "</tr>";
}
?>
</table>
mysqli_free_result($result);
mysqli_close($connection);
?>
</body>
</html>
<?php
$checktable = mysqli_query($connection,
return false;
}
?>
PostgreSQL
<html>
<body>
<h1>Sample page</h1>
<?php
if (!$connection){
echo "Failed to connect to PostgreSQL";
exit;
}
if (strlen($employee_name) || strlen($employee_address)) {
AddEmployee($connection, $employee_name, $employee_address);
}
?>
<tr>
<td>ID</td>
<td>NAME</td>
<td>ADDRESS</td>
</tr>
<?php
while($query_data = pg_fetch_row($result)) {
echo "<tr>";
echo "<td>",$query_data[0], "</td>",
"<td>",$query_data[1], "</td>",
"<td>",$query_data[2], "</td>";
echo "</tr>";
}
?>
</table>
pg_free_result($result);
pg_close($connection);
?>
</body>
</html>
<?php
return false;
}
?>
You can use SamplePage.php to add data to your DB instance. The data that you add is then displayed
on the page. To verify that the data was inserted into the table, install the MySQL client on the
Amazon EC2 instance. Then connect to the DB instance and query the table.
For information about installing the MySQL client and connecting to a DB instance, see Connecting to a
DB instance running the MySQL database engine (p. 1630).
To make sure that your DB instance is as secure as possible, verify that sources outside of the VPC can't
connect to your DB instance.
After you have finished testing your web server and your database, you should delete your DB instance
and your Amazon EC2 instance.
• To delete a DB instance, follow the instructions in Deleting a DB instance (p. 489). You don't need to
create a final snapshot.
• To terminate an Amazon EC2 instance, follow the instructions in Terminate your instance in the Amazon
EC2 User Guide.
Tutorial: Create a Lambda function to access your Amazon RDS DB instance
With Amazon RDS, you can run a managed relational database in the cloud using common database
products like Microsoft SQL Server, MariaDB, MySQL, Oracle Database, and PostgreSQL. By using
Lambda to access your database, you can read and write data in response to events, such as a new
customer registering with your website. Your function, database instance, and proxy scale automatically
to meet periods of high demand.
1. Launch an RDS for MySQL database instance and a proxy in your AWS account's default VPC.
2. Create and test a Lambda function that creates a new table in your database and writes data to it.
3. Create an Amazon SQS queue and configure it to invoke your Lambda function whenever a new
message is added.
4. Test the complete setup by adding messages to your queue using the AWS Management Console and
monitoring the results using CloudWatch Logs.
• How to use Amazon RDS to create a database instance and a proxy, and connect a Lambda function to
the proxy.
• How to use Lambda to perform create and read operations on an Amazon RDS database.
• How to use Amazon SQS to invoke a Lambda function.
You can complete this tutorial using the AWS Management Console or the AWS Command Line Interface
(AWS CLI).
Prerequisites
Before you begin, complete the steps in the following sections:
An Amazon RDS DB instance is an isolated database environment running in the AWS Cloud. An instance
can contain one or more user-created databases. Unless you specify otherwise, Amazon RDS creates
new database instances in the default VPC included in your AWS account. For more information about
Amazon VPC, see the Amazon Virtual Private Cloud User Guide.
In this tutorial, you create a new instance in your AWS account's default VPC and create a database
named ExampleDB in that instance. You can create your DB instance and database using either the AWS
Management Console or the AWS CLI.
• Leave all the remaining default options selected and scroll down to the Additional configuration
section.
• Expand this section and enter ExampleDB as the Initial database name.
7. Leave all the remaining default options selected and choose Create database.
Create Lambda function and proxy
You can use the RDS console to create a Lambda function and a proxy in the same VPC as the database.
Note
You can only create these associated resources when your database has completed creation and
is in Available status.
1. From the Databases page, check whether your database is in the Available status. If so, proceed to
the next step. Otherwise, wait until your database is available.
2. Select your database and choose Set up Lambda connection from Actions.
3. In the Set up Lambda connection page, choose Create new function.
The wizard completes the setup and provides a link to the Lambda console to review your new function.
Note the proxy endpoint before switching to the Lambda console.
Before you create your Lambda function, you create an execution role to give your function the
necessary permissions. For this tutorial, Lambda needs permission to manage the network connection to
the VPC containing your database instance and to poll messages from an Amazon SQS queue.
To give your Lambda function the permissions it needs, this tutorial uses IAM managed policies. These
are policies that grant permissions for many common use cases and are available in your AWS account.
For more information about using managed policies, see Policy best practices (p. 2616).
Create a Lambda deployment package
1. Open the Roles page of the IAM console and choose Create role.
2. For the Trusted entity type, choose AWS service, and for the Use case, choose Lambda.
3. Choose Next.
4. Add the IAM managed policies by doing the following:
Later in the tutorial, you need the Amazon Resource Name (ARN) of the execution role you just created.
1. Open the Roles page of the IAM console and choose your role (lambda-vpc-sqs-role).
2. Copy the ARN displayed in the Summary section.
The following example Python code uses the PyMySQL package to open a connection to your database.
The first time you invoke your function, it also creates a new table called Customer. The table uses the
following schema, where CustID is the primary key:
Customer(CustID, Name)
The function also uses PyMySQL to add records to this table. The function adds records using customer
IDs and names specified in messages you will add to your Amazon SQS queue.
The code creates the connection to your database outside of the handler function. Creating the
connection in the initialization code allows the connection to be re-used by subsequent invocations
of your function and improves performance. In a production application, you can also use provisioned
concurrency to initialize a requested number of database connections. These connections are available as
soon as your function is invoked.
import sys
import logging
import pymysql
import json
import os
# rds settings
user_name = os.environ['USER_NAME']
password = os.environ['PASSWORD']
rds_proxy_host = os.environ['RDS_PROXY_HOST']
db_name = os.environ['DB_NAME']
logger = logging.getLogger()
logger.setLevel(logging.INFO)
item_count = 0
sql_string = f"insert into Customer (CustID, Name) values({CustID}, '{Name}')"
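The sql_string above builds the INSERT statement with Python f-string interpolation, which is vulnerable to SQL injection if the values come from untrusted messages. A safer pattern is to pass the values as driver parameters. The runnable sketch below illustrates the idea with the stdlib sqlite3 driver (an assumption for illustration only, since PyMySQL needs a live database; with PyMySQL the placeholder syntax is %s instead of ?):

```python
import sqlite3

# Passing values as parameters instead of interpolating them into the SQL
# string lets the driver escape them, preventing SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("create table Customer (CustID integer primary key, Name text)")

def add_customer(conn, cust_id, name):
    # Placeholder style differs by driver: sqlite3 uses ?, PyMySQL uses %s.
    conn.execute("insert into Customer (CustID, Name) values (?, ?)", (cust_id, name))

add_customer(conn, 1, "O'Brien")  # a name containing a quote is handled safely
rows = conn.execute("select CustID, Name from Customer").fetchall()
print(rows)  # [(1, "O'Brien")]
```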
Note
In this example, your database access credentials are stored as environment variables. In
production applications, we recommend that you use AWS Secrets Manager as a more secure
option. Note that if your Lambda function is in a VPC, to connect to Secrets Manager you need
to create a VPC endpoint. See How to connect to Secrets Manager service within a Virtual
Private Cloud to learn more.
To include the PyMySQL dependency with your function code, create a .zip deployment package. The
following commands work for Linux, macOS, or Unix:
Update the Lambda function
mkdir package
pip install --target package pymysql
3. Create a zip file containing your application code and the PyMySQL library. On Linux or macOS,
run the following CLI commands. On Windows, use your preferred zip tool to create the
lambda_function.zip file. Your lambda_function.py source code file and the folders
containing your dependencies must be at the root of the .zip file.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py
You can also create your deployment package using a Python virtual environment. See Deploy
Python Lambda functions with .zip file archives.
Update the Lambda function

1. Open the Functions page of the Lambda console and choose your function LambdaFunctionWithRDS.
2. Change the Runtime of the function to Python 3.10.
3. Change the Handler to lambda_function.lambda_handler.
4. In the Code tab, choose Upload from and then .zip file.
5. Select the lambda_function.zip file you created in the previous stage and choose Save.
Now configure the function with the execution role you created earlier. This grants the function the
permissions it needs to access your database instance and poll an Amazon SQS queue.
1. In the Functions page of the Lambda console, select the Configuration tab, then choose
Permissions.
2. In Execution role, choose Edit.
3. In Existing role, choose your execution role (lambda-vpc-sqs-role).
4. Choose Save.
Next, configure the function's environment variables with your database access details.

1. In the Functions page of the Lambda console, select the Configuration tab, then choose Environment variables.
2. Choose Edit.
3. To add your database access credentials, do the following:
a. Choose Add environment variable, then for Key enter USER_NAME and for Value enter admin.
b. Choose Add environment variable, then for Key enter DB_NAME and for Value enter
ExampleDB.
c. Choose Add environment variable, then for Key enter PASSWORD and for Value enter the
password you chose when you created your database.
d. Choose Add environment variable, then for Key enter RDS_PROXY_HOST and for Value enter
the RDS Proxy endpoint you noted earlier.
e. Choose Save.
Test your Lambda function in the console

You can now use the Lambda console to test your function. You create a test event that mimics the data your function will receive when you invoke it using Amazon SQS in the final stage of the tutorial. Your test event contains a JSON object specifying a customer ID and customer name to add to the Customer table your function creates.
1. Open the Functions page of the Lambda console and choose your function.
2. Choose the Test section.
3. Choose Create new event and enter myTestEvent for the event name.
4. Copy the following code into Event JSON and choose Save.
{
  "Records": [
    {
      "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
      "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
      "body": "{\n \"CustID\": 1021,\n \"Name\": \"Martha Rivera\"\n}",
      "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1545082649183",
        "SenderId": "AIDAIENQZJOLO23YVJ4VO",
        "ApproximateFirstReceiveTimestamp": "1545082649185"
      },
      "messageAttributes": {},
      "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-west-2:123456789012:my-queue",
      "awsRegion": "us-west-2"
    }
  ]
}
5. Choose Test.
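When Amazon SQS invokes the function, each message arrives wrapped in a Records array like the test event above. A minimal sketch of pulling the customer fields out of each record (extract_customers is an illustrative helper, not part of the tutorial code):

```python
import json

def extract_customers(event):
    """Return (CustID, Name) pairs parsed from each SQS record's body."""
    customers = []
    for record in event["Records"]:
        # The SQS message body is itself a JSON document
        payload = json.loads(record["body"])
        customers.append((payload["CustID"], payload["Name"]))
    return customers

# A record shaped like the myTestEvent test event above
event = {"Records": [{"body": "{\"CustID\": 1021, \"Name\": \"Martha Rivera\"}"}]}
print(extract_customers(event))  # [(1021, 'Martha Rivera')]
```

The handler loops over Records because a single invocation can carry a whole batch when the event source mapping's batch size is greater than 1.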
In the Execution results tab, you should see results similar to the following displayed in the Function
Logs:
You have successfully tested the integration of your Lambda function and Amazon RDS database
instance. Now you create the Amazon SQS queue you will use to invoke your Lambda function in the
final stage of the tutorial.
Create an Amazon SQS queue

1. Open the Queues page of the Amazon SQS console and select Create queue.
2. Leave the Type as Standard and enter LambdaRDSQueue for the name of your queue.
3. Leave all the default options selected and choose Create queue.
Create an event source mapping

An event source mapping is a Lambda resource that reads items from a stream or queue and invokes a Lambda function. When you configure an event source mapping, you can specify a batch size so that records from your stream or queue are batched together into a single payload. In this example, you set the batch size to 1 so that your Lambda function is invoked every time you send a message to your queue. You can configure the event source mapping using either the AWS CLI or the Lambda console.
1. Open the Functions page of the Lambda console and select your function
(LambdaFunctionWithRDS).
Test and monitor your setup
You are now ready to test your complete setup. Add a message to your Amazon SQS queue using the console, and then use CloudWatch Logs to confirm that your Lambda function is writing records to your database as expected.
1. Open the Queues page of the Amazon SQS console and select your queue (LambdaRDSQueue).
2. Choose Send and receive messages and paste the following JSON into the Message body in the
Send message section.
{
"CustID": 1054,
"Name": "Richard Roe"
}
3. Choose Send message.

Sending your message to the queue causes Lambda to invoke your function through your event source mapping. To confirm that Lambda has invoked your function as expected, use CloudWatch Logs to verify that the function has written the customer name and ID to your database table.
4. Open the Log groups page of the CloudWatch console and select the log group for your function (/
aws/lambda/LambdaFunctionWithRDS).
5. In the Log streams section, choose the most recent log stream.
Your table should contain two customer records, one from each invocation of your function. In the
log stream, you should see messages similar to the following:
Clean up your resources
1. Sign in to the AWS Management Console and open the Amazon SQS console at https://
console.aws.amazon.com/sqs/.
2. Select the queue you created.
3. Choose Delete.
4. Enter delete in the text box.
5. Choose Delete.
Topics
• Tutorials in this guide (p. 283)
• Tutorials in other AWS guides (p. 284)
• AWS workshop and lab content portal for Amazon RDS PostgreSQL (p. 284)
• AWS workshop and lab content portal for Amazon RDS MySQL (p. 284)
• Tutorials and sample code in GitHub (p. 285)
• Using this service with an AWS SDK (p. 285)
Tutorials in this guide

• Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706)
Learn how to include a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service.
In this case, the VPC shares data with a web server that is running on an Amazon EC2 instance in the
same VPC.
• Tutorial: Create a VPC for use with a DB instance (dual-stack mode) (p. 2711)
Learn how to include a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service.
In this case, the VPC shares data with an Amazon EC2 instance in the same VPC. In this tutorial, you
create the VPC for this scenario that works with a database running in dual-stack mode.
• Tutorial: Create a web server and an Amazon RDS DB instance (p. 249)
Learn how to install an Apache web server with PHP and create a MySQL database. The web server
runs on an Amazon EC2 instance using Amazon Linux, and the MySQL database is a MySQL DB
instance. Both the Amazon EC2 instance and the DB instance run in an Amazon VPC.
• Tutorial: Use a Lambda function to access an Amazon RDS database

Learn how to create a Lambda function from the RDS console to access a database through a proxy, create a table, add a few records, and retrieve the records from the table. You also learn how to invoke the Lambda function and verify the query results.
• Tutorial: Use tags to specify which DB instances to stop (p. 466)
Learn how to log a DB instance state change using Amazon EventBridge and AWS Lambda.
• Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica lag (p. 713)
Learn how to create a CloudWatch alarm that sends an Amazon SNS message when replica lag for a
Multi-AZ DB cluster has exceeded a threshold. An alarm watches the ReplicaLag metric over a time
period that you specify. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto
Scaling policy.
Tutorials in other AWS guides

• Tutorial: Rotating a Secret for an AWS Database in the AWS Secrets Manager User Guide
Learn how to create a secret for an AWS database and configure the secret to rotate on a schedule.
You trigger one rotation manually, and then confirm that the new version of the secret continues to
provide access.
• Tutorials and samples in the AWS Elastic Beanstalk Developer Guide
Learn how to deploy applications that use Amazon RDS databases with AWS Elastic Beanstalk.
• Using Data from an Amazon RDS Database to Create an Amazon ML Datasource in the Amazon
Machine Learning Developer Guide
Learn how to create an Amazon Machine Learning (Amazon ML) datasource object from data stored in
a MySQL DB instance.
• Manually Enabling Access to an Amazon RDS Instance in a VPC in the Amazon QuickSight User Guide
Learn how to enable Amazon QuickSight access to an Amazon RDS DB instance in a VPC.
AWS workshop and lab content portal for Amazon RDS PostgreSQL

• Creating a DB instance

Learn how to use AWS and SQL tools (CloudWatch, Enhanced Monitoring, slow query logs, Performance Insights, PostgreSQL catalog views) to understand performance issues and identify ways to improve the performance of your database.
AWS workshop and lab content portal for Amazon RDS MySQL

• Creating a DB instance

Learn how to monitor and tune your DB instance using Performance Insights.

Tutorials and sample code in GitHub

Learn how to create an application that tracks and reports on work items. This application uses Amazon RDS, Amazon Simple Email Service, Elastic Beanstalk, and the AWS SDK for Java 2.x.
Using this service with an AWS SDK

AWS SDK for C++ – AWS SDK for C++ code examples
AWS SDK for Java – AWS SDK for Java code examples
AWS SDK for JavaScript – AWS SDK for JavaScript code examples
AWS SDK for Kotlin – AWS SDK for Kotlin code examples
AWS SDK for .NET – AWS SDK for .NET code examples
AWS SDK for PHP – AWS SDK for PHP code examples
AWS SDK for Python (Boto3) – AWS SDK for Python (Boto3) code examples
AWS SDK for Ruby – AWS SDK for Ruby code examples
AWS SDK for Rust – AWS SDK for Rust code examples
AWS SDK for Swift – AWS SDK for Swift code examples
For examples specific to this service, see Code examples for Amazon RDS using AWS SDKs (p. 2441).
Example availability
Can't find what you need? Request a code example by using the Provide feedback link at the
bottom of this page.
Topics
• Amazon RDS basic operational guidelines (p. 286)
• DB instance RAM recommendations (p. 287)
• Using Enhanced Monitoring to identify operating system issues (p. 287)
• Using metrics to identify performance issues (p. 287)
• Tuning queries (p. 291)
• Best practices for working with MySQL (p. 292)
• Best practices for working with MariaDB (p. 293)
• Best practices for working with Oracle (p. 294)
• Best practices for working with PostgreSQL (p. 294)
• Best practices for working with SQL Server (p. 296)
• Working with DB parameter groups (p. 297)
• Best practices for automating DB instance creation (p. 297)
• Amazon RDS new features and best practices presentation video (p. 298)
Note
For common recommendations for Amazon RDS, see Viewing Amazon RDS
recommendations (p. 688).
Amazon RDS basic operational guidelines

• Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon
CloudWatch to notify you when usage patterns change or when you approach the capacity of your
deployment. This way, you can maintain system performance and availability.
• Scale up your DB instance when you are approaching storage capacity limits. You should have
some buffer in storage and memory to accommodate unforeseen increases in demand from your
applications.
• Enable automatic backups and set the backup window to occur during the daily low in write IOPS.
That's when a backup is least disruptive to your database usage.
• If your database workload requires more I/O than you have provisioned, recovery after a failover
or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the
following:
• Migrate to a different DB instance class with high I/O capacity.
• Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending
on how much of an increase you need. For information on available storage types, see Amazon RDS
storage types (p. 101).
If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is
optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD
storage (p. 104).
• If you are already using Provisioned IOPS storage, provision additional throughput capacity.
• If your client application is caching the Domain Name Service (DNS) data of your DB instances, set
a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can
change after a failover. Caching the DNS data for an extended time can thus lead to connection
failures. Your application might try to connect to an IP address that's no longer in service.
• Test failover for your DB instance to understand how long the process takes for your particular use
case. Also test failover to ensure that the application that accesses your DB instance can automatically
connect to the new DB instance after failover occurs.
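Following the DNS guidance above, a client can look up the endpoint at connect time instead of holding a cached address. A minimal sketch (resolve_db_endpoint is illustrative, and "localhost" stands in for your DB instance endpoint):

```python
import socket

def resolve_db_endpoint(hostname, port=3306):
    """Resolve the endpoint freshly at connect time. After a failover the
    DNS name maps to a new IP, so a fresh lookup (rather than an address
    cached beyond a short TTL) finds the instance that is now in service."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return infos[0][4][0]  # IP address of the first resolved result

print(resolve_db_endpoint("localhost"))
```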
DB instance RAM recommendations

To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon
CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable.
In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in
ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up
until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very
small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon
RDS console (p. 696).
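Once you have ReadIOPS datapoints from CloudWatch, the scale-up check described above can be scripted. The function below is an illustrative heuristic, not from this guide; the 50 percent "dramatic drop" threshold is an assumption:

```python
def working_set_fits_in_memory(read_iops_before, read_iops_after, drop_threshold=0.5):
    """True when ReadIOPS stayed roughly stable after scaling up to a DB
    instance class with more RAM, meaning the working set already fit in
    memory.  A dramatic drop instead means it was not in memory before.
    drop_threshold=0.5 is an illustrative cutoff, not from the guide."""
    if read_iops_before <= 0:
        return True
    drop = (read_iops_before - read_iops_after) / read_iops_before
    return drop < drop_threshold

print(working_set_fits_in_memory(2000, 150))  # False: keep scaling up
print(working_set_fits_in_memory(120, 110))   # True: working set fits
```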
Using metrics to identify performance issues

To troubleshoot performance issues, it's important to understand the baseline performance of the
system. When you set up a DB instance and run it with a typical workload, capture the average,
maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for
example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps
to get comparisons for both peak and off-peak hours of operation. You can then use this information to
identify when performance is dropping below standard levels.
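A sketch of the baseline capture described above, assuming you have already exported the metric datapoints (for example, from CloudWatch):

```python
from statistics import mean

def metric_baseline(datapoints):
    """Average, maximum, and minimum of a metric series: the three values
    the guidance above suggests capturing at each interval."""
    return {"avg": mean(datapoints), "max": max(datapoints), "min": min(datapoints)}

# Hourly CPU utilization samples (illustrative numbers)
cpu_utilization = [35.0, 42.5, 38.0, 71.0, 40.5]
print(metric_baseline(cpu_utilization))  # {'avg': 45.4, 'max': 71.0, 'min': 35.0}
```

Capturing the same summary for peak and off-peak windows gives you the comparison points the text recommends.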
If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer
DB instance and the latest applied transaction on a reader DB instance. This difference is called replica
lag. For more information, see Replica lag and Multi-AZ DB clusters (p. 504).
You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights
dashboard and monitor your DB instance. To use this monitoring view, Performance Insights must be
turned on for your DB instance. For information about this monitoring view, see Viewing combined
metrics in the Amazon RDS console (p. 699).
You can create a performance analysis report for a specific time period and view the insights identified and the recommendations to resolve the issues. For more information, see Analyzing database performance for a period of time (p. 750).
Viewing performance metrics

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose a DB instance.
3. Choose Monitoring.
The dashboard provides the performance metrics. The metrics default to show the information for
the last three hours.
4. Use the numbered buttons in the upper-right to page through the additional metrics, or adjust the
settings to see more metrics.
5. Choose a performance metric to adjust the time range in order to see data for other than the
current day. You can change the Statistic, Time Range, and Period values to adjust the information
displayed. For example, you might want to see the peak values for a metric for each day of the last
two weeks. If so, set Statistic to Maximum, Time Range to Last 2 Weeks, and Period to Day.
You can also view performance metrics using the CLI or API. For more information, see Viewing metrics in
the Amazon RDS console (p. 696).
Setting a CloudWatch alarm

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose a DB instance.
3. Choose Logs & events.
4. In the CloudWatch alarms section, choose Create alarm.
5. For Send notifications, choose Yes, and for Send notifications to, choose New email or SMS topic.
6. For Topic name, enter a name for the notification, and for With these recipients, enter a comma-
separated list of email addresses and phone numbers.
7. For Metric, choose the alarm statistic and metric to set.
8. For Threshold, specify whether the metric must be greater than, less than, or equal to the threshold,
and specify the threshold value.
9. For Evaluation period, choose the evaluation period for the alarm. For consecutive period(s) of,
choose the period during which the threshold must have been reached in order to trigger the alarm.
10. For Name of alarm, enter a name for the alarm.
11. Choose Create Alarm.
Evaluating performance metrics
CPU

• CPU Utilization – The percentage of computer processing capacity used.
Memory
• Freeable Memory – How much RAM is available on the DB instance, in megabytes. The red line in the Monitoring tab metrics is marked at 75% for CPU, memory, and storage metrics. If instance memory consumption frequently crosses that line, check your workload or consider upgrading your instance.
• Swap Usage – How much swap space is used by the DB instance, in megabytes.
Disk space
• Free Storage Space – How much disk space is not currently being used by the DB instance, in
megabytes.
Input/output operations
• Read IOPS, Write IOPS – The average number of disk read or write operations per second.
• Read Latency, Write Latency – The average time for a read or write operation in milliseconds.
• Read Throughput, Write Throughput – The average number of megabytes read from or written to disk
per second.
• Queue Depth – The number of I/O operations that are waiting to be written to or read from disk.
Network traffic
• Network Receive Throughput, Network Transmit Throughput – The rate of network traffic to and from
the DB instance in bytes per second.
Database connections
• DB Connections – The number of client sessions that are connected to the DB instance.
For more detailed individual descriptions of the performance metrics available, see Monitoring Amazon
RDS metrics with Amazon CloudWatch (p. 706).
Generally speaking, acceptable values for performance metrics depend on what your baseline looks
like and what your application is doing. Investigate consistent or trending variances from your baseline.
Advice about specific types of metrics follows:
• High CPU or RAM consumption – High values for CPU or RAM consumption might be appropriate, provided that they are in keeping with your goals for your application (like throughput or concurrency) and are expected.
• Disk space consumption – Investigate disk space consumption if space used is consistently at or
above 85 percent of the total disk space. See if it is possible to delete data from the instance or archive
data to a different system to free up space.
• Network traffic – For network traffic, talk with your system administrator to understand what
expected throughput is for your domain network and internet connection. Investigate network traffic if
throughput is consistently lower than expected.
• Database connections – Consider constraining database connections if you see high numbers of
user connections in conjunction with decreases in instance performance and response time. The
best number of user connections for your DB instance will vary based on your instance class and the
complexity of the operations being performed. To determine the number of database connections,
associate your DB instance with a parameter group. In this group, set the User Connections parameter
to other than 0 (unlimited). You can either use an existing parameter group or create a new one. For
more information, see Working with parameter groups (p. 347).
• IOPS metrics – The expected values for IOPS metrics depend on disk specification and server
configuration, so use your baseline to know what is typical. Investigate if values are consistently
different than your baseline. For best IOPS performance, make sure your typical working set will fit
into memory to minimize read and write operations.
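As a concrete instance of the disk-space guideline above, this illustrative helper (not from this guide) turns the FreeStorageSpace metric into a used-storage percentage you can compare against the 85 percent threshold:

```python
def storage_used_pct(allocated_gib, free_storage_bytes):
    """Percent of allocated storage in use, computed from the instance's
    allocated storage (GiB) and the FreeStorageSpace metric (bytes)."""
    allocated_bytes = allocated_gib * 1024**3
    return 100 * (allocated_bytes - free_storage_bytes) / allocated_bytes

# 100 GiB allocated with 10 GiB free: 90% used, above the 85% guideline
print(round(storage_used_pct(100, 10 * 1024**3)))  # 90
```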
For issues with performance metrics, a first step to improve performance is to tune the most used and
most expensive queries. Tune them to see if doing so lowers the pressure on system resources. For more
information, see Tuning queries (p. 291).
If your queries are tuned and an issue persists, consider upgrading your Amazon RDS DB instance class (p. 11) to one with more of the resource (CPU, RAM, disk space, network bandwidth, I/O capacity) that is related to the issue.
Tuning queries
One of the best ways to improve DB instance performance is to tune your most commonly used
and most resource-intensive queries. Here, you tune them to make them less expensive to run. For
information on improving queries, use the following resources:
• MySQL – See Optimizing SELECT statements in the MySQL documentation. For additional query tuning
resources, see MySQL performance tuning and optimization resources.
• Oracle – See Database SQL Tuning Guide in the Oracle Database documentation.
• SQL Server – See Analyzing a query in the Microsoft documentation. You can also use the execution-, index-, and I/O-related dynamic management views (DMVs) described in System Dynamic Management Views in the Microsoft documentation to troubleshoot SQL Server query issues.
A common aspect of query tuning is creating effective indexes. For potential index improvements
for your DB instance, see Database Engine Tuning Advisor in the Microsoft documentation. For
information on using Tuning Advisor on RDS for SQL Server, see Analyzing your database workload on
an Amazon RDS for SQL Server DB instance with Database Engine Tuning Advisor (p. 1605).
• PostgreSQL – See Using EXPLAIN in the PostgreSQL documentation to learn how to analyze a query
plan. You can use this information to modify a query or underlying tables in order to improve query
performance.
For information about how to specify joins in your query for the best performance, see Controlling the
planner with explicit JOIN clauses.
• MariaDB – See Query optimizations in the MariaDB documentation.
Best practices for working with MySQL
Table size
Typically, operating system constraints on file sizes determine the effective maximum table size for
MySQL databases. So, the limits usually aren't determined by internal MySQL constraints.
On a MySQL DB instance, avoid tables in your database growing too large. Although the general storage
limit is 64 TiB, provisioned storage limits restrict the maximum size of a MySQL table file to 16 TiB.
Partition your large tables so that file sizes are well under the 16 TiB limit. This approach can also
improve performance and recovery time. For more information, see MySQL file size limits in Amazon
RDS (p. 1754).
Very large tables (greater than 100 GB in size) can negatively affect performance for both reads
and writes (including DML statements and especially DDL statements). Indexes on large tables can significantly improve select performance, but they can also degrade the performance of DML statements. DDL statements, such as ALTER TABLE, can be significantly slower for large tables
because those operations might completely rebuild a table in some cases. These DDL statements might
lock the tables for the duration of the operation.
The amount of memory required by MySQL for reads and writes depends on the tables involved in the operations. It is a best practice to have at least enough RAM to hold the indexes of actively used tables. To find the ten largest tables and indexes in a database, use the following query:
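The query itself does not appear above. A query along the following lines, built on MySQL's information_schema (an assumption based on standard MySQL metadata tables, not this guide's exact text), reports the ten largest tables by combined data and index size:

```sql
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb,
       ROUND(index_length / 1024 / 1024 / 1024, 2) AS index_gb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;
```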
Number of tables
Your underlying file system might have a limit on the number of files that represent tables. However,
MySQL has no limit on the number of tables. Despite this, the total number of tables in the MySQL
InnoDB storage engine can contribute to the performance degradation, regardless of the size of those
tables. To limit the operating system impact, you can split the tables across multiple databases in the
same MySQL DB instance. Doing so might limit the number of files in a directory but won't solve the
overall problem.
When there is performance degradation because of a large number of tables (more than 10,000), it is caused by MySQL working with storage files, including opening and closing them. To address this issue, you can increase the size of the table_open_cache and table_definition_cache parameters.
However, increasing the values of those parameters might significantly increase the amount of memory
MySQL uses, and might even use all of the available memory. For more information, see How MySQL
Opens and Closes Tables in the MySQL documentation.
In addition, too many tables can significantly affect MySQL startup time. Both a clean shutdown and
restart and a crash recovery can be affected, especially in versions prior to MySQL 8.0.
We recommend having fewer than 10,000 tables total across all of the databases in a DB instance. For a
use case with a large number of tables in a MySQL database, see One Million Tables in MySQL 8.0.
Storage engine
The point-in-time restore and snapshot restore features of Amazon RDS for MySQL require a crash-
recoverable storage engine. These features are supported for the InnoDB storage engine only. Although
MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for
crash recovery and data durability. For example, the MyISAM storage engine doesn't support reliable
crash recovery and might prevent a point-in-time restore or snapshot restore from working as intended.
This might result in lost or corrupt data when MySQL is restarted after a crash.
InnoDB is the recommended and supported storage engine for MySQL DB instances on Amazon RDS.
InnoDB instances can also be migrated to Aurora, while MyISAM instances can't be migrated. However,
MyISAM performs better than InnoDB if you require intense, full-text search capability. If you still choose
to use MyISAM with Amazon RDS, following the steps outlined in Automated backups with unsupported
MySQL storage engines (p. 599) can be helpful in certain scenarios for snapshot restore functionality.
If you want to convert existing MyISAM tables to InnoDB tables, you can use the process outlined in the
MySQL documentation. MyISAM and InnoDB have different strengths and weaknesses, so you should
fully evaluate the impact of making this switch on your applications before doing so.
In addition, Federated Storage Engine is currently not supported by Amazon RDS for MySQL.
Best practices for working with MariaDB

Table size
Typically, operating system constraints on file sizes determine the effective maximum table size for
MariaDB databases. So, the limits usually aren't determined by internal MariaDB constraints.
On a MariaDB DB instance, avoid tables in your database growing too large. Although the general
storage limit is 64 TiB, provisioned storage limits restrict the maximum size of a MariaDB table file to 16
TiB. Partition your large tables so that file sizes are well under the 16 TiB limit. This approach can also
improve performance and recovery time.
Very large tables (greater than 100 GB in size) can negatively affect performance for both reads
and writes (including DML statements and especially DDL statements). Indexes on large tables can significantly improve select performance, but they can also degrade the performance of DML statements. DDL statements, such as ALTER TABLE, can be significantly slower for large tables
because those operations might completely rebuild a table in some cases. These DDL statements might
lock the tables for the duration of the operation.
The amount of memory required by MariaDB for reads and writes depends on the tables involved in the operations. It is a best practice to have at least enough RAM to hold the indexes of actively used tables. To find the ten largest tables and indexes in a database, use the following query:
Number of tables
Your underlying file system might have a limit on the number of files that represent tables. However,
MariaDB has no limit on the number of tables. Despite this, the total number of tables in the MariaDB
InnoDB storage engine can contribute to the performance degradation, regardless of the size of those
tables. To limit the operating system impact, you can split the tables across multiple databases in the
same MariaDB DB instance. Doing so might limit the number of files in a directory but doesn’t solve the
overall problem.
When there is performance degradation because of a large number of tables (more than 10,000),
it's caused by MariaDB working with storage files. This work includes MariaDB opening and closing
storage files. To address this issue, you can increase the size of the table_open_cache and
table_definition_cache parameters. However, increasing the values of those parameters might
significantly increase the amount of memory MariaDB uses. It might even use all of the available
memory. For more information, see Optimizing table_open_cache in the MariaDB documentation.
In addition, too many tables can significantly affect MariaDB startup time. Both a clean shutdown and
restart and a crash recovery can be affected. We recommend having fewer than ten thousand tables total
across all of the databases in a DB instance.
Storage engine
The point-in-time restore and snapshot restore features of Amazon RDS for MariaDB require a crash-
recoverable storage engine. Although MariaDB supports multiple storage engines with varying
capabilities, not all of them are optimized for crash recovery and data durability. For example, although
Aria is a crash-safe replacement for MyISAM, it might still prevent a point-in-time restore or snapshot
restore from working as intended. This might result in lost or corrupt data when MariaDB is restarted
after a crash. InnoDB is the recommended and supported storage engine for MariaDB DB instances on
Amazon RDS. If you still choose to use Aria with Amazon RDS, following the steps outlined in Automated
backups with unsupported MariaDB storage engines (p. 600) can be helpful in certain scenarios for
snapshot restore functionality.
If you want to convert existing MyISAM tables to InnoDB tables, you can use the process outlined in the
MariaDB documentation. MyISAM and InnoDB have different strengths and weaknesses, so you should
fully evaluate the impact of making this switch on your applications before doing so.
Best practices for working with Oracle

A 2020 AWS virtual workshop included a presentation on running production Oracle databases on Amazon RDS. A video of the presentation is available here.
Best practices for working with PostgreSQL

For information on how Amazon RDS implements other common PostgreSQL DBA tasks, see Common DBA tasks for Amazon RDS for PostgreSQL (p. 2270).
Loading data into a PostgreSQL DB instance
Modify your DB parameter group to include the following settings. Also, test the parameter settings to
find the most efficient settings for your DB instance.
• Increase the value of the maintenance_work_mem parameter. For more information about
PostgreSQL resource consumption parameters, see the PostgreSQL documentation.
• Increase the value of the max_wal_size and checkpoint_timeout parameters to reduce the
number of writes to the write-ahead log (WAL) log.
• Disable the synchronous_commit parameter.
• Disable the PostgreSQL autovacuum parameter.
• Make sure that none of the tables you're importing are unlogged. Data stored in unlogged tables can
be lost during a failover. For more information, see CREATE TABLE UNLOGGED.
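These settings can be applied to a custom DB parameter group with the AWS CLI. In the following sketch, the parameter group name is a placeholder and the values are illustrative for a bulk load, not recommendations:

```shell
# Placeholder parameter group name; values are bulk-load examples only.
# maintenance_work_mem is in KB; autovacuum=0 disables autovacuum temporarily.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-bulk-load-params \
    --parameters "ParameterName=maintenance_work_mem,ParameterValue=1048576,ApplyMethod=immediate" \
                 "ParameterName=checkpoint_timeout,ParameterValue=1800,ApplyMethod=immediate" \
                 "ParameterName=synchronous_commit,ParameterValue=off,ApplyMethod=immediate" \
                 "ParameterName=autovacuum,ParameterValue=0,ApplyMethod=immediate"
```

Remember to revert these values after the load completes.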
Use the pg_dump -Fc (compressed) or pg_restore -j (parallel) commands with these settings.
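For example, a dump-and-restore sketch along these lines (host, database, and user names are placeholders; the -j value should roughly match the instance's vCPU count):

```shell
# Dump the source database in compressed custom format, then restore with parallel jobs.
pg_dump -Fc -h source.example.com -U postgres -d mydb -f mydb.dump
pg_restore -j 4 -h mydb.123456789012.us-east-1.rds.amazonaws.com \
    -U postgres -d mydb mydb.dump
```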
After the load operation completes, return your DB instance and DB parameters to their normal settings.
Autovacuum is a maintenance operation that your database administrator needs to know and
understand. For the PostgreSQL documentation on autovacuum, see The Autovacuum Daemon.
Autovacuum is not a "resource free" operation, but it works in the background and yields to user
operations as much as possible. When enabled, autovacuum checks for tables that have had a large
number of updated or deleted tuples. It also protects against loss of very old data due to transaction ID
wraparound. For more information, see Preventing transaction ID wraparound failures.
Don't think of autovacuum as a high-overhead operation that can be throttled back to gain better
performance. On the contrary, tables that have a high velocity of updates and deletes quickly
deteriorate over time if autovacuum doesn't run.
Important
Not running autovacuum can result in an eventual required outage to perform a much more
intrusive vacuum operation. In some cases, an RDS for PostgreSQL DB instance might become
unavailable because of an over-conservative use of autovacuum. In these cases, the PostgreSQL
database shuts down to protect itself. At that point, Amazon RDS must perform a single-user-
mode full vacuum directly on the DB instance. This full vacuum can result in a multi-hour
outage. Thus, we strongly recommend that you do not turn off autovacuum, which is turned on
by default.
The autovacuum parameters determine when and how hard autovacuum works.
The autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor parameters
determine when autovacuum is run. The autovacuum_max_workers, autovacuum_naptime,
autovacuum_cost_limit, and autovacuum_cost_delay parameters determine how hard
autovacuum works. For more information about autovacuum, when it runs, and what parameters are
required, see Routine Vacuuming in the PostgreSQL documentation.
The following query shows the number of "dead" tuples in a table named table1:
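A query along these lines, using the standard pg_stat_all_tables statistics view, returns that count (the table name is an example):

```sql
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_catalog.pg_stat_all_tables
WHERE relname = 'table1';
```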
• Use Amazon RDS DB events to monitor failovers. For example, you can be notified by text message
or email when a DB instance fails over. For more information about Amazon RDS events, see Working
with Amazon RDS event notification (p. 855).
• If your application caches DNS values, set time to live (TTL) to less than 30 seconds. Setting the TTL
  this low is a good practice in case there is a failover. In a failover, the IP address might change, and the
  cached value might no longer be in service.
• We recommend that you do not enable the following modes because they turn off transaction logging,
which is required for Multi-AZ:
• Simple recovery mode
• Offline mode
• Read-only mode
• Test to determine how long it takes for your DB instance to fail over. Failover time can vary due to
the type of database, the instance class, and the storage type you use. You should also test your
application's ability to continue working if a failover occurs.
• To shorten failover time, do the following:
• Ensure that you have sufficient Provisioned IOPS allocated for your workload. Inadequate I/O can
lengthen failover times. Database recovery requires I/O.
• Use smaller transactions. Database recovery relies on transactions, so if you can break up large
transactions into multiple smaller transactions, your failover time should be shorter.
• Take into consideration that during a failover, there will be elevated latencies. As part of the failover
process, Amazon RDS automatically replicates your data to a new standby instance. This replication
means that new data is being committed to two different DB instances. So there might be some
latency until the standby DB instance has caught up to the new primary DB instance.
• Deploy your applications in all Availability Zones. If an Availability Zone does go down, your
applications in the other Availability Zones will still be available.
When working with a Multi-AZ deployment of SQL Server, remember that Amazon RDS creates replicas
for all SQL Server databases on your instance. If you don't want specific databases to have secondary
replicas, set up a separate DB instance that doesn't use Multi-AZ for those databases.
For information about backing up your DB instance, see Backing up and restoring (p. 590).
To determine the preferred minor version, you can run the describe-db-engine-versions command
with the --default-only option as shown in the following example.
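For PostgreSQL, the call looks like the following (the engine name here matches the sample output):

```shell
aws rds describe-db-engine-versions \
    --default-only \
    --engine postgres
```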
{
"DBEngineVersions": [
{
"Engine": "postgres",
"EngineVersion": "12.5",
"DBParameterGroupFamily": "postgres12",
"DBEngineDescription": "PostgreSQL",
"DBEngineVersionDescription": "PostgreSQL 12.5-R1",
...some output truncated...
}
]
}
Configuring an Amazon RDS DB instance
You can configure a DB instance with an option group and a DB parameter group.
• An option group specifies features, called options, that are available for a particular Amazon RDS DB
instance.
• A DB parameter group acts as a container for engine configuration values that are applied to one or
more DB instances.
The options and parameters that are available depend on the DB engine and DB engine version. You can
specify an option group and a DB parameter group when you create a DB instance. You can also modify a
DB instance to specify them.
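For example, both containers can be created ahead of time with the CLI. In the following sketch, the names are placeholders, and the engine and family values are assumptions you would match to your own DB instance:

```shell
# Placeholder names; choose the engine and family that match your DB instance.
aws rds create-option-group \
    --option-group-name my-option-group \
    --engine-name sqlserver-se \
    --major-engine-version 15.00 \
    --option-group-description "Options for my SQL Server instances"

aws rds create-db-parameter-group \
    --db-parameter-group-name my-parameter-group \
    --db-parameter-group-family sqlserver-se-15.0 \
    --description "Parameters for my SQL Server instances"
```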
Topics
• Creating an Amazon RDS DB instance (p. 300)
• Creating Amazon RDS resources with AWS CloudFormation (p. 324)
• Connecting to an Amazon RDS DB instance (p. 325)
• Working with option groups (p. 331)
• Working with parameter groups (p. 347)
• Creating an Amazon ElastiCache cluster using Amazon RDS DB instance settings (p. 374)
Creating a DB instance
Topics
• DB instance prerequisites (p. 300)
• Creating a DB instance (p. 303)
• Settings for DB instances (p. 308)
DB instance prerequisites
Important
Before you can create an Amazon RDS DB instance, you must complete the tasks in Setting up
for Amazon RDS (p. 174).
Topics
• Configure the network for the DB instance (p. 300)
• Additional prerequisites (p. 303)
To set up connectivity between your new DB instance and an Amazon EC2 instance in the same VPC,
do so when you create the DB instance. To connect to your DB instance from resources other than EC2
instances in the same VPC, configure the network connections manually.
Topics
• Configure automatic network connectivity with an EC2 instance (p. 300)
• Configure the network manually (p. 303)
Configure automatic network connectivity with an EC2 instance
The following are requirements for connecting an EC2 instance with the DB instance:
• The EC2 instance must exist in the AWS Region before you create the DB instance.
If no EC2 instances exist in the AWS Region, the console provides a link to create one.
• The user who is creating the DB instance must have permissions to perform the following operations:
• ec2:AssociateRouteTable
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateRouteTable
• ec2:CreateSubnet
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeRouteTables
• ec2:DescribeSecurityGroups
• ec2:DescribeSubnets
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress
Using this option creates a private DB instance. The DB instance uses a DB subnet group with only private
subnets to restrict access to resources within the VPC.
To connect an EC2 instance to the DB instance, choose Connect to an EC2 compute resource in the
Connectivity section on the Create database page.
When you choose Connect to an EC2 compute resource, RDS sets the following options automatically.
You can't change these settings unless you choose not to set up connectivity with an EC2 instance by
choosing Don't connect to an EC2 compute resource.
• Network type – RDS sets the network type to IPv4. Currently, dual-stack mode isn't supported when
  you set up a connection between an EC2 instance and the DB instance.
• Virtual Private Cloud (VPC) – RDS sets the VPC to the one associated with the EC2 instance.
• DB subnet group – RDS requires a DB subnet group with a private subnet in the same Availability
  Zone as the EC2 instance. If a DB subnet group that meets this requirement exists, RDS uses it. By
  default, this option is set to Automatic setup. When a private subnet is available, RDS uses the route
  table associated with the subnet and adds any subnets it creates to this route table. When no private
  subnet is available, RDS creates a route table without internet gateway access and adds the subnets
  it creates to the route table. RDS also allows you to use existing DB subnet groups. Select Choose
  existing if you want to use an existing DB subnet group of your choice.
• Public access – RDS chooses No so that the DB instance isn't publicly accessible. For security, it is a
  best practice to keep the database private and make sure it isn't accessible from the internet.
• VPC security group (firewall) – RDS creates a new security group that is associated with the DB
  instance. The security group is named rds-ec2-n, where n is a number. This security group includes an
  inbound rule with the EC2 VPC security group (firewall) as the source, so the EC2 instance can access
  the DB instance. RDS also creates a new security group that is associated with the EC2 instance,
  named ec2-rds-n, where n is a number. This security group includes an outbound rule with the VPC
  security group of the DB instance as the source, so the EC2 instance can send traffic to the DB
  instance. You can add another new security group by choosing Create new and typing the name of
  the new security group. You can add existing security groups by choosing Choose existing and
  selecting security groups to add.
• Availability Zone – When you choose Single DB instance in Availability & durability (Single-AZ
  deployment), RDS chooses the Availability Zone of the EC2 instance.
For more information about these settings, see Settings for DB instances (p. 308).
If you change these settings after the DB instance is created, the changes might affect the connection
between the EC2 instance and the DB instance.
By default, Amazon RDS chooses an Availability Zone for the DB instance automatically. To choose
a specific Availability Zone, you need to change the Availability & durability setting to Single DB
instance. Doing so exposes an Availability Zone setting that lets you choose from among the Availability
Zones in your VPC. However, if you choose a Multi-AZ deployment, RDS chooses the Availability Zone of
the primary or writer DB instance automatically, and the Availability Zone setting doesn't appear.
Configure the network manually
In some cases, you might not have a default VPC or haven't created a VPC. In these cases, you can have
Amazon RDS automatically create a VPC for you when you create a DB instance using the console.
Otherwise, do the following:
• Create a VPC with at least one subnet in each of at least two of the Availability Zones in the AWS
Region where you want to deploy your DB instance. For more information, see Working with a DB
instance in a VPC (p. 2689) and Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706).
• Specify a VPC security group that authorizes connections to your DB instance. For more information,
see Provide access to your DB instance in your VPC by creating a security group (p. 177) and
Controlling access with security groups (p. 2680).
• Specify an RDS DB subnet group that defines at least two subnets in the VPC that can be used by the
DB instance. For more information, see Working with DB subnet groups (p. 2689).
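For example, a DB subnet group spanning two Availability Zones can be created with the CLI (the subnet IDs here are placeholders):

```shell
aws rds create-db-subnet-group \
    --db-subnet-group-name mydbsubnetgroup \
    --db-subnet-group-description "Subnets in two Availability Zones" \
    --subnet-ids subnet-0abc1234def567890 subnet-0123456789abcdef0
```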
If you want to connect to a resource that isn't in the same VPC as the DB instance, see the appropriate
scenarios in Scenarios for accessing a DB instance in a VPC (p. 2701).
Additional prerequisites
Before you create your DB instance, consider the following additional prerequisites:
• If you are connecting to AWS using AWS Identity and Access Management (IAM) credentials, your AWS
account must have certain IAM policies. These grant the permissions required to perform Amazon RDS
operations. For more information, see Identity and access management for Amazon RDS (p. 2606).
To use IAM to access the RDS console, sign in to the AWS Management Console with your IAM user
credentials. Then go to the Amazon RDS console at https://console.aws.amazon.com/rds/.
• To tailor the configuration parameters for your DB instance, specify a DB parameter group with the
required parameter settings. For information about creating or modifying a DB parameter group, see
Working with parameter groups (p. 347).
• Determine the TCP/IP port number to specify for your DB instance. The firewalls at some companies
block connections to the default ports for RDS DB instances. If your company firewall blocks the
default port, choose another port for your DB instance.
Creating a DB instance
You can create an Amazon RDS DB instance using the AWS Management Console, the AWS CLI, or the
RDS API.
Console
You can create a DB instance by using the AWS Management Console with Easy create enabled or
not enabled. With Easy create enabled, you specify only the DB engine type, DB instance size, and DB
instance identifier. Easy create uses the default settings for other configuration options. With Easy create
not enabled, you specify more configuration options when you create a database, including ones for
availability, security, backups, and maintenance.
Note
In the following procedure, Standard create is enabled, and Easy create isn't enabled. This
procedure uses Microsoft SQL Server as an example.
For examples that use Easy create to walk you through creating and connecting to sample DB
instances for each engine, see Getting started with Amazon RDS (p. 180).
To create a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database, then choose Standard create.
5. For Engine type, choose MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.
6. For Database management type, if you're using Oracle or SQL Server, choose Amazon RDS or
Amazon RDS Custom.
Amazon RDS is shown here. For more information on RDS Custom, see Working with Amazon RDS
Custom (p. 978).
7. For Edition, if you're using Oracle or SQL Server, choose the DB engine edition that you want to use.
MySQL has only one option for the edition, and MariaDB and PostgreSQL have none.
8. For Version, choose the engine version.
9. In Templates, choose the template that matches your use case. If you choose Production, settings
such as Multi-AZ deployment and Provisioned IOPS storage are preselected in a later step.
You can configure connectivity between an Amazon EC2 instance and the new DB instance during
DB instance creation. For more information, see Configure automatic network connectivity with an
EC2 instance (p. 300).
12. In the Connectivity section under VPC security group (firewall), if you select Create new, a VPC
security group is created with an inbound rule that allows your local computer's IP address to access
the database.
13. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
14. Choose Create database.
If you chose to use an automatically generated password, the View credential details button
appears on the Databases page.
To view the master user name and password for the DB instance, choose View credential details.
To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again. If you don't record it, you might have to
change it. If you need to change the master user password after the DB instance is available,
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
15. For Databases, choose the name of the new DB instance.
On the RDS console, the details for the new DB instance appear. The DB instance has a status of
Creating until the DB instance is created and ready for use. When the state changes to Available,
you can connect to the DB instance. Depending on the DB instance class and storage allocated, it can
take several minutes for the new instance to be available.
AWS CLI
To create a DB instance by using the AWS CLI, call the create-db-instance command with the following
parameters:
• --db-instance-identifier
• --db-instance-class
• --vpc-security-group-ids
• --db-subnet-group
• --engine
• --master-username
• --master-user-password
• --allocated-storage
• --backup-retention-period
For information about each setting, see Settings for DB instances (p. 308).
Example
For Windows:
aws rds create-db-instance ^
    --engine sqlserver-se ^
    --db-instance-identifier mydbinstance ^
    --allocated-storage 250 ^
    --db-instance-class db.t3.large ^
    --vpc-security-group-ids mysecuritygroup ^
    --db-subnet-group mydbsubnetgroup ^
    --master-username masterawsuser ^
    --manage-master-user-password ^
    --backup-retention-period 3
RDS API
To create a DB instance by using the Amazon RDS API, call the CreateDBInstance operation.
For information about each setting, see Settings for DB instances (p. 308).
You can create a DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.
Each of the following settings is listed with its description, the corresponding CLI option and RDS API
parameter, and the DB engines that support it.
Auto minor version upgrade
The Enable auto minor version upgrade option enables your DB instance to receive preferred minor
DB engine version upgrades automatically when they become available. Amazon RDS performs
automatic minor version upgrades in the maintenance window. For more information, see
Automatically upgrading the minor engine version (p. 431).
CLI option: --auto-minor-version-upgrade or --no-auto-minor-version-upgrade
RDS API parameter: AutoMinorVersionUpgrade
Supported DB engines: All
Availability zone
The Availability Zone for your DB instance. Use the default value of No Preference unless you want to
specify an Availability Zone. For more information, see Regions, Availability Zones, and Local Zones
(p. 110).
CLI option: --availability-zone
RDS API parameter: AvailabilityZone
Supported DB engines: All
AWS KMS key
Only available if Encryption is set to Enable encryption. Choose the AWS KMS key to use for
encrypting this DB instance. For more information, see Encrypting Amazon RDS resources (p. 2586).
CLI option: --kms-key-id
RDS API parameter: KmsKeyId
Supported DB engines: All
Backup retention period
The number of days that you want automatic backups of your DB instance to be retained. For any
nontrivial DB instance, set this value to 1 or greater.
CLI option: --backup-retention-period
RDS API parameter: BackupRetentionPeriod
Supported DB engines: All
Backup window
The time period during which Amazon RDS automatically takes a backup of your DB instance. Unless
you have a specific time that you want to have your database backed up, use the default of No
Preference.
CLI option: --preferred-backup-window
RDS API parameter: PreferredBackupWindow
Supported DB engines: All
Character set
The character set for your DB instance. The default value of AL32UTF8 for the DB character set is for
the Unicode 5.0 UTF-8 Universal character set. You can't change the DB character set after you create
the DB instance. In a single-tenant configuration, a non-default DB character set affects only the PDB,
not the CDB. For more information, see Overview of RDS for Oracle CDBs (p. 1840).
CLI option: --character-set-name
RDS API parameter: CharacterSetName
Supported DB engines: Oracle
Copy tags to snapshots
This option copies any DB instance tags to a DB snapshot when you create a snapshot.
CLI option: --copy-tags-to-snapshot
RDS API parameter: CopyTagsToSnapshot
Supported DB engines: All
Database management type
Choose Amazon RDS if you don't need to customize your environment. For the CLI and API, you
specify the database engine type.
Supported DB engines: Oracle, SQL Server
Database port
The port that you want to access the DB instance through. The default port is shown.
Note: The firewalls at some companies block connections to the default MariaDB, MySQL, and
PostgreSQL ports. If your company firewall blocks the default port, enter another port for your DB
instance.
CLI option: --port
RDS API parameter: Port
Supported DB engines: All
Engine version
The version of the DB engine to use for your DB instance.
CLI option: --engine-version
RDS API parameter: EngineVersion
Supported DB engines: All
Encryption
Choose Enable encryption to turn on encryption at rest for this DB instance.
CLI option: --storage-encrypted
RDS API parameter: StorageEncrypted
Supported DB engines: All
Initial database name
The name for the database on your DB instance. If you don't provide a name, Amazon RDS doesn't
create a database on the DB instance (except for Oracle and PostgreSQL). The name can't be a word
reserved by the database engine, and has other constraints depending on the DB engine.
CLI option: --db-name
RDS API parameter: DBName
Supported DB engines: All except SQL Server
License
The license model for your DB engine. Valid values for the license model depend on the DB engine.
CLI option: --license-model
RDS API parameter: LicenseModel
Supported DB engines: All
Log exports
The types of database log files to publish to Amazon CloudWatch Logs. For more information, see
Publishing database logs to Amazon CloudWatch Logs (p. 898).
CLI option: --enable-cloudwatch-logs-exports
RDS API parameter: EnableCloudwatchLogsExports
Supported DB engines: All
Master password
The password for your master user account. The password must contain a number of printable ASCII
characters (excluding /, ", a space, and @) that depends on the DB engine.
CLI option: --master-user-password
RDS API parameter: MasterUserPassword
Supported DB engines: All
Master username
The name that you use as the master user name to log on to your DB instance with all database
privileges.
• It can contain 1–16 alphanumeric characters and underscores.
• Its first character must be a letter.
• It can't be a word reserved by the database engine.
CLI option: --master-username
RDS API parameter: MasterUsername
Supported DB engines: All
Microsoft SQL Server Windows Authentication
Enable Microsoft SQL Server Windows authentication, then Browse Directory to choose the directory
where you want to allow authorized domain users to authenticate with this SQL Server instance using
Windows Authentication.
CLI options: --domain and --domain-iam-role-name
RDS API parameters: Domain and DomainIAMRoleName
Supported DB engines: SQL Server
National character set (NCHAR)
The national character set for your DB instance, commonly called the NCHAR character set. You can
set the national character set to either AL16UTF16 (default) or UTF-8. You can't change the national
character set after you create the DB instance.
CLI option: --nchar-character-set-name
RDS API parameter: NcharCharacterSetName
Supported DB engines: Oracle
RDS Proxy
Choose Create an RDS Proxy to create a proxy for your DB instance. Amazon RDS automatically
creates an IAM role and a Secrets Manager secret for the proxy.
CLI option and RDS API parameter: Not available when creating a DB instance.
Supported DB engines: MariaDB, MySQL, PostgreSQL
Storage type
The storage type for your DB instance. If you choose General Purpose SSD (gp3), you can provision
additional provisioned IOPS and storage throughput under Advanced settings.
CLI option: --storage-type
RDS API parameter: StorageType
Supported DB engines: All
Time zone
The time zone for your DB instance. If you don't choose a time zone, your DB instance uses the default
time zone. You can't change the time zone after the DB instance is created. For more information, see
Local time zone for Microsoft SQL Server DB instances (p. 1371).
CLI option: --timezone
RDS API parameter: Timezone
Supported DB engines: SQL Server, RDS Custom for SQL Server
Virtual Private Cloud (VPC)
A VPC based on the Amazon VPC service to associate with this DB instance. For the CLI and API, you
specify the VPC security group IDs.
Supported DB engines: All
VPC security group (firewall)
The security group to associate with the DB instance. For more information, see Overview of VPC
security groups (p. 2680).
CLI option: --vpc-security-group-ids
RDS API parameter: VpcSecurityGroupIds
Supported DB engines: All
Creating resources with AWS CloudFormation
When you use AWS CloudFormation, you can reuse your template to set up your RDS resources
consistently and repeatedly. Describe your resources once, and then provision the same resources over
and over in multiple AWS accounts and Regions.
RDS supports creating resources in AWS CloudFormation. For more information, including examples
of JSON and YAML templates for these resources, see the RDS resource type reference in the AWS
CloudFormation User Guide.
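For example, a minimal template sketch for a DB instance resource might look like the following; the property values are placeholders, not recommendations:

```yaml
Resources:
  MyDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: masterawsuser
      ManageMasterUserPassword: true
```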
• AWS CloudFormation
• AWS CloudFormation User Guide
• AWS CloudFormation API Reference
• AWS CloudFormation Command Line Interface User Guide
Connecting to a DB instance
Topics
• Finding the connection information for an Amazon RDS DB instance (p. 325)
• Database authentication options (p. 328)
• Encrypted connections (p. 329)
• Scenarios for accessing a DB instance in a VPC (p. 329)
• Connecting to a DB instance that is running a specific DB engine (p. 329)
• Managing connections with RDS Proxy (p. 330)
The endpoint is unique for each DB instance, and the values of the port and user can vary. The following
list shows the most common port for each DB engine:
• MariaDB – 3306
• Microsoft SQL Server – 1433
• MySQL – 3306
• Oracle – 1521
• PostgreSQL – 5432
To connect to a DB instance, use any client for a DB engine. For example, you might use the mysql utility
to connect to a MariaDB or MySQL DB instance. You might use Microsoft SQL Server Management Studio
to connect to a SQL Server DB instance. You might use Oracle SQL Developer to connect to an Oracle DB
instance. Similarly, you might use the psql command line utility to connect to a PostgreSQL DB instance.
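For example, connecting with the mysql utility looks like the following; the endpoint and user name are placeholders:

```shell
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
```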
To find the connection information for a DB instance, use the AWS Management Console. You can
also use the AWS Command Line Interface (AWS CLI) describe-db-instances command or the RDS API
DescribeDBInstances operation.
Console
To find the connection information for a DB instance in the AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.
5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.
AWS CLI
To find the connection information for a DB instance by using the AWS CLI, call the describe-db-
instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.
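A call along these lines returns exactly those fields for each DB instance (the --query filter is one possible choice):

```shell
aws rds describe-db-instances \
    --query "DBInstances[*].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"
```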
The command returns output similar to the following:
[
[
"mydb",
"mydb.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"myoracledb",
"myoracledb.123456789012.us-east-1.rds.amazonaws.com",
1521,
"dbadmin"
],
[
"mypostgresqldb",
"mypostgresqldb.123456789012.us-east-1.rds.amazonaws.com",
5432,
"postgresadmin"
]
]
RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.
Database authentication options
Amazon RDS supports the following ways to authenticate database users:
• Password authentication – Your DB instance performs all administration of user accounts. You create
users and specify passwords with SQL statements. The SQL statements you can use depend on your
DB engine.
• AWS Identity and Access Management (IAM) database authentication – You don't need to use a
password when you connect to a DB instance. Instead, you use an authentication token.
• Kerberos authentication – You use external authentication of database users using Kerberos and
Microsoft Active Directory. Kerberos is a network authentication protocol that uses tickets and
symmetric-key cryptography to eliminate the need to transmit passwords over the network. Kerberos
has been built into Active Directory and is designed to authenticate users to network resources, such as
databases.
IAM database authentication and Kerberos authentication are available only for specific DB engines and
versions.
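With IAM database authentication, for example, the client obtains a short-lived token instead of a password; the endpoint, user, and Region below are placeholders:

```shell
aws rds generate-db-auth-token \
    --hostname mydb.123456789012.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --username jane_doe \
    --region us-east-1
```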
For more information, see Database authentication with Amazon RDS (p. 2566).
Encrypted connections
You can use Secure Socket Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt
a connection to a DB instance. Each DB engine has its own process for implementing SSL/TLS. For more
information, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).
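As an illustration for RDS for MySQL (the certificate bundle URL and endpoint are assumptions; check your engine's documentation for the exact client options):

```shell
# Download the RDS certificate bundle, then require a verified TLS
# connection from the MySQL client.
wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com \
    --ssl-ca=global-bundle.pem \
    --ssl-mode=VERIFY_IDENTITY \
    -u admin -p
```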
A VPC security group controls access to DB instances inside a VPC. Each VPC security group rule enables
a specific source to access a DB instance in a VPC that is associated with that VPC security group. The
source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group. By
specifying a VPC security group as the source, you allow incoming traffic from all instances (typically
application servers) that use the source VPC security group.
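For example, a rule of that kind can be added with the Amazon EC2 CLI; both security group IDs below are placeholders:

```shell
# Allow inbound MySQL traffic (port 3306) to the DB instance's VPC
# security group from an application server's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0fedcba9876543210
```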
Before attempting to connect to your DB instance, configure your VPC for your use case. The following
are common scenarios for accessing a DB instance in a VPC:
• A DB instance in a VPC accessed by an Amazon EC2 instance in the same VPC – A common use of a
DB instance in a VPC is to share data with an application server that is running in an EC2 instance in
the same VPC. The EC2 instance might run a web server with an application that interacts with the DB
instance.
• A DB instance in a VPC accessed by an EC2 instance in a different VPC – In some cases, your DB
instance is in a different VPC from the EC2 instance that you're using to access it. If so, you can use VPC
peering to access the DB instance.
• A DB instance in a VPC accessed by a client application through the internet – To access a DB
instance in a VPC from a client application through the internet, you configure a VPC with a single
public subnet. You also configure an internet gateway to enable communication over the internet.
To connect to a DB instance from outside of its VPC, the DB instance must be publicly accessible.
Also, access must be granted using the inbound rules of the DB instance's security group, and
other requirements must be met. For more information, see Can't connect to Amazon RDS DB
instance (p. 2727).
• A DB instance in a VPC accessed by a private network – If your DB instance isn't publicly accessible,
you can use one of the following options to access it from a private network:
• An AWS Site-to-Site VPN connection
• An AWS Direct Connect connection
• An AWS Client VPN connection
For more information, see Scenarios for accessing a DB instance in a VPC (p. 2701).
For instructions on connecting to a DB instance that is running a specific DB engine, see the following topics:
• Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)
• Connecting to a DB instance running the MySQL database engine (p. 1630)
• Connecting to your RDS for Oracle DB instance (p. 1806)
• Connecting to a DB instance running the PostgreSQL database engine (p. 2167)
Working with option groups
Microsoft SQL Server: Options for the Microsoft SQL Server database engine (p. 1514)
PostgreSQL: PostgreSQL does not use options and option groups. PostgreSQL uses extensions and
modules to provide additional features. For more information, see Supported PostgreSQL extension
versions (p. 2156).
To associate an option group with a DB instance, modify the DB instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
Both DB instances and DB snapshots can be associated with an option group. In some cases, you might
restore from a DB snapshot or perform a point-in-time restore for a DB instance. In these cases, the
option group associated with the DB snapshot or DB instance is, by default, associated with the restored
DB instance. You can associate a different option group with a restored DB instance. However, the new
option group must contain any persistent or permanent options that were included in the original option
group. Persistent and permanent options are described following.
Options require additional memory to run on a DB instance. Thus, you might need to launch a larger
instance to use them, depending on your current use of your DB instance. For example, Oracle Enterprise
Manager Database Control uses about 300 MB of RAM. If you enable this option for a small DB instance,
you might encounter performance problems or out-of-memory errors.
Persistent options can't be removed from an option group while DB instances are associated with the
option group. An example of a persistent option is the TDE option for Microsoft SQL Server transparent
data encryption (TDE). You must disassociate all DB instances from the option group before a persistent
option can be removed from the option group. In some cases, you might restore or perform a point-in-
time restore from a DB snapshot. In these cases, if the option group associated with that DB snapshot
contains a persistent option, you can only associate the restored DB instance with that option group.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can never be removed
from an option group. You can change the option group of a DB instance that is using the permanent
option. However, the option group associated with the DB instance must include the same permanent
option. In some cases, you might restore or perform a point-in-time restore from a DB snapshot. In these
cases, if the option group associated with that DB snapshot contains a permanent option, you can only
associate the restored DB instance with an option group with that permanent option.
For Oracle DB instances, you can copy shared DB snapshots that have the options Timezone or OLS
(or both). To do so, specify a target option group that includes these options when you copy the DB
snapshot. The OLS option is permanent and persistent only for Oracle DB instances running Oracle
version 12.2 or higher. For more information about these options, see Oracle time zone (p. 2087) and
Oracle Label Security (p. 2049).
VPC considerations
The option group associated with the DB instance is linked to the DB instance's VPC. This means that you
can't use the option group assigned to a DB instance if you try to restore the instance to a different VPC.
If you restore a DB instance to a different VPC, assign it either the default option group or an option
group that is linked to that VPC. With persistent or permanent options, such as Oracle TDE, you must
create a new option group that includes the persistent or permanent option when restoring a DB
instance into a different VPC.
Option settings control the behavior of an option. For example, the Oracle Advanced Security option
NATIVE_NETWORK_ENCRYPTION has a setting that you can use to specify the encryption algorithm for
network traffic to and from the DB instance. Some option settings are optimized for use with Amazon
RDS and can't be changed.
• Oracle Enterprise Manager Database Express (p. 2035) and Oracle Management Agent for Enterprise
Manager Cloud Control (p. 2039).
• Oracle native network encryption (p. 2057) and Oracle Secure Sockets Layer (p. 2068).
Creating an option group
After you create a new option group, it has no options. To learn how to add options to the option group,
see Adding an option to an option group (p. 335). After you have added the options you want, you can
then associate the option group with a DB instance. This way, the options become available on the DB
instance. For information about associating an option group with a DB instance, see the documentation
for your engine in Working with option groups (p. 331).
Console
One way of creating an option group is by using the AWS Management Console.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:
a. For Name, type a name for the option group that is unique within your AWS account. The name
can contain only letters, digits, and hyphens.
b. For Description, type a brief description of the option group. The description is used for display
purposes.
c. For Engine, choose the DB engine that you want.
d. For Major engine version, choose the major version of the DB engine that you want.
5. To continue, choose Create. To cancel the operation instead, choose Cancel.
AWS CLI
To create an option group, use the AWS CLI create-option-group command with the following
required parameters.
• --option-group-name
• --engine-name
• --major-engine-version
• --option-group-description
Example
The following example creates an option group named testoptiongroup, which is associated with the
Oracle Enterprise Edition DB engine. The description is enclosed in quotation marks.
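The command might look like the following (the major engine version and the description text are assumptions):

```shell
# Create an option group for the Oracle Enterprise Edition engine.
aws rds create-option-group \
    --option-group-name testoptiongroup \
    --engine-name oracle-ee \
    --major-engine-version 19 \
    --option-group-description "Test option group"
```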
RDS API
To create an option group, call the Amazon RDS API CreateOptionGroup operation. Include the
following parameters:
• OptionGroupName
• EngineName
• MajorEngineVersion
• OptionGroupDescription
Copying an option group
AWS CLI
To copy an option group, use the AWS CLI copy-option-group command. Include the following required
options:
• --source-option-group-identifier
• --target-option-group-identifier
• --target-option-group-description
Example
The following example creates an option group named new-option-group, which is a local copy of the
option group my-option-group.
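A sketch of that command (the target description is an assumption):

```shell
# Copy my-option-group to a new option group named new-option-group.
aws rds copy-option-group \
    --source-option-group-identifier my-option-group \
    --target-option-group-identifier new-option-group \
    --target-option-group-description "My new option group"
```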
RDS API
To copy an option group, call the Amazon RDS API CopyOptionGroup operation. Include the following
required parameters:
• SourceOptionGroupIdentifier
• TargetOptionGroupIdentifier
• TargetOptionGroupDescription
Adding an option to an option group
Some option changes must be applied immediately in the following cases:
• When you add an option that adds or updates a port value, such as the OEM option.
• When you add or remove an option group with an option that includes a port value.
In these cases, choose the Apply Immediately option in the console. Or you can include the --apply-
immediately option when using the AWS CLI or set the ApplyImmediately parameter to true when
using the Amazon RDS API. Options that don't include port values can be applied immediately, or can be
applied during the next maintenance window for the DB instance.
Note
If you specify a security group as a value for an option in an option group, manage the security
group by modifying the option group. You can't change or remove this security group by
modifying a DB instance. Also, the security group doesn't appear in the DB instance details in
the AWS Management Console or in the output for the AWS CLI command describe-db-
instances.
Console
You can use the AWS Management Console to add an option to an option group.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you want to modify, and then choose Add option.
4. In the Add option window, do the following:
a. Choose the option that you want to add. You might need to provide additional values,
depending on the option that you select. For example, when you choose the OEM option, you
must also type a port value and specify a security group.
b. To enable the option on all associated DB instances as soon as you add it, for Apply
Immediately, choose Yes. If you choose No (the default), the option is enabled for each
associated DB instance during its next maintenance window.
5. When the settings are as you want them, choose Add option.
AWS CLI
To add an option to an option group, run the AWS CLI add-option-to-option-group command with the
option that you want to add. To enable the new option immediately on all associated DB instances,
include the --apply-immediately parameter. By default, the option is enabled for each associated DB
instance during its next maintenance window. Include the following required parameter:
• --option-group-name
Example
The following example adds the Oracle Enterprise Manager Database Control (OEM) option to an option
group named testoptiongroup and immediately enables it. Even if you use the default security group,
you must specify that security group.
For Linux, macOS, or Unix:
aws rds add-option-to-option-group \
    --option-group-name testoptiongroup \
    --options OptionName=OEM,Port=5500,DBSecurityGroupMemberships=default \
    --apply-immediately
For Windows:
aws rds add-option-to-option-group ^
    --option-group-name testoptiongroup ^
    --options OptionName=OEM,Port=5500,DBSecurityGroupMemberships=default ^
    --apply-immediately
Example
The following example adds the Oracle OEM option to an option group. It also specifies a custom port
and a pair of Amazon EC2 VPC security groups to use for that port.
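A sketch of that command; the port value and both security group IDs are illustrative:

```shell
# Add the OEM option with a custom port and two VPC security groups.
aws rds add-option-to-option-group \
    --option-group-name testoptiongroup \
    --options OptionName=OEM,Port=1234,VpcSecurityGroupMemberships="sg-0123456789abcdef0,sg-0fedcba9876543210" \
    --apply-immediately
```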
Example
The following example adds the Oracle option NATIVE_NETWORK_ENCRYPTION to an option group and
specifies the option settings. If no option settings are specified, default values are used.
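As a sketch only (the single setting shown is an assumption, and the shorthand syntax for nested option settings can vary by AWS CLI version):

```shell
# Add Oracle native network encryption with one explicit option
# setting; remaining settings fall back to their default values.
aws rds add-option-to-option-group \
    --option-group-name testoptiongroup \
    --options 'OptionName=NATIVE_NETWORK_ENCRYPTION,OptionSettings=[{Name=SQLNET.ENCRYPTION_SERVER,Value=REQUIRED}]' \
    --apply-immediately
```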
RDS API
To add an option to an option group using the Amazon RDS API, call the ModifyOptionGroup operation
with the option that you want to add. To enable the new option immediately on all associated DB
instances, include the ApplyImmediately parameter and set it to true. By default, the option is
enabled for each associated DB instance during its next maintenance window. Include the following
required parameter:
• OptionGroupName
Listing the options and option settings for an option group
Console
You can use the AWS Management Console to list all of the options and option settings for an option
group.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the name of the option group to display its details. The options and option settings in the
option group are listed.
AWS CLI
To list the options and option settings for an option group, use the AWS CLI describe-option-
groups command. Specify the name of the option group whose options and settings you want to view.
If you don't specify an option group name, all option groups are described.
Example
The following example lists the options and option settings for all option groups.
Example
The following example lists the options and option settings for an option group named
testoptiongroup.
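Both calls can be sketched as follows:

```shell
# List the options and option settings for all option groups.
aws rds describe-option-groups

# List the options and option settings for a single option group.
aws rds describe-option-groups --option-group-name testoptiongroup
```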
RDS API
To list the options and option settings for an option group, use the Amazon RDS API
DescribeOptionGroups operation. Specify the name of the option group whose options and settings
you want to view. If you don't specify an option group name, all option groups are described.
Modifying an option setting
Some option changes must be applied immediately in the following cases:
• When you add an option that adds or updates a port value, such as the OEM option.
• When you add or remove an option group with an option that includes a port value.
In these cases, choose the Apply Immediately option in the console. Or you can include the --apply-
immediately option when using the AWS CLI or set the ApplyImmediately parameter to true when
using the RDS API. Options that don't include port values can be applied immediately, or can be applied
during the next maintenance window for the DB instance.
Note
If you specify a security group as a value for an option in an option group, you manage the
security group by modifying the option group. You can't change or remove this security group
by modifying a DB instance. Also, the security group doesn't appear in the DB instance details
in the AWS Management Console or in the output for the AWS CLI command describe-db-
instances.
Console
You can use the AWS Management Console to modify an option setting.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Select the option group whose option you want to modify, and then choose Modify option.
4. In the Modify option window, from Installed Options, choose the option whose setting you want to
modify. Make the changes that you want.
5. To enable the option as soon as you add it, for Apply Immediately, choose Yes. If you choose No
(the default), the option is enabled for each associated DB instance during its next maintenance
window.
6. When the settings are as you want them, choose Modify Option.
AWS CLI
To modify an option setting, use the AWS CLI add-option-to-option-group command with the
option group and option that you want to modify. By default, the option is enabled for each associated
DB instance during its next maintenance window. To apply the change immediately to all associated
DB instances, include the --apply-immediately parameter. To modify an option setting, use the --
settings argument.
Example
The following example modifies the port that the Oracle Enterprise Manager Database Control (OEM)
uses in an option group named testoptiongroup and immediately applies the change.
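A sketch of that command (the new port value is illustrative):

```shell
# Change the OEM option's port setting and apply it immediately.
aws rds add-option-to-option-group \
    --option-group-name testoptiongroup \
    --options OptionName=OEM,Port=5501 \
    --apply-immediately
```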
Example
The following example modifies the Oracle option NATIVE_NETWORK_ENCRYPTION and changes the
option settings.
RDS API
To modify an option setting, use the Amazon RDS API ModifyOptionGroup operation with the option
group and option that you want to modify. By default, the option is enabled for each associated DB
instance during its next maintenance window. To apply the change immediately to all associated DB
instances, include the ApplyImmediately parameter and set it to true.
Removing an option from an option group
If you remove all options from an option group, Amazon RDS doesn't delete the option group. DB
instances that are associated with the empty option group continue to be associated with it; they just
won't have any active options. Alternatively, to remove all options from a DB instance, you can associate
the DB instance with the default (empty) option group.
Console
You can use the AWS Management Console to remove an option from an option group.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Select the option group whose option you want to remove, and then choose Delete option.
4. In the Delete option window, do the following:
• Select the check box for the option that you want to delete.
• For the deletion to take effect as soon as you make it, for Apply immediately, choose Yes. If you
choose No (the default), the option is deleted for each associated DB instance during its next
maintenance window.
5. When the settings are as you want them, choose Yes, Delete.
AWS CLI
To remove an option from an option group, use the AWS CLI remove-option-from-option-group
command with the option that you want to delete. By default, the option is removed from each
associated DB instance during its next maintenance window. To apply the change immediately, include
the --apply-immediately parameter.
Example
The following example removes the Oracle Enterprise Manager Database Control (OEM) option from an
option group named testoptiongroup and immediately applies the change.
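The call can be sketched as follows:

```shell
# Remove the OEM option and apply the change immediately.
aws rds remove-option-from-option-group \
    --option-group-name testoptiongroup \
    --options OEM \
    --apply-immediately
```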
RDS API
To remove an option from an option group, use the Amazon RDS API ModifyOptionGroup operation. By
default, the option is removed from each associated DB instance during its next maintenance window. To
apply the change immediately, include the ApplyImmediately parameter and set it to true. Include the
following required parameters:
• OptionGroupName
• OptionsToRemove.OptionName
Deleting an option group
You can't delete a default option group. If you try to delete an option group that is associated with an
RDS resource, an error is returned stating that the option group is in use. To find the Amazon RDS
resources associated with an option group, do the following:
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the name of the option group to show its details.
4. Check the Associated Instances and Snapshots section for the associated Amazon RDS resources.
If a DB instance is associated with the option group, modify the DB instance to use a different option
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
If a manual DB snapshot is associated with the option group, modify the DB snapshot to use a different
option group. You can do so using the AWS CLI modify-db-snapshot command.
Note
You can't modify the option group of an automated DB snapshot.
Console
One way of deleting an option group is by using the AWS Management Console.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group.
4. Choose Delete group.
5. On the confirmation page, choose Delete to finish deleting the option group, or choose Cancel to
cancel the deletion.
AWS CLI
To delete an option group, use the AWS CLI delete-option-group command with the following
required parameter.
• --option-group-name
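For example, a deletion might look like this (the group name is illustrative):

```shell
# Delete an unused option group by name.
aws rds delete-option-group \
    --option-group-name testoptiongroup
```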
RDS API
To delete an option group, call the Amazon RDS API DeleteOptionGroup operation. Include the
following parameter:
• OptionGroupName
Working with parameter groups
You manage your database configuration by associating your DB instances and Multi-AZ DB clusters with
parameter groups. Amazon RDS defines parameter groups with default settings. You can also define your
own parameter groups with customized settings.
Note
Some DB engines offer additional features that you can add to your database as options in an
option group. For information about option groups, see Working with option groups (p. 331).
Topics
• Overview of parameter groups (p. 347)
• Working with DB parameter groups (p. 349)
• Working with DB cluster parameter groups for Multi-AZ DB clusters (p. 360)
• Comparing parameter groups (p. 368)
• Specifying DB parameters (p. 369)
Overview of parameter groups
Topics
• Default and custom parameter groups (p. 347)
• Static and dynamic DB instance parameters (p. 348)
• Static and dynamic DB cluster parameters (p. 348)
• Character set parameters (p. 349)
• Supported parameters and parameter values (p. 349)
You can't modify the parameter settings of a default parameter group. Instead, create your own custom
parameter group and change its settings.
Note
If you have modified your DB instance to use a custom parameter group, and you start the DB
instance, RDS automatically reboots the DB instance as part of the startup process.
If you update parameters within a DB parameter group, the changes apply to all DB instances that are
associated with that parameter group. Likewise, if you update parameters within a Multi-AZ DB cluster
parameter group, the changes apply to all Multi-AZ DB clusters that are associated with that DB cluster
parameter group.
If you don't want to create a parameter group from scratch, you can copy an existing parameter group
with the AWS CLI copy-db-parameter-group command or copy-db-cluster-parameter-group command.
You might find that copying a parameter group is useful in some cases. For example, you might want to
include most of an existing DB parameter group's custom parameters and values in a new DB parameter
group.
• When you change a static parameter and save the DB parameter group, the parameter change takes
effect after you manually reboot the associated DB instances. For static parameters, the console always
uses pending-reboot for the ApplyMethod.
• When you change a dynamic parameter, by default the parameter change takes effect immediately,
without requiring a reboot. When you use the AWS Management Console to change DB instance
parameter values, it always uses immediate for the ApplyMethod for dynamic parameters. To defer
the parameter change until after you reboot an associated DB instance, use the AWS CLI or RDS API.
Set the ApplyMethod to pending-reboot for the parameter change.
Note
Using pending-reboot with dynamic parameters in the AWS CLI or RDS API on RDS for SQL
Server DB instances generates an error. Use apply-immediately on RDS for SQL Server.
For more information about using the AWS CLI to change a parameter value, see modify-db-
parameter-group. For more information about using the RDS API to change a parameter value, see
ModifyDBParameterGroup.
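For example, deferring a dynamic parameter change can be sketched as follows (the parameter group name and value are illustrative):

```shell
# Defer a dynamic parameter change until the next reboot by setting
# ApplyMethod to pending-reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --parameters "ParameterName=max_connections,ParameterValue=250,ApplyMethod=pending-reboot"
```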
When you associate a new DB parameter group with a DB instance, RDS applies the modified static
and dynamic parameters only after the DB instance is rebooted. However, if you modify dynamic
parameters in the DB parameter group after you associate it with the DB instance, these changes are
applied immediately without a reboot. For more information about changing the DB parameter group,
see Modifying an Amazon RDS DB instance (p. 401).
If a DB instance isn't using the latest changes to its associated DB parameter group, the console shows
a status of pending-reboot for the DB parameter group. This status doesn't result in an automatic
reboot during the next maintenance window. To apply the latest parameter changes to that DB instance,
manually reboot the DB instance.
• When you change a static parameter and save the DB cluster parameter group, the parameter change
takes effect after you manually reboot the associated DB clusters. For static parameters, the console
always uses pending-reboot for the ApplyMethod.
• When you change a dynamic parameter, by default the parameter change takes effect immediately,
without requiring a reboot. When you use the AWS Management Console to change DB cluster
parameter values, it always uses immediate for the ApplyMethod for dynamic parameters. To defer
the parameter change until after an associated DB cluster is rebooted, use the AWS CLI or RDS API. Set
the ApplyMethod to pending-reboot for the parameter change.
For more information about using the AWS CLI to change a parameter value, see modify-db-cluster-
parameter-group. For more information about using the RDS API to change a parameter value, see
ModifyDBClusterParameterGroup.
For some DB engines, you can change character set or collation values for an existing database using the
ALTER DATABASE command, for example:
For more information about changing the character set or collation values for a database, check the
documentation for your DB engine.
In many cases, you can specify integer and Boolean parameter values using expressions, formulas, and
functions. Functions can include a mathematical log expression. However, not all parameters support
expressions, formulas, and functions for parameter values. For more information, see Specifying DB
parameters (p. 369).
Improperly setting parameters in a parameter group can have unintended adverse effects, including
degraded performance and system instability. Always be cautious when modifying database parameters,
and back up your data before modifying a parameter group. Try parameter group setting changes on
a test DB instance or DB cluster before applying those parameter group changes to a production DB
instance or DB cluster.
Working with DB parameter groups
Topics
• Creating a DB parameter group (p. 350)
• Associating a DB parameter group with a DB instance (p. 351)
• Modifying parameters in a DB parameter group (p. 352)
• Resetting parameters in a DB parameter group to their default values (p. 354)
• Copying a DB parameter group (p. 356)
Creating a DB parameter group
Default parameter group names can include a period, such as default.mysql8.0. However, custom
parameter group names can't include a period. In addition, custom names have the following constraints:
• The first character must be a letter.
• The name can't end with a hyphen or contain two consecutive hyphens.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
AWS CLI
To create a DB parameter group, use the AWS CLI create-db-parameter-group command. The
following example creates a DB parameter group named mydbparametergroup for MySQL version 8.0
with a description of "My new parameter group."
• --db-parameter-group-name
• --db-parameter-group-family
• --description
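Putting the required options together, the example can be sketched as follows:

```shell
# Create a DB parameter group for the MySQL 8.0 family.
aws rds create-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --db-parameter-group-family mysql8.0 \
    --description "My new parameter group"
```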
To list all of the available parameter group families, use the following command:
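A sketch of that command (the `--query` filter pulls the family field from each engine version):

```shell
# List the DB parameter group family of every available engine version.
aws rds describe-db-engine-versions \
    --query "DBEngineVersions[].DBParameterGroupFamily"
```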
Note
The output contains duplicates.
RDS API
To create a DB parameter group, use the RDS API CreateDBParameterGroup operation with the
following required parameters:
• DBParameterGroupName
• DBParameterGroupFamily
• Description
Associating a DB parameter group with a DB instance
For information about creating a DB parameter group, see Creating a DB parameter group (p. 350).
For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300). For
information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
Note
When you associate a new DB parameter group with a DB instance, the modified static and
dynamic parameters are applied only after the DB instance is rebooted. However, if you modify
dynamic parameters in the DB parameter group after you associate it with the DB instance,
these changes are applied immediately without a reboot.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify. The Modify DB Instance page appears.
4. Change the DB parameter group setting.
AWS CLI
To associate a DB parameter group with a DB instance, use the AWS CLI modify-db-instance
command with the following options:
• --db-instance-identifier
• --db-parameter-group-name
The following example associates the mydbpg DB parameter group with the database-1 DB
instance. The changes are applied immediately by using --apply-immediately. Use --no-apply-
immediately to apply the changes during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).
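The call can be sketched as follows:

```shell
# Associate the mydbpg DB parameter group with the database-1 DB
# instance and apply the change immediately.
aws rds modify-db-instance \
    --db-instance-identifier database-1 \
    --db-parameter-group-name mydbpg \
    --apply-immediately
```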
RDS API
To associate a DB parameter group with a DB instance, use the RDS API ModifyDBInstance operation
with the following parameters:
• DBInstanceIdentifier
• DBParameterGroupName
Modifying parameters in a DB parameter group
Changes to some parameters are applied to the DB instance immediately without a reboot. Changes
to other parameters are applied only after the DB instance is rebooted. The RDS console shows the
status of the DB parameter group associated with a DB instance on the Configuration tab. For example,
suppose that the DB instance isn't using the latest changes to its associated DB parameter group. If so,
the RDS console shows the DB parameter group with a status of pending-reboot. To apply the latest
parameter changes to that DB instance, manually reboot the DB instance.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group that you want to modify.
4. For Parameter group actions, choose Edit.
5. Change the values of the parameters that you want to modify. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.
AWS CLI
To modify a DB parameter group, use the AWS CLI modify-db-parameter-group command with the
following required options:
• --db-parameter-group-name
• --parameters
The following example modifies the max_connections and max_allowed_packet values in the DB
parameter group named mydbparametergroup.
Example
"ParameterName=max_allowed_packet,ParameterValue=1024,ApplyMethod=immediate"
For Windows:
"ParameterName=max_allowed_packet,ParameterValue=1024,ApplyMethod=immediate"
DBPARAMETERGROUP mydbparametergroup
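A sketch of the Linux, macOS, or Unix form of this command; the max_connections value of 250 is illustrative, and on Windows the backslashes become carets:

```shell
# Modify two parameters in mydbparametergroup; ApplyMethod=immediate applies
# dynamic parameters without waiting for a reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --parameters "ParameterName=max_connections,ParameterValue=250,ApplyMethod=immediate" \
                 "ParameterName=max_allowed_packet,ParameterValue=1024,ApplyMethod=immediate"
```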
RDS API
To modify a DB parameter group, use the RDS API ModifyDBParameterGroup operation with the
following required parameters:
• DBParameterGroupName
• Parameters
When you use the console, you can reset specific parameters to their default values. However, you can't
easily reset all of the parameters in the DB parameter group at once. When you use the AWS CLI or RDS
API, you can reset specific parameters to their default values. You can also reset all of the parameters in
the DB parameter group at once.
Changes to some parameters are applied to the DB instance immediately without a reboot. Changes
to other parameters are applied only after the DB instance is rebooted. The RDS console shows the
status of the DB parameter group associated with a DB instance on the Configuration tab. For example,
suppose that the DB instance isn't using the latest changes to its associated DB parameter group. If so,
the RDS console shows the DB parameter group with a status of pending-reboot. To apply the latest
parameter changes to that DB instance, manually reboot the DB instance.
Note
In a default DB parameter group, parameters are always set to their default values.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group.
4. For Parameter group actions, choose Edit.
5. Choose the parameters that you want to reset to their default values. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.
AWS CLI
To reset some or all of the parameters in a DB parameter group, use the AWS CLI reset-db-
parameter-group command with the following required option: --db-parameter-group-name.
To reset all of the parameters in the DB parameter group, specify the --reset-all-parameters
option. To reset specific parameters, specify the --parameters option.
The following example resets all of the parameters in the DB parameter group named
mydbparametergroup to their default values.
Example
For Linux, macOS, or Unix:
For Windows:
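A sketch of the Linux, macOS, or Unix form of this command (on Windows, replace the backslashes with carets):

```shell
# Reset every parameter in mydbparametergroup to its engine default
aws rds reset-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --reset-all-parameters
```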
The following example resets the max_connections and max_allowed_packet options to their
default values in the DB parameter group named mydbparametergroup.
Example
For Linux, macOS, or Unix:
For Windows:
DBParameterGroupName mydbparametergroup
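A sketch of the Linux, macOS, or Unix form for resetting only the two parameters named above (on Windows, replace the backslashes with carets):

```shell
# Reset only max_connections and max_allowed_packet to their defaults,
# applying the change immediately for dynamic parameters.
aws rds reset-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --parameters "ParameterName=max_connections,ApplyMethod=immediate" \
                 "ParameterName=max_allowed_packet,ApplyMethod=immediate"
```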
RDS API
To reset parameters in a DB parameter group to their default values, use the RDS
API ResetDBParameterGroup operation with the following required parameter:
DBParameterGroupName.
To reset all of the parameters in the DB parameter group, set the ResetAllParameters parameter to
true. To reset specific parameters, specify the Parameters parameter.
You can copy custom DB parameter groups that you create. You can copy a DB parameter
group by using the AWS Management Console. You can also use the AWS CLI copy-db-parameter-group
command or the RDS API CopyDBParameterGroup operation.
After you copy a DB parameter group, wait at least 5 minutes before creating your first DB instance that
uses that DB parameter group as the default parameter group. Doing this allows Amazon RDS to fully
complete the copy action before the parameter group is used. This is especially important for parameters
that are critical when creating the default database for a DB instance. An example is the character set
for the default database defined by the character_set_database parameter. Use the Parameter
Groups option of the Amazon RDS console or the describe-db-parameters command to verify that your
DB parameter group is created.
Note
You can't copy a default parameter group. However, you can create a new parameter group that
is based on a default parameter group.
You can't copy a DB parameter group to a different AWS account or AWS Region.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the custom parameter group that you want to copy.
4. For Parameter group actions, choose Copy.
5. In New DB parameter group identifier, enter a name for the new parameter group.
6. In Description, enter a description for the new parameter group.
7. Choose Copy.
AWS CLI
To copy a DB parameter group, use the AWS CLI copy-db-parameter-group command with the
following required options:
• --source-db-parameter-group-identifier
• --target-db-parameter-group-identifier
• --target-db-parameter-group-description
The following example creates a new DB parameter group named mygroup2 that is a copy of the DB
parameter group mygroup1.
Example
For Linux, macOS, or Unix:
For Windows:
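A sketch of the Linux, macOS, or Unix form of this command; the description text is illustrative (on Windows, replace the backslashes with carets):

```shell
# Copy mygroup1 to a new DB parameter group named mygroup2
aws rds copy-db-parameter-group \
    --source-db-parameter-group-identifier mygroup1 \
    --target-db-parameter-group-identifier mygroup2 \
    --target-db-parameter-group-description "DB parameter group 2"
```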
RDS API
To copy a DB parameter group, use the RDS API CopyDBParameterGroup operation with the following
required parameters:
• SourceDBParameterGroupIdentifier
• TargetDBParameterGroupIdentifier
• TargetDBParameterGroupDescription
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
AWS CLI
To list all DB parameter groups for an AWS account, use the AWS CLI describe-db-parameter-
groups command.
Example
The following example lists all available DB parameter groups for an AWS account.
For Windows:
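A sketch of the command, which takes no required options:

```shell
# List every DB parameter group in the account for the current Region
aws rds describe-db-parameter-groups
```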
RDS API
To list all DB parameter groups for an AWS account, use the RDS API DescribeDBParameterGroups
operation.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
AWS CLI
To view the parameter values for a DB parameter group, use the AWS CLI describe-db-parameters
command with the following required parameter.
• --db-parameter-group-name
Example
The following example lists the parameters and parameter values for a DB parameter group named
mydbparametergroup.
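A sketch of the command, using the group name above:

```shell
# Show every parameter and its current value in mydbparametergroup
aws rds describe-db-parameters --db-parameter-group-name mydbparametergroup
```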
RDS API
To view the parameter values for a DB parameter group, use the RDS API DescribeDBParameters
operation with the following required parameter.
Working with DB cluster parameter groups
• DBParameterGroupName
Topics
• Creating a DB cluster parameter group (p. 360)
• Modifying parameters in a DB cluster parameter group (p. 362)
• Resetting parameters in a DB cluster parameter group (p. 363)
• Copying a DB cluster parameter group (p. 364)
• Listing DB cluster parameter groups (p. 366)
• Viewing parameter values for a DB cluster parameter group (p. 367)
After you create a DB cluster parameter group, wait at least 5 minutes before creating a DB cluster that
uses that DB cluster parameter group. Doing this allows Amazon RDS to fully create the parameter group
before it is used by the new DB cluster. You can use the Parameter groups page in the Amazon RDS
console or the describe-db-cluster-parameters command to verify that your DB cluster parameter group
is created.
Default parameter group names can include a period, such as default.aurora-mysql5.7. However,
custom parameter group names can't include a period.
Custom parameter group names are also subject to the following constraints:
• The first character must be a letter.
• The name can't end with a hyphen or contain two consecutive hyphens.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
8. Choose Create.
AWS CLI
To create a DB cluster parameter group, use the AWS CLI create-db-cluster-parameter-group
command with the following required options:
• --db-cluster-parameter-group-name
• --db-parameter-group-family
• --description
The following example creates a DB cluster parameter group named mydbclusterparametergroup for RDS
for MySQL version 8.0 with a description of "My new cluster parameter group."
To list all of the available parameter group families, use the following command:
Note
The output contains duplicates.
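A sketch of that listing command, assuming the describe-db-engine-versions query form:

```shell
# List the DB parameter group family of every available engine version;
# the same family appears once per engine version, hence the duplicates.
aws rds describe-db-engine-versions \
    --query "DBEngineVersions[].DBParameterGroupFamily" \
    --output text
```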
Example
For Windows:
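A sketch of the Linux, macOS, or Unix form of this command, using the values from the example above (on Windows, replace the backslashes with carets):

```shell
# Create a DB cluster parameter group for the mysql8.0 family
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name mydbclusterparametergroup \
    --db-parameter-group-family mysql8.0 \
    --description "My new cluster parameter group"
```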
{
    "DBClusterParameterGroup": {
        "DBClusterParameterGroupName": "mydbclusterparametergroup",
        "DBParameterGroupFamily": "mysql8.0",
        "Description": "My new cluster parameter group",
        "DBClusterParameterGroupArn": "arn:aws:rds:us-east-1:123456789012:cluster-pg:mydbclusterparametergroup"
    }
}
RDS API
To create a DB cluster parameter group, use the RDS API CreateDBClusterParameterGroup operation with the following required parameters:
• DBClusterParameterGroupName
• DBParameterGroupFamily
• Description
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group that you want to modify.
4. For Parameter group actions, choose Edit.
5. Change the values of the parameters you want to modify. You can scroll through the parameters
using the arrow keys at the top right of the dialog box.
AWS CLI
To modify a DB cluster parameter group, use the AWS CLI modify-db-cluster-parameter-group
command with the following required options:
• --db-cluster-parameter-group-name
• --parameters
Example
"ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"
For Windows:
--db-cluster-parameter-group-name mydbclusterparametergroup ^
--parameters
"ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" ^
"ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"
DBCLUSTERPARAMETERGROUP mydbclusterparametergroup
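A sketch of the Linux, macOS, or Unix form of this command, using the parameter settings shown in the Windows fragment above:

```shell
# Turn on two Aurora MySQL audit parameters in mydbclusterparametergroup,
# applying the change immediately.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name mydbclusterparametergroup \
    --parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"
```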
RDS API
To modify a DB cluster parameter group, use the RDS API ModifyDBClusterParameterGroup
operation with the following required parameters:
• DBClusterParameterGroupName
• Parameters
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group.
4. For Parameter group actions, choose Edit.
5. Choose the parameters that you want to reset to their default values. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.
AWS CLI
To reset parameters in a DB cluster parameter group to their default values, use the AWS CLI reset-
db-cluster-parameter-group command with the following required option: --db-cluster-
parameter-group-name.
To reset all of the parameters in the DB cluster parameter group, specify the --reset-all-
parameters option. To reset specific parameters, specify the --parameters option.
The following example resets all of the parameters in the DB cluster parameter group named
mydbclusterparametergroup to their default values.
Example
For Linux, macOS, or Unix:
For Windows:
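A sketch of the Linux, macOS, or Unix form of this command (on Windows, replace the backslashes with carets):

```shell
# Reset every parameter in mydbclusterparametergroup to its engine default
aws rds reset-db-cluster-parameter-group \
    --db-cluster-parameter-group-name mydbclusterparametergroup \
    --reset-all-parameters
```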
The following example resets the server_audit_logging and server_audit_logs_upload
parameters in the DB cluster parameter group named mydbclusterparametergroup to their default
values.
Example
For Linux, macOS, or Unix:
For Windows:
"ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"
DBClusterParameterGroupName mydbclusterparametergroup
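A sketch of the Linux, macOS, or Unix form for resetting specific parameters; the parameter names are taken from the fragment above (on Windows, replace the backslashes with carets):

```shell
# Reset only the two audit parameters to their defaults
aws rds reset-db-cluster-parameter-group \
    --db-cluster-parameter-group-name mydbclusterparametergroup \
    --parameters "ParameterName=server_audit_logging,ApplyMethod=immediate" \
                 "ParameterName=server_audit_logs_upload,ApplyMethod=immediate"
```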
RDS API
To reset parameters in a DB cluster parameter group to their default values, use the RDS
API ResetDBClusterParameterGroup operation with the following required parameter:
DBClusterParameterGroupName.
To reset all of the parameters in the DB cluster parameter group, set the ResetAllParameters
parameter to true. To reset specific parameters, specify the Parameters parameter.
After you copy a DB cluster parameter group, wait at least 5 minutes before creating a DB cluster that
uses that DB cluster parameter group. Doing this allows Amazon RDS to fully copy the parameter group
before it is used by the new DB cluster. You can use the Parameter groups page in the Amazon RDS
console or the describe-db-cluster-parameters command to verify that your DB cluster parameter group
is created.
Note
You can't copy a default parameter group. However, you can create a new parameter group that
is based on a default parameter group.
You can't copy a DB cluster parameter group to a different AWS account or AWS Region.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the custom parameter group that you want to copy.
4. For Parameter group actions, choose Copy.
5. In New DB parameter group identifier, enter a name for the new parameter group.
6. In Description, enter a description for the new parameter group.
7. Choose Copy.
AWS CLI
To copy a DB cluster parameter group, use the AWS CLI copy-db-cluster-parameter-group
command with the following required options:
• --source-db-cluster-parameter-group-identifier
• --target-db-cluster-parameter-group-identifier
• --target-db-cluster-parameter-group-description
The following example creates a new DB cluster parameter group named mygroup2 that is a copy of the
DB cluster parameter group mygroup1.
Example
For Windows:
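A sketch of the Linux, macOS, or Unix form of this command; the description text is illustrative (on Windows, replace the backslashes with carets):

```shell
# Copy mygroup1 to a new DB cluster parameter group named mygroup2
aws rds copy-db-cluster-parameter-group \
    --source-db-cluster-parameter-group-identifier mygroup1 \
    --target-db-cluster-parameter-group-identifier mygroup2 \
    --target-db-cluster-parameter-group-description "DB cluster parameter group 2"
```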
RDS API
To copy a DB cluster parameter group, use the RDS API CopyDBClusterParameterGroup operation
with the following required parameters:
• SourceDBClusterParameterGroupIdentifier
• TargetDBClusterParameterGroupIdentifier
• TargetDBClusterParameterGroupDescription
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
The DB cluster parameter groups appear in the list with DB cluster parameter group for Type.
AWS CLI
To list all DB cluster parameter groups for an AWS account, use the AWS CLI describe-db-cluster-
parameter-groups command.
Example
The following example lists all available DB cluster parameter groups for an AWS account.
For Windows:
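A sketch of the command, which takes no required options:

```shell
# List every DB cluster parameter group in the account for the current Region
aws rds describe-db-cluster-parameter-groups
```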
{
    "DBClusterParameterGroups": [
        {
            "DBClusterParameterGroupName": "mydbclusterparametergroup2",
            "DBParameterGroupFamily": "mysql8.0",
            ...
RDS API
To list all DB cluster parameter groups for an AWS account, use the RDS API
DescribeDBClusterParameterGroups action.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
The DB cluster parameter groups appear in the list with DB cluster parameter group for Type.
3. Choose the name of the DB cluster parameter group to see its list of parameters.
AWS CLI
To view the parameter values for a DB cluster parameter group, use the AWS CLI describe-db-
cluster-parameters command with the following required parameter.
• --db-cluster-parameter-group-name
Example
The following example lists the parameters and parameter values for a DB cluster parameter group
named mydbclusterparametergroup, in JSON format.
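A sketch of the Linux, macOS, or Unix form of this command (on Windows, replace the backslash with a caret):

```shell
# Show every parameter and its current value in mydbclusterparametergroup
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name mydbclusterparametergroup
```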
{
    "Parameters": [
        {
            "ParameterName": "activate_all_roles_on_login",
            "ParameterValue": "0",
            "Description": "Automatically set all granted roles as active after the user has authenticated successfully.",
            "Source": "engine-default",
            "ApplyType": "dynamic",
            "DataType": "boolean",
            "AllowedValues": "0,1",
            "IsModifiable": true,
            "ApplyMethod": "pending-reboot",
"SupportedEngineModes": [
"provisioned"
]
},
{
"ParameterName": "allow-suspicious-udfs",
"Description": "Controls whether user-defined functions that have only an xxx
symbol for the main function can be loaded",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": false,
"ApplyMethod": "pending-reboot",
"SupportedEngineModes": [
"provisioned"
]
},
...
RDS API
To view the parameter values for a DB cluster parameter group, use the RDS API
DescribeDBClusterParameters operation with the following required parameter.
• DBClusterParameterGroupName
In some cases, the allowed values for a parameter aren't shown. These are always parameters where the
source is the database engine default.
To view the values of these parameters, you can run the following SQL statements:
• MySQL:
• PostgreSQL:
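The statements above can be sketched with the standard clients; the endpoint, user names, and parameter names here are illustrative placeholders, not values from this guide:

```shell
# MySQL: query the engine's current value of a parameter (example: max_connections)
mysql -h mydb.0123456789ab.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SELECT @@max_connections;"

# PostgreSQL: SHOW reports the engine's current setting (example: shared_buffers)
psql -h mydb.0123456789ab.us-east-1.rds.amazonaws.com -U postgres \
    -c "SHOW shared_buffers;"
```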
Comparing parameter groups
You can use the AWS Management Console to view the differences between two parameter groups.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the two parameter groups that you want to compare.
4. For Parameter group actions, choose Compare.
Specifying DB parameters
DB parameter types include the following:
• Integer
• Boolean
• String
• Long
• Double
• Timestamp
• Object of other defined data types
• Array of values of type integer, Boolean, string, long, double, timestamp, or object
You can also specify integer and Boolean parameters using expressions, formulas, and functions.
For the Oracle engine, you can use the DBInstanceClassHugePagesDefault formula variable to
specify a Boolean DB parameter. See DB parameter formula variables (p. 370).
For the PostgreSQL engine, you can use an expression to specify a Boolean DB parameter. See Boolean
DB parameter expressions (p. 371).
Contents
• DB parameter formulas (p. 369)
• DB parameter formula variables (p. 370)
• DB parameter formula operators (p. 370)
• DB parameter functions (p. 371)
• Boolean DB parameter expressions (p. 371)
• DB parameter log expressions (p. 372)
• DB parameter value examples (p. 373)
DB parameter formulas
A DB parameter formula is an expression that resolves to an integer value or a Boolean value. You
enclose the expression in braces: {}. You can use a formula for either a DB parameter value or as an
argument to a DB parameter function.
Syntax
{FormulaVariable}
{FormulaVariable*Integer}
{FormulaVariable*Integer/Integer}
{FormulaVariable/Integer}
AllocatedStorage
Returns an integer representing the size, in bytes, of the data volume.
DBInstanceClassHugePagesDefault
Returns a Boolean value. Currently, it's only supported for Oracle engines.
For more information, see Turning on HugePages for an RDS for Oracle instance (p. 1942).
DBInstanceClassMemory
Returns an integer for the number of bytes of memory available to the database process. This
number is internally calculated by starting with the total amount of memory for the DB instance
class. From this, the calculation subtracts memory reserved for the operating system and the RDS
processes that manage the instance. Therefore, the number is always somewhat lower than the
memory figures shown in the instance class tables in DB instance classes (p. 11). The exact value
depends on a combination of factors. These include instance class, DB engine, and whether it applies
to an RDS instance or an instance that's part of an Aurora cluster.
DBInstanceVCPU
Returns an integer representing the number of virtual central processing units (vCPUs) used by
Amazon RDS to manage the instance. Currently, it's only supported for the PostgreSQL engine.
EndPointPort
Returns an integer representing the port used when connecting to the DB instance.
Division operator: /
Divides the dividend by the divisor, returning an integer quotient. Decimals in the quotient are
truncated, not rounded.
Syntax
dividend / divisor
Multiplication operator: *
Multiplies the expressions, returning the product of the expressions. Decimals in the expressions are
truncated, not rounded.
Syntax
expression * expression
DB parameter functions
You specify the arguments of DB parameter functions as either integers or formulas. Each function must
have at least one argument. Specify multiple arguments as a comma-separated list. The list can't have
any empty members, such as argument1,,argument3. Function names are case-insensitive.
IF
Returns an argument.
Currently, it's only supported for Oracle engines, and the only supported first argument is
{DBInstanceClassHugePagesDefault}. For more information, see Turning on HugePages for an
RDS for Oracle instance (p. 1942).
Syntax
IF(argument1, argument2, argument3)
Returns the second argument if the first argument evaluates to true. Returns the third argument
otherwise.
GREATEST
Returns the largest value from a list of integers or parameter formulas.
Syntax
GREATEST(argument1, argument2,...argumentn)
Returns an integer.
LEAST
Returns the smallest value from a list of integers or parameter formulas.
Syntax
LEAST(argument1, argument2,...argumentn)
Returns an integer.
SUM
Returns the sum of the values of integers or parameter formulas.
Syntax
SUM(argument1, argument2,...argumentn)
Returns an integer.
Boolean DB parameter expressions
A Boolean DB parameter expression resolves to a Boolean value of true or false.
The following Boolean DB parameter expression example compares the result of a parameter formula
with an integer. It does so to modify the Boolean DB parameter wal_compression for a PostgreSQL DB
instance. The parameter expression compares the number of vCPUs with the value 2. If the number of
vCPUs is greater than 2, then the wal_compression DB parameter is set to true.
DB parameter log expressions
You can set an integer DB parameter value to a log expression. For example, you can set the MySQL
innodb_log_file_size parameter to the following value:
{log(DBInstanceClassMemory/8187281418)*1000}
The log function represents log base 2. This example also uses the DBInstanceClassMemory formula
variable. See DB parameter formula variables (p. 370).
Note
Currently, you can't specify the MySQL innodb_log_file_size parameter with any value
other than an integer.
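A formula is passed literally as the parameter value, and RDS resolves it per instance class. An illustrative sketch (the parameter choice and the {DBInstanceClassMemory*3/4} sizing are examples, not values from this guide):

```shell
# Set innodb_buffer_pool_size to three quarters of the instance class memory;
# RDS evaluates the formula in braces when the parameter is applied.
aws rds modify-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --parameters "ParameterName=innodb_buffer_pool_size,ParameterValue={DBInstanceClassMemory*3/4},ApplyMethod=pending-reboot"
```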
DB parameter value examples
You can specify the GREATEST function in an Oracle processes parameter. Use it to set the number of
user processes to the larger of either 80 or DBInstanceClassMemory divided by 9,868,951.
GREATEST({DBInstanceClassMemory/9868951},80)
You can specify the LEAST function in a MySQL max_binlog_cache_size parameter value. Use
it to set the maximum cache size a transaction can use in a MySQL instance to the lesser of
DBInstanceClassMemory/256 or 10 MB (10,485,760 bytes).
LEAST({DBInstanceClassMemory/256},10485760)
Creating an ElastiCache cluster from Amazon RDS
Amazon ElastiCache works with both the Redis and Memcached engines. If you're unsure which engine
you want to use, see Comparing Memcached and Redis. For more information about Amazon ElastiCache,
see the Amazon ElastiCache User Guide.
Topics
• Overview of ElastiCache cluster creation with RDS DB instance settings (p. 374)
• Creating an ElastiCache cluster with settings from a new RDS DB instance (p. 375)
• Creating an ElastiCache cluster with settings from an existing RDS DB instance (p. 377)
Using ElastiCache with RDS offers the following benefits:
• You can save costs and improve your performance by using ElastiCache with RDS versus running on
RDS alone.
For example, you can save up to 55% in cost and gain up to 80x faster read performance by using
ElastiCache with RDS for MySQL versus RDS for MySQL alone.
• You can use the ElastiCache cluster as a primary data store for applications that don't require data
durability. Your applications that use Redis or Memcached can use ElastiCache with almost no
modification.
When you create an ElastiCache cluster from RDS, the ElastiCache cluster inherits the following settings
from the associated RDS DB instance:
You can also set the cluster configuration settings according to your requirements.
Creating an ElastiCache cluster with settings from a new RDS DB instance
• To access your ElastiCache cluster and get started, see Getting started with Amazon ElastiCache for
Redis and Getting started with Amazon ElastiCache for Memcached.
• For more information about caching strategies, see Caching strategies and best practices for
Memcached and Caching strategies and best practices for Redis.
• For more information about high availability in ElastiCache for Redis clusters, see High availability
using replication groups.
• You might incur costs associated with backup storage, data transfer within or across regions, or use of
AWS Outposts. For pricing details, see Amazon ElastiCache pricing.
In the Suggested add-ons window, you can create an ElastiCache cluster from RDS with the same
settings as your newly created RDS DB instance.
1. To create a DB instance, follow the instructions in Creating an Amazon RDS DB instance (p. 300).
2. After creating a new RDS DB instance, the console displays the Suggested add-ons window. Select
Create an ElastiCache cluster from RDS using your DB settings.
In the ElastiCache configuration section, the Source DB identifier displays which DB instance the
ElastiCache cluster inherits settings from.
3. Choose whether you want to create a Redis or Memcached cluster. For more information, see
Comparing Memcached and Redis.
If you choose Redis cluster, then choose whether you want to keep the cluster mode Enabled or
Disabled. For more information, see Replication: Redis (Cluster Mode Disabled) vs. Redis (Cluster
Mode Enabled).
4. For Engine version, the recommended default value is the latest engine version. You can also choose
an Engine version for the ElastiCache cluster that best meets your requirements.
5. Choose the node type in the Node type option. For more information, see Managing nodes.
If you choose to create a Redis cluster with the Cluster mode set to Enabled, then enter the number
of shards (partitions/node groups) in the Number of shards option.
6. RDS automatically fills the Port and the Network type. ElastiCache creates an equivalent Subnet
group from the source database. To customize these settings, select Customize your connectivity
settings.
7. ElastiCache provides the default values for Encryption at rest, Encryption key, Encryption in
transit, Access control, and Security groups. To customize these settings, select Customize your
security settings.
8. Verify the default and inherited settings of your ElastiCache cluster. Some settings can't be changed
after creation.
Note
RDS might adjust the backup window of your ElastiCache cluster to meet the minimum
window requirement of 60 minutes. The backup window of your source database remains
the same.
9. When you're ready, choose Create ElastiCache cluster.
The console displays a confirmation banner for the ElastiCache cluster creation. Follow the link in the
banner to the ElastiCache console to view the cluster details. The ElastiCache console displays the newly
created ElastiCache cluster.
Creating an ElastiCache cluster with settings from an existing RDS DB instance
In the ElastiCache configuration section, the Source DB identifier shows which DB instance the
ElastiCache cluster inherits settings from.
3. Choose whether you want to create a Redis or Memcached cluster. For more information, see
Comparing Memcached and Redis.
If you choose Redis cluster, then choose whether you want to keep the cluster mode Enabled or
Disabled. For more information, see Replication: Redis (Cluster Mode Disabled) vs. Redis (Cluster
Mode Enabled).
4. For Engine version, the recommended default value is the latest engine version. You can also choose
an Engine version for the ElastiCache cluster that best meets your requirements.
5. Choose the node type in the Node type option. For more information, see Managing nodes.
If you choose to create a Redis cluster with the Cluster mode set to Enabled, then enter the number
of shards (partitions/node groups) in the Number of shards option.
6. RDS automatically fills the Port and the Network type. ElastiCache creates an equivalent Subnet
group from the source database. To customize these settings, select Customize your connectivity
settings.
7. ElastiCache provides the default values for Encryption at rest, Encryption key, Encryption in
transit, Access control, and Security groups. To customize these settings, select Customize your
security settings.
8. Verify the default and inherited settings of your ElastiCache cluster. Some settings can't be changed
after creation.
Note
RDS might adjust the backup window of your ElastiCache cluster to meet the minimum
window requirement of 60 minutes. The backup window of your source database remains
the same.
9. When you're ready, choose Create ElastiCache cluster.
The console displays a confirmation banner for the ElastiCache cluster creation. Follow the link in the
banner to the ElastiCache console to view the cluster details. The ElastiCache console displays the newly
created ElastiCache cluster.
Managing an Amazon RDS DB instance
Topics
• Stopping an Amazon RDS DB instance temporarily (p. 381)
• Starting an Amazon RDS DB instance that was previously stopped (p. 384)
• Automatically connecting an AWS compute resource and a DB instance (p. 385)
• Modifying an Amazon RDS DB instance (p. 401)
• Maintaining a DB instance (p. 418)
• Upgrading a DB instance engine version (p. 429)
• Renaming a DB instance (p. 434)
• Rebooting a DB instance (p. 436)
• Working with DB instance read replicas (p. 438)
• Tagging Amazon RDS resources (p. 461)
• Working with Amazon Resource Names (ARNs) in Amazon RDS (p. 471)
• Working with storage for Amazon RDS DB instances (p. 478)
• Deleting a DB instance (p. 489)
Stopping an Amazon RDS DB instance temporarily
You can stop a DB instance temporarily if it uses one of the following DB engines:
• MariaDB
• Microsoft SQL Server, including RDS Custom for SQL Server
• MySQL
• Oracle
• PostgreSQL
Stopping and starting a DB instance is supported for all DB instance classes, and in all AWS Regions.
For a Multi-AZ deployment, a long time might be required to stop a DB instance. If you have at least one
backup after a previous failover, then you can speed up the stop DB instance operation. To do so, before
stopping the DB instance, perform a reboot with failover operation.
The status of the DB instance changes to stopped. Consider the following characteristics of the
stopped state:
• Any storage volumes remain attached to the DB instance, and their data is kept. RDS deletes any
data stored in the RAM of the DB instance.
• RDS removes pending actions, except for pending actions for the option group or DB parameter
group of the DB instance.
Benefits
• If you don't manually start your DB instance after it is stopped for seven consecutive days,
RDS automatically starts your DB instance for you. This way, it doesn't fall behind any required
maintenance updates. To learn how to stop and start your instance on a schedule, see How can I use
Step Functions to stop an Amazon RDS instance for longer than 7 days?.
Occasionally, an RDS for PostgreSQL DB instance doesn't shut down cleanly. If this happens, you see that
the instance goes through a recovery process when you restart it later. This is expected behavior of the
database engine, intended to protect database integrity. Some memory-based statistics and counters
don't retain history and are re-initialized after restart, to capture the operational workload moving
forward.
While a DB instance is stopped, the following are retained:
• Instance ID
• Domain Name Server (DNS) endpoint
• Parameter group
• Security group
• Option group
• Amazon S3 transaction logs (necessary for a point-in-time restore)
When you restart a DB instance, it has the same configuration as when you stopped it.
The following limitations apply to a stopped DB instance:
• You can't stop a DB instance that has a read replica, or that is a read replica.
• You can't modify a stopped DB instance.
• You can't delete an option group that is associated with a stopped DB instance.
• You can't delete a DB parameter group that is associated with a stopped DB instance.
• In a Multi-AZ deployment, the primary and secondary Availability Zones might be switched after you
start the DB instance.
Additional limitations apply to RDS Custom for SQL Server. For more information, see Starting and
stopping an RDS Custom for SQL Server DB instance (p. 1146).
You can change the option group or DB parameter group that is associated with a stopped DB instance.
However, the change doesn't occur until the next time you start the DB instance. If you chose to apply
changes immediately, the change occurs when you start the DB instance. Otherwise the change occurs
during the next maintenance window after you start the DB instance.
Public IP address
When you stop a DB instance, it retains its DNS endpoint. If you stop a DB instance that has a public IP
address, Amazon RDS releases its public IP address. When the DB instance is restarted, it has a different
public IP address.
Note
You should always connect to a DB instance using the DNS endpoint, not the IP address.
Console
To stop a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to stop.
3. For Actions, choose Stop temporarily.
4. In the Stop DB instance temporarily window, select the acknowledgement that the DB instance will
restart automatically after 7 days.
5. (Optional) Select Save the DB instance in a snapshot and enter the snapshot name for Snapshot
name. Choose this option if you want to create a snapshot of the DB instance before stopping it.
6. Choose Stop temporarily to stop the DB instance, or choose Cancel to cancel the operation.
AWS CLI
To stop a DB instance by using the AWS CLI, call the stop-db-instance command with the
--db-instance-identifier option.
Example
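For example, the following command stops a DB instance and creates a snapshot before stopping. The identifiers mydbinstance and mydbsnapshot are placeholders, and the --db-snapshot-identifier option is optional.

```shell
# Stop the DB instance named "mydbinstance" (a placeholder identifier),
# creating a snapshot named "mydbsnapshot" before it stops.
aws rds stop-db-instance \
    --db-instance-identifier mydbinstance \
    --db-snapshot-identifier mydbsnapshot
```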
RDS API
To stop a DB instance by using the Amazon RDS API, call the StopDBInstance operation with the
DBInstanceIdentifier parameter.
Starting a DB instance
When you start a DB instance that you previously stopped, the DB instance retains certain
information: the instance ID, Domain Name System (DNS) endpoint, parameter group, security group,
and option group. When you start a stopped instance, you are charged a full instance hour.
Console
To start a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to start.
3. For Actions, choose Start.
AWS CLI
To start a DB instance by using the AWS CLI, call the start-db-instance command with the
--db-instance-identifier option.
Example
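A minimal sketch of the call, with mydbinstance as a placeholder identifier:

```shell
# Start the previously stopped DB instance.
aws rds start-db-instance \
    --db-instance-identifier mydbinstance
```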
RDS API
To start a DB instance by using the Amazon RDS API, call the StartDBInstance operation with the
DBInstanceIdentifier parameter.
Connecting an AWS compute resource
Topics
• Automatically connecting an EC2 instance and a DB instance (p. 385)
• Automatically connecting a Lambda function and a DB instance (p. 392)
If you want to connect to an EC2 instance that isn't in the same VPC as the DB instance, see the scenarios
in Scenarios for accessing a DB instance in a VPC (p. 2701).
Topics
• Overview of automatic connectivity with an EC2 instance (p. 386)
Connecting an EC2 instance
The following are requirements for connecting an EC2 instance with an RDS database:
• The EC2 instance must exist in the same VPC as the RDS database.
If no EC2 instances exist in the same VPC, then the console provides a link to create one.
• The user who sets up connectivity must have permissions to perform the following Amazon EC2
operations:
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeSecurityGroups
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress
If the DB instance and EC2 instance are in different Availability Zones, your account may incur cross-
Availability Zone costs.
When you set up a connection to an EC2 instance, Amazon RDS acts according to the current
configuration of the security groups associated with the RDS database and EC2 instance, as described
in the following scenarios.

Scenario 1: RDS takes no action
• Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
• Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the RDS database as the source.
• RDS action: RDS takes no action. A connection was already configured automatically between the EC2 instance and the RDS database. Because a connection already exists between the EC2 instance and the RDS database, the security groups aren't modified.
Scenario 2: RDS creates new security groups
• Current RDS security group configuration: Either of the following conditions apply:
  • There is no security group associated with the RDS database with a name that matches the pattern rds-ec2-n.
  • There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. However, Amazon RDS can't use any of these security groups for the connection with the EC2 instance. Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. Amazon RDS also can't use a security group that has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.
• Current EC2 security group configuration: Either of the following conditions apply:
  • There is no security group associated with the EC2 instance with a name that matches the pattern ec2-rds-n.
  • There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, Amazon RDS can't use any of these security groups for the connection with the RDS database. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the RDS database as the source. Amazon RDS also can't use a security group that has been modified.
• RDS action: RDS creates new security groups.
Scenario 3: RDS creates new security groups
• Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
• Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, Amazon RDS can't use any of these security groups for the connection with the RDS database. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the RDS database as the source. Amazon RDS also can't use a security group that has been modified.
• RDS action: RDS creates new security groups.
Scenario 4: RDS associates the EC2 security group
• Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
• Current EC2 security group configuration: A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern ec2-rds-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the RDS database as the source.
• RDS action: RDS associates the EC2 security group.
Scenario 5: RDS creates new security groups
• Current RDS security group configuration: Either of the following conditions apply:
  • There is no security group associated with the RDS database with a name that matches the pattern rds-ec2-n.
  • There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. However, Amazon RDS can't use any of these security groups for the connection with the EC2 instance. Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. Amazon RDS also can't use a security group that has been modified.
• Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the RDS database as the source.
• RDS action: RDS creates new security groups.
When the RDS action is to create new security groups, Amazon RDS does the following:
• Creates a new security group that matches the pattern rds-ec2-n. This security group has an inbound rule with the VPC security group of the EC2 instance as the source. This security group is associated with the RDS database and allows the EC2 instance to access the RDS database.
• Creates a new security group that matches the pattern ec2-rds-n. This security group has an outbound rule with the VPC security group of the RDS database as the source. This security group is associated with the EC2 instance and allows the EC2 instance to send traffic to the RDS database.
When the RDS action is to associate the EC2 security group, Amazon RDS associates the valid,
existing EC2 security group with the EC2 instance. This security group allows the EC2 instance to
send traffic to the RDS database.
If you make changes to security groups after you configure connectivity, the changes might affect the
connection between the EC2 instance and the RDS database.
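To inspect the security groups that automatic connectivity created, you can filter by the naming patterns described above. The following is an illustrative sketch; the wildcard filter values are derived from the documented patterns, not exact group names:

```shell
# List security groups whose names follow the automatic-connectivity
# patterns rds-ec2-n and ec2-rds-n.
aws ec2 describe-security-groups \
    --filters "Name=group-name,Values=rds-ec2-*,ec2-rds-*" \
    --query "SecurityGroups[].GroupName"
```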
Note
You can only set up a connection between an EC2 instance and an RDS database automatically
by using the AWS Management Console. You can't set up a connection automatically with the
AWS CLI or RDS API.
To automatically connect an EC2 instance and an RDS database
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS database.
3. From Actions, choose Set up EC2 connection.
4. On the Set up EC2 connection page, choose the EC2 instance that you want to connect.
If no EC2 instances exist in the same VPC, choose Create EC2 instance to create one. In this case,
make sure the new EC2 instance is in the same VPC as the RDS database.
5. Choose Continue.
6. On the Review and confirm page, review the changes that RDS will make to set up connectivity with
the EC2 instance.
You can set up connectivity between an EC2 instance and an RDS database in the following ways:
• You can select the compute resource when you create the database.
For more information, see Creating an Amazon RDS DB instance (p. 300) and Creating a Multi-AZ DB
cluster (p. 508).
• You can set up connectivity between an existing database and a compute resource.
For more information, see Automatically connecting an EC2 instance and an RDS database (p. 388).
The listed compute resources don't include ones that were connected to the database manually. For
example, you can allow a compute resource to access a database manually by adding a rule to the VPC
security group associated with the database.
For the console to list an EC2 instance as a connected compute resource, the following conditions
must apply:
• The name of the security group associated with the compute resource matches the pattern ec2-rds-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port that the RDS database uses.
• The security group associated with the compute resource has an outbound rule with the source set to a
security group associated with the RDS database.
• The name of the security group associated with the RDS database matches the pattern rds-ec2-n
(where n is a number).
• The security group associated with the RDS database has an inbound rule with the port range set to
the port that the RDS database uses.
• The security group associated with the RDS database has an inbound rule with the source set to a
security group associated with the compute resource.
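The naming convention above can also be checked programmatically. The following sketch is illustrative only; the is_console_managed helper is hypothetical and not part of any AWS tooling:

```python
import re

# Hypothetical helper (not part of any AWS tooling): checks whether a security
# group name follows the automatic-connectivity naming patterns.
RDS_SIDE = re.compile(r"^rds-ec2-\d+$")   # group attached to the RDS database
EC2_SIDE = re.compile(r"^ec2-rds-\d+$")   # group attached to the EC2 instance

def is_console_managed(name: str) -> bool:
    """Return True if the name matches either automatic-connectivity pattern."""
    return bool(RDS_SIDE.match(name) or EC2_SIDE.match(name))
```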
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the name of the RDS database.
3. On the Connectivity & security tab, view the compute resources under Connected compute
resources.
Connecting a Lambda function
For instructions on setting up a connection between a Lambda function and a Multi-AZ DB cluster, see
the section called “Connecting a Lambda function and a Multi-AZ DB cluster” (p. 530).
The following image shows a direct connection between your DB instance and your Lambda function.
You can set up the connection between your Lambda function and your DB instance through RDS
Proxy to improve your database performance and resiliency. Often, Lambda functions make frequent,
short database connections that benefit from connection pooling that RDS Proxy offers. You can take
advantage of any AWS Identity and Access Management (IAM) authentication that you already have for
Lambda functions, instead of managing database credentials in your Lambda application code. For more
information, see Using Amazon RDS Proxy (p. 1199).
When you use the console to connect with an existing proxy, Amazon RDS updates the proxy security
group to allow connections from your DB instance and Lambda function.
You can also create a new proxy from the same console page. When you create a proxy in the console,
to access the DB instance, you must input your database credentials or select an AWS Secrets Manager
secret.
Topics
• Overview of automatic connectivity with a Lambda function (p. 393)
• Automatically connecting a Lambda function and an RDS database (p. 399)
• Viewing connected compute resources (p. 400)
The following are requirements for connecting a Lambda function and a DB instance:
• The Lambda function must exist in the same VPC as the DB instance.
• The user who sets up connectivity must have permissions to perform the following Amazon RDS,
Amazon EC2, Lambda, Secrets Manager, and IAM operations:
• Amazon RDS
• rds:CreateDBProxies
• rds:DescribeDBInstances
• rds:DescribeDBProxies
• rds:ModifyDBInstance
• rds:ModifyDBProxy
• rds:RegisterProxyTargets
• Amazon EC2
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DeleteSecurityGroup
• ec2:DescribeSecurityGroups
• ec2:RevokeSecurityGroupEgress
• ec2:RevokeSecurityGroupIngress
• Lambda
• lambda:CreateFunction
• lambda:ListFunctions
• lambda:UpdateFunctionConfiguration
• Secrets Manager
• secretsmanager:CreateSecret
• secretsmanager:DescribeSecret
• IAM
• iam:AttachPolicy
• iam:CreateRole
• iam:CreatePolicy
• AWS KMS
• kms:DescribeKey
Note
If the DB instance and Lambda function are in different Availability Zones, your account might
incur cross-Availability Zone costs.
When you set up a connection between a Lambda function and an RDS database, Amazon RDS
configures the VPC security group for your function and for your DB instance. If you use RDS Proxy,
then Amazon RDS also configures the VPC security group for the proxy. Amazon RDS acts according
to the current configuration of the security groups associated with the DB instance, Lambda
function, and proxy, as described in the following scenarios.

Scenario 1: Amazon RDS takes no action
• Current RDS security group configuration: There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n. Or, if a proxy is already connected to your DB instance, RDS checks if the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
• Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with either the VPC security group of the DB instance or the proxy as the destination.
• Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security groups of the Lambda function and the DB instance.
• RDS action: Amazon RDS takes no action. A connection was already configured automatically between the Lambda function, the proxy (optional), and the DB instance. Because a connection already exists between the function, proxy, and the database, the security groups aren't modified.
Scenario 2: RDS creates new security groups
• Current RDS security group configuration: Either of the following conditions apply:
  • There is no security group associated with the DB instance with a name that matches the pattern rds-lambda-n, or if the TargetHealth of an associated proxy is AVAILABLE.
  • There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n, or if the TargetHealth of an associated proxy is AVAILABLE. However, none of these security groups can be used for the connection with the Lambda function.
  Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.
• Current Lambda security group configuration: Either of the following conditions apply:
  • There is no security group associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n.
  • There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance.
  Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB instance or proxy as the destination. Amazon RDS also can't use a security group that has been modified.
• Current proxy security group configuration: Either of the following conditions apply:
  • There is no security group associated with the proxy with a name that matches the pattern rdsproxy-lambda-n.
  • There are one or more security groups associated with the proxy with a name that matches rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance or Lambda function.
  Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB instance and the Lambda function. Amazon RDS also can't use a security group that has been modified.
• RDS action: RDS creates new security groups.
Scenario 3: RDS creates new security groups
• Current RDS security group configuration: There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
• Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB instance or proxy as the destination. Amazon RDS also can't use a security group that has been modified.
• Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance or Lambda function. Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB instance and the Lambda function. Amazon RDS also can't use a security group that has been modified.
• RDS action: RDS creates new security groups.
Scenario 4: RDS associates the Lambda security group
• Current RDS security group configuration: There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
• Current Lambda security group configuration: A valid Lambda security group for the connection exists, but it isn't associated with the Lambda function. This security group has a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the DB instance or proxy as the destination.
• Current proxy security group configuration: A valid proxy security group for the connection exists, but it isn't associated with the proxy. This security group has a name that matches the pattern rdsproxy-lambda-n. It hasn't been modified. It has inbound and outbound rules with the VPC security group of the DB instance and the Lambda function.
• RDS action: RDS associates the Lambda security group.
Scenario 5: RDS creates new security groups
• Current RDS security group configuration: Either of the following conditions apply:
  • There is no security group associated with the DB instance with a name that matches the pattern rds-lambda-n, or if the TargetHealth of an associated proxy is AVAILABLE.
  • There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n, or if the TargetHealth of an associated proxy is AVAILABLE. However, Amazon RDS can't use any of these security groups for the connection with the Lambda function or proxy.
• Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the DB instance or proxy as the destination.
• Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n. A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security group of the DB instance and the Lambda function.
• RDS action: RDS creates new security groups.
When the RDS action is to create new security groups, Amazon RDS does the following:
• Creates a new security group that matches the pattern rds-lambda-n or rds-rdsproxy-n (if you choose to use RDS Proxy). This security group has an inbound rule with the VPC security group of the Lambda function or proxy as the source. This security group is associated with the DB instance and allows the function or proxy to access the DB instance.
• Creates a new security group that matches the pattern lambda-rds-n or lambda-rdsproxy-n. This security group has an outbound rule with the VPC security group of the DB instance or proxy as the destination. This security group is associated with the Lambda function and allows the function to send traffic to the DB instance or send traffic through a proxy.
• Creates a new security group that matches the pattern rdsproxy-lambda-n. This security group has inbound and outbound rules with the VPC security group of the DB instance and the Lambda function.
When the RDS action is to associate the Lambda security group, Amazon RDS associates the valid,
existing Lambda security group with the Lambda function. This security group allows the function to
send traffic to the DB instance or send traffic through a proxy.
You can also use RDS Proxy to include a proxy in your connection. Lambda functions make frequent
short database connections that benefit from the connection pooling that RDS Proxy offers. You can also
use any IAM authentication that you've already set up for your Lambda function, instead of managing
database credentials in your Lambda application code.
You can connect an existing DB instance to new and existing Lambda functions using the Set up Lambda
connection page. The setup process automatically sets up the required security groups for you.
Before setting up a connection between a Lambda function and a DB instance, make sure that they meet the requirements described in Overview of automatic connectivity with a Lambda function (p. 393).
If you change security groups after you configure connectivity, the changes might affect the connection
between the Lambda function and the DB instance.
Note
You can automatically set up a connection between a DB instance and a Lambda function only
in the AWS Management Console. To connect a Lambda function, the DB instance must be in the
Available state.
After you confirm the setup, Amazon RDS begins the process of connecting your Lambda function, RDS
Proxy (if you used a proxy), and DB instance. The console shows the Connection details dialog box,
which lists the security group changes that allow connections between your resources.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
connect to a Lambda function.
3. For Actions, choose Set up Lambda connection.
4. On the Set up Lambda connection page, under Select Lambda function, do either of the following:
• If you have an existing Lambda function in the same VPC as your DB instance, choose Choose
existing function, and then choose the function.
• If you don't have a Lambda function in the same VPC, choose Create new function, and then
enter a Function name. The default runtime is set to Node.js 18. You can modify the settings for
your new Lambda function in the Lambda console after you complete the connection setup.
5. (Optional) Under RDS Proxy, select Connect using RDS Proxy, and then do any of the following:
• If you have an existing proxy that you want to use, choose Choose existing proxy, and then
choose the proxy.
• If you don't have a proxy, and you want Amazon RDS to automatically create one for you,
choose Create new proxy. Then, for Database credentials, do either of the following:
a. Choose Database username and password, and then enter the Username and Password
for your DB instance.
b. Choose Secrets Manager secret. Then, for Select secret, choose an AWS Secrets Manager
secret. If you don't have a Secrets Manager secret, choose Create new Secrets Manager
secret to create a new secret. After you create the secret, for Select secret, choose the new
secret.
After you create the new proxy, choose Choose existing proxy, and then choose the proxy. Note
that it might take some time for your proxy to be available for connection.
6. (Optional) Expand Connection summary and verify the highlighted updates for your resources.
7. Choose Set up.
The listed compute resources don't include those that are manually connected to the DB instance. For
example, you can allow a compute resource to access your DB instance manually by adding a rule to your
VPC security group associated with the database.
For the console to list a Lambda function, the following conditions must apply:
• The name of the security group associated with the compute resource matches the pattern lambda-
rds-n or lambda-rdsproxy-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port of the DB instance or an associated proxy. The destination for the outbound rule must be
set to a security group associated with the DB instance or an associated proxy.
• If the configuration includes a proxy, the name of the security group attached to the proxy associated
with your database matches the pattern rdsproxy-lambda-n (where n is a number).
• The security group associated with the function has an outbound rule with the port set to the port
that the DB instance or associated proxy uses. The destination must be set to a security group
associated with the DB instance or associated proxy.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance.
3. On the Connectivity & security tab, view the compute resources under Connected compute
resources.
Modifying a DB instance
We recommend that you test any changes on a test instance before modifying a production instance.
Doing this helps you to fully understand the impact of each change. Testing is especially important when
upgrading database versions.
You can apply most modifications to a DB instance immediately or defer them until the next
maintenance window. Some modifications, such as parameter group changes, require that you manually
reboot your DB instance for the change to take effect.
Important
Some modifications result in downtime because Amazon RDS must reboot your DB instance
for the change to take effect. Review the impact to your database and applications before
modifying your DB instance settings.
Console
To modify a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify. The Modify DB instance page appears.
4. Change any of the settings that you want. For information about each setting, see Settings for DB
instances (p. 402).
5. When all the changes are as you want them, choose Continue and check the summary of
modifications.
6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option
can cause downtime in some cases. For more information, see Using the Apply Immediately
setting (p. 402).
7. On the confirmation page, review your changes. If they are correct, choose Modify DB instance to
save your changes.
AWS CLI
To modify a DB instance by using the AWS CLI, call the modify-db-instance command. Specify the DB
instance identifier and the values for the options that you want to modify. For information about each
option, see Settings for DB instances (p. 402).
Example
The following code modifies mydbinstance by setting the backup retention period to 1 week (7
days). The code enables deletion protection by using --deletion-protection. To disable deletion
protection, use --no-deletion-protection. The changes are applied during the next maintenance
window by using --no-apply-immediately. Use --apply-immediately to apply the changes
immediately. For more information, see Using the Apply Immediately setting (p. 402).
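A minimal sketch of that command follows; the instance name mydbinstance comes from the description above. For Linux, macOS, or Unix:

```shell
# Set the backup retention period to 7 days, enable deletion protection,
# and defer the changes to the next maintenance window.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --backup-retention-period 7 \
    --deletion-protection \
    --no-apply-immediately
```

On Windows, replace the backslash line continuations with carets (^).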
RDS API
To modify a DB instance by using the Amazon RDS API, call the ModifyDBInstance operation. Specify
the DB instance identifier, and the parameters for the settings that you want to modify. For information
about each parameter, see Settings for DB instances (p. 402).
Using the Apply Immediately setting
If you don't choose to apply changes immediately, the changes are put into the pending modifications
queue. During the next maintenance window, any pending changes in the queue are applied. If you
choose to apply changes immediately, your new changes and any changes in the pending modifications
queue are applied.
Important
If any of the pending modifications require the DB instance to be temporarily unavailable
(downtime), choosing the apply immediately option can cause unexpected downtime.
When you choose to apply a change immediately, any pending modifications are also applied
immediately, instead of during the next maintenance window.
If you don't want a pending change to be applied in the next maintenance window, you
can modify the DB instance to revert the change. You can do this by using the AWS CLI and
specifying the --apply-immediately option.
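For example, if a DB instance class change is pending, a command along the following lines reverts it by setting the class back to its current value and applying the change immediately. This is a sketch; mydbinstance and db.t3.medium are placeholders for your instance name and its current class:

```shell
# Revert a pending instance class change by modifying the class back
# to its current value and applying immediately.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.t3.medium \
    --apply-immediately
```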
Changes to some database settings are applied immediately, even if you choose to defer your changes.
To see how the different database settings interact with the apply immediately setting, see Settings for
DB instances (p. 402).
You can modify a DB instance using the console, the modify-db-instance CLI command, or the
ModifyDBInstance RDS API operation.
Available settings

The following settings are available when you modify a DB instance. For each setting, this list shows the console description, the corresponding CLI option and RDS API parameter, when the change occurs, downtime notes, and the supported DB engines.
Auto minor version upgrade

Yes to enable your DB instance to receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window. Otherwise, No. For more information, see Automatically upgrading the minor engine version (p. 431).

CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
RDS API parameter: AutoMinorVersionUpgrade
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Backup retention period

The number of days that automatic backups are retained. To disable automatic backups, set the backup retention period to 0. For more information, see Working with backups (p. 591).

Note
If you use AWS Backup to manage your backups, this option doesn't apply. For information about AWS Backup, see the AWS Backup Developer Guide.

CLI option: --backup-retention-period
RDS API parameter: BackupRetentionPeriod
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, and you change the setting from a nonzero value to another nonzero value, the change is applied asynchronously, as soon as possible. Otherwise, the change occurs during the next maintenance window.
Downtime notes: Downtime occurs if you change from 0 to a nonzero value, or from a nonzero value to 0. This applies to both Single-AZ and Multi-AZ DB instances.
Supported DB engines: All DB engines
Copy tags to snapshots

If you have any DB instance tags, enable this option to copy them when you create a DB snapshot. For more information, see Tagging Amazon RDS resources (p. 461).

CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
RDS API parameter: CopyTagsToSnapshot
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Database port

The port that you want to use to access the DB instance. The port value must not match any of the port values specified for options in the option group that is associated with the DB instance.

CLI option: --db-port-number
RDS API parameter: DBPortNumber
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: The DB instance is rebooted immediately.
Supported DB engines: All DB engines
DB parameter group

The DB parameter group that you want associated with the DB instance.

CLI option: --db-parameter-group-name
RDS API parameter: DBParameterGroupName
When the change occurs: The parameter group change occurs immediately.
Downtime notes: Downtime doesn't occur during this change. However, parameter changes are applied only after you manually reboot the DB instance. For more information, see Working with parameter groups (p. 347) and Rebooting a DB instance (p. 436).
Supported DB engines: All DB engines
Deletion protection

Enable deletion protection to prevent your DB instance from being deleted. For more information, see Deleting a DB instance (p. 489).

CLI option: --deletion-protection | --no-deletion-protection
RDS API parameter: DeletionProtection
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Enhanced Monitoring

Enable Enhanced Monitoring to enable gathering metrics in real time for the operating system that your DB instance runs on.

CLI options: --monitoring-interval and --monitoring-role-arn
RDS API parameters: MonitoringInterval and MonitoringRoleArn
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Log exports

The types of database log files to publish to Amazon CloudWatch Logs. For more information, see Publishing database logs to Amazon CloudWatch Logs (p. 898).

CLI option: --cloudwatch-logs-export-configuration
RDS API parameter: CloudwatchLogsExportConfiguration
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Maintenance window

The time range during which system maintenance occurs. System maintenance includes upgrades, if applicable. The maintenance window is a start time in Universal Coordinated Time (UTC), and a duration in hours. If you set the window to the current time, there must be at least 30 minutes between the current time and the end of the window. This timing helps ensure that any pending changes are applied.

CLI option: --preferred-maintenance-window
RDS API parameter: PreferredMaintenanceWindow
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: If there are one or more pending actions that cause downtime, and the maintenance window is changed to include the current time, those pending actions are applied immediately and downtime occurs.
Supported DB engines: All DB engines
Manage master credentials in AWS Secrets Manager

Select Manage master credentials in AWS Secrets Manager to manage the master user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account. If RDS is already managing the master user password for the DB instance, you can rotate the master user password by choosing Rotate secret immediately. For more information, see Password management with Amazon RDS and AWS Secrets Manager (p. 2568).

CLI options: --manage-master-user-password | --no-manage-master-user-password, --master-user-secret-kms-key-id, --rotate-master-user-password | --no-rotate-master-user-password
RDS API parameters: ManageMasterUserPassword, MasterUserSecretKmsKeyId, RotateMasterUserPassword
When the change occurs: If you are turning on or turning off automatic master user password management, the change occurs immediately. This change ignores the apply immediately setting. If you are rotating the master user password, you must specify that the change is applied immediately.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
New master password

The password for your master user. The password must contain 8–41 alphanumeric characters.

CLI option: --master-user-password
RDS API parameter: MasterUserPassword
When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Performance Insights

Enable Performance Insights to monitor your DB instance load so that you can analyze and troubleshoot your database performance. Performance Insights isn't available for some DB engine versions and DB instance classes. The Performance Insights section doesn't appear in the console if it isn't available for your DB instance.

CLI option: --enable-performance-insights | --no-enable-performance-insights
RDS API parameter: EnablePerformanceInsights
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Performance Insights AWS KMS key

The AWS KMS key identifier for the AWS KMS key for encryption of Performance Insights data. The key identifier is the Amazon Resource Name (ARN), AWS KMS key identifier, or the key alias for the KMS key.

CLI option: --performance-insights-kms-key-id
RDS API parameter: PerformanceInsightsKMSKeyId
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Performance Insights Retention period

The amount of time, in days, to retain Performance Insights data. The retention setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months. For more information about retention periods, see Pricing and data retention for Performance Insights (p. 726).

CLI option: --performance-insights-retention-period
RDS API parameter: PerformanceInsightsRetentionPeriod
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Processor features

RDS API parameters: ProcessorFeatures and UseDefaultProcessorFeatures
Public access

Publicly accessible to give the DB instance a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB instance also has to be in a public subnet in the VPC. Not publicly accessible to make the DB instance accessible only from inside the VPC.

CLI option: --publicly-accessible | --no-publicly-accessible
RDS API parameter: PubliclyAccessible
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Storage autoscaling

Enable storage autoscaling to enable Amazon RDS to automatically increase storage when needed to avoid having your DB instance run out of storage space. Use Maximum storage threshold to set the upper limit for Amazon RDS to automatically increase storage for your DB instance. The default is 1,000 GiB.

CLI option: --max-allocated-storage
RDS API parameter: MaxAllocatedStorage
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
Maintaining a DB instance
Periodically, Amazon RDS performs maintenance on Amazon RDS resources. Maintenance most often
involves updates to the following resources in your DB instance:
• Underlying hardware
• Underlying operating system (OS)
• Database engine version
Updates to the operating system most often occur for security issues. You should do them as soon as
possible.
Some maintenance items require that Amazon RDS take your DB instance offline for a short time.
Maintenance items that require a resource to be offline include required operating system or database
patching. Required patching is automatically scheduled only for patches that are related to security
and instance reliability. Such patching occurs infrequently, typically once every few months. It seldom
requires more than a fraction of your maintenance window.
Deferred DB instance modifications that you have chosen not to apply immediately are also applied
during the maintenance window. For example, you might choose to change the DB instance class
or parameter group during the maintenance window. Such modifications that you specify using
the pending reboot setting don't show up in the Pending maintenance list. For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
Topics
• Viewing pending maintenance (p. 418)
• Applying updates for a DB instance (p. 421)
• Maintenance for Multi-AZ deployments (p. 422)
• The Amazon RDS maintenance window (p. 423)
• Adjusting the preferred DB instance maintenance window (p. 424)
• Working with operating system updates (p. 426)
Viewing pending maintenance
You can view whether a maintenance update is available for your DB instance in the Maintenance column on the Databases page in the console. If no maintenance update is available for a DB instance, the column value is none for it.
If a maintenance update is available for a DB instance, the following column values are possible:
• required – The maintenance action will be applied to the resource and can't be deferred indefinitely.
• available – The maintenance action is available, but it will not be applied to the resource
automatically. You can apply it manually.
• next window – The maintenance action will be applied to the resource during the next maintenance
window.
• In progress – The maintenance action is in the process of being applied to the resource.
If a maintenance update is available, you can take one of the following actions:
• If the maintenance value is next window, defer the maintenance items by choosing Defer upgrade from Actions. You can't defer a maintenance action if it has already started.
• Apply the maintenance items immediately.
• Schedule the maintenance items to start during your next maintenance window.
• Take no action.
To take an action, choose the DB instance to show its details, then choose Maintenance & backups. The
pending maintenance items appear.
The maintenance window determines when pending operations start, but doesn't limit the total run
time of these operations. Maintenance operations aren't guaranteed to finish before the maintenance
window ends, and can continue beyond the specified end time. For more information, see The Amazon
RDS maintenance window (p. 423).
You can also view whether a maintenance update is available for your DB instance by running the
describe-pending-maintenance-actions AWS CLI command.
Applying updates
Console
To manage an update for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that has a required update.
4. For Actions, choose one of the following:
• Upgrade now
• Upgrade at next window
Note
If you choose Upgrade at next window and later want to delay the update, you can
choose Defer upgrade. You can't defer a maintenance action if it has already started.
To cancel a maintenance action, modify the DB instance and disable Auto minor version
upgrade.
AWS CLI
To apply a pending update to a DB instance, use the apply-pending-maintenance-action AWS CLI
command.
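A sketch of such a command follows. The resource ARN is a placeholder, and system-update is one possible action name taken from the output of describe-pending-maintenance-actions:

```shell
# Apply a pending system-update maintenance action immediately.
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
    --apply-action system-update \
    --opt-in-type immediate
```

To schedule the action for the next maintenance window instead, use --opt-in-type next-maintenance.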
Note
To defer a maintenance action, specify undo-opt-in for --opt-in-type. You can't specify
undo-opt-in for --opt-in-type if the maintenance action has already started.
To cancel a maintenance action, run the modify-db-instance AWS CLI command and specify --
no-auto-minor-version-upgrade.
To return a list of resources that have at least one pending update, use the describe-pending-
maintenance-actions AWS CLI command.
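A minimal invocation looks like the following:

```shell
# List all resources in the current Region that have pending
# maintenance actions.
aws rds describe-pending-maintenance-actions
```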
You can also return a list of resources for a DB instance by specifying the --filters parameter of the
describe-pending-maintenance-actions AWS CLI command. The format for the --filters
option is Name=filter-name,Values=resource-id,....
The following are the accepted values for the Name parameter of a filter:
• db-instance-id – Accepts a list of DB instance identifiers or Amazon Resource Names (ARNs). The
returned list only includes pending maintenance actions for the DB instances identified by these
identifiers or ARNs.
• db-cluster-id – Accepts a list of DB cluster identifiers or ARNs for Amazon Aurora. The returned list
only includes pending maintenance actions for the DB clusters identified by these identifiers or ARNs.
For example, the following command returns the pending maintenance actions for the sample-instance1 and sample-instance2 DB instances.
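A sketch of that command, using the filter format described above:

```shell
# Return pending maintenance actions for two specific DB instances.
aws rds describe-pending-maintenance-actions \
    --filters Name=db-instance-id,Values=sample-instance1,sample-instance2
```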
RDS API
To apply an update to a DB instance, call the Amazon RDS API ApplyPendingMaintenanceAction
operation.
To return a list of resources that have at least one pending update, call the Amazon RDS API
DescribePendingMaintenanceActions operation.
Maintenance for Multi-AZ deployments
If you upgrade the database engine for your DB instance in a Multi-AZ deployment, Amazon RDS
modifies both primary and secondary DB instances at the same time. In this case, both the primary and
secondary DB instances in the Multi-AZ deployment are unavailable during the upgrade. This operation
causes downtime until the upgrade is complete. The duration of the downtime varies based on the size
of your DB instance.
If your DB instance runs RDS for MySQL or RDS for MariaDB, you can minimize the downtime required
for an upgrade by using a blue/green deployment. For more information, see Using Amazon RDS
Blue/Green Deployments for database updates (p. 566). If you upgrade an RDS for SQL Server DB
instance in a Multi-AZ deployment, then Amazon RDS performs rolling upgrades, so you have an outage
only for the duration of a failover. For more information, see Multi-AZ and in-memory optimization
considerations (p. 1417).
For more information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
The Amazon RDS maintenance window
If you don't specify a maintenance window when you create the DB instance, RDS assigns a 30-minute
maintenance window on a randomly selected day of the week. The 30-minute window is selected at
random from an 8-hour block of time per AWS Region.
RDS consumes some of the resources on your DB instance while maintenance is being applied. You might
observe a minimal effect on performance. For a DB instance, on rare occasions, a Multi-AZ failover might
be required for a maintenance update to complete.
Following, you can find the time blocks for each region from which default maintenance windows are
assigned.
Adjusting the maintenance window for a DB instance
In the following example, you adjust the preferred maintenance window for a DB instance.
For this example, we assume that a DB instance named mydbinstance exists and has a preferred
maintenance window of "Sun:05:00-Sun:06:00" UTC.
Console
To adjust the preferred maintenance window
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then select the DB instance that you want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. In the Maintenance section, update the maintenance window.
Note
The maintenance window and the backup window for the DB instance cannot overlap. If
you enter a value for the maintenance window that overlaps the backup window, an error
message appears.
5. Choose Continue.
Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.
AWS CLI
To adjust the preferred maintenance window, use the AWS CLI modify-db-instance command with
the following parameters:
• --db-instance-identifier
• --preferred-maintenance-window
Example
The following code example sets the preferred maintenance window to Tuesdays from 4:00 to 4:30 AM UTC.
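A sketch of the command; mydbinstance is the example instance named earlier:

```shell
# Set the weekly maintenance window to Tuesday 04:00-04:30 UTC.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --preferred-maintenance-window Tue:04:00-Tue:04:30
```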
RDS API
To adjust the preferred maintenance window, use the Amazon RDS API ModifyDBInstance operation
with the following parameters:
• DBInstanceIdentifier
• PreferredMaintenanceWindow
Working with operating system updates
An operating system update is either optional or mandatory:
• An optional update can be applied at any time. While these updates are optional, we recommend
that you apply them periodically to keep your RDS fleet up to date. RDS doesn't apply these updates
automatically.
To be notified when a new, optional operating system patch becomes available, you can subscribe to
RDS-EVENT-0230 (p. 889) in the security patching event category. For information about subscribing
to RDS events, see Subscribing to Amazon RDS event notification (p. 860).
Note
RDS-EVENT-0230 doesn't apply to operating system distribution upgrades.
• A mandatory update is required and has an apply date. Plan to schedule your update before this apply
date. After the specified apply date, Amazon RDS automatically upgrades the operating system for
your DB instance to the latest version during one of your assigned maintenance windows.
Note
Staying current on all optional and mandatory updates might be required to meet various
compliance obligations. We recommend that you apply all updates made available by RDS
routinely during your maintenance windows.
You can use the AWS Management Console or the AWS CLI to get information about the type of
operating system upgrade.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then select the DB instance.
3. Choose Maintenance & backups.
4. In the Pending maintenance section, find the operating system update, and check the Status value.
In the AWS Management Console, an optional update has its maintenance Status set to available and
doesn't have an Apply date.
A mandatory update has its maintenance Status set to required and has an Apply date.
AWS CLI
To get update information from the AWS CLI, use the describe-pending-maintenance-actions command.
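In the output, a mandatory update has AutoAppliedAfterDate and CurrentApplyDate values set, and an optional update doesn't. To check a single DB instance, you can pass its ARN; the following is a sketch, with the account ID and Region as placeholders matching the sample output that follows:

```shell
# Return pending maintenance actions for one DB instance by ARN.
aws rds describe-pending-maintenance-actions \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydb1
```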
{
"ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb1",
"PendingMaintenanceActionDetails": [
{
"Action": "system-update",
"AutoAppliedAfterDate": "2022-08-31T00:00:00+00:00",
"CurrentApplyDate": "2022-08-31T00:00:00+00:00",
"Description": "New Operating System update is available"
}
]
}
An optional update doesn't include the AutoAppliedAfterDate and CurrentApplyDate values:
{
"ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb2",
"PendingMaintenanceActionDetails": [
{
"Action": "system-update",
"Description": "New Operating System update is available"
}
]
}
Note
The dates in the table apply to customers who didn't experience mandatory operating system
updates in 2022. To confirm whether the mandatory operating system updates in 2023
impact you, check the Pending maintenance section in the console for operating system
updates. For more information, see the Console section under Working with operating system
updates (p. 426).
After the apply date, Amazon RDS automatically upgrades the operating system for your DB instances to
the latest version in a subsequent maintenance window. To avoid an automatic upgrade, we recommend
that you schedule your update before the apply date.
Upgrading the engine version
There are two kinds of upgrades: major version upgrades and minor version upgrades. In general, a
major engine version upgrade can introduce changes that are not compatible with existing applications.
In contrast, a minor version upgrade includes only changes that are backward-compatible with existing
applications.
For Multi-AZ DB clusters, major version upgrades are only supported for RDS for PostgreSQL. Minor
version upgrades are supported for all engines that support Multi-AZ DB clusters. For more information,
see the section called “Upgrading the engine version of a Multi-AZ DB cluster” (p. 503).
The version numbering sequence is specific to each database engine. For example, RDS for MySQL 5.7
and 8.0 are major engine versions and upgrading from any 5.7 version to any 8.0 version is a major
version upgrade. RDS for MySQL version 5.7.22 and 5.7.23 are minor versions and upgrading from 5.7.22
to 5.7.23 is a minor version upgrade.
Important
You can't modify a DB instance when it is being upgraded. During an upgrade, the DB instance
status is upgrading.
For more information about major and minor version upgrades for a specific DB engine, see the
following documentation for your DB engine:
For major version upgrades, you must manually modify the DB engine version through the AWS
Management Console, AWS CLI, or RDS API. For minor version upgrades, you can manually modify the
engine version, or you can choose to enable the Auto minor version upgrade option.
Note
Database engine upgrades require downtime. You can minimize the downtime required for DB
instance upgrade by using a blue/green deployment. For more information, see Using Amazon
RDS Blue/Green Deployments for database updates (p. 566).
Topics
• Manually upgrading the engine version (p. 429)
• Automatically upgrading the minor engine version (p. 431)
Manually upgrading the engine version
Console
To upgrade the engine version of a DB instance by using the console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
upgrade.
3. Choose Modify. The Modify DB instance page appears.
4. For DB engine version, choose the new version.
5. Choose Continue and check the summary of modifications.
6. To apply the changes immediately, choose Apply immediately. Choosing this option can cause an
outage in some cases. For more information, see Using the Apply Immediately setting (p. 402).
7. On the confirmation page, review your changes. If they are correct, choose Modify DB instance to
save your changes.
Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.
AWS CLI
To upgrade the engine version of a DB instance, use the AWS CLI modify-db-instance command. Specify the
following parameters:
• --db-instance-identifier – the name of the DB instance.
• --engine-version – the version number of the database engine to upgrade to. For information about valid engine versions, use the AWS CLI describe-db-engine-versions command.
• --allow-major-version-upgrade – to upgrade the major version.
• --no-apply-immediately – to apply changes during the next maintenance window. To apply
changes immediately, use --apply-immediately.
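A sketch of the upgrade command; mydbinstance and new_version are placeholders for your instance name and target engine version:

```shell
# Upgrade the engine version during the next maintenance window,
# allowing a major version upgrade if new_version is a major release.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --engine-version new_version \
    --allow-major-version-upgrade \
    --no-apply-immediately
```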
RDS API
To upgrade the engine version of a DB instance, use the ModifyDBInstance action. Specify the following
parameters:
• DBInstanceIdentifier – the name of the DB instance.
• EngineVersion – the version number of the database engine to upgrade to.
• AllowMajorVersionUpgrade – whether major version upgrades are allowed.
• ApplyImmediately – whether to apply the change immediately or during the next maintenance window.
Automatically upgrading the minor engine version
If you want Amazon RDS to upgrade the DB engine version of a database automatically, you can enable
auto minor version upgrades for the database.
Topics
• How automatic minor version upgrades work (p. 431)
• Turning on automatic minor version upgrades (p. 431)
• Determining the availability of maintenance updates (p. 432)
• Finding automatic minor version upgrade targets (p. 432)
How automatic minor version upgrades work
Amazon RDS performs an automatic minor version upgrade only when both of the following conditions are true:
• The database is running a minor version of the DB engine that is lower than the preferred minor
engine version.
You can find your current engine version for your DB instance by looking on the Configuration tab of
the database details page or running the CLI command describe-db-instances.
• The database has auto minor version upgrade enabled.
RDS schedules the upgrades to run automatically in the maintenance window. During the upgrade, RDS
performs the following basic steps:
1. Runs a precheck to make sure the database is healthy and ready to be upgraded
2. Upgrades the DB engine
3. Runs post-upgrade checks
4. Marks the database upgrade as complete
Automatic upgrades incur downtime. The length of the downtime depends on various factors, including
the DB engine type and the size of the database.
When you create or modify a DB instance, you can control whether auto minor version upgrade is enabled for the
DB instance in the following ways:
• Using the console, set the Auto minor version upgrade option.
• Using the AWS CLI, set the --auto-minor-version-upgrade|--no-auto-minor-version-
upgrade option.
• Using the RDS API, set the AutoMinorVersionUpgrade parameter.
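For example, with the AWS CLI (a sketch; mydbinstance is a placeholder):

```shell
# Turn on auto minor version upgrade for an existing DB instance.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --auto-minor-version-upgrade
```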
For example, the following AWS CLI command determines the automatic minor upgrade target for
MySQL minor version 8.0.11 in the US East (Ohio) AWS Region (us-east-2).
aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version 8.0.11 \
    --region us-east-2 \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output table
----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| False | 8.0.15 |
| False | 8.0.16 |
| False | 8.0.17 |
| False | 8.0.19 |
| False | 8.0.20 |
| False | 8.0.21 |
| True | 8.0.23 |
| False | 8.0.25 |
+--------------+-----------------+
In this example, the AutoUpgrade value is True for MySQL version 8.0.23. So, the automatic minor
upgrade target is MySQL version 8.0.23.
Important
If you plan to migrate an RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster
soon, we strongly recommend that you turn off auto minor version upgrades for the DB
instance early during planning. Migration to Aurora PostgreSQL might be delayed if the RDS for
PostgreSQL version isn't yet supported by Aurora PostgreSQL. For information about Aurora
PostgreSQL versions, see Engine versions for Amazon Aurora PostgreSQL.
Renaming a DB instance
You can rename a DB instance by using the AWS Management Console, the AWS CLI modify-db-
instance command, or the Amazon RDS API ModifyDBInstance action. Renaming a DB instance can
have far-reaching effects. The following is a list of considerations before you rename a DB instance.
• When you rename a DB instance, the endpoint for the DB instance changes, because the URL includes
the name you assigned to the DB instance. You should always redirect traffic from the old URL to the
new one.
• When you rename a DB instance, the old DNS name that was used by the DB instance is immediately
deleted, although it could remain cached for a few minutes. The new DNS name for the renamed DB
instance becomes effective in about 10 minutes. The renamed DB instance is not available until the
new name becomes effective.
• You cannot use an existing DB instance name when renaming an instance.
• All read replicas associated with a DB instance remain associated with that instance after it is
renamed. For example, suppose you have a DB instance that serves your production database and the
instance has several associated read replicas. If you rename the DB instance and then replace it in the
production environment with a DB snapshot, the DB instance that you renamed will still have the read
replicas associated with it.
• Metrics and events associated with the name of a DB instance are maintained if you reuse a DB
instance name. For example, if you promote a read replica and rename it to be the name of the
previous primary DB instance, the events and metrics associated with the primary DB instance are
associated with the renamed instance.
• DB instance tags remain with the DB instance, regardless of renaming.
• DB snapshots are retained for a renamed DB instance.
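Before calling a rename, it can help to validate the proposed identifier locally. The sketch below assumes the standard DB instance identifier constraints (1-63 letters, numbers, or hyphens; must begin with a letter; can't end with a hyphen or contain two consecutive hyphens); the function itself is illustrative, not part of any SDK.

```python
import re

# Assumed constraints for a DB instance identifier: 1-63 letters,
# numbers, or hyphens; must begin with a letter; can't end with a
# hyphen or contain two consecutive hyphens.
_IDENTIFIER = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,62}$")

def validate_new_identifier(old_name, new_name):
    """Raise ValueError if new_name is not a usable rename target."""
    if new_name.lower() == old_name.lower():
        raise ValueError("new identifier must differ from the current one")
    if not _IDENTIFIER.match(new_name):
        raise ValueError("must be 1-63 letters, numbers, or hyphens, "
                         "beginning with a letter")
    if new_name.endswith("-") or "--" in new_name:
        raise ValueError("can't end with a hyphen or contain '--'")

validate_new_identifier("mydb-prod", "mydb-prod-old")  # passes silently
```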
Note
A DB instance is an isolated database environment running in the cloud. A DB instance can host
multiple databases, or a single Oracle database with multiple schemas. For information about
changing a database name, see the documentation for your DB engine.
Renaming to replace an existing DB instance
The following is the general process for renaming a DB instance to replace an existing one:
1. Stop all traffic going to the primary DB instance. You can redirect traffic away from the databases on
the DB instance, or use another method to prevent traffic from accessing them.
2. Rename the primary DB instance to a name that indicates it is no longer the primary DB instance as
described later in this topic.
3. Create a new primary DB instance by restoring from a DB snapshot or by promoting a read replica, and
then give the new instance the name of the previous primary DB instance.
4. Associate any read replicas with the new primary DB instance.
If you delete the old primary DB instance, you are responsible for deleting any unwanted DB snapshots
of the old primary DB instance.
For information about promoting a read replica, see Promoting a read replica to be a standalone DB
instance (p. 447).
Important
The DB instance is rebooted when it is renamed.
Console
To rename a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to rename.
4. Choose Modify.
5. In Settings, enter a new name for DB instance identifier.
6. Choose Continue.
7. To apply the changes immediately, choose Apply immediately. Choosing this option can cause an
outage in some cases. For more information, see Modifying an Amazon RDS DB instance (p. 401).
8. On the confirmation page, review your changes. If they are correct, choose Modify DB Instance to
save your changes.
Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.
AWS CLI
To rename a DB instance, use the AWS CLI command modify-db-instance. Provide the current --db-
instance-identifier value and --new-db-instance-identifier parameter with the new name
of the DB instance.
Example
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier DBInstanceIdentifier \
    --new-db-instance-identifier NewDBInstanceIdentifier
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier DBInstanceIdentifier ^
    --new-db-instance-identifier NewDBInstanceIdentifier
RDS API
To rename a DB instance, call Amazon RDS API operation ModifyDBInstance with the following
parameters:
• DBInstanceIdentifier – the existing name of the DB instance
• NewDBInstanceIdentifier – the new name of the DB instance
Rebooting a DB instance
You might need to reboot your DB instance, usually for maintenance reasons. For example, if you make
certain modifications, or if you change the DB parameter group associated with the DB instance, you
must reboot the instance for the changes to take effect.
Note
If a DB instance isn't using the latest changes to its associated DB parameter group, the AWS
Management Console shows the DB parameter group with a status of pending-reboot. The
pending-reboot parameter group status doesn't result in an automatic reboot during the next
maintenance window. To apply the latest parameter changes to that DB instance, manually
reboot the DB instance. For more information about parameter groups, see Working with
parameter groups (p. 347).
If the Amazon RDS DB instance is configured for Multi-AZ, you can perform the reboot with a failover.
An Amazon RDS event is created when the reboot is completed. If your DB instance is a Multi-AZ
deployment, you can force a failover from one Availability Zone (AZ) to another when you reboot. When
you force a failover of your DB instance, Amazon RDS automatically switches to a standby replica in
another Availability Zone, and updates the DNS record for the DB instance to point to the standby DB
instance. As a result, you need to clean up and re-establish any existing connections to your DB instance.
Rebooting with failover is beneficial when you want to simulate a failure of a DB instance for testing, or
restore operations to the original AZ after a failover occurs. For more information, see Configuring and
managing a Multi-AZ deployment (p. 492).
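Because the instance endpoint's DNS record switches to the standby during a failover, clients should treat a dropped connection as retryable rather than fatal. The following Python sketch shows one way to re-establish connections; connect_fn is a hypothetical stand-in for your database driver's connect call, and the retry counts are arbitrary.

```python
import time

def connect_with_retry(connect_fn, attempts=10, delay=3.0):
    """Call connect_fn() until it succeeds, sleeping between tries.

    After a reboot with failover, the instance endpoint's DNS record
    points at the promoted standby, so a retried connection eventually
    lands on the new primary.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except OSError as exc:  # assume the driver raises an OSError subclass
            last_error = exc
            time.sleep(delay)
    raise last_error  # all attempts failed

# Demo with a fake driver that refuses twice, then succeeds.
calls = {"n": 0}
def fake_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection refused")
    return "connection"

print(connect_with_retry(fake_connect, attempts=5, delay=0.0))  # connection
```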
Warning
When you force a failover of your DB instance, the database is abruptly interrupted. The DB
instance and its client sessions might not have time to shut down gracefully. To avoid the
possibility of data loss, we recommend stopping transactions on your DB instance before
rebooting with a failover.
On RDS for Microsoft SQL Server, reboot with failover reboots only the primary DB instance. After
the failover, the primary DB instance becomes the new secondary DB instance. Parameters might not
be updated for Multi-AZ instances. For reboot without failover, both the primary and secondary DB
instances reboot, and parameters are updated after the reboot. If the DB instance is unresponsive, we
recommend reboot without failover.
Note
When you force a failover from one Availability Zone to another when you reboot, the
Availability Zone change might not be reflected in the AWS Management Console, and in calls to
the AWS CLI and RDS API, for several minutes.
Rebooting a DB instance restarts the database engine service. Rebooting a DB instance results in a
momentary outage, during which the DB instance status is set to rebooting. An outage occurs for both a
Single-AZ deployment and a Multi-AZ DB instance deployment, even when you reboot with a failover.
You can't reboot your DB instance if it isn't in the available state. Your database can be unavailable for
several reasons, such as an in-progress backup, a previously requested modification, or a maintenance-
window action.
The time required to reboot your DB instance depends on the crash recovery process, database activity
at the time of reboot, and the behavior of your specific DB engine. To improve the reboot time, we
recommend that you reduce database activity as much as possible during the reboot process. Reducing
database activity reduces rollback activity for in-transit transactions.
For a DB instance with read replicas, you can reboot the source DB instance and its read replicas
independently. After a reboot completes, replication resumes automatically.
Console
To reboot a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to reboot.
3. For Actions, choose Reboot.
AWS CLI
To reboot a DB instance by using the AWS CLI, call the reboot-db-instance command.
For Linux, macOS, or Unix:
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance
For Windows:
aws rds reboot-db-instance ^
    --db-instance-identifier mydbinstance
To force a failover from one AZ to the other, use the --force-failover parameter.
For Linux, macOS, or Unix:
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance \
    --force-failover
For Windows:
aws rds reboot-db-instance ^
    --db-instance-identifier mydbinstance ^
    --force-failover
RDS API
To reboot a DB instance by using the Amazon RDS API, call the RebootDBInstance operation.
Working with DB instance read replicas
To create a read replica from a source DB instance, Amazon RDS uses the built-in replication features
of the DB engine. For information about using read replicas with a specific engine, see the
engine-specific read replica sections in this guide.
After you create a read replica from a source DB instance, the source becomes the primary DB instance.
When you make updates to the primary DB instance, Amazon RDS copies them asynchronously to the
read replica. The following diagram shows a source DB instance replicating to a read replica in a different
Availability Zone (AZ). Clients have read/write access to the primary DB instance and read-only access
to the replica.
Topics
• Overview of Amazon RDS read replicas (p. 439)
• Creating a read replica (p. 445)
• Promoting a read replica to be a standalone DB instance (p. 447)
• Monitoring read replication (p. 449)
• Creating a read replica in a different AWS Region (p. 452)
Overview of Amazon RDS read replicas
Topics
• Use cases for read replicas (p. 440)
• How read replicas work (p. 440)
• Read replicas in a Multi-AZ deployment (p. 440)
Use cases for read replicas
You can deploy read replicas for the following common scenarios:
• Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database
workloads. You can direct this excess read traffic to one or more read replicas.
• Serving read traffic while the source DB instance is unavailable. In some cases, your source DB instance
might not be able to take I/O requests, for example due to I/O suspension for backups or scheduled
maintenance. In these cases, you can direct read traffic to your read replicas. For this use case, keep in
mind that the data on the read replica might be "stale" because the source DB instance is unavailable.
• Business reporting or data warehousing scenarios where you might want business reporting queries to
run against a read replica, rather than your production DB instance.
• Implementing disaster recovery. You can promote a read replica to a standalone instance as a disaster
recovery solution if the primary DB instance fails.
The read replica operates as a DB instance that allows only read-only connections. An exception is the
RDS for Oracle DB engine, which supports replica databases in mounted mode. A mounted replica
doesn't accept user connections and so can't serve a read-only workload. The primary use for mounted
replicas is cross-Region disaster recovery. For more information, see Working with read replicas for
Amazon RDS for Oracle (p. 1973).
Applications connect to a read replica just as they do to any DB instance. Amazon RDS replicates all
databases from the source DB instance.
In the following scenario, clients have read/write access to a primary DB instance in one AZ. The
primary instance copies updates asynchronously to a read replica in a second AZ and also copies them
synchronously to a standby replica in a third AZ. Clients have read access only to the read replica.
For more information about high availability and standby replicas, see Configuring and managing a
Multi-AZ deployment (p. 492).
The information in this chapter applies to creating Amazon RDS read replicas either in the same AWS
Region as the source DB instance, or in a separate AWS Region. The following information doesn't apply
to setting up replication with an instance that is running on an Amazon EC2 instance or that is on-
premises.
How are transaction logs purged?
• RDS for MySQL and RDS for MariaDB – They keep any binary logs that haven't been applied.
• RDS for Oracle – If a primary DB instance has no cross-Region read replicas, Amazon RDS for Oracle
keeps a minimum of two hours of transaction logs on the source DB instance. Logs are purged from
the source DB instance after two hours or after the archive log retention hours setting has passed,
whichever is longer. Logs are purged from the read replica after the archive log retention hours
setting has passed only if they have been successfully applied to the database. In some cases, a
primary DB instance might have one or more cross-Region read replicas. If so, Amazon RDS for
Oracle keeps the transaction logs on the source DB instance until they have been transmitted and
applied to all cross-Region read replicas.
• RDS for PostgreSQL – PostgreSQL has the parameter wal_keep_segments that dictates how many
write ahead log (WAL) files are kept to provide data to the read replicas. The parameter value
specifies the number of logs to keep.
• RDS for SQL Server – The Virtual Log File (VLF) of the transaction log file on the primary replica
can be truncated after it is no longer required for the secondary replicas. The VLF can only be
marked as inactive when the log records have been hardened in the replicas. Regardless of how fast
the disk subsystems are in the primary replica, the transaction log will keep the VLFs until the
slowest replica has hardened it.
Can a replica be made writable?
• RDS for MySQL and RDS for MariaDB – Yes. You can enable the MySQL or MariaDB read replica to
be writable.
• RDS for Oracle – No. An Oracle read replica is a physical copy, and Oracle doesn't allow writes in a
read replica.
• RDS for PostgreSQL – No. A PostgreSQL read replica is a physical copy, and PostgreSQL doesn't
allow a read replica to be made writable.
• RDS for SQL Server – No. A SQL Server read replica is a physical copy and doesn't allow writes.
Can backups be performed on the replica?
• RDS for MySQL and RDS for MariaDB – Yes. Automatic backups and manual snapshots are
supported on RDS for MySQL or RDS for MariaDB read replicas.
• RDS for Oracle – Yes. Automatic backups and manual snapshots are supported on RDS for Oracle
read replicas.
• RDS for PostgreSQL – Yes, you can create a manual snapshot of RDS for PostgreSQL read replicas.
Automated backups for read replicas are supported for RDS for PostgreSQL 14.1 and higher versions
only. You can't turn on automated backups for PostgreSQL read replicas for RDS for PostgreSQL
versions earlier than 14.1. For RDS for PostgreSQL 13 and earlier versions, create a snapshot from a
read replica if you want a backup of it.
• RDS for SQL Server – No. Automatic backups and manual snapshots aren't supported on RDS for
SQL Server read replicas.
Can you use parallel replication?
• RDS for MySQL and RDS for MariaDB – Yes. All supported MariaDB and MySQL versions allow for
parallel replication threads.
• RDS for Oracle – Yes. Redo log data is always transmitted in parallel from the primary database to
all of its read replicas.
• RDS for PostgreSQL – No. PostgreSQL has a single process handling replication.
• RDS for SQL Server – Yes. Redo log data is always transmitted in parallel from the primary database
to all of its read replicas.
Note
When you increase the allocated storage of a read replica, it must be by at least 10 percent. If
you try to increase the value by less than 10 percent, you get an error.
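As a quick sanity check before modifying a replica, the smallest valid new allocation can be computed from this rule. The following sketch is plain arithmetic, not an AWS API; it assumes allocated storage is specified in whole GiB.

```python
def minimum_storage_increase(current_gib):
    """Smallest allocated-storage value (in GiB) satisfying the
    at-least-10-percent increase rule, rounded up to a whole GiB."""
    # Integer arithmetic computes ceil(current_gib * 1.1) exactly,
    # avoiding floating-point rounding error.
    return (current_gib * 11 + 9) // 10

print(minimum_storage_increase(100))  # 110
print(minimum_storage_increase(333))  # 367
```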
For RDS for MariaDB and RDS for MySQL, and for certain versions of RDS for PostgreSQL, you can create
a read replica from an existing read replica. For example, you can create new read replica ReadReplica2
from existing replica ReadReplica1. For RDS for Oracle and RDS for SQL Server, you can't create a read
replica from an existing read replica.
Creating a read replica
If you have cross-Region read replicas, see Cross-Region replication considerations (p. 456) for
information related to deleting the source DB instance for a cross-Region read replica.
When you create a read replica, Amazon RDS takes a DB snapshot of your source DB instance and begins
replication. As a result, you experience a brief I/O suspension on your source DB instance while the DB
snapshot occurs.
Note
The I/O suspension typically lasts about one minute. You can avoid the I/O suspension if the
source DB instance is a Multi-AZ deployment, because in that case the snapshot is taken from
the secondary DB instance.
An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica. If you create
multiple read replicas in parallel from the same source DB instance, Amazon RDS takes only one
snapshot at the start of the first create action.
When creating a read replica, there are a few things to consider. First, you must enable automatic
backups on the source DB instance by setting the backup retention period to a value other than 0. This
requirement also applies to a read replica that is the source DB instance for another read replica. To
enable automatic backups on an RDS for MySQL read replica, first create the read replica, then modify
the read replica to enable automatic backups.
Note
Within an AWS Region, we strongly recommend that you create all read replicas in the same
virtual private cloud (VPC) based on Amazon VPC as the source DB instance. If you create a read
replica in a different VPC from the source DB instance, classless inter-domain routing (CIDR)
ranges can overlap between the replica and the RDS system. CIDR overlap makes the replica
unstable, which can negatively impact applications connecting to it. If you receive an error when
creating the read replica, choose a different destination DB subnet group. For more information,
see Working with a DB instance in a VPC (p. 2688).
There is no direct way to create a read replica in another AWS account using the console or AWS
CLI.
Console
To create a read replica from a source DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.
6. Choose your instance configuration. We recommend that you use the same or larger DB instance
class and storage type as the source DB instance for the read replica.
7. For AWS Region, specify the destination Region for the read replica.
8. For Storage, specify the allocated storage size and whether you want to use storage autoscaling.
9. For Availability, choose whether to create a standby of your replica in another Availability Zone for
failover support for the replica.
Note
Creating your read replica as a Multi-AZ DB instance is independent of whether the source
database is a Multi-AZ DB instance.
10. Specify other DB instance settings. For information about each available setting, see Settings for DB
instances (p. 308).
11. To create an encrypted read replica, expand Additional configuration and specify the following
settings:
Note
The source DB instance must be encrypted. To learn more about encrypting the source DB
instance, see Encrypting Amazon RDS resources (p. 2586).
12. Choose Create read replica.
After the read replica is created, you can see it on the Databases page in the RDS console. It shows
Replica in the Role column.
AWS CLI
To create a read replica from a source DB instance, use the AWS CLI command create-db-instance-read-
replica. This example also sets the allocated storage size and enables storage autoscaling.
You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).
Example
For Linux, macOS, or Unix:
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-instance-identifier mydbinstance \
    --allocated-storage 500 \
    --max-allocated-storage 1000
For Windows:
aws rds create-db-instance-read-replica ^
    --db-instance-identifier myreadreplica ^
    --source-db-instance-identifier mydbinstance ^
    --allocated-storage 500 ^
    --max-allocated-storage 1000
RDS API
To create a read replica from a source MySQL, MariaDB, Oracle, PostgreSQL, or SQL Server DB instance,
call the Amazon RDS API CreateDBInstanceReadReplica operation with the following required
parameters:
• DBInstanceIdentifier
• SourceDBInstanceIdentifier
Promoting a read replica to be a standalone DB instance
There are several reasons you might want to promote a read replica to a standalone DB instance:
• Performing DDL operations (MySQL and MariaDB only) – DDL operations, such as creating or
rebuilding indexes, can take time and impose a significant performance penalty on your DB instance.
You can perform these operations on a MySQL or MariaDB read replica once the read replica is in sync
with its primary DB instance. Then you can promote the read replica and direct your applications to
use the promoted instance.
• Sharding – Sharding embodies the "share-nothing" architecture and essentially involves breaking a
large database into several smaller databases. One common way to split a database is splitting tables
that are not joined in the same query onto different hosts. Another method is duplicating a table
across multiple hosts and then using a hashing algorithm to determine which host receives a given
update. You can create read replicas corresponding to each of your shards (smaller databases) and
promote them when you decide to convert them into standalone shards. You can then carve out the
key space (if you are splitting rows) or distribution of tables for each of the shards depending on your
requirements.
• Implementing failure recovery – You can use read replica promotion as a data recovery scheme if
the primary DB instance fails. This approach complements synchronous replication, automatic failure
detection, and failover.
If you are aware of the ramifications and limitations of asynchronous replication and you still want to
use read replica promotion for data recovery, you can. To do this, first create a read replica and then
monitor the primary DB instance for failures. In the event of a failure, do the following:
1. Promote the read replica.
2. Direct database traffic to the promoted DB instance.
3. Create a replacement read replica with the promoted DB instance as its source.
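The hashing method mentioned in the sharding use case can be sketched briefly. The host names below are illustrative only; any stable hash that maps a key to one of N shards (for example, promoted former read replicas) works the same way.

```python
import hashlib

def shard_for(key, hosts):
    """Map a row key to one of the shard hosts with a stable hash.

    hashlib is used instead of the built-in hash() so the mapping is
    identical across processes and restarts.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return hosts[int.from_bytes(digest[:8], "big") % len(hosts)]

# Hypothetical shard endpoints, e.g. promoted former read replicas.
hosts = ["shard-0.example.com", "shard-1.example.com", "shard-2.example.com"]
print(shard_for("customer-42", hosts))
```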
When you promote a read replica, the new DB instance that is created retains the option group and the
parameter group of the former read replica. The promotion process can take several minutes or longer
to complete, depending on the size of the read replica. After you promote the read replica to a new DB
instance, it's just like any other DB instance. For example, you can create read replicas from the new DB
instance and perform point-in-time restore operations. Because the promoted DB instance is no longer
a read replica, you can't use it as a replication target. If a source DB instance has several read replicas,
promoting one of the read replicas to a DB instance has no effect on the other replicas.
Backup duration is a function of the number of changes to the database since the previous backup. If
you plan to promote a read replica to a standalone instance, we recommend that you enable backups
and complete at least one backup prior to promotion. In addition, you can't promote a read replica to
a standalone instance when it has the backing-up status. If you have enabled backups on your read
replica, configure the automated backup window so that daily backups don't interfere with read replica
promotion.
The following steps show the general process for promoting a read replica to a DB instance:
1. Stop any transactions from being written to the primary DB instance, and then wait for all updates to
be made to the read replica. Database updates occur on the read replica after they have occurred on
the primary DB instance, and this replication lag can vary significantly. Use the Replica Lag metric
to determine when all updates have been made to the read replica.
2. For MySQL and MariaDB only: If you need to make changes to the MySQL or MariaDB read replica, you
must set the read_only parameter to 0 in the DB parameter group for the read replica. You can then
perform all needed DDL operations, such as creating indexes, on the read replica. Actions taken on the
read replica don't affect the performance of the primary DB instance.
3. Promote the read replica by using the Promote option on the Amazon RDS console, the AWS CLI
command promote-read-replica, or the PromoteReadReplica Amazon RDS API operation.
Note
The promotion process takes a few minutes to complete. When you promote a read replica,
replication is stopped and the read replica is rebooted. When the reboot is complete, the read
replica is available as a new DB instance.
4. (Optional) Modify the new DB instance to be a Multi-AZ deployment. For more information,
see Modifying an Amazon RDS DB instance (p. 401) and Configuring and managing a Multi-AZ
deployment (p. 492).
Console
To promote a read replica to a standalone DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the Amazon RDS console, choose Databases.
The Databases pane appears. Each read replica shows Replica in the Role column.
3. Choose the read replica that you want to promote.
4. For Actions, choose Promote.
5. On the Promote Read Replica page, enter the backup retention period and the backup window for
the newly promoted DB instance.
6. When the settings are as you want them, choose Continue.
7. On the acknowledgment page, choose Promote Read Replica.
AWS CLI
To promote a read replica to a standalone DB instance, use the AWS CLI promote-read-replica
command.
Example
For Linux, macOS, or Unix:
aws rds promote-read-replica \
    --db-instance-identifier myreadreplica
For Windows:
aws rds promote-read-replica ^
    --db-instance-identifier myreadreplica
RDS API
To promote a read replica to a standalone DB instance, call the Amazon RDS API PromoteReadReplica
operation with the required parameter DBInstanceIdentifier.
Monitoring read replication
You can also see the status of a read replica using the AWS CLI describe-db-instances command or
the Amazon RDS API DescribeDBInstances operation.
The status doesn't transition from replication degraded to error, unless an error occurs during
the degraded state.
• error – An error has occurred with the replication. Check the Replication Error field in the
Amazon RDS console or the event log to determine the exact error. For more information about
troubleshooting a replication error, see Troubleshooting a MySQL read replica problem (p. 1718).
• terminated (MariaDB, MySQL, or PostgreSQL only) – Replication is terminated. This occurs if
replication is stopped for more than 30 consecutive days, either manually or due to a replication error.
In this case, Amazon RDS terminates replication between the primary DB instance and all read replicas.
Amazon RDS does this to prevent increased storage requirements on the source DB instance and long
failover times.
Broken replication can affect storage because the logs can grow in size and number due to the high
volume of error messages being written to the log. Broken replication can also affect failure recovery
due to the time Amazon RDS requires to maintain and process the large number of logs during
recovery.
• terminated (Oracle only) – Replication is terminated. This occurs if replication is stopped for more
than 8 hours because there isn't enough storage remaining on the read replica. In this case, Amazon
RDS terminates replication between the primary DB instance and the affected read replica. This status
is a terminal state, and the read replica must be re-created.
• stopped (MariaDB or MySQL only) – Replication has stopped because of a customer-initiated request.
• replication stop point set (MySQL only) – A customer-initiated stop point was set using the
mysql.rds_start_replication_until (p. 1780) stored procedure and the replication is in progress.
• replication stop point reached (MySQL only) – A customer-initiated stop point was set using the
mysql.rds_start_replication_until (p. 1780) stored procedure and replication is stopped because the
stop point was reached.
You can check whether a DB instance is being replicated, and if so, check its replication status. On the
Databases page in the RDS console, a source DB instance shows Primary in the Role column. Choose its DB instance name.
On its detail page, on the Connectivity & security tab, its replication status is under Replication.
For MariaDB and MySQL, the ReplicaLag metric reports the value of the Seconds_Behind_Master
field of the SHOW REPLICA STATUS command. Common causes for replication lag for MySQL and
MariaDB are the following:
• A network outage.
• Writing to tables with indexes on a read replica. If the read_only parameter is not set to 0 on the
read replica, it can break replication.
• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MySQL and the XtraDB storage engine on MariaDB.
Note
Previous versions of MariaDB and MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MariaDB version before 10.5 or a MySQL version before 8.0.23, then
use SHOW SLAVE STATUS.
When the ReplicaLag metric reaches 0, the replica has caught up to the primary DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent
to Seconds_Behind_Master = NULL.
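When polling ReplicaLag from CloudWatch in a script, the sentinel values described above are worth handling explicitly. The helper below is illustrative only.

```python
def describe_replica_lag(lag_seconds):
    """Interpret the ReplicaLag metric for a MySQL or MariaDB replica.

    -1 means replication isn't active (Seconds_Behind_Master = NULL),
    0 means the replica has caught up to the primary, and any other
    value is the lag in seconds.
    """
    if lag_seconds == -1:
        return "replication not active"
    if lag_seconds == 0:
        return "caught up"
    return f"behind by {lag_seconds} seconds"

print(describe_replica_lag(-1))  # replication not active
print(describe_replica_lag(0))   # caught up
print(describe_replica_lag(42))  # behind by 42 seconds
```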
For Oracle, the ReplicaLag metric is the sum of the Apply Lag value and the difference between the
current time and the apply lag's DATUM_TIME value. The DATUM_TIME value is the last time the read
replica received data from its source DB instance. For more information, see V$DATAGUARD_STATS in
the Oracle documentation.
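That calculation can be written out directly. In this sketch, apply_lag and datum_time stand in for the Apply Lag value and DATUM_TIME described above; how you obtain them from V$DATAGUARD_STATS is outside the scope of the example.

```python
from datetime import datetime, timedelta, timezone

def oracle_replica_lag(apply_lag, datum_time, now):
    """ReplicaLag in seconds for RDS for Oracle: the Apply Lag plus the
    time since the replica last received data (now - DATUM_TIME)."""
    return (apply_lag + (now - datum_time)).total_seconds()

# Hypothetical values: 5 seconds of apply lag, data last received
# 30 seconds ago.
now = datetime(2023, 1, 1, 12, 0, 30, tzinfo=timezone.utc)
datum_time = datetime(2023, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(oracle_replica_lag(timedelta(seconds=5), datum_time, now))  # 35.0
```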
For SQL Server, the ReplicaLag metric is the maximum lag of databases that have fallen behind, in
seconds. For example, if you have two databases that lag 5 seconds and 10 seconds, respectively, then
ReplicaLag is 10 seconds. The ReplicaLag metric returns the value of the following query.
ReplicaLag returns -1 if RDS can't determine the lag, such as during replica setup, or when the read
replica is in the error state.
Note
New databases aren't included in the lag calculation until they are accessible on the read replica.
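The SQL Server behavior reduces to taking a maximum, with -1 as the sentinel when no lag can be determined. An illustrative helper:

```python
def sqlserver_replica_lag(db_lags_seconds):
    """ReplicaLag for RDS for SQL Server: the maximum lag across the
    replica's databases, or -1 when no lag can be determined."""
    if not db_lags_seconds:
        return -1
    return max(db_lags_seconds)

print(sqlserver_replica_lag([5, 10]))  # 10
print(sqlserver_replica_lag([]))       # -1
```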
For PostgreSQL, the ReplicaLag metric returns the value of the following query.
PostgreSQL versions 9.5.2 and later use physical replication slots to manage write ahead log (WAL)
retention on the source instance. For each cross-Region read replica instance, Amazon RDS creates a
physical replication slot and associates it with the instance. Two Amazon CloudWatch metrics, Oldest
Replication Slot Lag and Transaction Logs Disk Usage, show how far behind the most
lagging replica is in terms of WAL data received and how much storage is being used for WAL data. The
Transaction Logs Disk Usage value can substantially increase when a cross-Region read replica is
lagging significantly.
For more information about monitoring a DB instance with CloudWatch, see Monitoring Amazon RDS
metrics with Amazon CloudWatch (p. 706).
Creating a read replica in a different AWS Region from the source instance is similar to creating a replica
in the same AWS Region. You can use the AWS Management Console, run the create-db-instance-
read-replica command, or call the CreateDBInstanceReadReplica API operation.
Note
To create an encrypted read replica in a different AWS Region from the source DB instance, the
source DB instance must be encrypted.
Console
You can create a read replica across AWS Regions using the AWS Management Console.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL DB instance that you
want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.
6. Choose the Destination Region.
7. Choose the instance specifications that you want to use. We recommend that you use the same or
a larger DB instance class and the same storage type for the read replica.
8. To create an encrypted read replica in another AWS Region, choose Enable encryption, and then
for AWS KMS key, choose the KMS key identifier of the KMS key in the destination AWS Region.
Note
To create an encrypted read replica, the source DB instance must be encrypted. To
learn more about encrypting the source DB instance, see Encrypting Amazon RDS
resources (p. 2586).
9. Choose other options, such as storage autoscaling.
10. Choose Create read replica.
AWS CLI
To create a read replica from a source MySQL, Microsoft SQL Server, MariaDB, Oracle, or PostgreSQL DB
instance in a different AWS Region, you can use the create-db-instance-read-replica command.
In this case, you use create-db-instance-read-replica from the AWS Region where you want
the read replica (destination Region) and specify the Amazon Resource Name (ARN) for the source DB
instance. An ARN uniquely identifies a resource created in Amazon Web Services.
For example, if your source DB instance is in the US East (N. Virginia) Region, the ARN looks similar to this
example:
arn:aws:rds:us-east-1:123456789012:db:mydbinstance
For information about ARNs, see Working with Amazon Resource Names (ARNs) in Amazon
RDS (p. 471).
To create a read replica in a different AWS Region from the source DB instance, you can use the AWS CLI
create-db-instance-read-replica command from the destination AWS Region. The following
parameters are required for creating a read replica in another AWS Region:
• --region – The destination AWS Region where the read replica is created.
• --source-db-instance-identifier – The DB instance identifier for the source DB instance. This
identifier must be in the ARN format for the source AWS Region.
• --db-instance-identifier – The identifier for the read replica in the destination AWS Region.
The following code creates a read replica in the US West (Oregon) Region from a source DB instance in
the US East (N. Virginia) Region.
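A sketch of the command under those assumptions (the replica identifier and account number are illustrative):

```shell
# Run from the destination Region (us-west-2); the source DB instance
# is identified by its ARN in the source Region (us-east-1).
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --region us-west-2 \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance
```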
The following parameter is also required for creating an encrypted read replica in another AWS Region:
• --kms-key-id – The AWS KMS key identifier of the KMS key to use to encrypt the read replica in the
destination AWS Region.
The following code creates an encrypted read replica in the US West (Oregon) Region from a source DB
instance in the US East (N. Virginia) Region.
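A sketch of the command under those assumptions (the KMS key identifier is illustrative and must belong to the destination Region):

```shell
# Run from the destination Region; --kms-key-id names a key in us-west-2.
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --region us-west-2 \
    --kms-key-id my-us-west-2-key \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance
```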
The --source-region option is required when you're creating an encrypted read replica between the
AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the
AWS Region of the source DB instance.
If --source-region isn't specified, specify a --pre-signed-url value. A presigned URL is a URL that
contains a Signature Version 4 signed request for the create-db-instance-read-replica command
that's called in the source AWS Region. To learn more about the --pre-signed-url option, see
create-db-instance-read-replica in the AWS CLI Command Reference.
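A sketch of the GovCloud case with --source-region (the key identifier is illustrative; note the aws-us-gov ARN partition):

```shell
# Encrypted cross-Region replica between AWS GovCloud (US) Regions.
# --source-region replaces the need for a presigned URL.
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --region us-gov-east-1 \
    --source-region us-gov-west-1 \
    --kms-key-id my-gov-east-key \
    --source-db-instance-identifier arn:aws-us-gov:rds:us-gov-west-1:123456789012:db:mydbinstance
```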
RDS API
To create a read replica from a source MySQL, Microsoft SQL Server, MariaDB, Oracle, or
PostgreSQL DB instance in a different AWS Region, you can call the Amazon RDS API operation
CreateDBInstanceReadReplica. In this case, you call CreateDBInstanceReadReplica from the AWS Region
where you want the read replica (destination Region) and specify the Amazon Resource Name (ARN) for
the source DB instance. An ARN uniquely identifies a resource created in Amazon Web Services.
To create an encrypted read replica in a different AWS Region from the source DB instance, you can use
the Amazon RDS API CreateDBInstanceReadReplica operation from the destination AWS Region. To
create an encrypted read replica in another AWS Region, you must specify a value for PreSignedURL.
PreSignedUrl must contain a signed request for the CreateDBInstanceReadReplica operation to call
in the source AWS Region that holds the source DB instance. To learn more about PreSignedUrl, see
CreateDBInstanceReadReplica.
For example, if your source DB instance is in the US East (N. Virginia) Region, the ARN looks similar to the
following.
arn:aws:rds:us-east-1:123456789012:db:mydbinstance
For information about ARNs, see Working with Amazon Resource Names (ARNs) in Amazon
RDS (p. 471).
Example
https://us-west-2.rds.amazonaws.com/
?Action=CreateDBInstanceReadReplica
&KmsKeyId=my-us-east-1-key
&PreSignedUrl=https%253A%252F%252Frds.us-west-2.amazonaws.com%252F
%253FAction%253DCreateDBInstanceReadReplica
%2526DestinationRegion%253Dus-east-1
%2526KmsKeyId%253Dmy-us-east-1-key
%2526SourceDBInstanceIdentifier%253Darn%25253Aaws%25253Ards%25253Aus-
west-2%25253A123456789012%25253Adb%25253Amydbinstance
%2526SignatureMethod%253DHmacSHA256
%2526SignatureVersion%253D4%2526SourceDBInstanceIdentifier%253Darn%25253Aaws
%25253Ards%25253Aus-west-2%25253A123456789012%25253Ainstance%25253Amydbinstance
%2526Version%253D2014-10-31
%2526X-Amz-Algorithm%253DAWS4-HMAC-SHA256
%2526X-Amz-Credential%253DAKIADQKE4SARGYLE%252F20161117%252Fus-west-2%252Frds
%252Faws4_request
%2526X-Amz-Date%253D20161117T215409Z
%2526X-Amz-Expires%253D3600
%2526X-Amz-SignedHeaders%253Dcontent-type%253Bhost%253Buser-agent%253Bx-amz-
content-sha256%253Bx-amz-date
%2526X-Amz-Signature
%253D255a0f17b4e717d3b67fad163c3ec26573b882c03a65523522cf890a67fca613
&DBInstanceIdentifier=myreadreplica
&SourceDBInstanceIdentifier=arn:aws:rds:us-east-1:123456789012:db:mydbinstance
&Version=2012-01-15
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2012-01-20T22%3A06%3A23.624Z
&AWSAccessKeyId=<AWS Access Key ID>
&Signature=<Signature>
When you create a cross-Region read replica, Amazon RDS uses the following process:
1. Amazon RDS begins configuring the source DB instance as a replication source and sets the status to
modifying.
2. Amazon RDS begins setting up the specified read replica in the destination AWS Region and sets the
status to creating.
3. Amazon RDS creates an automated DB snapshot of the source DB instance in the source AWS Region.
The format of the DB snapshot name is rds:<InstanceID>-<timestamp>, where <InstanceID>
is the identifier of the source instance, and <timestamp> is the date and time the copy started.
For example, rds:mysourceinstance-2013-11-14-09-24 was created from the instance
mysourceinstance at 2013-11-14-09-24. During the creation of an automated DB snapshot,
the source DB instance status remains modifying, the read replica status remains creating, and the DB
snapshot status is creating. The progress column of the DB snapshot page in the console reports how
far the DB snapshot creation has progressed. When the DB snapshot is complete, the status of both
the DB snapshot and source DB instance are set to available.
4. Amazon RDS begins a cross-Region snapshot copy for the initial data transfer. The snapshot copy is
listed as an automated snapshot in the destination AWS Region with a status of creating. It has the
same name as the source DB snapshot. The progress column of the DB snapshot display indicates how
far the copy has progressed. When the copy is complete, the status of the DB snapshot copy is set to
available.
5. Amazon RDS then uses the copied DB snapshot for the initial data load on the read replica. During this
phase, the read replica is in the list of DB instances in the destination, with a status of creating. When
the load is complete, the read replica status is set to available, and the DB snapshot copy is deleted.
6. When the read replica reaches the available status, Amazon RDS starts replicating the changes
made to the source instance since the start of the create read replica operation. During this phase, the
replication lag time for the read replica is greater than 0.
For information about replication lag time, see Monitoring read replication (p. 449).
The following considerations apply to cross-Region replication:
• A source DB instance can have cross-Region read replicas in multiple AWS Regions.
• You can replicate between the GovCloud (US-East) and GovCloud (US-West) Regions, but not into or
out of GovCloud (US).
• For Microsoft SQL Server, Oracle, and PostgreSQL DB instances, you can only create a cross-Region
Amazon RDS read replica from a source Amazon RDS DB instance that is not a read replica of another
Amazon RDS DB instance. This limitation doesn't apply to MariaDB and MySQL DB instances.
• You can expect to see a higher level of lag time for any read replica that is in a different AWS Region
than the source instance. This lag time comes from the longer network channels between regional
data centers.
• For cross-Region read replicas, any of the create read replica commands that specify the --db-
subnet-group-name parameter must specify a DB subnet group from the same VPC.
• Because of the limit on the number of access control list (ACL) entries for the source VPC, we can't
guarantee more than five cross-Region read replica instances.
• In most cases, the read replica uses the default DB parameter group and DB option group for the
specified DB engine.
For the MySQL and Oracle DB engines, you can specify a custom parameter group for the read replica
in the --db-parameter-group-name option of the AWS CLI command create-db-instance-
read-replica. You can't specify a custom parameter group when you use the AWS Management
Console.
• The read replica uses the default security group.
• For MariaDB, Microsoft SQL Server, MySQL, and Oracle DB instances, when the source DB instance for
a cross-Region read replica is deleted, the read replica is promoted.
• For PostgreSQL DB instances, when the source DB instance for a cross-Region read replica is deleted,
the replication status of the read replica is set to terminated. The read replica isn't promoted.
Certain conditions in the requester's IAM policy can cause the request to fail. The following examples
assume that the source DB instance is in US East (Ohio) and the read replica is created in US East (N.
Virginia). These examples show conditions in the requester's IAM policy that cause the request to fail:
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": "us-east-1"
}
}
The request fails because the policy doesn't allow access to the source Region. For a successful request,
specify both the source and destination Regions.
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": [
"us-east-1",
"us-east-2"
]
}
}
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "arn:aws:rds:us-east-1:123456789012:db:myreadreplica"
...
The request fails because the policy doesn't allow access to the source DB instance. For a successful
request, specify both the source instance and the replica.
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": [
"arn:aws:rds:us-east-1:123456789012:db:myreadreplica",
"arn:aws:rds:us-east-2:123456789012:db:mydbinstance"
]
...
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"Bool": {"aws:ViaAWSService": "false"}
}
Communication with the source Region is made by RDS on the requester's behalf. For a successful
request, don't deny calls made by AWS services.
• The requester's policy has a condition for aws:SourceVpc or aws:SourceVpce.
These requests might fail because when RDS makes the call to the remote Region, it isn't from the
specified VPC or VPC endpoint.
If you need to use one of the previous conditions that would cause a request to fail, you can include a
second statement with aws:CalledVia in your policy to make the request succeed. For example, you
can use aws:CalledVia with aws:SourceVpce as shown here:
...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:SourceVpce": "vpce-1a2b3c4d"
}
}
},
{
"Effect": "Allow",
"Action": [
"rds:CreateDBInstanceReadReplica"
],
"Resource": "*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"rds.amazonaws.com"
]
}
}
}
For more information, see Policies and permissions in IAM in the IAM User Guide.
RDS uses the service-linked role to verify the authorization in the source Region. If you delete the
service-linked role during the replication creation process, the creation fails.
For more information, see Using service-linked roles in the IAM User Guide.
To use the AWS STS global endpoint, make sure that it's enabled for both Regions in the operation. Set
the global endpoint to Valid in all AWS Regions in the AWS STS account settings.
For more information, see Managing AWS STS in an AWS Region in the IAM User Guide.
• When you create a read replica, Amazon RDS takes a snapshot of the source instance and transfers the
snapshot to the read replica AWS Region.
• For each data modification made in the source databases, Amazon RDS transfers data from the source
AWS Region to the read replica AWS Region.
For more information about data transfer pricing, see Amazon RDS pricing.
For MySQL and MariaDB instances, you can reduce your data transfer costs by reducing the number of
cross-Region read replicas that you create. For example, suppose that you have a source DB instance in
one AWS Region and want to have three read replicas in another AWS Region. In this case, you create
only one of the read replicas from the source DB instance. You create the other two replicas from the
first read replica instead of the source DB instance.
For example, if you have source-instance-1 in one AWS Region, you can do the following:
• Create read-replica-1 in the new AWS Region, specifying source-instance-1 as the source.
• Create read-replica-2 from read-replica-1.
• Create read-replica-3 from read-replica-1.
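A sketch of the first two create calls, assuming the source is in us-east-1 and the replicas go to us-west-2 (identifiers and account number are illustrative; the additional in-Region replica is created the same way as read-replica-2):

```shell
# Cross-Region replica: charged for data transfer from the source Region.
aws rds create-db-instance-read-replica \
    --db-instance-identifier read-replica-1 \
    --region us-west-2 \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:source-instance-1

# In-Region replica of the replica: no cross-Region data transfer charge.
aws rds create-db-instance-read-replica \
    --db-instance-identifier read-replica-2 \
    --region us-west-2 \
    --source-db-instance-identifier arn:aws:rds:us-west-2:123456789012:db:read-replica-1
```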
In this example, you are only charged for the data transferred from source-instance-1 to read-
replica-1. You aren't charged for the data transferred from read-replica-1 to the other two
replicas because they are all in the same AWS Region. If you create all three replicas directly from
source-instance-1, you are charged for the data transfers to all three replicas.
Tagging RDS resources
You can use Amazon RDS tags to add metadata to your Amazon RDS resources. In particular, you can
use these tags with IAM policies. You can use them to manage access to RDS resources and to control
what actions can be applied to the RDS resources. You can also use these tags to track costs by grouping
expenses for similarly tagged resources.
You can tag the following Amazon RDS resources:
• DB instances
• DB clusters
• Read replicas
• DB snapshots
• DB cluster snapshots
• Reserved DB instances
• Event subscriptions
• DB option groups
• DB parameter groups
• DB cluster parameter groups
• DB subnet groups
• RDS Proxies
• RDS Proxy endpoints
• Blue/green deployments
• Zero-ETL integrations (preview)
Note
Currently, you can't tag RDS Proxies and RDS Proxy endpoints by using the AWS Management
Console.
Topics
• Overview of Amazon RDS resource tags (p. 461)
• Using tags for access control with IAM (p. 462)
• Using tags to produce detailed billing reports (p. 462)
• Adding, listing, and removing tags (p. 463)
• Using the AWS Tag Editor (p. 465)
• Copying tags to DB instance snapshots (p. 465)
• Tutorial: Use tags to specify which DB instances to stop (p. 466)
• Using tags to enable backups in AWS Backup (p. 468)
An Amazon RDS tag is a name-value pair that you define and use to assign
arbitrary information to an Amazon RDS resource. You can use a tag key, for example, to define a
category, and the tag value might be an item in that category. For example, you might define a tag
key of "project" and a tag value of "Salix". In this case, these indicate that the Amazon RDS resource is
assigned to the Salix project. You can also use tags to designate Amazon RDS resources as being used
for test or production by using a key such as environment=test or environment=production. We
recommend that you use a consistent set of tag keys to make it easier to track metadata associated with
Amazon RDS resources.
In addition, you can use conditions in your IAM policies to control access to AWS resources based on
the tags on that resource. You can do this by using the global aws:ResourceTag/tag-key condition
key. For more information, see Controlling access to AWS resources in the AWS Identity and Access
Management User Guide.
Each Amazon RDS resource has a tag set, which contains all the tags that are assigned to that Amazon
RDS resource. A tag set can contain as many as 50 tags, or it can be empty. If you add a tag to an RDS
resource with the same key as an existing resource tag, the new value overwrites the old.
AWS doesn't apply any semantic meaning to your tags; tags are interpreted strictly as character strings.
RDS can set tags on a DB instance or other RDS resources. Tag setting depends on the options that
you use when you create the resource. For example, Amazon RDS might add a tag indicating that a DB
instance is for production or for testing.
• The tag key is the required name of the tag. The string value can be from 1 to 128 Unicode characters
in length and cannot be prefixed with aws: or rds:. The string can contain only the set of Unicode
letters, digits, white space, '_', '.', ':', '/', '=', '+', '-', '@' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-
@]*)$").
• The tag value is an optional string value of the tag. The string value can be from 1 to 256 Unicode
characters in length. The string can contain only the set of Unicode letters, digits, white space, '_', '.', ':',
'/', '=', '+', '-', '@' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$").
Values do not have to be unique in a tag set and can be null. For example, you can have a key-value
pair in a tag set of project=Trinity and cost-center=Trinity.
You can use the AWS Management Console, the AWS CLI, or the Amazon RDS API to add, list, and delete
tags on Amazon RDS resources. When using the CLI or API, make sure to provide the Amazon Resource
Name (ARN) for the RDS resource to work with. For more information about constructing an ARN, see
Constructing an ARN for Amazon RDS (p. 471).
Tags are cached for authorization purposes. Because of this, additions and updates to tags on Amazon
RDS resources can take several minutes before they are available.
For information on managing access to tagged resources with IAM policies, see Identity and access
management for Amazon RDS (p. 2606).
Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS
account bill with tag key values included. Then, to see the cost of combined resources, organize your
billing information according to resources with the same tag key values. For example, you can tag several
resources with a specific application name, and then organize your billing information to see the total
cost of that application across several services. For more information, see Using Cost Allocation Tags in
the AWS Billing User Guide.
Note
You can add a tag to a snapshot; however, your bill doesn't reflect this grouping.
Console
The process to tag an Amazon RDS resource is similar for all resources. The following procedure shows
how to tag an Amazon RDS DB instance.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
Note
To filter the list of DB instances in the Databases pane, enter a text string for Filter
databases. Only DB instances that contain the string appear.
3. Choose the name of the DB instance that you want to tag to show its details.
4. In the details section, scroll down to the Tags section.
5. Choose Add. The Add tags window appears.
6. Enter the tag key and value, and then choose Add.
To delete a tag from a DB instance, use the following procedure.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
Note
To filter the list of DB instances in the Databases pane, enter a text string in the Filter
databases box. Only DB instances that contain the string appear.
3. Choose the name of the DB instance to show its details.
4. In the details section, scroll down to the Tags section.
5. Choose the tag you want to delete.
6. Choose Delete, and then choose Delete in the Delete tags window.
AWS CLI
You can add, list, or remove tags for a DB instance using the AWS CLI.
• To add one or more tags to an Amazon RDS resource, use the AWS CLI command add-tags-to-
resource.
• To list the tags on an Amazon RDS resource, use the AWS CLI command list-tags-for-resource.
• To remove one or more tags from an Amazon RDS resource, use the AWS CLI command remove-
tags-from-resource.
To learn more about how to construct the required ARN, see Constructing an ARN for Amazon
RDS (p. 471).
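A sketch of the three commands against a DB instance ARN (the ARN, key, and value are illustrative):

```shell
# Add a tag to a DB instance.
aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
    --tags Key=project,Value=Salix

# List the tags on the DB instance.
aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance

# Remove the tag by key.
aws rds remove-tags-from-resource \
    --resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
    --tag-keys project
```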
RDS API
You can add, list, or remove tags for a DB instance using the Amazon RDS API.
To learn more about how to construct the required ARN, see Constructing an ARN for Amazon
RDS (p. 471).
When working with XML using the Amazon RDS API, tags use the following schema:
<Tagging>
<TagSet>
<Tag>
<Key>Project</Key>
<Value>Trinity</Value>
</Tag>
<Tag>
<Key>User</Key>
<Value>Jones</Value>
</Tag>
</TagSet>
</Tagging>
The following table provides a list of the allowed XML tags and their characteristics. Values for Key and
Value are case-sensitive. For example, project=Trinity and PROJECT=Trinity are two distinct tags.
TagSet A tag set is a container for all tags assigned to an Amazon RDS resource.
There can be only one tag set per resource. You work with a TagSet only
through the Amazon RDS API.
Key A key is the required name of the tag. The string value can be from 1 to 128
Unicode characters in length and cannot be prefixed with aws: or rds:. The
string can contain only the set of Unicode letters, digits, white space, '_',
'.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
Keys must be unique within a tag set. For example, you cannot have two tags
in a tag set with the same key but different values, such as project/
Trinity and project/Xanadu.
Value A value is the optional value of the tag. The string value can be from 1 to
256 Unicode characters in length and cannot be prefixed with aws: or rds:.
The string can contain only the set of Unicode letters, digits, white
space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
Values do not have to be unique in a tag set and can be null. For example,
you can have a key-value pair in a tag set of project/Trinity and cost-center/
Trinity.
You can specify that tags are copied to DB snapshots for the following actions:
• Creating a DB instance.
• Restoring a DB instance.
• Creating a read replica.
• Copying a DB snapshot.
In most cases, tags aren't copied by default. However, when you restore a DB instance from a DB
snapshot, RDS checks whether you specify new tags. If yes, the new tags are added to the restored DB
instance. If there are no new tags, RDS adds the tags from the source DB instance at the time of snapshot
creation to the restored DB instance.
To prevent tags from source DB instances from being added to restored DB instances, we recommend
that you specify new tags when restoring a DB instance.
Note
In some cases, you might include a value for the --tag-key parameter of the create-db-
snapshot AWS CLI command. Or you might supply at least one tag to the CreateDBSnapshot
API operation. In these cases, RDS doesn't copy tags from the source DB instance to the new DB
snapshot. This functionality applies even if the source DB instance has the --copy-tags-to-
snapshot (CopyTagsToSnapshot) option turned on.
If you take this approach, you can create a copy of a DB instance from a DB snapshot. This
approach avoids adding tags that don't apply to the new DB instance. You create your DB
snapshot using the AWS CLI create-db-snapshot command (or the CreateDBSnapshot
RDS API operation). After you create your DB snapshot, you can add tags as described later in
this topic.
The commands and APIs for tagging work with ARNs. That way, they can work seamlessly across
AWS Regions, AWS accounts, and different types of resources that might have identical short
names. You can specify the ARN instead of the DB instance ID in CLI commands that operate on
DB instances. Substitute the name of your own DB instances for dev-test-db-instance. In
subsequent commands that use ARN parameters, substitute the ARN of your own DB instance. The
ARN includes your own AWS account ID and the name of the AWS Region where your DB instance is
located.
You choose the name for this tag. This approach means that you can avoid devising a naming
convention that encodes all relevant information in names. In such a convention, you might encode
information in the DB instance name or names of other resources. Because this example treats
the tag as an attribute that is either present or absent, it omits the Value= part of the --tags
parameter.
These commands retrieve the tag information for the DB instance in JSON format and in plain tab-
separated text.
aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance \
    --output text

TAGLIST stoppable
4. To stop all the DB instances that are designated as stoppable, prepare a list of all your DB
instances. Loop through the list and check if each DB instance is tagged with the relevant attribute.
This Linux example uses shell scripting. This scripting saves the list of DB instance ARNs to a
temporary file and then performs CLI commands for each DB instance.
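A minimal sketch of such a script, assuming the tag key stoppable marks nonessential instances as in this tutorial, and that the AWS CLI is configured with credentials for the account (the temporary file path and messages are illustrative):

```shell
# Save the ARNs of all DB instances to a temporary file.
aws rds describe-db-instances \
    --query "*[].{DBInstanceArn:DBInstanceArn}" --output text \
    > /tmp/db_instance_arns.lst

# For each instance, stop it if it carries the stoppable tag.
while read -r arn; do
  if aws rds list-tags-for-resource --resource-name "$arn" --output text \
      | grep -q 'stoppable'; then
    dbid=${arn##*:}   # the instance identifier is the final ARN field
    echo "Stopping $dbid (tagged stoppable)"
    aws rds stop-db-instance --db-instance-identifier "$dbid"
  fi
done < /tmp/db_instance_arns.lst
```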
You can run a script like this at the end of each day to make sure that nonessential DB instances are
stopped. You might also schedule a job using a utility such as cron to perform such a check each night.
For example, you might do this in case some DB instances were left running by mistake. Here, you might
fine-tune the command that prepares the list of DB instances to check.
The following command produces a list of your DB instances, but only the ones in available state. The
script can ignore DB instances that are already stopped, because they will have different status values
such as stopped or stopping.
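A sketch of such a filter using the AWS CLI JMESPath --query option (the query expression is one reasonable way to express this, not the only one):

```shell
# List the ARNs of DB instances that are currently in the available state.
aws rds describe-db-instances \
    --query 'DBInstances[?DBInstanceStatus==`available`].DBInstanceArn' \
    --output text
```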
Tip
You can assign tags and then find the DB instances with those tags to reduce costs in other
ways. For example, take this scenario with DB instances used for development and testing. In
this case, you might designate some DB instances to be deleted at the end of each day. Or you
might designate them to be changed to small DB instance classes during times of expected
low usage.
To enable backups in AWS Backup, you use resource tagging to associate your DB instance with a backup
plan.
This example assumes that you have already created a backup plan in AWS Backup. You use exactly the
same tag for your DB instance that is in your backup plan, as shown in the following figure.
For more information about AWS Backup, see the AWS Backup Developer Guide.
You can assign a tag to a DB instance using the AWS Management Console, the AWS CLI, or the RDS API.
The following examples are for the console and CLI.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the link for the DB instance to which you want to assign a tag.
4. On the database details page, choose the Tags tab.
5. Under Tags, choose Add tags.
6. Under Add tags:
CLI
To assign a tag to a DB instance
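A sketch of the tagging call, assuming a backup plan that selects resources by the tag BackupPlan=Test (the ARN is illustrative; the second command returns the TagList shown following):

```shell
# Tag the DB instance so that it matches the AWS Backup plan's tag.
aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-west-2:123456789012:db:mydbinstance \
    --tags Key=BackupPlan,Value=Test

# Confirm the tag was applied.
aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-west-2:123456789012:db:mydbinstance
```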
{
"TagList": [
{
"Key": "BackupPlan",
"Value": "Test"
}
]
}
Working with ARNs
arn:aws:rds:<region>:<account number>:<resourcetype>:<name>
Endpoint Protocol
rds.us-east-2.api.aws HTTPS
rds-fips.us-east-2.amazonaws.com HTTPS
rds-fips.us-east-1.amazonaws.com HTTPS
rds.us-east-1.api.aws HTTPS
rds-fips.us-west-1.amazonaws.com HTTPS
rds-fips.us-west-1.api.aws HTTPS
rds.us-west-2.api.aws HTTPS
rds-fips.us-west-2.api.aws HTTPS
rds-fips.ca-central-1.api.aws HTTPS
rds-fips.ca-central-1.amazonaws.com HTTPS
The following table shows the format that you should use when constructing an ARN for a particular
Amazon RDS resource type.
DB instance arn:aws:rds:<region>:<account>:db:<name>
For example:
arn:aws:rds:us-east-2:123456789012:db:my-mysql-instance-1
DB cluster arn:aws:rds:<region>:<account>:cluster:<name>
For example:
arn:aws:rds:us-east-2:123456789012:cluster:my-aurora-cluster-1
Event subscription arn:aws:rds:<region>:<account>:es:<name>
For example:
arn:aws:rds:us-east-2:123456789012:es:my-subscription
Option group arn:aws:rds:<region>:<account>:og:<name>
For example:
arn:aws:rds:us-east-2:123456789012:og:my-og
DB parameter group arn:aws:rds:<region>:<account>:pg:<name>
For example:
arn:aws:rds:us-east-2:123456789012:pg:my-param-enable-logs
DB cluster parameter group arn:aws:rds:<region>:<account>:cluster-pg:<name>
For example:
arn:aws:rds:us-east-2:123456789012:cluster-pg:my-cluster-param-timezone
Reserved DB instance arn:aws:rds:<region>:<account>:ri:<name>
For example:
arn:aws:rds:us-east-2:123456789012:ri:my-reserved-postgresql
DB security group arn:aws:rds:<region>:<account>:secgrp:<name>
For example:
arn:aws:rds:us-east-2:123456789012:secgrp:my-public
Automated DB snapshot arn:aws:rds:<region>:<account>:snapshot:rds:<name>
For example:
arn:aws:rds:us-east-2:123456789012:snapshot:rds:my-mysql-db-2019-07-22-07-23
Automated DB cluster snapshot arn:aws:rds:<region>:<account>:cluster-snapshot:rds:<name>
For example:
arn:aws:rds:us-east-2:123456789012:cluster-snapshot:rds:my-aurora-cluster-2019-07-22-16-16
Manual DB snapshot arn:aws:rds:<region>:<account>:snapshot:<name>
For example:
arn:aws:rds:us-east-2:123456789012:snapshot:my-mysql-db-snap
Manual DB cluster snapshot arn:aws:rds:<region>:<account>:cluster-snapshot:<name>
For example:
arn:aws:rds:us-east-2:123456789012:cluster-snapshot:my-aurora-cluster-snap
DB subnet group arn:aws:rds:<region>:<account>:subgrp:<name>
For example:
arn:aws:rds:us-east-2:123456789012:subgrp:my-subnet-10
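Because every RDS ARN follows the same colon-separated format, you can pull out individual components in a shell script; a minimal sketch (the variable names are illustrative, not part of the AWS CLI):

```shell
# Split an RDS ARN on ':' into its positional fields.
arn="arn:aws:rds:us-east-2:123456789012:db:my-mysql-instance-1"
old_ifs=$IFS
IFS=':'
set -- $arn          # $1=arn $2=aws $3=rds $4=region $5=account $6=type $7=name
IFS=$old_ifs
echo "$4 $6 $7"      # prints: us-east-2 db my-mysql-instance-1
```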
Console
To get an ARN from the AWS Management Console, navigate to the resource you want an ARN for, and
view the details for that resource.
For example, you can get the ARN for a DB instance from the Configuration tab of the DB instance
details.
AWS CLI
To get an ARN from the AWS CLI for a particular RDS resource, you use the describe command for
that resource. The following table shows each AWS CLI command, and the ARN property used with the
command to get an ARN.
describe-event-subscriptions EventSubscriptionArn
describe-certificates CertificateArn
describe-db-parameter-groups DBParameterGroupArn
describe-db-cluster-parameter-groups DBClusterParameterGroupArn
describe-db-instances DBInstanceArn
describe-db-security-groups DBSecurityGroupArn
describe-db-snapshots DBSnapshotArn
describe-events SourceArn
describe-reserved-db-instances ReservedDBInstanceArn
describe-db-subnet-groups DBSubnetGroupArn
describe-option-groups OptionGroupArn
describe-db-clusters DBClusterArn
describe-db-cluster-snapshots DBClusterSnapshotArn
For example, the following AWS CLI command gets the ARN for a DB instance.
Example

aws rds describe-db-instances --db-instance-identifier instance_id \
    --query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceArn:DBInstanceArn}"

The command returns output similar to the following:
[
{
"DBInstanceArn": "arn:aws:rds:us-west-2:account_id:db:instance_id",
"DBInstanceIdentifier": "instance_id"
}
]
RDS API
To get an ARN for a particular RDS resource, you can call the following RDS API operations and use the
ARN properties shown following.
DescribeEventSubscriptions EventSubscriptionArn
DescribeCertificates CertificateArn
DescribeDBParameterGroups DBParameterGroupArn
DescribeDBClusterParameterGroups DBClusterParameterGroupArn
DescribeDBInstances DBInstanceArn
DescribeDBSecurityGroups DBSecurityGroupArn
DescribeDBSnapshots DBSnapshotArn
DescribeEvents SourceArn
DescribeReservedDBInstances ReservedDBInstanceArn
DescribeDBSubnetGroups DBSubnetGroupArn
DescribeOptionGroups OptionGroupArn
DescribeDBClusters DBClusterArn
DescribeDBClusterSnapshots DBClusterSnapshotArn
Working with storage
Topics
• Increasing DB instance storage capacity (p. 478)
• Managing capacity automatically with Amazon RDS storage autoscaling (p. 480)
• Modifying settings for Provisioned IOPS SSD storage (p. 484)
• I/O-intensive storage modifications (p. 486)
• Modifying settings for General Purpose SSD (gp3) storage (p. 486)
Increasing DB instance storage capacity

To monitor the amount of free storage for your DB instance so you can respond when necessary, we
recommend that you create an Amazon CloudWatch alarm. For more information on setting CloudWatch
alarms, see Using CloudWatch alarms.
Scaling storage usually doesn't cause any outage or performance degradation of the DB instance. After
you modify the storage size for a DB instance, the status of the DB instance is storage-optimization.
Note
Storage optimization can take several hours. You can't make further storage modifications for
either six (6) hours or until storage optimization has completed on the instance, whichever is
longer. You can view the storage optimization progress in the AWS Management Console or by
using the describe-db-instances AWS CLI command.
However, a special case is if you have a SQL Server DB instance and haven't modified the storage
configuration since November 2017. In this case, you might experience a short outage of a few minutes
when you modify your DB instance to increase the allocated storage. After the outage, the DB instance
is online but in the storage-optimization state. Performance might be degraded during storage
optimization.
Note
You can't reduce the amount of storage for a DB instance after storage has been allocated.
When you increase the allocated storage, it must be by at least 10 percent. If you try to increase
the value by less than 10 percent, you get an error.
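As a minimal sketch of the 10 percent rule above, the following Python helper (a hypothetical name, for illustration only; RDS enforces the rule server-side) computes the smallest acceptable new storage value:

```python
def min_valid_increase_gib(allocated_gib):
    """Smallest new allocated-storage value that is at least 10 percent
    above the current allocation (whole GiB, rounded up).

    Hypothetical helper for illustration; RDS enforces this server-side.
    """
    # Integer ceiling of allocated * 1.10, avoiding float rounding error.
    return -(-allocated_gib * 110 // 100)

def is_valid_increase(current_gib, requested_gib):
    # Storage can't shrink, and any increase must be at least 10 percent.
    return requested_gib >= min_valid_increase_gib(current_gib)

print(min_valid_increase_gib(100))   # 110
print(is_valid_increase(100, 105))   # False: less than a 10 percent increase
```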
Console
To increase storage for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
Or choose Apply during the next scheduled maintenance window to apply the changes during the
next maintenance window.
8. When the settings are as you want them, choose Modify DB instance.
AWS CLI
To increase the storage for a DB instance, use the AWS CLI command modify-db-instance. Set the
following parameters:

• --allocated-storage – Amount of storage to allocate for the DB instance, in gibibytes.
• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.
Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window. An immediate outage occurs when the changes are applied.
For more information about storage, see Amazon RDS DB instance storage (p. 101).
RDS API
To increase storage for a DB instance, use the Amazon RDS API operation ModifyDBInstance. Set the
following parameters:

• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.
• ApplyImmediately – Set this option to True to apply the storage changes immediately. Set this
option to False (the default) to apply the changes during the next maintenance window.
For more information about storage, see Amazon RDS DB instance storage (p. 101).
Managing capacity automatically with Amazon RDS storage autoscaling
For example, you might use this feature for a new mobile gaming application that users are adopting
rapidly. In this case, a rapidly increasing workload might exceed the available database storage. To avoid
having to manually scale up database storage, you can use Amazon RDS storage autoscaling.
With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database
space it automatically scales up your storage. Amazon RDS starts a storage modification for an
autoscaling-enabled DB instance when these factors apply:
• Free available space is less than or equal to 10 percent of the allocated storage.
• The low-storage condition lasts at least five minutes.
• At least six hours have passed since the last storage modification, or storage optimization has
completed on the instance, whichever is longer.
The additional storage is in increments of whichever of the following is greater:

• 10 GiB
• 10 percent of currently allocated storage
• Predicted storage growth exceeding the current allocated storage size in the next 7 hours based on
the FreeStorageSpace metrics from the past hour. For more information on metrics, see Monitoring
with Amazon CloudWatch.
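The trigger conditions and storage increments above can be sketched in a few lines of Python. The helper names are hypothetical and for illustration only; RDS evaluates these conditions internally:

```python
def should_autoscale(free_gib, allocated_gib, low_storage_minutes,
                     hours_since_last_modification, optimization_complete):
    """Return True when the autoscaling trigger conditions above all hold.

    Hypothetical helper; RDS evaluates these conditions internally.
    """
    return (free_gib <= 0.10 * allocated_gib          # free space <= 10 percent
            and low_storage_minutes >= 5              # low-storage for >= 5 minutes
            and hours_since_last_modification >= 6    # at least 6 hours since last change...
            and optimization_complete)                # ...and optimization done (whichever is longer)

def autoscaling_increment_gib(allocated_gib, predicted_growth_gib=0):
    """Storage added is the greatest of 10 GiB, 10 percent of the current
    allocation, and the predicted excess growth."""
    return max(10.0, 0.10 * allocated_gib, predicted_growth_gib)

print(should_autoscale(8, 100, 10, 7, True))   # True
print(autoscaling_increment_gib(500))          # 50.0
```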
The maximum storage threshold is the limit that you set for autoscaling the DB instance. It has the
following constraints:
• You must set the maximum storage threshold to at least 10% more than the current allocated storage.
We recommend setting it to at least 26% more to avoid receiving an event notification (p. 886) that
the storage size is approaching the maximum storage threshold.
For example, if you have a DB instance with 1000 GiB of allocated storage, then set the maximum
storage threshold to at least 1100 GiB. If you don't, you get an error such as Invalid max storage size
for engine_name. However, we recommend that you set the maximum storage threshold to at least
1260 GiB to avoid the event notification.
• For a DB instance that uses Provisioned IOPS storage, the ratio of IOPS to maximum storage threshold
(in GiB) must be from 1–50 on RDS for SQL Server, and 0.5–50 on other RDS DB engines.
• You can't set the maximum storage threshold for autoscaling-enabled instances to a value greater
than the maximum allocated storage for the database engine and DB instance class.
For example, SQL Server Standard Edition on db.m5.xlarge has a default allocated storage for the
instance of 20 GiB (the minimum) and a maximum allocated storage of 16,384 GiB. The default
maximum storage threshold for autoscaling is 1,000 GiB. If you use this default, the instance doesn't
autoscale above 1,000 GiB. This is true even though the maximum allocated storage for the instance is
16,384 GiB.
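The constraints above can be checked with a small validation sketch. This is a simplified, hypothetical helper (the console and API enforce additional engine-specific limits):

```python
def validate_max_storage_threshold(allocated_gib, threshold_gib,
                                   iops=None, engine="mysql"):
    """Simplified, hypothetical check of the maximum-storage-threshold
    constraints described above. Returns a list of violations."""
    errors = []
    # The threshold must be at least 10 percent above allocated storage.
    if threshold_gib < allocated_gib * 1.10:
        errors.append("threshold must be at least 10 percent above allocated storage")
    # For Provisioned IOPS storage, the IOPS-to-threshold ratio must be
    # 1-50 on RDS for SQL Server and 0.5-50 on other RDS DB engines.
    if iops is not None:
        low = 1.0 if engine == "sqlserver" else 0.5
        ratio = iops / threshold_gib
        if not (low <= ratio <= 50.0):
            errors.append(f"IOPS-to-threshold ratio {ratio:.2f} outside {low}-50")
    return errors

print(validate_max_storage_threshold(1000, 1050))             # threshold too low
print(validate_max_storage_threshold(1000, 1260, iops=3000))  # [] (valid)
```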
Note
We recommend that you carefully choose the maximum storage threshold based on usage
patterns and customer needs. If there are any aberrations in the usage patterns, the maximum
storage threshold can prevent scaling storage to an unexpectedly high value when autoscaling
predicts a very high threshold. After a DB instance has been autoscaled, its allocated storage
can't be reduced.
Topics
• Limitations (p. 481)
• Enabling storage autoscaling for a new DB instance (p. 481)
• Changing the storage autoscaling settings for a DB instance (p. 482)
• Turning off storage autoscaling for a DB instance (p. 483)
Limitations
The following limitations apply to storage autoscaling:
• Autoscaling doesn't occur if the maximum storage threshold would be equaled or exceeded by the
storage increment.
• When autoscaling, RDS predicts the storage size for subsequent autoscaling operations. If a
subsequent operation is predicted to exceed the maximum storage threshold, then RDS autoscales to
the maximum storage threshold.
• Autoscaling can't completely prevent storage-full situations for large data loads. This is because
further storage modifications can't be made for either six (6) hours or until storage optimization has
completed on the instance, whichever is longer.
If you perform a large data load, and autoscaling doesn't provide enough space, the database might
remain in the storage-full state for several hours. This can harm the database.
• If you start a storage scaling operation at the same time that Amazon RDS starts an autoscaling
operation, your storage modification takes precedence. The autoscaling operation is canceled.
• Autoscaling can't be used with magnetic storage.
• Autoscaling can't be used with the following previous-generation instance classes that have less than 6
TiB of orderable storage: db.m3.large, db.m3.xlarge, and db.m3.2xlarge.
• Autoscaling operations aren't logged by AWS CloudTrail. For more information on CloudTrail, see
Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940).
Enabling storage autoscaling for a new DB instance

Although automatic scaling helps you to increase storage on your Amazon RDS DB instance dynamically,
you should still configure the initial storage for your DB instance to an appropriate size for your typical
workload.

Console

To enable storage autoscaling for a new DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database. On the Select engine page, choose your database engine and specify your
DB instance information as described in Getting started with Amazon RDS (p. 180).
5. In the Storage autoscaling section, set the Maximum storage threshold value for the DB instance.
6. Specify the rest of your DB instance information as described in Getting started with Amazon
RDS (p. 180).
AWS CLI
To enable storage autoscaling for a new DB instance, use the AWS CLI command create-db-instance.
Set the following parameter:
• --max-allocated-storage – Turns on storage autoscaling and sets the upper limit on storage size,
in gibibytes.
To verify that Amazon RDS storage autoscaling is available for your DB instance, use the AWS CLI
describe-valid-db-instance-modifications command. To check based on the instance class
before creating an instance, use the describe-orderable-db-instance-options command. Check
the following field in the return value:
For more information about storage, see Amazon RDS DB instance storage (p. 101).
RDS API
To enable storage autoscaling for a new DB instance, use the Amazon RDS API operation
CreateDBInstance. Set the following parameter:
• MaxAllocatedStorage – Turns on Amazon RDS storage autoscaling and sets the upper limit on
storage size, in gibibytes.
To verify that Amazon RDS storage autoscaling is available for your DB instance, use the Amazon
RDS API DescribeValidDbInstanceModifications operation for an existing instance, or the
DescribeOrderableDBInstanceOptions operation before creating an instance. Check the following
field in the return value:
For more information about storage, see Amazon RDS DB instance storage (p. 101).
Changing the storage autoscaling settings for a DB instance

Console

To change the storage autoscaling settings for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
Changing the storage autoscaling limit occurs immediately. This setting ignores the Apply
immediately setting.
AWS CLI
To change the storage autoscaling settings for a DB instance, use the AWS CLI command
modify-db-instance. Set the following parameter:
• --max-allocated-storage – Sets the upper limit on storage size, in gibibytes. If the value is
greater than the --allocated-storage parameter, storage autoscaling is turned on. If the value is
the same as the --allocated-storage parameter, storage autoscaling is turned off.
To verify that Amazon RDS storage autoscaling is available for your DB instance, use the AWS CLI
describe-valid-db-instance-modifications command. To check based on the instance class
before creating an instance, use the describe-orderable-db-instance-options command. Check
the following field in the return value:
For more information about storage, see Amazon RDS DB instance storage (p. 101).
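The on/off semantics of --max-allocated-storage relative to --allocated-storage can be sketched as follows. The helper name is hypothetical, for illustration only:

```python
def autoscaling_state(allocated_gib, max_allocated_gib):
    """Interpret --max-allocated-storage relative to --allocated-storage,
    following the rules stated above (hypothetical helper)."""
    if max_allocated_gib > allocated_gib:
        return "autoscaling on"     # upper limit leaves room to grow
    if max_allocated_gib == allocated_gib:
        return "autoscaling off"    # limit equals allocation: no room to grow
    raise ValueError("max allocated storage can't be below allocated storage")

print(autoscaling_state(100, 1000))  # autoscaling on
print(autoscaling_state(100, 100))   # autoscaling off
```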
RDS API
To change the storage autoscaling settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameter:

• MaxAllocatedStorage – Sets the upper limit on storage size, in gibibytes. If the value is greater
than the AllocatedStorage parameter, storage autoscaling is turned on. If the value is the same as
the AllocatedStorage parameter, storage autoscaling is turned off.
To verify that Amazon RDS storage autoscaling is available for your DB instance, use the Amazon
RDS API DescribeValidDbInstanceModifications operation for an existing instance, or the
DescribeOrderableDBInstanceOptions operation before creating an instance. Check the following
field in the return value:
For more information about storage, see Amazon RDS DB instance storage (p. 101).
Turning off storage autoscaling for a DB instance

Console

To turn off storage autoscaling for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify and choose Modify. The Modify DB instance page
appears.
4. Clear the Enable storage autoscaling check box in the Storage autoscaling section. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
5. When all the changes are as you want them, choose Continue and check the modifications.
6. On the confirmation page, review your changes. If they're correct, choose Modify DB Instance to
save your changes. If they aren't correct, choose Back to edit your changes or Cancel to cancel your
changes.
Changing the storage autoscaling limit occurs immediately. This setting ignores the Apply immediately
setting.
AWS CLI
To turn off storage autoscaling for a DB instance, use the AWS CLI command modify-db-instance and
the following parameter:

• --max-allocated-storage – Specify a value equal to the --allocated-storage setting to turn off
storage autoscaling.
For more information about storage, see Amazon RDS DB instance storage (p. 101).
RDS API
To turn off storage autoscaling for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameter:

• MaxAllocatedStorage – Specify a value equal to the AllocatedStorage setting to turn off storage
autoscaling.
For more information about storage, see Amazon RDS DB instance storage (p. 101).
Modifying settings for Provisioned IOPS SSD storage

Although you can reduce the amount of IOPS provisioned for your instance, you can't reduce the storage
size.
In most cases, scaling storage doesn't require any outage and doesn't degrade performance of the
server. After you modify the storage IOPS for a DB instance, the status of the DB instance is storage-
optimization.
Note
Storage optimization can take several hours. You can't make further storage modifications for
either six (6) hours or until storage optimization has completed on the instance, whichever is
longer.
For information on the ranges of allocated storage and Provisioned IOPS available for each database
engine, see Provisioned IOPS SSD storage (p. 104).
Console
To change the Provisioned IOPS settings for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
To filter the list of DB instances, for Filter databases enter a text string for Amazon RDS to use to
filter the results. Only DB instances whose names contain the string appear.
3. Choose the DB instance with Provisioned IOPS that you want to modify.
4. Choose Modify.
5. On the Modify DB instance page, choose Provisioned IOPS SSD (io1) for Storage type.
6. For Provisioned IOPS, enter a value.
If the value that you specify for either Allocated storage or Provisioned IOPS is outside the limits
supported by the other parameter, a warning message is displayed. This message gives the range of
values required for the other parameter.
7. Choose Continue.
8. Choose Apply immediately in the Scheduling of modifications section to apply the changes to the
DB instance immediately. Or choose Apply during the next scheduled maintenance window to
apply the changes during the next maintenance window.
9. Review the parameters to be changed, and choose Modify DB instance to complete the
modification.
The new value for allocated storage or for Provisioned IOPS appears in the Status column.
AWS CLI
To change the Provisioned IOPS setting for a DB instance, use the AWS CLI command
modify-db-instance. Set the following parameters:

• --storage-type – Set to io1 for Provisioned IOPS SSD storage.
• --allocated-storage – Amount of storage to allocate for the DB instance, in gibibytes.
• --iops – The new amount of Provisioned IOPS for the DB instance.
RDS API
To change the Provisioned IOPS settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameters:

• StorageType – Set to io1 for Provisioned IOPS SSD storage.
• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.
• Iops – The new amount of Provisioned IOPS for the DB instance.
I/O-intensive storage modifications
In most cases, storage scaling modifications are completely offloaded to the Amazon EBS layer and
are transparent to the database. This process is typically completed within a few minutes. However,
some older RDS storage volumes require a different process for modifying the size, Provisioned IOPS, or
storage type. This involves making a full copy of the data using a potentially I/O-intensive operation.
Storage modification uses an I/O-intensive operation if any of the following factors apply:
• The source storage type is magnetic. Magnetic storage doesn't support elastic volume modification.
• The RDS DB instance isn't on a one- or four-volume Amazon EBS layout. You can view the number
of Amazon EBS volumes in use on your RDS DB instances by using Enhanced Monitoring metrics. For
more information, see Viewing OS metrics in the RDS console (p. 802).
• The target size of the modification request increases the allocated storage above 400 GiB for RDS
for MariaDB, MySQL, and PostgreSQL instances, and 200 GiB for RDS for Oracle. Storage autoscaling
operations have the same effect when they increase the allocated storage size of your DB instance
above these thresholds.
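The factors above can be summarized as a simple predicate. This is a hypothetical sketch using the thresholds stated in the text, not a definitive implementation:

```python
def is_io_intensive(source_storage_type, ebs_volume_count, target_gib, engine):
    """Sketch of the factors above that make a storage modification use a
    full-copy, I/O-intensive operation (hypothetical helper; thresholds
    taken from the text)."""
    if source_storage_type == "magnetic":
        return True   # magnetic storage doesn't support elastic volume modification
    if ebs_volume_count not in (1, 4):
        return True   # not on a one- or four-volume Amazon EBS layout
    # Target-size thresholds: 400 GiB for MariaDB, MySQL, and PostgreSQL;
    # 200 GiB for Oracle.
    threshold = 200 if engine == "oracle" else 400
    return target_gib > threshold

print(is_io_intensive("gp2", 1, 500, "mysql"))   # True: crosses the 400 GiB threshold
print(is_io_intensive("gp3", 4, 300, "mysql"))   # False
```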
If your storage modification involves an I/O-intensive operation, it consumes I/O resources and increases
the load on your DB instance. Storage modifications with I/O-intensive operations involving General
Purpose SSD (gp2) storage can deplete your I/O credit balance, resulting in longer conversion times.
As a best practice, we recommend that you schedule these storage modification requests outside of peak
hours to help reduce the time required to complete the storage modification operation. Alternatively, you can
create a read replica of the DB instance and perform the storage modification on the read replica. Then
promote the read replica to be the primary DB instance. For more information, see Working with DB
instance read replicas (p. 438).
For more information, see Why is an Amazon RDS DB instance stuck in the modifying state when I try to
increase the allocated storage?
Modifying settings for General Purpose SSD (gp3) storage

In most cases, scaling storage doesn't require any outage. After you modify the storage IOPS for a DB
instance, the status of the DB instance is storage-optimization. You can expect elevated latencies,
but still within the single-digit millisecond range, during storage optimization. The DB instance is fully
operational after a storage modification.
Note
You can't make further storage modifications until six (6) hours after storage optimization has
completed on the instance.
For information on the ranges of allocated storage, Provisioned IOPS, and storage throughput available
for each database engine, see gp3 storage (p. 103).
Console
To change the storage performance settings for a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
To filter the list of DB instances, for Filter databases enter a text string for Amazon RDS to use to
filter the results. Only DB instances whose names contain the string appear.
3. Choose the DB instance with gp3 storage that you want to modify.
4. Choose Modify.
5. On the Modify DB Instance page, choose General Purpose SSD (gp3) for Storage type, then do the
following:

a. For Provisioned IOPS, enter a value.
If the value that you specify for either Allocated storage or Provisioned IOPS is outside the
limits supported by the other parameter, a warning message appears. This message gives the
range of values required for the other parameter.
b. For Storage throughput, choose a value.
If the value that you specify for either Provisioned IOPS or Storage throughput is outside the
limits supported by the other parameter, a warning message appears. This message gives the
range of values required for the other parameter.
6. Choose Continue.
7. Choose Apply immediately in the Scheduling of modifications section to apply the changes to the
DB instance immediately. Or choose Apply during the next scheduled maintenance window to
apply the changes during the next maintenance window.
8. Review the parameters to be changed, and choose Modify DB instance to complete the
modification.
The new value for Provisioned IOPS appears in the Status column.
AWS CLI
To change the storage performance settings for a DB instance, use the AWS CLI command
modify-db-instance. Set the following parameters:

• --storage-type – Set to gp3 for General Purpose SSD (gp3) storage.
• --allocated-storage – Amount of storage to allocate for the DB instance, in gibibytes.
• --iops – The new amount of Provisioned IOPS for the DB instance.
• --storage-throughput – The new storage throughput for the DB instance.
RDS API
To change the storage performance settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameters:

• StorageType – Set to gp3 for General Purpose SSD (gp3) storage.
• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.
• Iops – The new amount of Provisioned IOPS for the DB instance.
• StorageThroughput – The new storage throughput for the DB instance.
Deleting a DB instance
You can delete a DB instance using the AWS Management Console, the AWS CLI, or the RDS API. If you
want to delete a DB instance in an Aurora DB cluster, see Deleting Aurora DB clusters and DB instances.
Topics
• Prerequisites for deleting a DB instance (p. 489)
• Considerations when deleting a DB instance (p. 489)
• Deleting a DB instance (p. 490)
Prerequisites for deleting a DB instance

If your DB instance has deletion protection turned on, you can turn it off by modifying your instance
settings. Choose Modify in the database details page or call the modify-db-instance command. This
operation doesn't cause an outage. For more information, see Settings for DB instances (p. 402).
Considerations when deleting a DB instance

• You can choose whether to create a final DB snapshot. You have the following options:
• If you take a final snapshot, you can use it to restore your deleted DB instance. RDS retains both
the final snapshot and any manual snapshots that you took previously. You can't create a final DB
snapshot of your DB instance if it isn't in the Available state. For more information, see Viewing
Amazon RDS DB instance status (p. 684).
• If you don't take a final snapshot, deletion is faster. However, you can't use a final snapshot to
restore your DB instance. If you later decide to restore your deleted DB instance, either retain
automated backups or use an earlier manual snapshot to restore your DB instance to the point in
time of the snapshot.
• You can choose whether to retain automated backups. You have the following options:
• If you retain automated backups, RDS keeps them for the retention period that is in effect for the DB
instance at the time when you delete it. You can use automated backups to restore your DB instance
to a time during but not after your retention period. The retention period is in effect regardless
of whether you create a final DB snapshot. To delete a retained automated backup, see Deleting
retained automated backups (p. 596).
• Retained automated backups and manual snapshots incur billing charges until they're deleted. For
more information, see Retention costs (p. 596).
• If you don't retain automated backups, RDS deletes the automated backups that reside in the
same AWS Region as your DB instance. You can't recover these backups. If your automated backups
have been replicated to another AWS Region, RDS keeps them even if you don't choose to retain
automated backups. For more information, see Replicating automated backups to another AWS
Region (p. 602).
Note
Typically, if you create a final DB snapshot, you don't need to retain automated backups.
• When you delete your DB instance, RDS doesn't delete manual DB snapshots. For more information,
see Creating a DB snapshot (p. 613).
• If you want to delete all RDS resources, note that the following resources incur billing charges:
• DB instances
• DB snapshots
• DB clusters
If you purchased reserved instances, then they are billed according to the contract that you agreed to
when you purchased the instance. For more information, see Reserved DB instances for Amazon
RDS (p. 165). You can get billing information for all your AWS resources by using the AWS Cost
Explorer. For more information, see Analyzing your costs with AWS Cost Explorer.
• If you delete a DB instance that has read replicas in the same AWS Region, each read replica is
automatically promoted to a standalone DB instance. For more information, see Promoting a read
replica to be a standalone DB instance (p. 447). If your DB instance has read replicas in different AWS
Regions, see Cross-Region replication considerations (p. 456) for information related to deleting the
source DB instance for a cross-Region read replica.
• When the status for a DB instance is deleting, its CA certificate value doesn't appear in the RDS
console or in output for AWS CLI commands or RDS API operations. For more information about CA
certificates, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).
• The time required to delete a DB instance varies depending on the backup retention period (that is,
how many backups to delete), how much data is deleted, and whether a final snapshot is taken.
Deleting a DB instance
You can delete a DB instance using the AWS Management Console, the AWS CLI, or the RDS API. You
must do the following:
Note
You can't delete a DB instance when deletion protection is turned on. For more information, see
Prerequisites for deleting a DB instance (p. 489).
Console
To delete a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to delete.
3. For Actions, choose Delete.
4. To create a final DB snapshot for the DB instance, choose Create final snapshot?.
5. If you chose to create a final snapshot, enter the Final snapshot name.
6. To retain automated backups, choose Retain automated backups.
7. Enter delete me in the box.
8. Choose Delete.
AWS CLI
To find the instance IDs of the DB instances in your account, call the describe-db-instances command:

aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier]' --output text
To delete a DB instance by using the AWS CLI, call the delete-db-instance command with the following
options:
• --db-instance-identifier
• --final-db-snapshot-identifier or --skip-final-snapshot
RDS API
To delete a DB instance by using the Amazon RDS API, call the DeleteDBInstance operation with the
following parameters:
• DBInstanceIdentifier
• FinalDBSnapshotIdentifier or SkipFinalSnapshot
Configuring and managing a Multi-AZ deployment

You can use the AWS Management Console to determine whether a Multi-AZ deployment is a Multi-AZ
DB instance deployment or a Multi-AZ DB cluster deployment. In the navigation pane, choose Databases,
and then choose a DB identifier.
Topics
• Multi-AZ DB instance deployments (p. 493)
• Multi-AZ DB cluster deployments (p. 499)
In addition, the following topics apply to both DB instances and Multi-AZ DB clusters:
Multi-AZ DB instance deployments
Using the RDS console, you can create a Multi-AZ DB instance deployment by simply specifying Multi-
AZ when creating a DB instance. You can use the console to convert existing DB instances to Multi-AZ
DB instance deployments by modifying the DB instance and specifying the Multi-AZ option. You can
also specify a Multi-AZ DB instance deployment with the AWS CLI or Amazon RDS API. Use the create-
db-instance or modify-db-instance CLI command, or the CreateDBInstance or ModifyDBInstance API
operation.
The RDS console shows the Availability Zone of the standby replica (called the secondary AZ). You can
also use the describe-db-instances CLI command or the DescribeDBInstances API operation to find the
secondary AZ.
DB instances using Multi-AZ DB instance deployments can have increased write and commit latency
compared to a Single-AZ deployment. This can happen because of the synchronous data replication that
occurs. You might have a change in latency if your deployment fails over to the standby replica, although
AWS is engineered with low-latency network connectivity between Availability Zones. For production
workloads, we recommend that you use Provisioned IOPS (input/output operations per second) for fast,
consistent performance. For more information about DB instance classes, see DB instance classes (p. 11).
When you modify a DB instance to be a Multi-AZ DB instance deployment, Amazon RDS does the
following:

1. Takes a snapshot of the primary DB instance's Amazon Elastic Block Store (EBS) volumes.
2. Creates new volumes for the standby replica from the snapshot. These volumes initialize in the
background, and maximum volume performance is achieved after the data is fully initialized.
3. Turns on synchronous block-level replication between the volumes of the primary and standby
replicas.
Important
Using a snapshot to create the standby instance avoids downtime when you convert from
Single-AZ to Multi-AZ, but you can experience a performance impact during and after
converting to Multi-AZ. This impact can be significant for workloads that are sensitive to write
latency.
While this capability lets large volumes be restored from snapshots quickly, it can cause a
significant increase in the latency of I/O operations because of the synchronous replication. This
latency can impact your database performance. As a best practice, we highly recommend not
performing a Multi-AZ conversion on a production DB instance.
To avoid the performance impact on the DB instance currently serving the sensitive workload,
create a read replica and enable backups on the read replica. Convert the read replica to Multi-
AZ, and run queries that load the data into the read replica's volumes (on both AZs). Then
promote the read replica to be the primary DB instance. For more information, see Working with
DB instance read replicas (p. 438).
Topics
• Convert to a Multi-AZ DB instance deployment with the RDS console (p. 494)
• Modifying a DB instance to be a Multi-AZ DB instance deployment (p. 495)
Convert to a Multi-AZ DB instance deployment with the RDS console

You can only use the console to complete the conversion. To use the AWS CLI or RDS API, follow the
instructions in Modifying a DB instance to be a Multi-AZ DB instance deployment (p. 495).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. From Actions, choose Convert to Multi-AZ deployment.
4. On the confirmation page, choose Apply immediately to apply the changes immediately. Choosing
this option doesn't cause downtime, but there is a possible performance impact. Alternatively, you
can choose to apply the update during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).
5. Choose Convert to Multi-AZ.
Modifying a DB instance to be a Multi-AZ DB instance deployment

To modify a DB instance to be a Multi-AZ DB instance deployment, do one of the following:

• Using the RDS console, modify the DB instance, and set Multi-AZ deployment to Yes.
• Using the AWS CLI, call the modify-db-instance command, and set the --multi-az option.
• Using the RDS API, call the ModifyDBInstance operation, and set the MultiAZ parameter to true.
For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
After the modification is complete, Amazon RDS triggers an event (RDS-EVENT-0025) that indicates
the process is complete. You can monitor Amazon RDS events. For more information about events, see
Working with Amazon RDS event notification (p. 855).
Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. The primary DB instance switches over automatically to
the standby replica if any of the following conditions occurs. You can view these failover reasons in
the event log.
• The operating system underlying the RDS database instance is being patched in an offline operation.
  Event log description: A failover was triggered during the maintenance window for an OS patch or a security update.
• The primary host of the RDS Multi-AZ instance is unhealthy.
  Event log description: The Multi-AZ DB instance deployment detected an impaired primary DB instance and failed over.
• The primary host of the RDS Multi-AZ instance is unreachable due to loss of network connectivity.
  Event log description: RDS monitoring detected a network reachability failure to the primary DB instance and triggered a failover.
• The RDS instance was modified by the customer.
  Event log description: An RDS DB instance modification triggered a failover.
• The RDS Multi-AZ primary instance is busy and unresponsive.
  Event log description: The primary DB instance is unresponsive.
• The storage volume underlying the primary host of the RDS Multi-AZ instance experienced a failure.
  Event log description: The Multi-AZ DB instance deployment detected a storage issue on the primary DB instance and failed over.
• The user requested a failover of the DB instance.
  Event log description: You rebooted the DB instance and chose Reboot with failover.
To determine if your Multi-AZ DB instance has failed over, you can do the following:
• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).
• View your DB events by using the RDS console or API operations.
• View the current state of your Multi-AZ DB instance deployment by using the RDS console or API
operations.
For information on how you can respond to failovers, reduce recovery time, and other best practices for
Amazon RDS, see Best practices for Amazon RDS (p. 286).
The JVM caches DNS name lookups. When the JVM resolves a host name to an IP address, it caches the IP
address for a specified period of time, known as the time-to-live (TTL).
Because AWS resources use DNS name entries that occasionally change, we recommend that you
configure your JVM with a TTL value of no more than 60 seconds. Doing this makes sure that when a
resource's IP address changes, your application can receive and use the resource's new IP address by
requerying the DNS.
On some Java configurations, the JVM default TTL is set so that it never refreshes DNS entries until
the JVM is restarted. Thus, if the IP address for an AWS resource changes while your application is still
running, it can't use that resource until you manually restart the JVM and the cached IP information
is refreshed. In this case, it's crucial to set the JVM's TTL so that it periodically refreshes its cached IP
information.
You can get the JVM default TTL by retrieving the networkaddress.cache.ttl property value:
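A minimal Java sketch of that lookup follows (the class name is illustrative; java.security.Security.getProperty is the standard API):

```java
// Read the JVM's configured DNS cache TTL.
// A null result means the property isn't set in the java.security
// file, so the JVM falls back to an implementation-specific default.
public class DnsTtlCheck {
    public static void main(String[] args) {
        String ttl = java.security.Security.getProperty("networkaddress.cache.ttl");
        System.out.println("networkaddress.cache.ttl = " + ttl);
    }
}
```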
Note
The default TTL can vary according to the version of your JVM and whether a security manager
is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM
and not using a security manager, you can ignore the rest of this topic. For more information on
security managers in Oracle, see The security manager in the Oracle documentation.
To modify the JVM's TTL, set the networkaddress.cache.ttl property value. Use one of the
following methods, depending on your needs:
• To set the property value globally for all applications that use the JVM, set
networkaddress.cache.ttl in the $JAVA_HOME/jre/lib/security/java.security file.
networkaddress.cache.ttl=60
• To set the property locally for your application only, set networkaddress.cache.ttl in your
application's initialization code before any network connections are established.
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
Multi-AZ DB cluster deployments
You can import data from an on-premises database to a Multi-AZ DB cluster by following the instructions
in Importing data to an Amazon RDS MariaDB or MySQL database with reduced downtime (p. 1690).
You can purchase reserved DB instances for a Multi-AZ DB cluster. For more information, see Reserved
DB instances for a Multi-AZ DB cluster (p. 168).
Topics
• Region and version availability (p. 499)
• Instance class availability (p. 499)
• Overview of Multi-AZ DB clusters (p. 500)
• Limitations for Multi-AZ DB clusters (p. 501)
• Managing a Multi-AZ DB cluster with the AWS Management Console (p. 502)
• Working with parameter groups for Multi-AZ DB clusters (p. 503)
• Upgrading the engine version of a Multi-AZ DB cluster (p. 503)
• Replica lag and Multi-AZ DB clusters (p. 504)
• Failover process for Multi-AZ DB clusters (p. 505)
• Creating a Multi-AZ DB cluster (p. 508)
• Connecting to a Multi-AZ DB cluster (p. 522)
• Automatically connecting an AWS compute resource and a Multi-AZ DB cluster (p. 525)
• Modifying a Multi-AZ DB cluster (p. 539)
• Renaming a Multi-AZ DB cluster (p. 550)
• Rebooting a Multi-AZ DB cluster and reader DB instances (p. 552)
• Working with Multi-AZ DB cluster read replicas (p. 554)
• Using PostgreSQL logical replication with Multi-AZ DB clusters (p. 561)
• Deleting a Multi-AZ DB cluster (p. 563)
Important
Multi-AZ DB clusters aren't the same as Aurora DB clusters. For information about Aurora DB
clusters, see the Amazon Aurora User Guide.
Overview of Multi-AZ DB clusters
For more information about DB instance classes, see the section called “DB instance classes” (p. 11).
Reader DB instances act as automatic failover targets and also serve read traffic to increase application
read throughput. If an outage occurs on your writer DB instance, RDS manages failover to one of the
reader DB instances. RDS does this based on which reader DB instance has the most recent change
record.
Multi-AZ DB clusters typically have lower write latency when compared to Multi-AZ DB instance
deployments. They also allow read-only workloads to run on reader DB instances. The RDS console
shows the Availability Zone of the writer DB instance and the Availability Zones of the reader DB
instances. You can also use the describe-db-clusters CLI command or the DescribeDBClusters API
operation to find this information.
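For example, the describe-db-clusters command mentioned above can list each cluster member and whether it is the writer; the cluster identifier here is a placeholder:

```shell
# List each member of a Multi-AZ DB cluster and its role
# (IsClusterWriter is true for the writer DB instance).
aws rds describe-db-clusters \
    --db-cluster-identifier my-multi-az-cluster \
    --query 'DBClusters[0].DBClusterMembers[*].[DBInstanceIdentifier,IsClusterWriter]' \
    --output table
```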
Important
To prevent replication errors in RDS for MySQL Multi-AZ DB clusters, we strongly recommend
that all tables have a primary key.
As an alternative, you can restore a Multi-AZ DB cluster to a point in time and specify a different
port.
• Option groups
• Point-in-time-recovery (PITR) for deleted clusters
• Restoring a Multi-AZ DB cluster snapshot from an Amazon S3 bucket
• Storage autoscaling by setting the maximum allocated storage
RDS for MySQL Multi-AZ DB clusters don't support other system stored procedures. For information
about these procedures, see RDS for MySQL stored procedure reference (p. 1757).
• RDS for PostgreSQL Multi-AZ DB clusters don't support the following PostgreSQL extensions: aws_s3
and pg_transport.
• RDS for PostgreSQL Multi-AZ DB clusters don't support using a custom DNS server for outbound
network access.
Managing a Multi-AZ DB cluster with the AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
manage.
The available actions in the Actions menu depend on whether the Multi-AZ DB cluster is selected or a DB
instance in the cluster is selected.
Choose the Multi-AZ DB cluster to view the cluster details and perform actions at the cluster level.
Choose a DB instance in a Multi-AZ DB cluster to view the DB instance details and perform actions at the
DB instance level.
Working with parameter groups for Multi-AZ DB clusters
In a Multi-AZ DB cluster, a DB parameter group is set to the default DB parameter group for the DB
engine and DB engine version. The settings in the DB cluster parameter group are used for all of the DB
instances in the cluster.
For information about parameter groups, see Working with parameter groups (p. 347).
There are two kinds of upgrades: major version upgrades and minor version upgrades. In general, a
major engine version upgrade can introduce changes that aren't compatible with existing applications.
In contrast, a minor version upgrade includes only changes that are backward-compatible with existing
applications.
Note
Currently, major version upgrades are only supported for RDS for PostgreSQL Multi-AZ DB
clusters. Minor version upgrades are supported for all DB engines that support Multi-AZ DB
clusters.
Amazon RDS doesn't automatically upgrade Multi-AZ DB cluster read replicas. For minor version
upgrades, you must first manually upgrade all read replicas and then upgrade the cluster; otherwise, the
upgrade is blocked. When you perform a major version upgrade of a cluster, the replication state of all
read replicas changes to terminated. You must delete and re-create the read replicas after the upgrade
completes. For more information, see the section called “Monitoring read replication” (p. 449).
The process for upgrading the engine version of a Multi-AZ DB cluster is the same as the process for
upgrading a DB instance engine version. For instructions, see the section called “Upgrading the engine
version” (p. 429). The only difference is that when using the AWS CLI, you use the modify-db-cluster
command and specify the --db-cluster-identifier parameter (as well as the --allow-major-
version-upgrade parameter).
For more information about major and minor version upgrades for RDS for PostgreSQL, see the
following documentation for your DB engine:
Replica lag and Multi-AZ DB clusters
Although Multi-AZ DB clusters allow for high write performance, replica lag can still occur because of
the nature of engine-based replication. Because any failover must first resolve the replica lag before it
promotes a new writer DB instance, monitoring and managing replica lag is an important consideration.
For RDS for MySQL Multi-AZ DB clusters, failover time depends on replica lag of both remaining reader
DB instances. Both the reader DB instances must apply unapplied transactions before one of them is
promoted to the new writer DB instance.
For RDS for PostgreSQL Multi-AZ DB clusters, failover time depends on the lowest replica lag of the two
remaining reader DB instances. The reader DB instance with the lowest replica lag must apply unapplied
transactions before it is promoted to the new writer DB instance.
For a tutorial that shows you how to create a CloudWatch alarm when replica lag exceeds a set
amount of time, see Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica
lag (p. 713).
• High write concurrency or heavy batch updating on the writer DB instance, causing the apply process
on the reader DB instances to fall behind.
• Heavy read workload that is using resources on one or more reader DB instances. Running slow or
large queries can affect the apply process and can cause replica lag.
• Transactions that modify large amounts of data or DDL statements can sometimes cause a temporary
increase in replica lag because the database must preserve commit order.
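As a sketch of the alarm described in the tutorial above, the following AWS CLI command creates a CloudWatch alarm on the ReplicaLag metric; the instance identifier, threshold, and SNS topic ARN are placeholders:

```shell
# Alarm when replica lag on a reader DB instance exceeds 1200 seconds
# for five consecutive one-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name multi-az-cluster-replica-lag \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=my-cluster-instance-1 \
    --statistic Maximum \
    --period 60 \
    --evaluation-periods 5 \
    --threshold 1200 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
```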
Mitigating replica lag with flow control for RDS for MySQL
When you are using RDS for MySQL Multi-AZ DB clusters, flow control is turned on by default using the
dynamic parameter rpl_semi_sync_master_target_apply_lag. This parameter specifies the upper
limit that you want for replica lag. As replica lag approaches this configured limit, flow control throttles
the write transactions on the writer DB instance to try to contain the replica lag below the specified
value. In some cases, replica lag can exceed the specified limit. By default, this parameter is set to 120
seconds. To turn off flow control, set this parameter to its maximum value of 86400 seconds (one day).
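For example, you might turn off flow control in a custom DB cluster parameter group like the following; the parameter group name is a placeholder:

```shell
# Set the flow control apply-lag target to its maximum (86400 seconds),
# which effectively turns flow control off.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=rpl_semi_sync_master_target_apply_lag,ParameterValue=86400,ApplyMethod=immediate"
```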
To view the current delay injected by flow control, query the
Rpl_semi_sync_master_flow_control_current_delay status variable:
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_flow_control_current_delay';
+-------------------------------------------------+-------+
| Variable_name | Value |
+-------------------------------------------------+-------+
| Rpl_semi_sync_master_flow_control_current_delay | 2010 |
+-------------------------------------------------+-------+
1 row in set (0.00 sec)
Note
The delay is shown in microseconds.
When you have Performance Insights turned on for an RDS for MySQL Multi-AZ DB cluster, you can
monitor the wait event that indicates when queries were delayed by flow control. When flow control
introduces a delay, the wait event /wait/synch/cond/semisync/
semi_sync_flow_control_delay_cond appears for the corresponding SQL statement on the
Performance Insights dashboard. To view these metrics, make sure that the Performance Schema is
turned on. For information about Performance Insights, see Monitoring DB load with Performance
Insights on Amazon RDS (p. 720).
Mitigating replica lag with flow control for RDS for PostgreSQL
When you are using RDS for PostgreSQL Multi-AZ DB clusters, flow control is deployed as an extension. It
turns on a background worker for all DB instances in the DB cluster. By default, the background workers
on the reader DB instances communicate the current replica lag with the background worker on the
writer DB instance. If the lag exceeds two minutes on any reader DB instance, the background worker
on the writer DB instance adds a delay at the end of a transaction. To control the lag threshold, use the
parameter flow_control.target_standby_apply_lag.
When flow control throttles a PostgreSQL process, the Extension wait event in pg_stat_activity
and in Performance Insights indicates it. The function get_flow_control_stats displays details
about how much delay is currently being added.
Flow control can benefit most online transaction processing (OLTP) workloads that have short but highly
concurrent transactions. If the lag is caused by long-running transactions, such as batch operations, flow
control doesn't provide as strong a benefit.
You can turn off flow control by removing the extension from the shared_preload_libraries
parameter and rebooting your DB instance.
Failover process for Multi-AZ DB clusters
The time it takes for the failover to complete depends on the database activity and other conditions when the writer DB
instance became unavailable. Failover times are typically under 35 seconds. Failover completes when
both reader DB instances have applied outstanding transactions from the failed writer. When the failover
is complete, it can take additional time for the RDS console to reflect the new Availability Zone.
Topics
• Automatic failovers (p. 506)
• Manually failing over a Multi-AZ DB cluster (p. 506)
• Determining whether a Multi-AZ DB cluster has failed over (p. 506)
• Setting the JVM TTL for DNS name lookups (p. 507)
Automatic failovers
Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. During a failover, the writer role switches automatically
to one of the reader DB instances.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to fail over.
4. For Actions, choose Failover.
AWS CLI
To fail over a Multi-AZ DB cluster manually, use the AWS CLI command failover-db-cluster.
Example
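A minimal example might look like the following; the cluster identifier is a placeholder:

```shell
# Manually fail over a Multi-AZ DB cluster to one of its readers.
aws rds failover-db-cluster --db-cluster-identifier my-multi-az-cluster
```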
RDS API
To fail over a Multi-AZ DB cluster manually, call the Amazon RDS API operation FailoverDBCluster and
specify the DBClusterIdentifier parameter.
• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).
• View your DB events by using the Amazon RDS console or API operations.
• View the current state of your Multi-AZ DB cluster by using the Amazon RDS console, the AWS CLI, and
the RDS API.
For information on how you can respond to failovers, reduce recovery time, and other best practices for
Amazon RDS, see Best practices for Amazon RDS (p. 286).
The JVM caches DNS name lookups. When the JVM resolves a host name to an IP address, it caches the IP
address for a specified period of time, known as the time-to-live (TTL).
Because AWS resources use DNS name entries that occasionally change, we recommend that you
configure your JVM with a TTL value of no more than 60 seconds. Doing this makes sure that when a
resource's IP address changes, your application can receive and use the resource's new IP address by
requerying the DNS.
On some Java configurations, the JVM default TTL is set so that it never refreshes DNS entries until
the JVM is restarted. Thus, if the IP address for an AWS resource changes while your application is still
running, it can't use that resource until you manually restart the JVM and the cached IP information
is refreshed. In this case, it's crucial to set the JVM's TTL so that it periodically refreshes its cached IP
information.
Note
The default TTL can vary according to the version of your JVM and whether a security manager
is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM
and not using a security manager, you can ignore the rest of this topic. For more information on
security managers in Oracle, see The security manager in the Oracle documentation.
To modify the JVM's TTL, set the networkaddress.cache.ttl property value. Use one of the
following methods, depending on your needs:
• To set the property value globally for all applications that use the JVM, set
networkaddress.cache.ttl in the $JAVA_HOME/jre/lib/security/java.security file.
networkaddress.cache.ttl=60
• To set the property locally for your application only, set networkaddress.cache.ttl in your
application's initialization code before any network connections are established.
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
Creating a Multi-AZ DB cluster
DB cluster prerequisites
Important
Before you can create a Multi-AZ DB cluster, you must complete the tasks in Setting up for
Amazon RDS (p. 174).
Topics
• Configure the network for the DB cluster (p. 508)
• Additional prerequisites (p. 511)
To set up connectivity between your new DB cluster and an Amazon EC2 instance in the same VPC, do so
when you create the DB cluster. To connect to your DB cluster from resources other than EC2 instances in
the same VPC, configure the network connections manually.
Topics
• Configure automatic network connectivity with an EC2 instance (p. 508)
• Configure the network manually (p. 510)
When you create a Multi-AZ DB cluster, you can use the AWS Management Console to set up connectivity
between an EC2 instance and the new DB cluster. When you do so, RDS configures your VPC and network
settings automatically. The DB cluster is created in the same VPC as the EC2 instance so that the EC2
instance can access the DB cluster.
The following are requirements for connecting an EC2 instance with the DB cluster:
• The EC2 instance must exist in the AWS Region before you create the DB cluster.
If no EC2 instances exist in the AWS Region, the console provides a link to create one.
• The user who is creating the DB cluster must have permissions to perform the following operations:
• ec2:AssociateRouteTable
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateRouteTable
• ec2:CreateSubnet
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeRouteTables
• ec2:DescribeSecurityGroups
• ec2:DescribeSubnets
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress
Using this option creates a private DB cluster. The DB cluster uses a DB subnet group with only private
subnets to restrict access to resources within the VPC.
To connect an EC2 instance to the DB cluster, choose Connect to an EC2 compute resource in the
Connectivity section on the Create database page.
When you choose Connect to an EC2 compute resource, RDS sets the following options automatically.
You can't change these settings unless you choose not to set up connectivity with an EC2 instance by
choosing Don't connect to an EC2 compute resource.
Virtual Private Cloud (VPC)
  RDS sets the VPC to the one associated with the EC2 instance.
DB subnet group
  RDS requires a DB subnet group with a private subnet in the same Availability Zone as the EC2
  instance. If a DB subnet group that meets this requirement exists, RDS uses it. By default, this
  option is set to Automatic setup.
  When you choose Automatic setup and no existing DB subnet group meets the requirement, RDS
  uses three available private subnets in three Availability Zones, one of which is the Availability
  Zone of the EC2 instance. If a private subnet isn't available in an Availability Zone, RDS creates a
  private subnet in that Availability Zone. Then RDS creates the DB subnet group.
  RDS also allows you to use an existing DB subnet group. Select Choose existing if you want to use
  an existing DB subnet group of your choice.
Public access
  RDS chooses No so that the DB cluster isn't publicly accessible. For security, it is a best practice to
  keep the database private and make sure that it isn't accessible from the internet.
VPC security group (firewall)
  RDS creates a new security group that is associated with the DB cluster. The security group is
  named rds-ec2-n, where n is a number. This security group includes an inbound rule with the EC2
  VPC security group (firewall) as the source, which allows the EC2 instance to access the DB cluster.
  RDS also creates a new security group that is associated with the EC2 instance. The security group
  is named ec2-rds-n, where n is a number. This security group includes an outbound rule with the
  VPC security group of the DB cluster as the destination, which allows the EC2 instance to send
  traffic to the DB cluster.
  You can add another new security group by choosing Create new and typing the name of the new
  security group. You can add existing security groups by choosing Choose existing and selecting
  the security groups to add.
Availability Zone
  RDS chooses the Availability Zone of the EC2 instance for one DB instance in the Multi-AZ DB
  cluster deployment, and randomly chooses a different Availability Zone for each of the other two
  DB instances. The writer DB instance is created in the same Availability Zone as the EC2 instance.
  Cross-Availability Zone costs are possible if a failover occurs and the writer DB instance ends up in
  a different Availability Zone.
For more information about these settings, see Settings for creating Multi-AZ DB clusters (p. 514).
If you change these settings after the DB cluster is created, the changes might affect the connection
between the EC2 instance and the DB cluster.
To connect to your DB cluster from resources other than EC2 instances in the same VPC, configure the
network connections manually. If you use the AWS Management Console to create your Multi-AZ DB
cluster, you can have Amazon RDS automatically create a VPC for you. Or you can use an existing VPC
or create a new VPC for your Multi-AZ DB cluster. Your VPC must have at least one subnet in each of at
least three Availability Zones for you to use it with a Multi-AZ DB cluster. For information on VPCs, see
Amazon VPC VPCs and Amazon RDS (p. 2688).
If you don't have a default VPC or you haven't created a VPC, and you don't plan to use the console, do
the following:
• Create a VPC with at least one subnet in each of at least three of the Availability Zones in the AWS
Region where you want to deploy your DB cluster. For more information, see Working with a DB
instance in a VPC (p. 2689).
• Specify a VPC security group that authorizes connections to your DB cluster. For more information, see
Provide access to your DB instance in your VPC by creating a security group (p. 177) and Controlling
access with security groups (p. 2680).
• Specify an RDS DB subnet group that defines at least three subnets in the VPC that can be used by the
Multi-AZ DB cluster. For more information, see Working with DB subnet groups (p. 2689).
For information about limitations that apply to Multi-AZ DB clusters, see Limitations for Multi-AZ DB
clusters (p. 501).
If you want to connect to a resource that isn't in the same VPC as the Multi-AZ DB cluster, see the
appropriate scenarios in Scenarios for accessing a DB instance in a VPC (p. 2701).
Additional prerequisites
Before you create your Multi-AZ DB cluster, consider the following additional prerequisites:
• To connect to AWS using AWS Identity and Access Management (IAM) credentials, your AWS account
must have certain IAM policies. These grant the permissions required to perform Amazon RDS
operations. For more information, see Identity and access management for Amazon RDS (p. 2606).
If you use IAM to access the RDS console, first sign in to the AWS Management Console with your IAM
user credentials. Then go to the RDS console at https://console.aws.amazon.com/rds/.
• To tailor the configuration parameters for your DB cluster, specify a DB cluster parameter group with
the required parameter settings. For information about creating or modifying a DB cluster parameter
group, see Working with parameter groups for Multi-AZ DB clusters (p. 503).
• Determine the TCP/IP port number to specify for your DB cluster. The firewalls at some companies
block connections to the default ports. If your company firewall blocks the default port, choose
another port for your DB cluster. All DB instances in a DB cluster use the same port.
Creating a DB cluster
You can create a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.
Console
You can create a Multi-AZ DB cluster by choosing Multi-AZ DB cluster in the Availability and durability
section.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the DB cluster.
For information about the AWS Regions that support Multi-AZ DB clusters, see Limitations for Multi-
AZ DB clusters (p. 501).
3. In the navigation pane, choose Databases.
4. Choose Create database.
To create a Multi-AZ DB cluster, make sure that Standard Create is selected and Easy Create isn't.
For information about the DB engine versions that support Multi-AZ DB clusters, see Limitations for
Multi-AZ DB clusters (p. 501).
7. In Templates, choose the appropriate template for your deployment.
8. In Availability and durability, choose Multi-AZ DB cluster.
You can configure connectivity between an Amazon EC2 instance and the new DB cluster during DB
cluster creation. For more information, see Configure automatic network connectivity with an EC2
instance (p. 508).
14. In the Connectivity section under VPC security group (firewall), if you select Create new, a VPC
security group is created with an inbound rule that allows your local computer's IP address to access
the database.
15. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
16. Choose Create database.
If you chose to use an automatically generated password, the View credential details button
appears on the Databases page.
To view the master user name and password for the DB cluster, choose View credential details.
To connect to the DB cluster as the master user, use the user name and password that appear.
Important
You can't view the master user password again.
17. For Databases, choose the name of the new DB cluster.
On the RDS console, the details for the new DB cluster appear. The DB cluster has a status of Creating
until the DB cluster is created and ready for use. When the state changes to Available, you can connect
to the DB cluster. Depending on the DB cluster class and storage allocated, it can take several minutes
for the new DB cluster to be available.
AWS CLI
Before you create a Multi-AZ DB cluster using the AWS CLI, make sure to fulfill the required prerequisites.
These include creating a VPC and an RDS DB subnet group. For more information, see DB cluster
prerequisites (p. 508).
To create a Multi-AZ DB cluster by using the AWS CLI, call the create-db-cluster command. Specify the --
db-cluster-identifier. For the --engine option, specify either mysql or postgres.
For information about each option, see Settings for creating Multi-AZ DB clusters (p. 514).
For information about the AWS Regions, DB engines, and DB engine versions that support Multi-AZ DB
clusters, see Limitations for Multi-AZ DB clusters (p. 501).
The create-db-cluster command creates the writer DB instance for your DB cluster, and two reader
DB instances. Each DB instance is in a different Availability Zone.
For example, the following command creates a MySQL 8.0 Multi-AZ DB cluster named mysql-multi-
az-db-cluster.
Example
For Windows:
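The original command isn't reproduced here; a Linux-style sketch might look like the following (on Windows, replace the backslash line continuations with carets). The identifiers, password, engine version, and instance class are placeholders:

```shell
# Sketch: create a MySQL Multi-AZ DB cluster with one writer
# and two reader DB instances in separate Availability Zones.
aws rds create-db-cluster \
    --db-cluster-identifier mysql-multi-az-db-cluster \
    --engine mysql \
    --engine-version 8.0.28 \
    --master-username admin \
    --master-user-password mypassword \
    --port 3306 \
    --backup-retention-period 1 \
    --db-subnet-group-name default \
    --allocated-storage 100 \
    --storage-type io1 \
    --iops 1000 \
    --db-cluster-instance-class db.m6gd.large
```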
The following command creates a PostgreSQL 13.4 Multi-AZ DB cluster named postgresql-multi-
az-db-cluster.
Example
For Windows:
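The original command isn't reproduced here; a Linux-style sketch of the PostgreSQL variant might look like the following (use carets instead of backslashes on Windows). The identifiers, password, engine version, and instance class are placeholders:

```shell
# Sketch: create a PostgreSQL Multi-AZ DB cluster with one writer
# and two reader DB instances in separate Availability Zones.
aws rds create-db-cluster \
    --db-cluster-identifier postgresql-multi-az-db-cluster \
    --engine postgres \
    --engine-version 13.4 \
    --master-username postgres \
    --master-user-password mypassword \
    --port 5432 \
    --backup-retention-period 1 \
    --db-subnet-group-name default \
    --allocated-storage 100 \
    --storage-type io1 \
    --iops 1000 \
    --db-cluster-instance-class db.m6gd.large
```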
RDS API
Before you can create a Multi-AZ DB cluster using the RDS API, make sure to fulfill the required
prerequisites, such as creating a VPC and an RDS DB subnet group. For more information, see DB cluster
prerequisites (p. 508).
To create a Multi-AZ DB cluster by using the RDS API, call the CreateDBCluster operation. Specify the
DBClusterIdentifier parameter. For the Engine parameter, specify either mysql or postgres.
For information about each option, see Settings for creating Multi-AZ DB clusters (p. 514).
The CreateDBCluster operation creates the writer DB instance for your DB cluster, and two reader DB
instances. Each DB instance is in a different Availability Zone.
Console setting Setting description CLI option and RDS API parameter
514
Amazon Relational Database Service User Guide
Creating a Multi-AZ DB cluster
Console setting Setting description CLI option and RDS API parameter
Allocated storage
    For more information, see DB instance storage (p. 101).
    CLI option: --allocated-storage
    RDS API parameter: AllocatedStorage
Auto minor version upgrade
    Enable auto minor version upgrade to have your DB cluster receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window.
    CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
    RDS API parameter: AutoMinorVersionUpgrade
Backup retention period
    The number of days that you want automatic backups of your DB cluster to be retained. For a Multi-AZ DB cluster, this value must be set to 1 or greater. For more information, see Working with backups (p. 591).
    CLI option: --backup-retention-period
    RDS API parameter: BackupRetentionPeriod
Backup window
    The time period during which Amazon RDS automatically takes a backup of your DB cluster. Unless you have a specific time that you want to have your database backed up, use the default of No preference. For more information, see Working with backups (p. 591).
    CLI option: --preferred-backup-window
    RDS API parameter: PreferredBackupWindow
Copy tags to snapshots
    This option copies any DB cluster tags to a DB snapshot when you create a snapshot. For more information, see Tagging Amazon RDS resources (p. 461).
    CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
    RDS API parameter: CopyTagsToSnapshot
Database authentication
    For Multi-AZ DB clusters, only Password authentication is supported.
    CLI option and RDS API parameter: None, because password authentication is the default.
Database port
    The port that you want to access the DB cluster through. The default port is shown. The port can't be changed after the DB cluster is created.
    CLI option: --port
    RDS API parameter: Port
DB cluster identifier
    The name for your DB cluster. Name your DB clusters in the same way that you name your on-premises servers. Your DB cluster identifier can contain up to 63 alphanumeric characters, and must be unique for your account in the AWS Region you chose.
    CLI option: --db-cluster-identifier
    RDS API parameter: DBClusterIdentifier
DB engine version
    The version of the database engine that you want to use.
    CLI option: --engine-version
    RDS API parameter: EngineVersion
DB parameter group
    The DB instance parameter group that you want associated with the DB instances in the DB cluster.
    CLI option and RDS API parameter: Not applicable. Amazon RDS associates each DB instance with the appropriate default parameter group.
DB subnet group
    The DB subnet group you want to use for the DB cluster. Select Choose existing to use an existing DB subnet group, and then choose the required subnet group from the Existing DB subnet groups dropdown list. Choose Automatic setup to let RDS select a compatible DB subnet group. If none exist, RDS creates a new subnet group for your cluster.
    CLI option: --db-subnet-group-name
    RDS API parameter: DBSubnetGroupName
The encryption and monitoring settings in this section correspond to the following options:
    CLI option: --kms-key-id; RDS API parameter: KmsKeyId
    CLI option: --storage-encrypted | --no-storage-encrypted; RDS API parameter: StorageEncrypted
    CLI option: --monitoring-role-arn; RDS API parameter: MonitoringRoleArn
Initial database name
    The name for the database on your DB cluster. If you don't provide a name, Amazon RDS doesn't create a database on the DB cluster for MySQL. However, it does create a database on the DB cluster for PostgreSQL. The name can't be a word reserved by the database engine, and it has other constraints that depend on the DB engine.
    CLI option: --database-name
    RDS API parameter: DatabaseName
Log exports
    The types of database log files to publish to Amazon CloudWatch Logs. For more information, see Publishing database logs to Amazon CloudWatch Logs (p. 898).
    CLI option: --enable-cloudwatch-logs-exports
    RDS API parameter: EnableCloudwatchLogsExports
Master password
    The password for your master user account.
    CLI option: --master-user-password
    RDS API parameter: MasterUserPassword
Master username
    The name that you use as the master user name to log on to your DB cluster with all database privileges.
    CLI option: --master-username
    RDS API parameter: MasterUsername
Storage type
    The storage type for your DB cluster.
    CLI option: --storage-type
    RDS API parameter: StorageType
Virtual Private Cloud (VPC)
    A VPC based on the Amazon VPC service to associate with this DB cluster.
    CLI option and RDS API parameter: For the CLI and API, you specify the VPC security group IDs.
VPC security group (firewall)
    The security groups to associate with the DB cluster. For more information, see Overview of VPC security groups (p. 2680).
    CLI option: --vpc-security-group-ids
    RDS API parameter: VpcSecurityGroupIds
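The DB cluster identifier constraints described above (up to 63 characters, unique within the AWS Region) can be pre-checked before you call the CLI. The sketch below is a partial check: it also allows hyphens and requires a leading letter, which matches the example names in this section but is an assumption beyond the text above, and uniqueness can only be verified against the service itself.

```shell
# Partial validity check for a DB cluster identifier: a leading letter,
# then up to 62 more letters, digits, or hyphens (an assumption consistent
# with example names such as mysql-multi-az-db-cluster).
valid_identifier() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{0,62}$'
}

valid_identifier "mysql-multi-az-db-cluster" && echo "identifier looks valid"
```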
You also can't specify these settings for Multi-AZ DB clusters in the console. The corresponding CLI options and RDS API parameters are the following:

--availability-zones (AvailabilityZones)
--backtrack-window (BacktrackWindow)
--character-set-name (CharacterSetName)
--domain (Domain)
--domain-iam-role-name (DomainIAMRoleName)
--enable-global-write-forwarding | --no-enable-global-write-forwarding (EnableGlobalWriteForwarding)
--enable-iam-database-authentication | --no-enable-iam-database-authentication (EnableIAMDatabaseAuthentication)
--global-cluster-identifier (GlobalClusterIdentifier)
--option-group-name (OptionGroupName)
--pre-signed-url (PreSignedUrl)
--replication-source-identifier (ReplicationSourceIdentifier)
--scaling-configuration (ScalingConfiguration)
Connecting to a Multi-AZ DB cluster
The writer endpoint connects to the writer DB instance of the DB cluster, which supports both read and
write operations. The reader endpoint connects to either of the two reader DB instances, which support
only read operations.
Using endpoints, you can map each connection to the appropriate DB instance or group of DB instances
based on your use case. For example, to perform DDL and DML statements, you can connect to
whichever DB instance is the writer DB instance. To perform queries, you can connect to the reader
endpoint, with the Multi-AZ DB cluster automatically managing connections among the reader DB
instances. For diagnosis or tuning, you can connect to a specific DB instance endpoint to examine details
about a specific DB instance.
Topics
• Types of Multi-AZ DB cluster endpoints (p. 522)
• Viewing the endpoints for a Multi-AZ DB cluster (p. 523)
• Using the cluster endpoint (p. 523)
• Using the reader endpoint (p. 524)
• Using the instance endpoints (p. 524)
• How Multi-AZ DB endpoints work with high availability (p. 524)
Cluster endpoint
A cluster endpoint (or writer endpoint) for a Multi-AZ DB cluster connects to the current writer DB
instance for that DB cluster. This endpoint is the only one that can perform write operations such as
DDL and DML statements. This endpoint can also perform read operations.
Each Multi-AZ DB cluster has one cluster endpoint and one writer DB instance.
You use the cluster endpoint for all write operations on the DB cluster, including inserts, updates,
deletes, and DDL changes. You can also use the cluster endpoint for read operations, such as queries.
If the current writer DB instance of a DB cluster fails, the Multi-AZ DB cluster automatically fails over
to a new writer DB instance. During a failover, the DB cluster continues to serve connection requests
to the cluster endpoint from the new writer DB instance, with minimal interruption of service.
The following example illustrates a cluster endpoint for a Multi-AZ DB cluster:

mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com
Reader endpoint
A reader endpoint for a Multi-AZ DB cluster provides support for read-only connections to the
DB cluster. Use the reader endpoint for read operations, such as SELECT queries. By processing
those statements on the reader DB instances, this endpoint reduces the overhead on the writer DB
instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries. Each
Multi-AZ DB cluster has one reader endpoint.
The reader endpoint sends each connection request to one of the reader DB instances. When you
use the reader endpoint for a session, you can only perform read-only statements such as SELECT in
that session.
The following example illustrates a reader endpoint for a Multi-AZ DB cluster. The read-only intent
of a reader endpoint is denoted by the -ro within the cluster endpoint name.
mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com
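As the examples show, the reader endpoint name differs from the cluster endpoint name only by the -ro label. A small sketch that derives one from the other (the hostname is the illustrative one above):

```shell
# Derive the read-only endpoint name from the cluster (writer) endpoint name
# by inserting "-ro" after the ".cluster-" label. Illustrative hostname only.
cluster="mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
reader="$(printf '%s' "$cluster" | sed 's/\.cluster-/.cluster-ro-/')"
echo "$reader"
```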
Instance endpoint
The instance endpoint provides direct control over connections to the DB cluster. This control can
help you address scenarios where using the cluster endpoint or reader endpoint might not be
appropriate. For example, your client application might require more fine-grained load balancing
based on workload type. In this case, you can configure multiple clients to connect to different
reader DB instances in a DB cluster to distribute read workloads.
The following example illustrates an instance endpoint for a DB instance in a Multi-AZ DB cluster.
mydbinstance.123456789012.us-east-1.rds.amazonaws.com
With the AWS CLI, you see the writer and reader endpoints in the output of the describe-db-clusters
command. For example, the following command shows the endpoint attributes for all clusters in your
current AWS Region.
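A sketch of such a command, using a --query filter to keep only the endpoint attributes (the JMESPath expression shown is one possible choice, not the only one, and the command requires configured AWS credentials):

```shell
# List the writer and reader endpoint attributes for all DB clusters
# in the current AWS Region.
aws rds describe-db-clusters \
    --query '*[].{DBClusterIdentifier:DBClusterIdentifier,Endpoint:Endpoint,ReaderEndpoint:ReaderEndpoint}'
```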
With the Amazon RDS API, you retrieve the endpoints by calling the DescribeDBClusterEndpoints action.
The output also shows Amazon Aurora DB cluster endpoints, if any exist.
You use the cluster endpoint when you administer your DB cluster, perform extract, transform, load (ETL)
operations, or develop and test applications. The cluster endpoint connects to the writer DB instance of
the cluster. The writer DB instance is the only DB instance where you can create tables and indexes, run
INSERT statements, and perform other DDL and DML operations.
The physical IP address pointed to by the cluster endpoint changes when the failover mechanism
promotes a new DB instance to be the writer DB instance for the cluster. If you use any form of
connection pooling or other multiplexing, be prepared to flush or reduce the time-to-live for any cached
DNS information. Doing so ensures that you don't try to establish a read/write connection to a DB
instance that became unavailable or is now read-only after a failover.
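One way to apply this guidance is a reconnect loop that opens a fresh connection, and therefore performs a fresh DNS lookup, on each attempt rather than reusing a cached address. A sketch using psql, where the endpoint, database, and user are illustrative placeholders:

```shell
# Hypothetical endpoint and credentials; any database client applies.
endpoint="mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
for attempt in 1 2 3 4 5; do
    if psql "host=$endpoint port=5432 dbname=mydb user=admin" -c 'SELECT 1' >/dev/null 2>&1; then
        echo "reconnected on attempt $attempt"
        break
    fi
    sleep 5   # a brief pause lets DNS for the cluster endpoint re-resolve to the new writer
done
```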
Each Multi-AZ cluster has a single built-in reader endpoint, whose name and other attributes are
managed by Amazon RDS. You can't create, delete, or modify this kind of endpoint.
In day-to-day operations, the main way that you use instance endpoints is to diagnose capacity or
performance issues that affect one specific DB instance in a Multi-AZ DB cluster. While connected to a
specific DB instance, you can examine its status variables, metrics, and so on. Doing this can help you
determine what's happening for that DB instance that's different from what's happening for other DB
instances in the cluster.
If the writer DB instance of a DB cluster fails, Amazon RDS automatically fails over to a new writer DB
instance. It does so by promoting a reader DB instance to a new writer DB instance. If a failover occurs,
you can use the writer endpoint to reconnect to the newly promoted writer DB instance. Or you can
use the reader endpoint to reconnect to one of the reader DB instances in the DB cluster. During a
failover, the reader endpoint might direct connections to the new writer DB instance of a DB cluster for
a short time after a reader DB instance is promoted to the new writer DB instance. If you design your
own application logic to manage instance endpoint connections, you can manually or programmatically
discover the resulting set of available DB instances in the DB cluster.
Connecting an AWS compute resource and a Multi-AZ DB cluster
Topics
• Automatically connecting an EC2 instance and a Multi-AZ DB cluster (p. 525)
• Automatically connecting a Lambda function and a Multi-AZ DB cluster (p. 530)
If you want to connect to an EC2 instance that isn't in the same VPC as the Multi-AZ DB cluster, see the
scenarios in the section called “Scenarios for accessing a DB instance in a VPC” (p. 2701).
Topics
• Overview of automatic connectivity with an EC2 instance (p. 525)
• Connecting an EC2 instance and a Multi-AZ DB cluster automatically (p. 528)
• Viewing connected compute resources (p. 529)
The following are requirements for connecting an EC2 instance with a Multi-AZ DB cluster:
• The EC2 instance must exist in the same VPC as the Multi-AZ DB cluster.
If no EC2 instances exist in the same VPC, the console provides a link to create one.
• The user who is setting up connectivity must have permissions to perform the following EC2
operations:
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeSecurityGroups
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress
When you set up a connection to an EC2 instance, Amazon RDS acts according to the current
configuration of the security groups associated with the Multi-AZ DB cluster and EC2 instance, as
described in the following table.
Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: Amazon RDS takes no action. A connection was already configured automatically between the EC2 instance and the Multi-AZ DB cluster. Because a connection already exists between the EC2 instance and the RDS database, the security groups aren't modified.

Current RDS security group configuration: Either of the following conditions apply: there is no security group associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n, or no security group that matches the pattern can be used for the connection with the EC2 instance.
Current EC2 security group configuration: Either of the following conditions apply: there is no security group associated with the EC2 instance with a name that matches the pattern ec2-rds-n, or no security group that matches the pattern can be used for the connection with the Multi-AZ DB cluster.
RDS action: Create new security groups.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, none of these security groups can be used for the connection with the Multi-AZ DB cluster. A security group can't be used if it doesn't have one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source. A security group also can't be used if it has been modified.
RDS action: Create new security groups.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern ec2-rds-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: Associate the EC2 security group.

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. However, none of these security groups can be used for the connection with the EC2 instance. A security group can't be used if it doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. A security group also can't be used if it has been modified.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: Create new security groups.
For the "create new security groups" action, Amazon RDS does the following:

• Creates a new security group that matches the pattern rds-ec2-n. This security group has an inbound rule with the VPC security group of the EC2 instance as the source. This security group is associated with the Multi-AZ DB cluster and allows the EC2 instance to access the Multi-AZ DB cluster.
• Creates a new security group that matches the pattern ec2-rds-n. This security group has an outbound rule with the VPC security group of the Multi-AZ DB cluster as the source. This security group is associated with the EC2 instance and allows the EC2 instance to send traffic to the Multi-AZ DB cluster.
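For a manual setup, a comparable pair of groups and rules can be created with the AWS CLI. This is a sketch only: the VPC ID, group names, and port (3306, as for MySQL) are illustrative placeholders, and the automatic console setup remains the supported path.

```shell
# Hypothetical IDs throughout; mirrors the two managed groups described above.
vpc_id="vpc-0abc1234"

db_sg=$(aws ec2 create-security-group \
    --group-name rds-ec2-1 \
    --description "RDS side of the EC2 connection" \
    --vpc-id "$vpc_id" \
    --query GroupId --output text)
ec2_sg=$(aws ec2 create-security-group \
    --group-name ec2-rds-1 \
    --description "EC2 side of the RDS connection" \
    --vpc-id "$vpc_id" \
    --query GroupId --output text)

# Inbound rule on the database-side group, with the EC2-side group as the source.
aws ec2 authorize-security-group-ingress \
    --group-id "$db_sg" --protocol tcp --port 3306 --source-group "$ec2_sg"

# Outbound rule on the EC2-side group toward the database-side group.
aws ec2 authorize-security-group-egress \
    --group-id "$ec2_sg" \
    --ip-permissions "IpProtocol=tcp,FromPort=3306,ToPort=3306,UserIdGroupPairs=[{GroupId=$db_sg}]"
```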
For the "associate EC2 security group" action, Amazon RDS associates the valid, existing EC2 security group with the EC2 instance. This security group allows the EC2 instance to send traffic to the Multi-AZ DB cluster.
If you make changes to security groups after you configure connectivity, the changes might affect the
connection between the EC2 instance and the RDS database.
Note
You can only set up a connection between an EC2 instance and an RDS database automatically
by using the AWS Management Console. You can't set up a connection automatically with the
AWS CLI or RDS API.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS database.
3. From Actions, choose Set up EC2 connection.
If no EC2 instances exist in the same VPC, choose Create EC2 instance to create one. In this case,
make sure the new EC2 instance is in the same VPC as the RDS database.
5. Choose Continue.
6. On the Review and confirm page, review the changes that RDS will make to set up connectivity with
the EC2 instance.
• You can select the compute resource when you create the database.
For more information, see Creating an Amazon RDS DB instance (p. 300) and Creating a Multi-AZ DB
cluster (p. 508).
• You can set up connectivity between an existing database and a compute resource.
For more information, see Automatically connecting an EC2 instance and an RDS database (p. 388).
The listed compute resources don't include ones that were connected to the database manually. For
example, you can allow a compute resource to access a database manually by adding a rule to the VPC
security group associated with the database.
• The name of the security group associated with the compute resource matches the pattern ec2-rds-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port that the RDS database uses.
• The security group associated with the compute resource has an outbound rule with the source set to a
security group associated with the RDS database.
• The name of the security group associated with the RDS database matches the pattern rds-ec2-n
(where n is a number).
• The security group associated with the RDS database has an inbound rule with the port range set to
the port that the RDS database uses.
• The security group associated with the RDS database has an inbound rule with the source set to a
security group associated with the compute resource.
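The naming patterns above can be checked mechanically. A small sketch of the two checks; the shell functions are illustrative helpers, not part of any AWS tooling:

```shell
# True when a name follows the RDS-side pattern rds-ec2-n (n is a number).
sg_matches_rds_pattern() {
    printf '%s\n' "$1" | grep -Eq '^rds-ec2-[0-9]+$'
}

# True when a name follows the compute-side pattern ec2-rds-n.
sg_matches_ec2_pattern() {
    printf '%s\n' "$1" | grep -Eq '^ec2-rds-[0-9]+$'
}

sg_matches_rds_pattern "rds-ec2-1" && echo "rds-ec2-1 follows the RDS-side pattern"
```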
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the name of the RDS database.
3. On the Connectivity & security tab, view the compute resources in the Connected compute resources list.
The following image shows a direct connection between your Multi-AZ DB cluster and your Lambda
function.
You can set up the connection between your Lambda function and your database through RDS Proxy
to improve your database performance and resiliency. Often, Lambda functions make frequent, short
database connections that benefit from connection pooling that RDS Proxy offers. You can take
advantage of any IAM authentication that you already have for Lambda functions, instead of managing
database credentials in your Lambda application code. For more information, see Using Amazon RDS
Proxy (p. 1199).
You can use the console to automatically create a proxy for your connection. You can also select existing
proxies. The console updates the proxy security group to allow connections from your database and
Lambda function. You can input your database credentials or select the Secrets Manager secret you
require to access the database.
Topics
• Overview of automatic connectivity with a Lambda function (p. 532)
The following are requirements for connecting a Lambda function with a Multi-AZ DB cluster:
• The Lambda function must exist in the same VPC as the Multi-AZ DB cluster.
If no Lambda function exists in the same VPC, the console provides a link to create one.
• The user who sets up connectivity must have permissions to perform the following Amazon RDS,
Amazon EC2, Lambda, Secrets Manager, and IAM operations:
• Amazon RDS
• rds:CreateDBProxy
• rds:DescribeDBInstances
• rds:DescribeDBProxies
• rds:ModifyDBInstance
• rds:ModifyDBProxy
• rds:RegisterProxyTargets
• Amazon EC2
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DeleteSecurityGroup
• ec2:DescribeSecurityGroups
• ec2:RevokeSecurityGroupEgress
• ec2:RevokeSecurityGroupIngress
• Lambda
• lambda:CreateFunction
• lambda:ListFunctions
• lambda:UpdateFunctionConfiguration
• Secrets Manager
• secretsmanager:CreateSecret
• secretsmanager:DescribeSecret
• IAM
• iam:AttachRolePolicy
• iam:CreateRole
• iam:CreatePolicy
• AWS KMS
• kms:DescribeKey
When you set up a connection between a Lambda function and a Multi-AZ DB cluster, Amazon RDS configures the VPC security group for your function and for your Multi-AZ DB cluster. If you use RDS Proxy, then Amazon RDS also configures the VPC security group for the proxy. Amazon RDS acts according to the current configuration of the security groups associated with the Multi-AZ DB cluster, the Lambda function, and the proxy, as described in the following table.
Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n (where n is a number), or the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with either the VPC security group of the Multi-AZ DB cluster or the proxy as the destination.
Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security groups of the Lambda function and the Multi-AZ DB cluster.
RDS action: Amazon RDS takes no action because the security groups of all resources follow the correct naming pattern and have the right inbound and outbound rules.

Current RDS security group configuration: Either of the following conditions apply: there is no matching security group, or no matching security group can be used for the connection. Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.
Current Lambda security group configuration: Either of the following conditions apply: there is no matching security group, or no matching security group can be used for the connection. Amazon RDS can't use a security group that has been modified.
Current proxy security group configuration: Either of the following conditions apply: there is no matching security group, or no matching security group can be used for the connection, including a security group that has been modified.
RDS action: Create new security groups.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security groups for the connection with the Multi-AZ DB cluster. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination. Amazon RDS also can't use a security group that has been modified.
Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection with the Multi-AZ DB cluster or Lambda function. Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda function. Amazon RDS also can't use a security group that has been modified.
RDS action: Create new security groups.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.
Current Lambda security group configuration: A valid Lambda security group for the connection exists, but it is not associated with the Lambda function. This security group has a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination.
Current proxy security group configuration: A valid proxy security group for the connection exists, but it is not associated with the proxy. This security group has a name that matches the pattern rdsproxy-lambda-n. It hasn't been modified. It has inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda function.
RDS action: Associate the Lambda security group.

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an associated proxy is AVAILABLE. However, Amazon RDS can't use any of these security groups for the connection with the Lambda function or proxy.
Current Lambda security group configuration: There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination.
Current proxy security group configuration: There are one or more security groups associated with the proxy with a name that matches the pattern rdsproxy-lambda-n. A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda function.
RDS action: Create new security groups.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-rdsproxy-n (where n is a number).
Current Lambda security group configuration: Either of the following conditions apply:
• There is no security group associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n.
• There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security groups for the connection with the Multi-AZ DB cluster. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination. Amazon RDS also can't use a security group that has been modified.
Current proxy security group configuration: Either of the following conditions apply:
• There is no security group associated with the proxy with a name that matches the pattern rdsproxy-lambda-n.
• There are one or more security groups associated with the proxy with a name that matches rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection with the Multi-AZ DB cluster or Lambda function. Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda function. Amazon RDS also can't use a security group that has been modified.
RDS action: Create new security groups.
For the "create new security groups" action, Amazon RDS does the following:

• Creates a new security group that matches the pattern rds-lambda-n. This security group has an inbound rule with the VPC security group of the Lambda function or proxy as the source. This security group is associated with the Multi-AZ DB cluster and allows the function or proxy to access the Multi-AZ DB cluster.
• Creates a new security group that matches the pattern lambda-rds-n. This security group has an outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination. This security group is associated with the Lambda function and allows the Lambda function to send traffic to the Multi-AZ DB cluster or send traffic through a proxy.
• Creates a new security group that matches the pattern rdsproxy-lambda-n. This security group has inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda function.
Amazon RDS associates the valid, existing Lambda security group with the Lambda function. This
security group allows the function to send traffic to the Multi-AZ DB cluster or send traffic through a
proxy.
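If you prefer to create equivalent access manually instead of letting Amazon RDS create the security groups, the same rules can be sketched with the AWS CLI. The security group IDs are placeholders, and the port (3306 for MySQL; use 5432 for PostgreSQL) is an assumption:

```shell
# Mirrors rds-lambda-n: allow inbound traffic to the cluster's security group
# from the Lambda function's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaa1111clustersg \
    --protocol tcp --port 3306 \
    --source-group sg-0bbbb2222functionsg

# Mirrors lambda-rds-n: allow outbound traffic from the function's security
# group to the cluster's security group.
aws ec2 authorize-security-group-egress \
    --group-id sg-0bbbb2222functionsg \
    --protocol tcp --port 3306 \
    --source-group sg-0aaaa1111clustersg
```

Manually created groups like these are not discovered by the automatic setup, which only recognizes unmodified groups that match its naming patterns.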
You can also use RDS Proxy to include a proxy in your connection. Lambda functions make frequent
short database connections that benefit from the connection pooling that RDS Proxy offers. You can also
use any IAM authentication that you've already set up for your Lambda function, instead of managing
database credentials in your Lambda application code.
You can connect an existing Multi-AZ DB cluster to new and existing Lambda functions using the Set up
Lambda connection page. The setup process automatically sets up the required security groups for you.
Before setting up a connection between a Lambda function and a Multi-AZ DB cluster, make sure that:
• Your Lambda function and Multi-AZ DB cluster are in the same VPC.
• You have the right permissions for your user account. For more information about the requirements,
see Overview of automatic connectivity with a Lambda function (p. 393).
If you change security groups after you configure connectivity, the changes might affect the connection
between the Lambda function and the Multi-AZ DB cluster.
Note
You can automatically set up a connection between a Multi-AZ DB cluster and a Lambda
function only in the AWS Management Console. To connect a Lambda function, all instances in
the Multi-AZ DB cluster must be in the Available state.
After you confirm the setup, Amazon RDS begins the process of connecting your Lambda function, RDS
Proxy (if you used a proxy), and Multi-AZ DB cluster. The console shows the Connection details dialog
box, which lists the security group changes that allow connections between your resources.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
connect to a Lambda function.
3. For Actions, choose Set up Lambda connection.
4. On the Set up Lambda connection page, under Select Lambda function, do either of the following:
• If you have an existing Lambda function in the same VPC as your Multi-AZ DB cluster, choose
Choose existing function, and then choose the function.
• If you don't have a Lambda function in the same VPC, choose Create new function, and then
enter a Function name. The default runtime is set to Node.js 18. You can modify the settings for
your new Lambda function in the Lambda console after you complete the connection setup.
5. (Optional) Under RDS Proxy, select Connect using RDS Proxy, and then do any of the following:
• If you have an existing proxy that you want to use, choose Choose existing proxy, and then
choose the proxy.
• If you don't have a proxy, and you want Amazon RDS to automatically create one for you,
choose Create new proxy. Then, for Database credentials, do either of the following:
a. Choose Database username and password, and then enter the Username and Password
for your Multi-AZ DB cluster.
b. Choose Secrets Manager secret. Then, for Select secret, choose an AWS Secrets Manager
secret. If you don't have a Secrets Manager secret, choose Create new Secrets Manager
secret to create a new secret. After you create the secret, for Select secret, choose the new
secret.
After you create the new proxy, choose Choose existing proxy, and then choose the proxy. Note
that it might take some time for your proxy to be available for connection.
6. (Optional) Expand Connection summary and verify the highlighted updates for your resources.
7. Choose Set up.
The listed compute resources don't include those that are manually connected to the Multi-AZ DB
cluster. For example, you can allow a compute resource to access your Multi-AZ DB cluster manually by
adding a rule to your VPC security group associated with the cluster.
For the console to list a Lambda function, the following conditions must apply:
• The name of the security group associated with the compute resource matches the pattern lambda-
rds-n or lambda-rdsproxy-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port of the Multi-AZ DB cluster or an associated proxy. The destination for the outbound rule
must be set to a security group associated with the Multi-AZ DB cluster or an associated proxy.
• The name of the security group attached to the proxy associated with your database matches the
pattern rds-rdsproxy-n (where n is a number).
• The security group associated with the function has an outbound rule with the port set to the port
that the Multi-AZ DB cluster or associated proxy uses. The destination must be set to a security group
associated with the Multi-AZ DB cluster or associated proxy.
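The name check above is a simple pattern match. As an illustration only (this helper is hypothetical, not part of RDS or the AWS CLI), the console's name test can be mimicked in the shell:

```shell
# Hypothetical helper: returns success when a security group name matches
# lambda-rds-n or lambda-rdsproxy-n, where n starts with a digit.
matches_lambda_pattern() {
    case "$1" in
        lambda-rds-[0-9]*|lambda-rdsproxy-[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

matches_lambda_pattern "lambda-rds-1" && echo "would be listed"
matches_lambda_pattern "my-custom-sg" || echo "would not be listed"
```

A function whose security group has a custom name fails this check even if its rules are otherwise correct, which is why such functions don't appear in the list.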
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster.
3. On the Connectivity & security tab, view the compute resources under Connected compute
resources.
Modifying a Multi-AZ DB cluster
You can modify a Multi-AZ DB cluster to change its settings. You can also perform operations on a Multi-
AZ DB cluster, such as taking a snapshot of it. However, you can't modify the DB instances in a Multi-AZ
DB cluster, and the only supported operation is rebooting a DB instance.
Note
Multi-AZ DB clusters are supported only for the MySQL and PostgreSQL DB engines.
You can modify a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.
Console
To modify a Multi-AZ DB cluster
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
modify.
3. Choose Modify. The Modify DB cluster page appears.
4. Change any of the settings that you want. For information about each setting, see Settings for
modifying Multi-AZ DB clusters (p. 540).
5. When all the changes are as you want them, choose Continue and check the summary of
modifications.
6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option can
cause downtime in some cases. For more information, see Applying changes immediately (p. 540).
7. On the confirmation page, review your changes. If they're correct, choose Modify DB cluster to save
your changes.
AWS CLI
To modify a Multi-AZ DB cluster by using the AWS CLI, call the modify-db-cluster command. Specify the
DB cluster identifier and the values for the options that you want to modify. For information about each
option, see Settings for modifying Multi-AZ DB clusters (p. 540).
Example
The following code modifies my-multi-az-dbcluster by setting the backup retention period to
1 week (7 days). The code turns on deletion protection by using --deletion-protection. To turn
off deletion protection, use --no-deletion-protection. The changes are applied during the next
maintenance window by using --no-apply-immediately. Use --apply-immediately to apply the
changes immediately. For more information, see Applying changes immediately (p. 540).
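Based on that description, the command can be sketched as follows for Linux, macOS, or Unix (on Windows, replace the backslash line continuations with carets). The cluster identifier is the example value from the text:

```shell
aws rds modify-db-cluster \
    --db-cluster-identifier my-multi-az-dbcluster \
    --backup-retention-period 7 \
    --deletion-protection \
    --no-apply-immediately
```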
RDS API
To modify a Multi-AZ DB cluster by using the Amazon RDS API, call the ModifyDBCluster operation.
Specify the DB cluster identifier, and the parameters for the settings that you want to modify. For
information about each parameter, see Settings for modifying Multi-AZ DB clusters (p. 540).
If you don't choose to apply changes immediately, the changes are put into the pending modifications
queue. During the next maintenance window, any pending changes in the queue are applied. If you
choose to apply changes immediately, your new changes and any changes in the pending modifications
queue are applied.
Important
If any of the pending modifications require the DB cluster to be temporarily unavailable
(downtime), choosing the apply immediately option can cause unexpected downtime.
When you choose to apply a change immediately, any pending modifications are also applied
immediately, instead of during the next maintenance window.
If you don't want a pending change to be applied in the next maintenance window, you
can modify the DB cluster to revert the change. You can do this by using the AWS CLI and
specifying the --apply-immediately option.
Changes to some database settings are applied immediately, even if you choose to defer your changes.
To see how the different database settings interact with the apply immediately setting, see Settings for
modifying Multi-AZ DB clusters (p. 540).
Allocated storage
    Description: The amount of storage to allocate for each DB instance in your DB cluster (in gibibytes). For more information, see Amazon RDS DB instance storage (p. 101).
    CLI option: --allocated-storage
    RDS API parameter: AllocatedStorage
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: Downtime doesn't occur during this change.

Auto minor version upgrade
    Description: Enable auto minor version upgrade to have your DB cluster receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window.
    CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
    RDS API parameter: AutoMinorVersionUpgrade
    When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
    Downtime notes: Downtime doesn't occur during this change.

Backup retention period
    Description: The number of days that you want automatic backups of your DB cluster to be retained. For any nontrivial DB cluster, set this value to 1 or greater. For more information, see Working with backups (p. 591).
    CLI option: --backup-retention-period
    RDS API parameter: BackupRetentionPeriod
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, and you change the setting from a nonzero value to another nonzero value, the change is applied asynchronously, as soon as possible. Otherwise, the change occurs during the next maintenance window.
    Downtime notes: Downtime occurs if you change from 0 to a nonzero value, or from a nonzero value to 0.

Backup window
    Description: The time period during which Amazon RDS automatically takes a backup of your DB cluster. Unless you have a specific time that you want to have your database backed up, use the default of No preference.
    CLI option: --preferred-backup-window
    RDS API parameter: PreferredBackupWindow
    When the change occurs: The change is applied asynchronously, as soon as possible.
    Downtime notes: Downtime doesn't occur during this change.

Copy tags to snapshots
    Description: This option copies any DB cluster tags to a DB snapshot when you create a snapshot. For more information, see Tagging Amazon RDS resources (p. 461).
    CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
    RDS API parameter: CopyTagsToSnapshot
    When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
    Downtime notes: Downtime doesn't occur during this change.

Database authentication
    Description: For Multi-AZ DB clusters, only Password authentication is supported.
    CLI option and RDS API parameter: None, because password authentication is the default.
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: Downtime doesn't occur during this change.

DB cluster identifier
    Description: The DB cluster identifier. This value is stored as a lowercase string. When you change the DB cluster identifier, the DB cluster endpoint changes. The identifiers and endpoints of the DB instances in the DB cluster also change. The new DB cluster name must be unique. The maximum length is 63 characters.
    CLI option: --new-db-cluster-identifier
    RDS API parameter: NewDBClusterIdentifier
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: An outage doesn't occur during this change.

DB cluster instance class
    Description: The compute and memory capacity of each DB instance in the Multi-AZ DB cluster, for example db.r6gd.xlarge. If possible, choose a DB instance class large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance. Currently, Multi-AZ DB clusters only support db.m6gd and db.r6gd DB instance classes. For more information about DB instance classes, see DB instance classes (p. 11).
    CLI option: --db-cluster-instance-class
    RDS API parameter: DBClusterInstanceClass
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: Downtime occurs during this change.

DB engine version
    Description: The version of database engine that you want to use.
    CLI option: --engine-version
    RDS API parameter: EngineVersion
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: An outage occurs during this change.

Deletion protection
    Description: Enable deletion protection to prevent your DB cluster from being deleted. For more information, see Deleting a DB instance (p. 489).
    CLI option: --deletion-protection | --no-deletion-protection
    RDS API parameter: DeletionProtection
    When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
    Downtime notes: An outage doesn't occur during this change.

Maintenance window
    Description: The 30-minute window in which pending modifications to your DB cluster are applied. If the time period doesn't matter, choose No preference.
    CLI option: --preferred-maintenance-window
    RDS API parameter: PreferredMaintenanceWindow
    When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
    Downtime notes: If there are one or more pending actions that cause downtime, and the maintenance window is changed to include the current time, those pending actions are applied immediately and downtime occurs.

Manage master credentials in AWS Secrets Manager
    Description: Select Manage master credentials in AWS Secrets Manager to manage the master user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account. If RDS is already managing the master user password for the DB cluster, you can rotate the master user password by choosing Rotate secret immediately.
    CLI options: --manage-master-user-password | --no-manage-master-user-password, --master-user-secret-kms-key-id, --rotate-master-user-password | --no-rotate-master-user-password
    RDS API parameters: ManageMasterUserPassword, MasterUserSecretKmsKeyId, RotateMasterUserPassword
    When the change occurs: If you are turning on or turning off automatic master user password management, the change occurs immediately. This change ignores the apply immediately setting. If you are rotating the master user password, you must specify that the change is applied immediately.
    Downtime notes: Downtime doesn't occur during this change.

New master password
    Description: The password for your master user account.
    CLI option: --master-user-password
    RDS API parameter: MasterUserPassword
    When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores the apply immediately setting.
    Downtime notes: Downtime doesn't occur during this change.

Provisioned IOPS
    Description: The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB cluster. This setting is available only if Provisioned IOPS (io1) is selected as the storage type.
    CLI option: --iops
    RDS API parameter: Iops
    When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
    Downtime notes: Downtime doesn't occur during this change.

Public access
    Description: Publicly accessible to give the DB cluster a public IP address, meaning that it's accessible outside its virtual private cloud (VPC). To be publicly accessible, the DB cluster also has to be in a public subnet in the VPC. Not publicly accessible to make the DB cluster accessible only from inside the VPC. To connect to a DB cluster from outside of its VPC, the DB cluster must be publicly accessible. Also, access must be granted using the inbound rules of the DB cluster's security group, and other requirements must be met. For more information, see Can't connect to Amazon RDS DB instance (p. 2727).
    CLI option: --publicly-accessible | --no-publicly-accessible
    RDS API parameter: PubliclyAccessible
    When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
    Downtime notes: An outage doesn't occur during this change.

VPC security group
    Description: The security groups to associate with the DB cluster. For more information, see Overview of VPC security groups (p. 2680).
    CLI option: --vpc-security-group-ids
    RDS API parameter: VpcSecurityGroupIds
    When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores the apply immediately setting.
    Downtime notes: An outage doesn't occur during this change.
You also can't modify these settings for Multi-AZ DB clusters in the console:

• --backtrack-window (RDS API parameter: BacktrackWindow)
• --cloudwatch-logs-export-configuration (CloudwatchLogsExportConfiguration)
• --db-instance-parameter-group-name (DBInstanceParameterGroupName)
• --domain (Domain)
• --domain-iam-role-name (DomainIAMRoleName)
• --enable-global-write-forwarding | --no-enable-global-write-forwarding (EnableGlobalWriteForwarding)
• --enable-iam-database-authentication | --no-enable-iam-database-authentication (EnableIAMDatabaseAuthentication)
• --option-group-name (OptionGroupName)
• --port (Port)
• --scaling-configuration (ScalingConfiguration)
• --storage-type (StorageType)
Renaming a Multi-AZ DB cluster
• When you rename a Multi-AZ DB cluster, the cluster endpoints for the Multi-AZ DB cluster change.
These endpoints change because they include the name you assigned to the Multi-AZ DB cluster. You
can redirect traffic from an old endpoint to a new one. For more information about Multi-AZ DB cluster
endpoints, see Connecting to a Multi-AZ DB cluster (p. 522).
• When you rename a Multi-AZ DB cluster, the old DNS name that was used by the Multi-AZ DB cluster
is deleted, although it could remain cached for a few minutes. The new DNS name for the renamed
Multi-AZ DB cluster becomes effective in about two minutes. The renamed Multi-AZ DB cluster isn't
available until the new name becomes effective.
• You can't use an existing Multi-AZ DB cluster name when renaming a cluster.
• Metrics and events associated with the name of a Multi-AZ DB cluster are maintained if you reuse a DB
cluster name.
• Multi-AZ DB cluster tags remain with the Multi-AZ DB cluster, regardless of renaming.
• DB cluster snapshots are retained for a renamed Multi-AZ DB cluster.
Note
A Multi-AZ DB cluster is an isolated database environment running in the cloud. A Multi-AZ DB
cluster can host multiple databases. For information about changing a database name, see the
documentation for your DB engine.
1. Stop all traffic going to the Multi-AZ DB cluster. You can redirect traffic away from the databases
on the Multi-AZ DB cluster, or choose another way to prevent traffic from accessing them.
2. Rename the existing Multi-AZ DB cluster.
3. Create a new Multi-AZ DB cluster by restoring from a DB cluster snapshot or recovering to a point in
time. Then, give the new Multi-AZ DB cluster the name of the previous Multi-AZ DB cluster.
If you delete the old Multi-AZ DB cluster, you are responsible for deleting any unwanted DB cluster
snapshots of the old Multi-AZ DB cluster.
Console
To rename a Multi-AZ DB cluster
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to rename.
4. Choose Modify.
Alternatively, choose Back to edit your changes, or choose Cancel to discard your changes.
AWS CLI
To rename a Multi-AZ DB cluster, use the AWS CLI command modify-db-cluster. Provide the current
DB cluster identifier in --db-cluster-identifier, and provide the new name of the Multi-AZ DB
cluster in --new-db-cluster-identifier.
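As a sketch, the command might look like the following. Both identifiers are placeholders, and you can add --apply-immediately if you don't want to wait for the next maintenance window:

```shell
aws rds modify-db-cluster \
    --db-cluster-identifier my-multi-az-dbcluster \
    --new-db-cluster-identifier renamed-multi-az-dbcluster
```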
RDS API
To rename a Multi-AZ DB cluster, call the Amazon RDS API operation ModifyDBCluster with the
DBClusterIdentifier and NewDBClusterIdentifier parameters.
Rebooting a Multi-AZ DB cluster
If a DB cluster isn't using the latest changes to its associated DB cluster parameter group, the AWS
Management Console shows the DB cluster parameter group with a status of pending-reboot. The
pending-reboot parameter group status doesn't result in an automatic reboot during the next
maintenance window. To apply the latest parameter changes to that DB cluster, manually reboot the DB
cluster. For more information about parameter groups, see Working with parameter groups for Multi-AZ
DB clusters (p. 503).
Rebooting a DB cluster restarts the database engine service. Rebooting a DB cluster results in a
momentary outage, during which the DB cluster status is set to rebooting.
You can't reboot your DB cluster if it isn't in the Available state. Your database can be unavailable for
several reasons, such as an in-progress backup, a previously requested modification, or a maintenance-
window action.
The time required to reboot your DB cluster depends on the crash recovery process, the database activity
at the time of reboot, and the behavior of your specific DB cluster. To improve the reboot time, we
recommend that you reduce database activity as much as possible during the reboot process. Reducing
database activity reduces rollback activity for in-transit transactions.
Important
Multi-AZ DB clusters don't support reboot with a failover. When you reboot the writer instance
of a Multi-AZ DB cluster, it doesn't affect the reader DB instances in that DB cluster and no
failover occurs. When you reboot a reader DB instance, no failover occurs. To fail over a Multi-AZ
DB cluster, choose Failover in the console, call the AWS CLI command failover-db-cluster,
or call the API operation FailoverDBCluster.
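For example, a manual failover request from the AWS CLI might look like this sketch (the cluster identifier is a placeholder):

```shell
aws rds failover-db-cluster \
    --db-cluster-identifier my-multi-az-dbcluster
```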
Console
To reboot a DB cluster
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
reboot.
3. For Actions, choose Reboot.
Or choose Cancel.
AWS CLI
To reboot a Multi-AZ DB cluster by using the AWS CLI, call the reboot-db-cluster command.
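A minimal sketch of the command, with a placeholder cluster identifier:

```shell
aws rds reboot-db-cluster \
    --db-cluster-identifier my-multi-az-dbcluster
```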
RDS API
To reboot a Multi-AZ DB cluster by using the Amazon RDS API, call the RebootDBCluster operation.
Working with Multi-AZ DB cluster read replicas
You can also create one or more DB instance read replicas from a Multi-AZ DB cluster. DB instance read
replicas let you scale beyond the compute or I/O capacity of the source Multi-AZ DB cluster by directing
excess read traffic to the read replicas. Currently, you can't create a Multi-AZ DB cluster read replica from
an existing Multi-AZ DB cluster.
Topics
• Migrating to a Multi-AZ DB cluster using a read replica (p. 554)
• Creating a DB instance read replica from a Multi-AZ DB cluster (p. 557)
Consider the following before you create a Multi-AZ DB cluster read replica:
• The source DB instance must be on a version that supports Multi-AZ DB clusters. For more information,
see Multi-AZ DB clusters (p. 147).
• The Multi-AZ DB cluster read replica must be on the same major version as its source, and the same or
higher minor version.
• You must turn on automatic backups on the source DB instance by setting the backup retention period
to a value other than 0.
• The allocated storage of the source DB instance must be 100 GiB or higher.
• For RDS for MySQL, both the gtid-mode and enforce_gtid_consistency parameters must be set
to ON for the source DB instance. You must use a custom parameter group, not the default parameter
group. For more information, see the section called “Working with DB parameter groups” (p. 349).
• An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica.
• If you delete the source DB instance for a Multi-AZ DB cluster read replica, the read replica is promoted
to a standalone Multi-AZ DB cluster.
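For the RDS for MySQL requirement above, the two GTID parameters could be set on a custom parameter group along these lines. This is a sketch: the parameter group name is a placeholder, and the group must already be attached to the source DB instance:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-mysql-params \
    --parameters \
        "ParameterName=gtid-mode,ParameterValue=ON,ApplyMethod=pending-reboot" \
        "ParameterName=enforce_gtid_consistency,ParameterValue=ON,ApplyMethod=pending-reboot"
```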
error when creating the read replica, choose a different destination DB subnet group. For more
information, see Working with a DB instance in a VPC (p. 2688).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create the Multi-AZ DB cluster read replica.
a. Stop any transactions from being written to the source DB instance, and then wait for all
updates to be made to the read replica.
Database updates occur on the read replica after they have occurred on the primary DB
instance. This replication lag can vary significantly. Use the ReplicaLag metric to determine
when all updates have been made to the read replica. For more information about replica lag,
see Monitoring read replication (p. 449).
b. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
c. In the Amazon RDS console, choose Databases.
The Databases pane appears. Each read replica shows Replica in the Role column.
d. Choose the Multi-AZ DB cluster read replica that you want to promote.
e. For Actions, choose Promote.
f. On the Promote read replica page, enter the backup retention period and the backup window
for the newly promoted Multi-AZ DB cluster.
g. When the settings are as you want them, choose Promote read replica.
h. Wait for the status of the promoted Multi-AZ DB cluster to be Available.
i. Direct your applications to use the promoted Multi-AZ DB cluster.
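One way to watch the ReplicaLag metric mentioned in step 2.a is through Amazon CloudWatch. This sketch assumes a GNU date command and a placeholder replica identifier:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=my-read-replica \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 60 \
    --statistics Maximum
```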
AWS CLI
To create a read replica from the source DB instance, use the AWS CLI command create-db-
cluster. For --replication-source-identifier, specify the Amazon Resource Name (ARN)
of the source DB instance.
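A sketch of the command: the cluster identifier, source ARN, and storage options below are placeholders, and the exact options you need depend on your engine and configuration:

```shell
aws rds create-db-cluster \
    --db-cluster-identifier my-multi-az-replica-cluster \
    --replication-source-identifier arn:aws:rds:us-east-1:123456789012:db:my-source-instance \
    --engine mysql \
    --db-cluster-instance-class db.m6gd.large \
    --storage-type io1 \
    --iops 1000 \
    --allocated-storage 100
```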
2. Stop any transactions from being written to the source DB instance, and then wait for all updates to
be made to the read replica.
Database updates occur on the read replica after they have occurred on the primary DB instance.
This replication lag can vary significantly. Use the Replica Lag metric to determine when all
updates have been made to the read replica. For more information about replica lag, see Monitoring
read replication (p. 449).
3. When you are ready, promote the read replica to be a standalone Multi-AZ DB cluster.
To promote a Multi-AZ DB cluster read replica, use the AWS CLI command promote-read-
replica-db-cluster. For --db-cluster-identifier, specify the identifier of the Multi-AZ DB
cluster read replica.
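A sketch with a placeholder identifier:

```shell
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier my-multi-az-replica-cluster
```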
RDS API
To create a Multi-AZ DB cluster read replica, use the CreateDBCluster operation with the required
parameter DBClusterIdentifier. For ReplicationSourceIdentifier, specify the Amazon
Resource Name (ARN) of the source DB instance.
2. Stop any transactions from being written to the source DB instance, and then wait for all updates to
be made to the read replica.
Database updates occur on the read replica after they have occurred on the primary DB instance.
This replication lag can vary significantly. Use the Replica Lag metric to determine when all
updates have been made to the read replica. For more information about replica lag, see Monitoring
read replication (p. 449).
3. When you are ready, promote the read replica to be a standalone Multi-AZ DB cluster.
• You can't create a Multi-AZ DB cluster read replica in an AWS account that is different from the AWS
account that owns the source DB instance.
• You can't create a Multi-AZ DB cluster read replica in a different AWS Region from the source DB
instance.
• You can't recover a Multi-AZ DB cluster read replica to a point in time.
• Storage encryption must have the same settings on the source DB instance and Multi-AZ DB cluster.
• If the source DB instance is encrypted, the Multi-AZ DB cluster read replica must be encrypted using
the same KMS key.
• To perform a minor version upgrade on the source DB instance, you must first perform the minor
version upgrade on the Multi-AZ DB cluster read replica.
• You can't perform a major version upgrade on a Multi-AZ DB cluster.
• You can perform a major version upgrade on the source DB instance of a Multi-AZ DB cluster read
replica, but replication to the read replica stops and can't be restarted.
• The Multi-AZ DB cluster read replica doesn't support cascading read replicas.
• For RDS for PostgreSQL, Multi-AZ DB cluster read replicas can't fail over.
To create a read replica, specify a Multi-AZ DB cluster as the replication source. One of the reader
instances of the Multi-AZ DB cluster is always the source of replication, not the writer instance. This
condition ensures that the replica is always in sync with the source cluster, even in cases of failover.
Topics
• Comparing reader DB instances and DB instance read replicas (p. 558)
• Considerations (p. 558)
• Creating a DB instance read replica (p. 558)
• Promoting the DB instance read replica (p. 559)
• Limitations for creating a DB instance read replica from a Multi-AZ DB cluster (p. 560)
Comparing reader DB instances and DB instance read replicas
• The reader DB instances act as automatic failover targets, while DB instance read replicas do not.
• Reader DB instances must acknowledge a change from the writer DB instance before the change can
be committed. For DB instance read replicas, however, updates are asynchronously copied to the read
replica without requiring acknowledgement.
• Reader DB instances always share the same instance class, storage type, and engine version as the
writer DB instance of the Multi-AZ DB cluster. DB instance read replicas, however, don’t necessarily
have to share the same configurations as the source cluster.
• You can promote a DB instance read replica to a standalone DB instance. You can’t promote a reader
DB instance of a Multi-AZ DB cluster to a standalone instance.
• The reader endpoint only routes requests to the reader DB instances of the Multi-AZ DB cluster. It
never routes requests to a DB instance read replica.
For more information about reader and writer DB instances, see the section called “Overview of Multi-AZ
DB clusters” (p. 500).
Considerations
Consider the following before you create a DB instance read replica from a Multi-AZ DB cluster:
• When you create the DB instance read replica, it must be on the same major version as its source
cluster, and the same or higher minor version. After you create it, you can optionally upgrade the read
replica to a higher minor version than the source cluster.
• When you create the DB instance read replica, the allocated storage must be the same as the allocated
storage of the source Multi-AZ DB cluster. You can change the allocated storage after the read replica
is created.
• For RDS for MySQL, the gtid-mode parameter must be set to ON for the source Multi-AZ DB cluster.
For more information, see the section called “Working with DB cluster parameter groups” (p. 360).
• An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica.
• If you delete the source Multi-AZ DB cluster of a DB instance read replica, any read replicas that it replicates to are promoted to standalone DB instances.
Creating a DB instance read replica
Console
To create a DB instance read replica from a Multi-AZ DB cluster, complete the following steps using the
AWS Management Console.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For Replica source, make sure that the correct Multi-AZ DB cluster is selected.
6. For DB identifier, enter a name for the read replica.
7. For the remaining sections, specify your DB instance settings. For information about a setting, see
the section called “Available settings” (p. 308).
Note
The allocated storage for the DB instance read replica must be the same as the allocated
storage for the source Multi-AZ DB cluster.
8. Choose Create read replica.
AWS CLI
To create a DB instance read replica from a Multi-AZ DB cluster, use the AWS CLI command create-db-
instance-read-replica. For --source-db-cluster-identifier, specify the identifier of the
Multi-AZ DB cluster.
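For example, the command might look like the following sketch. Both identifiers are placeholders.

```shell
# Hypothetical example: create a DB instance read replica whose source is a
# Multi-AZ DB cluster. Both identifiers are placeholders.
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-cluster-identifier mymultiazdbcluster
```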
RDS API
To create a DB instance read replica from a Multi-AZ DB cluster, use the
CreateDBInstanceReadReplica operation.
If you're using the read replica to migrate a Multi-AZ DB cluster deployment to a Single-AZ or Multi-AZ
DB instance deployment, make sure to stop any transactions that are being written to the source DB
cluster. Then, wait for all updates to be made to the read replica. Database updates occur on the read
replica after they occur on one of the reader DB instances of the Multi-AZ DB cluster. This replication
lag can vary significantly. Use the ReplicaLag metric to determine when all updates have been made
to the read replica. For more information about replica lag, see the section called “Monitoring read
replication” (p. 449).
After you promote the read replica, wait for the status of the promoted DB instance to be Available
before you direct your applications to use the promoted DB instance. Optionally, delete the Multi-AZ DB
cluster deployment if you no longer need it. For instructions, see the section called “Deleting a Multi-AZ
DB cluster” (p. 563).
Limitations for creating a DB instance read replica from a Multi-AZ DB cluster
• You can't create a DB instance read replica in an AWS account that's different from the AWS account
that owns the source Multi-AZ DB cluster.
• You can't create a DB instance read replica in a different AWS Region from the source Multi-AZ DB
cluster.
• You can't recover a DB instance read replica to a point in time.
• Storage encryption must have the same settings on the source Multi-AZ DB cluster and DB instance
read replica.
• If the source Multi-AZ DB cluster is encrypted, the DB instance read replica must be encrypted using
the same KMS key.
• To perform a minor version upgrade on the source Multi-AZ DB cluster, you must first perform the
minor version upgrade on the DB instance read replica.
• The DB instance read replica doesn't support cascading read replicas.
• For RDS for PostgreSQL, the source Multi-AZ DB cluster must be running PostgreSQL version 13.11,
14.8, or 15.2-R2 or higher in order to create a DB instance read replica.
• You can perform a major version upgrade on the source Multi-AZ DB cluster of a DB instance read
replica, but replication to the read replica stops and can't be restarted.
Using PostgreSQL logical replication with Multi-AZ DB clusters
When you create a new logical replication slot on the writer DB instance of a Multi-AZ DB cluster, the slot
is asynchronously copied to each reader DB instance in the cluster. The slots on the reader DB instances
are continuously synchronized with those on the writer DB instance.
Logical replication is supported for Multi-AZ DB clusters running RDS for PostgreSQL version 14.8-R2
and higher, and 15.3-R2 and higher.
Note
In addition to the native PostgreSQL logical replication feature, Multi-AZ DB clusters running
RDS for PostgreSQL also support the pglogical extension.
For more information about PostgreSQL logical replication, see Logical replication in the PostgreSQL
documentation.
Topics
• Prerequisites (p. 561)
• Setting up logical replication (p. 561)
Prerequisites
To configure PostgreSQL logical replication for Multi-AZ DB clusters, you must meet the following
prerequisites.
• Your user account must be a member of the rds_superuser group and have rds_superuser
privileges. For more information, see the section called “Understanding PostgreSQL roles and
permissions” (p. 2271).
• Your Multi-AZ DB cluster must be associated with a custom DB cluster parameter group so that you
can configure the parameter values described in the following procedure. For more information, see
the section called “Working with DB cluster parameter groups” (p. 360).
1. Open the custom DB cluster parameter group associated with your RDS for PostgreSQL Multi-AZ DB
cluster.
2. In the Parameters search field, locate the rds.logical_replication static parameter and set its
value to 1. This parameter change can increase WAL generation, so enable it only when you’re using
logical slots.
3. As part of this change, configure the following DB cluster parameters.
• max_wal_senders
• max_replication_slots
• max_connections
Depending on your expected usage, you might also need to change the values of the following
parameters. However, in many cases, the default values are sufficient.
• max_logical_replication_workers
• max_sync_workers_per_subscription
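As a sketch, steps 2 and 3 can also be done with the AWS CLI. The parameter group name and parameter values below are illustrative assumptions; choose values that match your expected usage. Because rds.logical_replication is a static parameter, pending-reboot is the apply method.

```shell
# Enable logical replication and raise related limits in a custom DB cluster
# parameter group. The group name and values are placeholders.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-postgres-cluster-params \
    --parameters \
      "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot" \
      "ParameterName=max_wal_senders,ParameterValue=20,ApplyMethod=pending-reboot" \
      "ParameterName=max_replication_slots,ParameterValue=20,ApplyMethod=pending-reboot"
```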
4. Reboot the Multi-AZ DB cluster for the parameter values to take effect. For instructions, see the
section called “Rebooting a Multi-AZ DB cluster” (p. 552).
5. Create a logical replication slot on the writer DB instance of the Multi-AZ DB cluster as explained in
the section called “Working with logical replication slots” (p. 2161). This process requires that you
specify a decoding plugin. Currently, RDS for PostgreSQL supports the test_decoding, wal2json,
and pgoutput plugins that ship with PostgreSQL.
The following commands demonstrate how to inspect the replication state on the reader DB
instances.
% psql -h test-postgres-instance-2.abcdefabcdef.us-west-2.rds.amazonaws.com
% psql -h test-postgres-instance-3.abcdefabcdef.us-west-2.rds.amazonaws.com
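For instance, after connecting to a reader DB instance, you might query pg_replication_slots to confirm that the slot created on the writer has been copied. The endpoint, user, and database names below are placeholders.

```shell
# Check that the logical replication slot created on the writer has been
# copied to a reader DB instance. Endpoint, user, and database are placeholders.
psql -h test-postgres-instance-2.abcdefabcdef.us-west-2.rds.amazonaws.com \
     -U postgres -d postgres \
     -c "SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;"
```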
After you complete your replication tasks, stop the replication process, drop replication slots, and turn
off logical replication. To turn off logical replication, modify your DB cluster parameter group and set the
value of rds.logical_replication back to 0. Reboot the cluster for the parameter change to take
effect.
Deleting a Multi-AZ DB cluster
The time required to delete a Multi-AZ DB cluster varies with several factors: the backup retention period (that is, the number of backups to delete), how much data is deleted, and whether a final snapshot is taken.
You can't delete a Multi-AZ DB cluster when deletion protection is turned on for it. For more information,
see Prerequisites for deleting a DB instance (p. 489). You can turn off deletion protection by modifying
the Multi-AZ DB cluster. For more information, see Modifying a Multi-AZ DB cluster (p. 539).
Console
To delete a Multi-AZ DB cluster
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
delete.
3. For Actions, choose Delete.
4. Choose Create final snapshot? to create a final DB snapshot for the Multi-AZ DB cluster.
If you create a final snapshot, enter a name for Final snapshot name.
5. Choose Retain automated backups to retain automated backups.
6. Enter delete me in the box.
7. Choose Delete.
AWS CLI
To delete a Multi-AZ DB cluster by using the AWS CLI, call the delete-db-cluster command with the
following options:
• --db-cluster-identifier
• --final-db-snapshot-identifier or --skip-final-snapshot
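For example, the command might look like the following sketch. Both identifiers are placeholders.

```shell
# Delete a Multi-AZ DB cluster, taking a final snapshot first.
# Both identifiers are placeholders.
aws rds delete-db-cluster \
    --db-cluster-identifier mymultiazdbcluster \
    --final-db-snapshot-identifier mymultiazdbcluster-final-snapshot
```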
RDS API
To delete a Multi-AZ DB cluster by using the Amazon RDS API, call the DeleteDBCluster operation with
the following parameters:
• DBClusterIdentifier
• FinalDBSnapshotIdentifier or SkipFinalSnapshot
Amazon RDS Extended Support
This paid feature gives you more time to upgrade to a supported major engine version. During Extended
Support, Amazon RDS will supply patches for Critical and High CVEs as defined by the National
Vulnerability Database (NVD) CVSS severity ratings. For more information, see Vulnerability Metrics. You
can also create new databases with major engine versions that have reached the RDS end of standard
support date. When you create these databases, Amazon RDS automatically enables Amazon RDS
Extended Support.
For example, the RDS end of standard support date for RDS for MySQL version 5.7 is February 29, 2024. Suppose that you aren't ready to manually upgrade to RDS for MySQL version 8.0 before that date, and you don't want Amazon RDS to automatically upgrade your DB instances to RDS for MySQL version 8.0 after that date. If you enable Extended Support for those DB instances before February 29, 2024, you can continue to run RDS for MySQL version 5.7, and starting March 1, 2024, Amazon RDS automatically charges you for Extended Support.
This additional charge for Extended Support ends as soon as you upgrade to a supported major
engine version or you delete the database that was running a major version past the RDS end of
standard support date. For more information, see Amazon RDS for MySQL pricing and Amazon RDS for
PostgreSQL pricing.
Note
Extended Support is only available on the last minor version released before the RDS end of
standard support date. If you enable Extended Support, Amazon RDS automatically upgrades
your DB instance to a minor version that supports Extended Support. Amazon RDS won't
upgrade your minor version until after the RDS end of standard support date for your major
engine version. For more information, see Supported MySQL minor versions on Amazon
RDS (p. 1627).
Extended Support is available for up to 3 years past the RDS end of standard support date for a major
engine version. After 3 years, if you haven't upgraded your major engine version to a supported version,
then Amazon RDS will automatically upgrade your major engine version. We recommend that you
upgrade to a supported major engine version as soon as possible.
Extended Support is available for RDS for MySQL 5.7 and 8.0, and for RDS for PostgreSQL 11 and higher.
For more information, see Supported MySQL major versions on Amazon RDS (p. 1629) and Release
calendar for Amazon RDS for PostgreSQL.
Note
You must enable Extended Support for a particular version before the RDS end of standard
support date for that version.
Extended Support will be available through the AWS Management Console or the Amazon RDS
API in December 2023.
Using Amazon RDS Blue/Green Deployments for database updates
Topics
• Overview of Amazon RDS Blue/Green Deployments (p. 567)
• Creating a blue/green deployment (p. 575)
• Viewing a blue/green deployment (p. 579)
• Switching a blue/green deployment (p. 582)
• Deleting a blue/green deployment (p. 587)
Overview of Amazon RDS Blue/Green Deployments
You can make changes to the RDS DB instances in the green environment without affecting production
workloads. For example, you can upgrade the major or minor DB engine version, change database
parameters, or make schema changes in the staging environment. You can thoroughly test changes
in the green environment. When ready, you can switch over the environments to promote the green
environment to be the new production environment. The switchover typically takes under a minute with
no data loss and no need for application changes.
Because the green environment is a copy of the topology of the production environment, the green
environment includes the features used by the DB instance. These features include the read replicas,
the storage configuration, DB snapshots, automated backups, Performance Insights, and Enhanced
Monitoring. If the blue DB instance is a Multi-AZ DB instance deployment, then the green DB instance is
also a Multi-AZ DB instance deployment.
Note
Currently, blue/green deployments are supported only for RDS for MariaDB and RDS for MySQL.
For Amazon Aurora availability, see Using Amazon RDS Blue/Green Deployments for database
updates in the Amazon Aurora User Guide.
Topics
• Benefits of using Amazon RDS Blue/Green Deployments (p. 567)
• Workflow of a blue/green deployment (p. 568)
• Authorizing access to blue/green deployment operations (p. 572)
• Considerations for blue/green deployments (p. 572)
• Best practices for blue/green deployments (p. 574)
• Region and version availability (p. 575)
• Limitations for blue/green deployments (p. 575)
Workflow
For example, the production environment in this image has a Multi-AZ DB instance deployment
(mydb1) and a read replica (mydb2).
2. Create the blue/green deployment. For instructions, see Creating a blue/green deployment (p. 575).
The following image shows an example of a blue/green deployment of the production environment
from step 1. While creating the blue/green deployment, RDS copies the complete topology and
configuration of the primary DB instance to create the green environment. The copied DB instance
names are appended with -green-random-characters. The staging environment in the image
contains a Multi-AZ DB instance deployment (mydb1-green-abc123) and a read replica (mydb2-
green-abc123).
When you create the blue/green deployment, you can upgrade your DB engine version and specify
a different DB parameter group for the DB instances in the green environment. RDS also configures
logical replication from the primary DB instance in the blue environment to the primary DB instance in
the green environment.
After you create the blue/green deployment, the DB instance in the green environment is read-only by
default.
3. Make additional changes to the staging environment, if required.
For example, you might make schema changes to your database or change the DB instance class used
by one or more DB instances in the green environment.
For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
4. Test your staging environment.
During testing, we recommend that you keep your databases in the green environment read only. We
recommend that you enable write operations on the green environment with caution because they
can result in replication conflicts. They can also result in unintended data in the production databases
after switchover.
5. When ready, switch over to promote the staging environment to be the new production environment.
For instructions, see Switching a blue/green deployment (p. 582).
The switchover results in downtime. The downtime is usually under one minute, but it can be longer
depending on your workload.
After the switchover, the DB instances that were in the green environment become the new
production DB instances. The names and endpoints in the current production environment are
assigned to the newly promoted production environment, requiring no changes to your application.
As a result, your production traffic now flows to the new production environment. The DB instances
in the previous blue environment are renamed by appending -oldn to the current name, where n is
a number. For example, assume the name of the DB instance in the blue environment is mydb1. After
switchover, the DB instance name might be mydb1-old1.
In the example in the image, the following changes occur during switchover:
• The green environment Multi-AZ DB instance deployment named mydb1-green-abc123 becomes
the production Multi-AZ DB instance deployment named mydb1.
• The green environment read replica named mydb2-green-abc123 becomes the production read
replica mydb2.
• The blue environment Multi-AZ DB instance deployment named mydb1 becomes mydb1-old1.
• The blue environment read replica named mydb2 becomes mydb2-old1.
6. If you no longer need a blue/green deployment, you can delete it. For instructions, see Deleting a
blue/green deployment (p. 587).
After switchover, the previous production environment isn't deleted so that you can use it for
regression testing, if necessary.
Authorizing access
The user who creates a blue/green deployment must have permissions to perform the following RDS
operations:
• rds:AddTagsToResource
• rds:CreateDBInstanceReadReplica
The user who switches over a blue/green deployment must have permissions to perform the following
RDS operations:
• rds:ModifyDBInstance
• rds:PromoteDBInstance
The user who deletes a blue/green deployment must have permissions to perform the following RDS
operation:
• rds:DeleteDBInstance
Considerations
The name (instance ID) of a resource changes when you switch over a blue/green deployment, but each
resource keeps the same resource ID. For example, a DB instance identifier might be mydb in the blue
environment. After switchover, the same DB instance might be renamed to mydb-old1. However, the
resource ID of the DB instance doesn't change during switchover. So, when the green resources are
promoted to be the new production resources, their resource IDs don't match the blue resource IDs that
were previously in production.
After switching over a blue/green deployment, consider updating the resource IDs to those of the newly
promoted production resources for integrated features and services that you used with the production
resources. Specifically, consider the following updates:
• If you perform filtering using the RDS API and resource IDs, adjust the resource IDs used in filtering
after switchover.
• If you use CloudTrail for auditing resources, adjust the consumers of the CloudTrail to track the new
resource IDs after switchover. For more information, see Monitoring Amazon RDS API calls in AWS
CloudTrail (p. 940).
• If you use the Performance Insights API, adjust the resource IDs in calls to the API after switchover. For
more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).
You can monitor a database with the same name after switchover, but it doesn't contain the data from
before the switchover.
• If you use resource IDs in IAM policies, make sure you add the resource IDs of the newly promoted
resources when necessary. For more information, see Identity and access management for Amazon
RDS (p. 2606).
• If you authenticate to your DB instance using IAM database authentication (p. 2642), make sure that
the IAM policy used for database access has both the blue and the green databases listed under the
Resource element of the policy. This is required in order to connect to the green database after
switchover. For more information, see the section called “Creating and using an IAM policy for IAM
database access” (p. 2646).
• If you use AWS Backup to manage automated backups of resources in a blue/green deployment, adjust
the resource IDs used by AWS Backup after switchover. For more information, see Using AWS Backup
to manage automated backups (p. 599).
• If you want to restore a manual or automated DB snapshot for a DB instance that was part of a blue/
green deployment, make sure you restore the correct DB snapshot by examining the time when the
snapshot was taken. For more information, see Restoring from a DB snapshot (p. 615).
• If you want to describe a previous blue environment DB instance automated backup or restore it to a
point in time, use the resource ID for the operation.
Because the name of the DB instance changes during switchover, you can't use its previous name for
DescribeDBInstanceAutomatedBackups or RestoreDBInstanceToPointInTime operations.
For more information, see Restoring a DB instance to a specified time (p. 660).
• When you add a read replica to a DB instance in the green environment of a blue/green deployment,
the new read replica won't replace a read replica in the blue environment when you switch over.
However, the new read replica is retained in the new production environment after switchover.
• When you delete a DB instance in the green environment of a blue/green deployment, you can't create
a new DB instance to replace it in the blue/green deployment.
If you create a new DB instance with the same name and Amazon Resource Name (ARN) as the deleted
DB instance, it has a different DbiResourceId, so it isn't part of the green environment.
The following behavior results if you delete a DB instance in the green environment:
• If the DB instance in the blue environment with the same name exists, it won't be switched over to
the DB instance in the green environment. This DB instance won't be renamed by adding -oldn to
the DB instance name.
• Any application that points to the DB instance in the blue environment continues to use the same
DB instance after switchover.
Best practices
• Avoid using non-transactional storage engines, such as MyISAM, that aren't optimized for replication.
• Optimize read replicas for binary log replication.
For example, if your DB engine version supports it, consider using GTID replication, parallel replication,
and crash-safe replication in your production environment before deploying your blue/green
deployment. These options promote consistency and durability of your data before you switch over
your blue/green deployment. For more information about GTID replication for read replicas, see Using
GTID-based replication for Amazon RDS for MySQL (p. 1719).
• Thoroughly test the DB instances in the green environment before switching over.
• Keep your databases in the green environment read only. We recommend that you enable write
operations on the green environment with caution because they can result in replication conflicts.
They can also result in unintended data in the production databases after switchover.
• When using a blue/green deployment to implement schema changes, make only replication-
compatible changes.
For example, you can add new columns at the end of a table, create indexes, or drop indexes without
disrupting replication from the blue deployment to the green deployment. However, schema changes,
such as renaming columns or renaming tables, break binary log replication to the green deployment.
For more information about replication-compatible changes, see Replication with Differing Table
Definitions on Source and Replica in the MySQL documentation.
• After you create the blue/green deployment, handle lazy loading if necessary. Make sure data loading
is complete before switching over. For more information, see Handling lazy loading when you create a
blue/green deployment (p. 576).
• When you switch over a blue/green deployment, follow the switchover best practices. For more
information, see the section called “Switchover best practices” (p. 584).
Limitations for blue/green deployments
• MySQL versions 8.0.11 through 8.0.13 have a community bug that prevents RDS from supporting
them for blue/green deployments.
• The Event Scheduler (event_scheduler parameter) must be disabled on the green environment
when you create a blue/green deployment. This prevents events from being generated in the green
environment and causing inconsistencies.
• Blue/green deployments aren't supported for the following features:
• Amazon RDS Proxy
• Cascading read replicas
• Cross-Region read replicas
• AWS CloudFormation
• Multi-AZ DB cluster deployments
Blue/green deployments are supported for Multi-AZ DB instance deployments. For more
information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
• The following are limitations for changes in a blue/green deployment:
• You can't change an unencrypted DB instance into an encrypted DB instance.
• You can't change an encrypted DB instance into an unencrypted DB instance.
• You can't change a blue environment DB instance to a higher engine version than its corresponding
green environment DB instance.
• The resources in the blue environment and green environment must be in the same AWS account.
• During switchover, the blue primary DB instance can't be the target of external replication.
• If the source database is associated with a custom option group, you can't specify a major version
upgrade when you create the blue/green deployment.
In this case, you can create a blue/green deployment without specifying a major version upgrade.
Then, you can upgrade the database in the green environment. For more information, see Upgrading
a DB instance engine version (p. 429).
RDS copies the blue environment's topology to a staging area, along with its configured features. When
the blue DB instance has read replicas, the read replicas are copied as read replicas of the green DB
instance in the deployment. If the blue DB instance is a Multi-AZ DB instance deployment, then the green
DB instance is created as a Multi-AZ DB instance deployment.
Topics
• Making changes in the green environment (p. 576)
• Handling lazy loading when you create a blue/green deployment (p. 576)
• Creating the blue/green deployment (p. 577)
Making changes in the green environment
When you create the blue/green deployment, you can make the following changes:
• You can specify a higher engine version if you want to test a DB engine upgrade.
• You can specify a DB parameter group that is different from the one used by the DB instance in
the blue environment. You can test how parameter changes affect the DB instances in the green
environment or specify a parameter group for a new major DB engine version in the case of an
upgrade.
If you specify a different DB parameter group, the specified DB parameter group is associated with all
of the DB instances in the green environment. If you don't specify a different parameter group, each
DB instance in the green environment is associated with the parameter group of its corresponding blue
DB instance.
You can make other modifications to the DB instance in the green environment after it is deployed. For
example, you might make schema changes to your database or change the DB instance class used by one
or more DB instances in the green environment.
For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
Handling lazy loading when you create a blue/green deployment
If you access data that hasn't been loaded yet, the DB instance immediately downloads the requested
data from Amazon S3, and then continues loading the rest of the data in the background. For more
information, see Amazon EBS snapshots.
To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform
operations that involve full-table scans, such as SELECT *. This operation allows Amazon RDS to
download all of the backed-up table data from S3.
If an application attempts to access data that isn't loaded, the application can encounter higher latency
than normal while the data is loaded. This higher latency due to lazy loading could lead to poor
performance for latency-sensitive workloads.
Important
If you switch over a blue/green deployment before data loading is complete, your application
could experience performance issues due to high latency.
Creating the blue/green deployment
Console
To create a blue/green deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to copy
to a green environment.
3. For Actions, choose Create Blue/Green Deployment.
4. On the Create Blue/Green Deployment page, review the blue database identifiers. Make sure they
match the DB instances that you expect in the blue environment. If they don't, choose Cancel.
5. For Blue/Green Deployment identifier, enter a name for your blue/green deployment.
6. (Optional) For Blue/Green Deployment settings, specify the settings for the green environment:
You can make other modifications to the databases in the green environment after it is deployed.
7. Choose Create Blue/Green Deployment.
AWS CLI
To create a blue/green deployment by using the AWS CLI, use the create-blue-green-deployment
command with the following options:
• --target-engine-version – Specify a DB engine version to use for the DB instances in the green environment. If not specified, each DB instance in the green environment is created with the same engine version as the corresponding DB instance in the blue environment.
• --target-db-parameter-group-name – Specify a DB parameter group to associate with the DB
instances in the green environment.
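A sketch of the command; the deployment name, source DB instance ARN, engine version, and parameter group name below are examples only:

```shell
# Creates a green environment that mirrors the source (blue) DB instance.
# All identifiers and the ARN are placeholders for illustration.
aws rds create-blue-green-deployment \
  --blue-green-deployment-name my-blue-green-deployment \
  --source arn:aws:rds:us-east-1:123456789012:db:mydb \
  --target-engine-version 8.0.31 \
  --target-db-parameter-group-name mydbparametergroup
```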
RDS API
To create a blue/green deployment by using the Amazon RDS API, use the
CreateBlueGreenDeployment operation with the following parameters:
• TargetEngineVersion – Specify a DB engine version to use for the DB instances in the green environment. If not specified, each DB instance in the green environment is created with the same engine version as the corresponding DB instance in the blue environment.
• TargetDBParameterGroupName – Specify a DB parameter group to associate with the DB instances in the green environment.
Viewing a blue/green deployment
You can also view and subscribe to events for information about a blue/green deployment. For more
information, see Blue/green deployment events (p. 892).
Console
To view the details about a blue/green deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then find the blue/green deployment in the list.
Each tab has a section for the blue deployment and a section for the green deployment. The section
for the blue deployment shows the details about DB instances in the blue environment. The section
for the green deployment shows the details about DB instances in the green environment. You can
examine the details in both environments to see differences between them. For example, on the
Configuration tab, the DB engine version might be different in the blue environment and in the
green environment if you are upgrading the DB engine version in the green environment. Make sure
the values for the DB instances in both environments are the expected values.
The following image shows an example of the Connectivity & security tab.
AWS CLI
To view the details about a blue/green deployment by using the AWS CLI, use the describe-blue-green-
deployments command.
Example View the details about a blue/green deployment by filtering on its name
When you use the describe-blue-green-deployments command, you can filter on the deployment name by using the --filters option. The following example shows the details for a blue/green deployment named my-blue-green-deployment.
Example View the details about a blue/green deployment by specifying its identifier
When you use the describe-blue-green-deployments command, you can specify the --blue-green-
deployment-identifier. The following example shows the details for a blue/green deployment with
the identifier bgd-1234567890abcdef.
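Sketches of the two commands described above; the deployment name and identifier are the examples from the text:

```shell
# Filter by deployment name.
aws rds describe-blue-green-deployments \
  --filters Name=blue-green-deployment-name,Values=my-blue-green-deployment

# Or look up a deployment directly by its identifier.
aws rds describe-blue-green-deployments \
  --blue-green-deployment-identifier bgd-1234567890abcdef
```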
RDS API
To view the details about a blue/green deployment by using the Amazon RDS API, use the
DescribeBlueGreenDeployments operation and specify the BlueGreenDeploymentIdentifier.
Switching over a blue/green deployment
Topics
• Switchover timeout (p. 582)
• Switchover guardrails (p. 583)
• Switchover actions (p. 583)
• Switchover best practices (p. 584)
• Verifying CloudWatch metrics before switchover (p. 584)
• Switching over a blue/green deployment (p. 585)
• After switchover (p. 587)
Switchover timeout
You can specify a switchover timeout period between 30 seconds and 3,600 seconds (one hour). If the
switchover takes longer than the specified duration, then any changes are rolled back and no changes
are made to either environment. The default timeout period is 300 seconds (five minutes).
Switchover guardrails
When you start a switchover, Amazon RDS runs some basic checks to test the readiness of the blue and green environments for switchover. These checks are known as switchover guardrails. The guardrails prevent a switchover if the environments aren't ready for it, which avoids longer than expected downtime and prevents the data loss between the blue and green environments that could result if the switchover started anyway.
Amazon RDS runs the following guardrail checks on the green environment:
• Replication health – Check whether the replication status of the green primary DB instance is healthy. The green primary DB instance is a replica of the blue primary DB instance.
• Replication lag – Check whether the replica lag of the green primary DB instance is within allowable limits for switchover. The allowable limits are based on the specified timeout period. Replica lag indicates how far the green primary DB instance lags behind its blue source, that is, how much time the green replica might require before it catches up. For more information, see Diagnosing and resolving lag between read replicas (p. 2736).
• Active writes – Make sure there are no active writes on the green primary DB instance.
Amazon RDS runs the following guardrail checks on the blue environment:
• External replication – Make sure the blue primary DB instance isn't the target of external replication to
prevent writes on the blue primary DB instance during switchover.
• Long-running active writes – Make sure there are no long-running active writes on the blue primary DB
instance because they can increase replica lag.
• Long-running DDL statements – Make sure there are no long-running DDL statements on the blue
primary DB instance because they can increase replica lag.
Switchover actions
When you switch over a blue/green deployment, RDS performs the following actions:
1. Runs guardrail checks to verify if the blue and green environments are ready for switchover.
2. Stops new write operations on the primary DB instance in both environments.
3. Drops connections to the DB instances in both environments and doesn't allow new connections.
4. Waits for replication to catch up in the green environment so that the green environment is in sync
with the blue environment.
5. Renames the DB instances in both environments.
RDS renames the DB instances in the green environment to match the corresponding DB instances
in the blue environment. For example, assume the name of a DB instance in the blue environment is
mydb. Also assume the name of the corresponding DB instance in the green environment is mydb-
green-abc123. During switchover, the name of the DB instance in the green environment is changed
to mydb.
RDS renames the DB instances in the blue environment by appending -oldn to the current name,
where n is a number. For example, assume the name of a DB instance in the blue environment is mydb.
After switchover, the DB instance name might be mydb-old1.
RDS also renames the endpoints in the green environment to match the corresponding endpoints in
the blue environment so that application changes aren't required.
6. Allows connections to databases in both environments.
7. Allows write operations on the primary DB instance in the new production environment.
After switchover, the previous production primary DB instance only allows read operations until it is
rebooted.
You can monitor the status of a switchover using Amazon EventBridge. For more information, see the
section called “Blue/green deployment events” (p. 892).
If you have tags configured in the blue environment, these tags are moved to the new production
environment during switchover. The previous production environment also retains these tags. For more
information about tags, see Tagging Amazon RDS resources (p. 461).
If the switchover starts and then stops before finishing for any reason, then any changes are rolled back,
and no changes are made to either environment.
Switchover best practices
• Thoroughly test the resources in the green environment. Make sure they function properly and
efficiently.
• Monitor relevant Amazon CloudWatch metrics. For more information, see the section called “Verifying
CloudWatch metrics before switchover” (p. 584).
• Identify the best time for the switchover.
During the switchover, writes are cut off from databases in both environments. Identify a time when
traffic is lowest on your production environment. Long-running transactions, such as active DDLs, can
increase your switchover time, resulting in longer downtime for your production workloads.
If there's a large number of connections on your DB instances, consider manually reducing them to the minimum number necessary for your application before you switch over the blue/green deployment. One way to achieve this is to create a script that monitors the status of the blue/green deployment and starts cleaning up connections when it detects that the status has changed to SWITCHOVER_IN_PROGRESS.
• Make sure the DB instances in both environments are in Available state.
• Make sure the primary DB instance in the green environment is healthy and replicating.
• Make sure that your network and client configurations don’t increase the DNS cache Time-To-Live
(TTL) beyond five seconds, which is the default for RDS DNS zones.
Otherwise, applications will continue to send write traffic to the blue environment after
switchover.
• Make sure data loading is complete before switching over. For more information, see Handling lazy
loading when you create a blue/green deployment (p. 576).
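The connection-monitoring script described above might be sketched as follows. The deployment identifier is an example, and the cleanup step is a placeholder you would replace with your own logic:

```shell
# Sketch: poll the deployment status and trigger a user-supplied connection
# cleanup step when the switchover begins. The identifier is an example.
while true; do
  status=$(aws rds describe-blue-green-deployments \
    --blue-green-deployment-identifier bgd-1234567890abcdef \
    --query 'BlueGreenDeployments[0].Status' --output text)
  if [ "$status" = "SWITCHOVER_IN_PROGRESS" ]; then
    echo "Switchover started; reducing application connections..."
    # e.g., scale down application workers or terminate idle sessions here
    break
  fi
  sleep 5
done
```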
Note
During a switchover, you can't modify any DB instances included in the switchover.
Verifying CloudWatch metrics before switchover
• ReplicaLag – Use this metric to identify the current replication lag on the green environment. To
reduce downtime, make sure that this value is close to zero before you switch over.
• DatabaseConnections – Use this metric to estimate the level of activity on the blue/green
deployment, and make sure that the value is at an acceptable level for your deployment before you
switch over. If Performance Insights is turned on, DBLoad is a more accurate metric.
For more information about these metrics, see the section called “CloudWatch metrics for
RDS” (p. 806).
Switching over a blue/green deployment
Console
To switch over a blue/green deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the blue/green deployment that you
want to switch over.
3. For Actions, choose Switch over.
4. On the Switch over page, review the switchover summary. Make sure the resources in both
environments match what you expect. If they don't, choose Cancel.
5. For Timeout, enter the time limit for switchover.
6. Choose Switch over.
AWS CLI
To switch over a blue/green deployment by using the AWS CLI, use the switchover-blue-green-
deployment command with the following options:
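A sketch of the switchover command; the deployment identifier is an example, and the timeout is the 300-second default discussed earlier:

```shell
# Switch over the blue/green deployment with a 300-second timeout.
# The deployment identifier is an example.
aws rds switchover-blue-green-deployment \
  --blue-green-deployment-identifier bgd-1234567890abcdef \
  --switchover-timeout 300
```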
RDS API
To switch over a blue/green deployment by using the Amazon RDS API, use the SwitchoverBlueGreenDeployment operation with the following parameters:
• BlueGreenDeploymentIdentifier – The identifier of the blue/green deployment to switch over.
• SwitchoverTimeout – (Optional) The timeout period, in seconds.
After switchover
After a switchover, the DB instances in the previous blue environment are retained. Standard costs apply
to these resources. Replication between the blue and green environments stops.
RDS renames the DB instances in the blue environment by appending -oldn to the current resource
name, where n is a number. The DB instances are read-only until you set the read_only parameter to 0.
When you delete a blue/green deployment before switching it over, Amazon RDS optionally deletes the
DB instances in the green environment:
• If you choose to delete the DB instances in the green environment (--delete-target), they must
have deletion protection turned off.
• If you don't delete the DB instances in the green environment (--no-delete-target), the instances
are retained, but they're no longer part of a blue/green deployment. Replication continues between
the environments.
The option to delete the green databases isn't available in the console after switchover (p. 582). When
you delete blue/green deployments using the AWS CLI, you can't specify the --delete-target option
if the deployment status is SWITCHOVER_COMPLETED.
Important
Deleting a blue/green deployment doesn't affect the blue environment.
You can delete a blue/green deployment using the AWS Management Console, the AWS CLI, or the RDS
API.
Deleting a blue/green deployment
Console
To delete a blue/green deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the blue/green deployment that you
want to delete.
3. For Actions, choose Delete.
To delete the green databases, select Delete the green databases in this Blue/Green Deployment.
4. Enter delete me in the box.
5. Choose Delete.
AWS CLI
To delete a blue/green deployment by using the AWS CLI, use the delete-blue-green-deployment
command with the following options:
Example Delete a blue/green deployment and the DB instances in the green environment
Example Delete a blue/green deployment but retain the DB instances in the green
environment
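Sketches of the two variants described above; the deployment identifier is an example:

```shell
# Delete the deployment and also delete the DB instances in the
# green environment.
aws rds delete-blue-green-deployment \
  --blue-green-deployment-identifier bgd-1234567890abcdef \
  --delete-target

# Delete the deployment but retain the DB instances in the
# green environment.
aws rds delete-blue-green-deployment \
  --blue-green-deployment-identifier bgd-1234567890abcdef \
  --no-delete-target
```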
RDS API
To delete a blue/green deployment by using the Amazon RDS API, use the DeleteBlueGreenDeployment operation with the following parameters:
• BlueGreenDeploymentIdentifier – The identifier of the blue/green deployment to delete.
• DeleteTarget – (Optional) Specify whether to delete the DB instances in the green environment.
Topics
• Working with backups (p. 591)
• Backing up and restoring a DB instance (p. 600)
• Backing up and restoring a Multi-AZ DB cluster (p. 668)
Working with backups
• Your DB instance must be in the available state for automated backups to occur. Automated
backups don't occur while your DB instance is in a state other than available, for example,
storage_full.
• Automated backups don't occur while a DB snapshot copy is running in the same AWS Region for the
same database.
You can also back up your DB instance manually by creating a DB snapshot. For more information about
manually creating a DB snapshot, see Creating a DB snapshot (p. 613).
The first snapshot of a DB instance contains the data for the full database. Subsequent snapshots of the
same database are incremental, which means that only the data that has changed after your most recent
snapshot is saved.
You can copy both automatic and manual DB snapshots, and share manual DB snapshots. For more
information about copying a DB snapshot, see Copying a DB snapshot (p. 619). For more information
about sharing a DB snapshot, see Sharing a DB snapshot (p. 633).
Backup storage
Your Amazon RDS backup storage for each AWS Region is composed of the automated backups and
manual DB snapshots for that Region. Total backup storage space equals the sum of the storage for all
backups in that Region. Moving a DB snapshot to another Region increases the backup storage in the
destination Region. Backups are stored in Amazon S3.
For more information about backup storage costs, see Amazon RDS pricing.
If you choose to retain automated backups when you delete a DB instance, the automated backups are
saved for the full retention period. If you don't choose Retain automated backups when you delete
a DB instance, all automated backups are deleted with the DB instance. After they are deleted, the
automated backups can't be recovered. If you choose to have Amazon RDS create a final DB snapshot
before it deletes your DB instance, you can use that to recover your DB instance. Optionally, you can
use a previously created manual snapshot. Manual snapshots are not deleted. You can have up to 100
manual snapshots per Region.
Backup window
Automated backups occur daily during the preferred backup window. If the backup requires more time
than allotted to the backup window, the backup continues after the window ends until it finishes. The
backup window can't overlap with the weekly maintenance window for the DB instance or Multi-AZ DB
cluster.
During the automatic backup window, storage I/O might be suspended briefly while the backup process
initializes (typically under a few seconds). You might experience elevated latencies for a few minutes
during backups for Multi-AZ deployments. For MariaDB, MySQL, Oracle, and PostgreSQL, I/O activity
isn't suspended on your primary during backup for Multi-AZ deployments because the backup is taken
from the standby. For SQL Server, I/O activity is suspended briefly during backup for both Single-AZ and
Multi-AZ deployments because the backup is taken from the primary.
Automated backups might occasionally be skipped if the DB instance or cluster has a heavy workload at
the time a backup is supposed to start. If a backup is skipped, you can still do a point-in-time-recovery
(PITR), and a backup is still attempted during the next backup window. For more information on PITR,
see Restoring a DB instance to a specified time (p. 660).
If you don't specify a preferred backup window when you create the DB instance or Multi-AZ DB cluster,
Amazon RDS assigns a default 30-minute backup window. This window is selected at random from an 8-
hour block of time for each AWS Region. The following table lists the time blocks for each AWS Region
from which the default backup windows are assigned.
Backup retention period
After you create a DB instance or cluster, you can modify the backup retention period. You can set the backup retention period of a DB instance to between 0 and 35 days. Setting the backup retention period to 0 disables automated backups. You can set the backup retention period of a Multi-AZ DB cluster to between 1 and 35 days. Manual snapshot limits (100 per Region) don't apply to automated backups.
Automated backups aren't created while a DB instance or cluster is stopped. Backups can be retained
longer than the backup retention period if a DB instance has been stopped. RDS doesn't include time
spent in the stopped state when the backup retention window is calculated.
Important
An outage occurs if you change the backup retention period from 0 to a nonzero value or from a
nonzero value to 0. This applies to both Single-AZ and Multi-AZ DB instances.
Enabling automated backups
Console
To enable automated backups immediately
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose a positive nonzero value, for example 3 days.
5. Choose Continue.
6. Choose Apply immediately.
7. Choose Modify DB instance or Modify cluster to save your changes and enable automated backups.
AWS CLI
To enable automated backups, use the AWS CLI modify-db-instance or modify-db-cluster
command.
In the following example, we enable automated backups by setting the backup retention period to three
days. The changes are applied immediately.
Example
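A sketch of the command for a DB instance; the instance identifier is an example:

```shell
# Set a three-day backup retention period and apply the change immediately.
# The DB instance identifier is an example.
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --backup-retention-period 3 \
  --apply-immediately
```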
RDS API
To enable automated backups, use the RDS API ModifyDBInstance or ModifyDBCluster operation
with the following required parameters:
• DBInstanceIdentifier or DBClusterIdentifier
• BackupRetentionPeriod
Retaining automated backups
To describe the automated backups for your existing DB instances using the AWS CLI, use one of the
following commands:
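Sketches of both commands; the instance identifier and resource identifier are examples:

```shell
# Describe automated backups by DB instance identifier...
aws rds describe-db-instance-automated-backups \
  --db-instance-identifier mydbinstance

# ...or by the resource identifier of the source DB instance.
aws rds describe-db-instance-automated-backups \
  --dbi-resource-id db-123ABCEXAMPLE
```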
To describe the retained automated backups for your existing DB instances using the RDS API, call the
DescribeDBInstanceAutomatedBackups action with one of the following parameters:
• DBInstanceIdentifier
• DbiResourceId
When you delete a DB instance, you can choose to retain automated backups. Automated backups can be
retained for a number of days equal to the backup retention period configured for the DB instance at the
time when you delete it.
Retained automated backups contain system snapshots and transaction logs from a DB instance. They
also include your DB instance properties like allocated storage and DB instance class, which are required
to restore it to an active instance.
Retained automated backups and manual snapshots incur billing charges until they're deleted. For more
information, see Retention costs (p. 596).
You can retain automated backups for RDS instances running the MySQL, MariaDB, PostgreSQL, Oracle,
and Microsoft SQL Server engines.
You can restore or remove retained automated backups using the AWS Management Console, RDS API,
and AWS CLI.
Topics
• Retention period (p. 595)
• Viewing retained backups (p. 596)
• Restoration (p. 596)
• Retention costs (p. 596)
• Limitations (p. 596)
Retention period
The system snapshots and transaction logs in a retained automated backup expire the same way that they expire for the source DB instance. Because no new snapshots or logs are created for this instance, the retained automated backups eventually expire completely. Effectively, they live as long as their last system snapshot would have, based on the retention period settings that the source instance had when you deleted it. Retained automated backups are removed by the system after their last system snapshot expires.
Deleting retained automated backups
You can remove a retained automated backup in the same way that you can delete a DB instance.
You can remove retained automated backups using the console or the RDS API operation
DeleteDBInstanceAutomatedBackup.
Final snapshots are independent of retained automated backups. We strongly suggest that you take
a final snapshot even if you retain automated backups because the retained automated backups
eventually expire. The final snapshot doesn't expire.
To describe your retained automated backups using the AWS CLI, use the following command:
To describe your retained automated backups using the RDS API, call the
DescribeDBInstanceAutomatedBackups action with the DbiResourceId parameter.
Restoration
For information on restoring DB instances from automated backups, see Restoring a DB instance to a
specified time (p. 660).
Retention costs
The cost of a retained automated backup is the cost of total storage of the system snapshots that are
associated with it. There is no additional charge for transaction logs or instance metadata. All other
pricing rules for backups apply to restorable instances.
For example, suppose that your total allocated storage of running instances is 100 GB. Suppose also
that you have 50 GB of manual snapshots plus 75 GB of system snapshots associated with a retained
automated backup. In this case, you are charged only for the additional 25 GB of backup storage, like
this: (50 GB + 75 GB) – 100 GB = 25 GB.
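The arithmetic above can be checked with a small calculation. The figures are the ones from the example:

```shell
# Worked example: backup storage is free up to the total allocated storage
# of your running instances; you pay only for the excess.
allocated_gb=100
manual_snapshots_gb=50
retained_system_snapshots_gb=75
billed_gb=$(( manual_snapshots_gb + retained_system_snapshots_gb - allocated_gb ))
echo "Billed backup storage: ${billed_gb} GB"   # prints "Billed backup storage: 25 GB"
```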
Limitations
The following limitations apply to retained automated backups:
• The maximum number of retained automated backups in one AWS Region is 40. Retained automated backups aren't included in the DB instance quota, so you can have 40 running DB instances and an additional 40 retained automated backups at the same time.
• Retained automated backups don't contain information about parameters or option groups.
• You can restore a deleted instance to a point in time that is within the retention period at the time of
deletion.
• You can't modify a retained automated backup. That's because it consists of system backups,
transaction logs, and the DB instance properties that existed at the time that you deleted the source
instance.
Console
To delete a retained automated backup
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. On the Retained tab, choose the retained automated backup that you want to delete.
4. For Actions, choose Delete.
5. On the confirmation page, enter delete me and choose Delete.
AWS CLI
You can delete a retained automated backup by using the AWS CLI command delete-db-instance-automated-backup with the following option:
• --dbi-resource-id – The resource identifier of the source DB instance.
You can find the resource identifier for the source DB instance of a retained automated backup by
running the AWS CLI command describe-db-instance-automated-backups.
Example
The following example deletes the retained automated backup with source DB instance resource
identifier db-123ABCEXAMPLE.
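A sketch of the command, using the resource identifier from the example above:

```shell
# Delete the retained automated backup whose source DB instance has the
# resource identifier db-123ABCEXAMPLE.
aws rds delete-db-instance-automated-backup \
  --dbi-resource-id db-123ABCEXAMPLE
```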
RDS API
You can delete a retained automated backup by using the Amazon RDS API operation DeleteDBInstanceAutomatedBackup with the following parameter:
• DbiResourceId – The resource identifier of the source DB instance.
You can find the resource identifier for the source DB instance of a retained automated backup using
the Amazon RDS API operation DescribeDBInstanceAutomatedBackups.
Disabling automated backups
Disabling automated backups for a DB instance deletes all existing automated backups for the database. If you disable and then re-enable automated backups, you can restore starting only from the time you re-enabled automated backups.
Console
To disable automated backups immediately
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose 0 days.
5. Choose Continue.
6. Choose Apply immediately.
7. Choose Modify DB instance or Modify cluster to save your changes and disable automated backups.
AWS CLI
To disable automated backups immediately, use the modify-db-instance or modify-db-cluster command
and set the backup retention period to 0 with --apply-immediately.
Example
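A sketch of the command for a DB instance; the instance identifier is an example:

```shell
# Disable automated backups by setting the retention period to 0,
# applied immediately. The DB instance identifier is an example.
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --backup-retention-period 0 \
  --apply-immediately
```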
To know when the modification is in effect, call describe-db-instances for the DB instance (or describe-db-clusters for a Multi-AZ DB cluster) until the value for the backup retention period is 0 and the status is available.
RDS API
To disable automated backups immediately, call the ModifyDBInstance or ModifyDBCluster operation with the following parameters:
• DBInstanceIdentifier or DBClusterIdentifier
• BackupRetentionPeriod set to 0
Example
https://rds.amazonaws.com/
?Action=ModifyDBInstance
&DBInstanceIdentifier=mydbinstance
&BackupRetentionPeriod=0
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2009-10-14T17%3A48%3A21.746Z
&AWSAccessKeyId=<AWS Access Key ID>
&Signature=<Signature>
Using AWS Backup
To enable backups in AWS Backup, use resource tagging to associate your database with a backup plan.
For more information, see Using tags to enable backups in AWS Backup (p. 468).
Note
Backups managed by AWS Backup are considered manual DB snapshots, but don't count toward
the DB snapshot quota for RDS. Backups that were created with AWS Backup have names
ending in awsbackup:backup-job-number.
For more information about AWS Backup, see the AWS Backup Developer Guide.
To view backups managed by AWS Backup
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Backup service tab.
Your AWS Backup backups are listed under Backup service snapshots.
• To convert existing MyISAM tables to InnoDB tables, you can use the ALTER TABLE command, for
example: ALTER TABLE table_name ENGINE=innodb, ALGORITHM=COPY;
• If you choose to use MyISAM, you can attempt to manually repair tables that become damaged after
a crash by using the REPAIR command. For more information, see REPAIR TABLE statement in the
MySQL documentation. However, as noted in the MySQL documentation, there is a good chance that
you might not be able to recover all your data.
• If you want to take a snapshot of your MyISAM tables before restoring, follow these steps:
1. Stop all activity to your MyISAM tables (that is, close all sessions).
You can close all sessions by calling the mysql.rds_kill command for each process that is returned
from the SHOW FULL PROCESSLIST command.
2. Lock and flush each of your MyISAM tables. For example, the following commands lock and flush
two tables named myisam_table1 and myisam_table2:
3. Create a snapshot of your DB instance or Multi-AZ DB cluster. When the snapshot has completed,
release the locks and resume activity on the MyISAM tables. You can release the locks on your tables
using the following command:
These steps force MyISAM to flush data stored in memory to disk, which ensures a clean start when
you restore from a DB snapshot. For more information on creating a DB snapshot, see Creating a DB
snapshot (p. 613).
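The lock, flush, and release statements referenced in steps 2 and 3 can be sketched as follows. The endpoint is a placeholder, and the table names are the examples from the text. Issue the statements in a single client session and keep that session open until the snapshot completes, because the read locks are released when the session that took them disconnects:

```shell
# Open one session and keep it open for the duration of the snapshot.
# 1) Lock and flush the MyISAM tables:
#      FLUSH TABLES myisam_table1, myisam_table2 WITH READ LOCK;
# 2) After the snapshot completes, release the locks:
#      UNLOCK TABLES;
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p
```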
• To convert existing Aria tables to InnoDB tables, you can use the ALTER TABLE command. For
example: ALTER TABLE table_name ENGINE=innodb, ALGORITHM=COPY;
• If you choose to use Aria, you can attempt to manually repair tables that become damaged after a crash by using the REPAIR TABLE command. For more information, see http://mariadb.com/kb/en/mariadb/repair-table/.
• If you want to take a snapshot of your Aria tables before restoring, follow these steps:
1. Stop all activity to your Aria tables (that is, close all sessions).
2. Lock and flush each of your Aria tables.
3. Create a snapshot of your DB instance or Multi-AZ DB cluster. When the snapshot has completed,
release the locks and resume activity on the Aria tables. These steps force Aria to flush data stored
in memory to disk, thereby ensuring a clean start when you restore from a DB snapshot.
Backing up and restoring a DB instance
Topics
• Replicating automated backups to another AWS Region (p. 602)
• Creating a DB snapshot (p. 613)
• Restoring from a DB snapshot (p. 615)
• Copying a DB snapshot (p. 619)
• Sharing a DB snapshot (p. 633)
• Exporting DB snapshot data to Amazon S3 (p. 642)
Replicating automated backups to another AWS Region
DB snapshot copy charges apply to the data transfer. After the DB snapshot is copied, standard charges
apply to storage in the destination Region. For more details, see RDS Pricing.
For an example of using backup replication, see the AWS online tech talk Managed Disaster Recovery
with Amazon RDS for Oracle Cross-Region Automated Backups.
Topics
• Region and version availability (p. 602)
• Source and destination AWS Region support (p. 602)
• Enabling cross-Region automated backups (p. 604)
• Finding information about replicated backups (p. 606)
• Restoring to a specified time from a replicated backup (p. 609)
• Stopping automated backup replication (p. 610)
• Deleting replicated backups (p. 611)
Asia Pacific (Singapore) Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific
(Tokyo)
Asia Pacific (Tokyo) Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore)
US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon)
Europe (Frankfurt) Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm)
US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon)
Europe (London) Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm)
Europe (Paris) Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm)
Europe (Stockholm) Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris)
US East (N. Virginia) Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific
(Sydney), Asia Pacific (Tokyo)
Canada (Central)
US East (Ohio): Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Canada (Central)
Canada (Central)
Europe (Ireland)
US West (Oregon): Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central)
You can also use the describe-source-regions AWS CLI command to find out which AWS
Regions can replicate to each other. For more information, see Finding information about replicated
backups (p. 606).
Console
• For a new DB instance, enable backup replication when you launch the instance. For more information, see Settings for DB instances (p. 308).
• For an existing DB instance, use the following procedure.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. On the Current Region tab, choose the DB instance for which you want to enable backup
replication.
4. For Actions, choose Manage cross-Region replication.
5. Under Backup replication, choose Enable replication to another AWS Region.
6. Choose the Destination Region.
7. Choose the Replicated backup retention period.
8. If you've enabled encryption on the source DB instance, choose the AWS KMS key for encrypting the
backups.
9. Choose Save.
In the source Region, replicated backups are listed on the Current Region tab of the Automated backups
page. In the destination Region, replicated backups are listed on the Replicated backups tab of the
Automated backups page.
AWS CLI
The following CLI example replicates automated backups from a DB instance in the US West (Oregon)
Region to the US East (N. Virginia) Region. It also encrypts the replicated backups, using an AWS KMS key
in the destination Region.
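A sketch of this operation with the AWS CLI, using the start-db-instance-automated-backups-replication command run in the destination Region; the source DB instance ARN, KMS key ID, and retention period here are placeholder values:

```shell
# Run in the destination Region (us-east-1). The source DB instance ARN and
# KMS key ID are placeholders; replace them with your own values.
aws rds start-db-instance-automated-backups-replication \
    --region us-east-1 \
    --source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase" \
    --kms-key-id "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab" \
    --backup-retention-period 7
```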
The --source-region option is required when you encrypt backups between the AWS GovCloud
(US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the AWS Region of
the source DB instance.
RDS API
Enable backup replication by calling the StartDBInstanceAutomatedBackupsReplication RDS API operation with the following parameters:
• Region
• SourceDBInstanceArn
• BackupRetentionPeriod
• KmsKeyId (optional)
• PreSignedUrl (required if you use KmsKeyId)
Note
If you encrypt the backups, you must also include a presigned URL. For more information on
presigned URLs, see Authenticating Requests: Using Query Parameters (AWS Signature Version
4) in the Amazon Simple Storage Service API Reference and Signature Version 4 signing process in
the AWS General Reference.
Find information about replicated backups by using the following AWS CLI commands:
• describe-source-regions
• describe-db-instances
• describe-db-instance-automated-backups
The following describe-source-regions example lists the source AWS Regions from which
automated backups can be replicated to the US West (Oregon) destination Region.
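A sketch of the command, run against the destination Region described above:

```shell
# List source Regions that can replicate automated backups into us-west-2.
aws rds describe-source-regions --region us-west-2
```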
The output shows that backups can be replicated from US East (N. Virginia), but not from US East (Ohio)
or US West (N. California), into US West (Oregon).
{
    "SourceRegions": [
        ...
        {
            "RegionName": "us-east-1",
            "Endpoint": "https://rds.us-east-1.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": true
        },
        {
            "RegionName": "us-east-2",
            "Endpoint": "https://rds.us-east-2.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": false
        },
        {
            "RegionName": "us-west-1",
            "Endpoint": "https://rds.us-west-1.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": false
        }
    ]
}
The following describe-db-instances example shows the automated backups for a DB instance.
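A sketch of the command, assuming the instance identifier mydatabase used in the sample output:

```shell
# Show the instance's backup settings, including any cross-Region replications.
aws rds describe-db-instances --db-instance-identifier mydatabase
```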
{
    "DBInstances": [
        {
            "StorageEncrypted": false,
            "Endpoint": {
                "HostedZoneId": "Z1PVIF0B656C1W",
                "Port": 1521,
            ...
            "BackupRetentionPeriod": 7,
            "DBInstanceAutomatedBackupsReplications": [
                {
                    "DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"
                }
            ]
        }
    ]
}
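Sketches of the two describe-db-instance-automated-backups invocations that produce outputs like those shown below; the instance identifier and backup ARN are the sample values from the outputs:

```shell
# In the source Region: list automated backups for the source DB instance.
aws rds describe-db-instance-automated-backups \
    --db-instance-identifier mydatabase

# In the destination Region: describe a replicated backup by its ARN.
aws rds describe-db-instance-automated-backups \
    --db-instance-automated-backups-arn \
    "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"
```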
The output shows the source DB instance and automated backups in US West (Oregon), with backups
replicated to US East (N. Virginia).
{
    "DBInstanceAutomatedBackups": [
        {
            "DBInstanceArn": "arn:aws:rds:us-west-2:868710585169:db:mydatabase",
            "DbiResourceId": "db-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
            "DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-west-2:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
            "BackupRetentionPeriod": 7,
            "DBInstanceAutomatedBackupsReplications": [
                {
                    "DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"
                }
            ],
            "Region": "us-west-2",
            "DBInstanceIdentifier": "mydatabase",
            "RestoreWindow": {
                "EarliestTime": "2020-10-26T01:09:07Z",
                "LatestTime": "2020-10-31T19:09:53Z"
            }
            ...
        }
    ]
}
The output shows the source DB instance in US West (Oregon), with replicated backups in US East (N.
Virginia).
{
    "DBInstanceAutomatedBackups": [
        {
            "DBInstanceArn": "arn:aws:rds:us-west-2:868710585169:db:mydatabase",
            "DbiResourceId": "db-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
            "DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
            "Region": "us-west-2",
            "DBInstanceIdentifier": "mydatabase",
            "RestoreWindow": {
                "EarliestTime": "2020-10-26T01:09:07Z",
                "LatestTime": "2020-10-31T19:01:23Z"
            },
            "AllocatedStorage": 50,
            "BackupRetentionPeriod": 7,
            "Status": "replicating",
            "Port": 1521,
            ...
        }
    ]
}
For general information on point-in-time recovery (PITR), see Restoring a DB instance to a specified
time (p. 660).
Note
On RDS for SQL Server, option groups aren't copied across AWS Regions when automated
backups are replicated. If you've associated a custom option group with your RDS for SQL Server
DB instance, you can re-create that option group in the destination Region. Then restore the
DB instance in the destination Region and associate the custom option group with it. For more
information, see Working with option groups (p. 331).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the destination Region (where backups are replicated to) from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Replicated backups tab, choose the DB instance that you want to restore.
5. For Actions, choose Restore to point in time.
6. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.
If you chose Custom, enter the date and time that you want to restore the instance to.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
7. For DB instance identifier, enter the name of the target restored DB instance.
8. (Optional) Choose other options as needed, such as enabling autoscaling.
9. Choose Restore to point in time.
AWS CLI
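A sketch of the restore command; the backup ARN, target identifier, and restore time are placeholder values:

```shell
# Run in the destination Region where the replicated backup is stored.
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-automated-backups-arn \
    "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE" \
    --target-db-instance-identifier mytargetdbinstance \
    --restore-time 2020-10-14T23:45:00.000Z
```

You can use --use-latest-restorable-time in place of --restore-time to restore to the latest possible time.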
RDS API
Restore a DB instance to a specified time from a replicated backup by calling the RestoreDBInstanceToPointInTime RDS API operation with the following parameters:
• SourceDBInstanceAutomatedBackupsArn
• TargetDBInstanceIdentifier
• RestoreTime
Replicated backups are retained, subject to the backup retention period set when they were created.
Console
Stop backup replication from the Automated backups page in the source Region.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the source Region from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Current Region tab, choose the DB instance for which you want to stop backup replication.
5. For Actions, choose Manage cross-Region replication.
6. Under Backup replication, clear the Enable replication to another AWS Region check box.
7. Choose Save.
Replicated backups are listed on the Retained tab of the Automated backups page in the destination
Region.
AWS CLI
The following CLI example stops the automated backups of a DB instance in the US West (Oregon) Region from replicating.
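A sketch of the stop command, run in the destination Region; the Region and source DB instance ARN are placeholder values:

```shell
# Run in the destination Region to stop replication for the source instance.
aws rds stop-db-instance-automated-backups-replication \
    --region us-east-1 \
    --source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase"
```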
RDS API
Stop backup replication by calling the StopDBInstanceAutomatedBackupsReplication RDS API operation with the following parameters:
• Region
• SourceDBInstanceArn
Console
Delete replicated backups in the destination Region from the Automated backups page.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the destination Region from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Replicated backups tab, choose the DB instance for which you want to delete the replicated
backups.
5. For Actions, choose Delete.
6. On the confirmation page, enter delete me and choose Delete.
AWS CLI
You can use the describe-db-instances CLI command to find the Amazon Resource Names
(ARNs) of the replicated backups. For more information, see Finding information about replicated
backups (p. 606).
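A sketch of the delete command, run in the destination Region; the ARN is a placeholder:

```shell
# Delete a replicated backup in the destination Region by its ARN.
aws rds delete-db-instance-automated-backup \
    --db-instance-automated-backups-arn \
    "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"
```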
RDS API
Delete replicated backups by using the DeleteDBInstanceAutomatedBackup RDS API operation with
the DBInstanceAutomatedBackupsArn parameter.
Creating a DB snapshot
Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance
and not just individual databases. Creating this DB snapshot on a Single-AZ DB instance results in a brief
I/O suspension that can last from a few seconds to a few minutes, depending on the size and class of
your DB instance. For MariaDB, MySQL, Oracle, and PostgreSQL, I/O activity is not suspended on your
primary during backup for Multi-AZ deployments, because the backup is taken from the standby. For
SQL Server, I/O activity is suspended briefly during backup for Multi-AZ deployments.
When you create a DB snapshot, you need to identify which DB instance you are going to back up, and
then give your DB snapshot a name so you can restore from it later. The amount of time it takes to create
a snapshot varies with the size of your databases. Since the snapshot includes the entire storage volume,
the size of files, such as temporary files, also affects the amount of time it takes to create the snapshot.
Note
Your DB instance must be in the available state to take a DB snapshot.
For PostgreSQL DB instances, data in unlogged tables might not be restored from snapshots.
For more information, see Best practices for working with PostgreSQL (p. 294).
Unlike automated backups, manual snapshots aren't subject to the backup retention period. Snapshots
don't expire.
For very long-term backups of MariaDB, MySQL, and PostgreSQL data, we recommend exporting
snapshot data to Amazon S3. If the major version of your DB engine is no longer supported, you can't
restore to that version from a snapshot. For more information, see Exporting DB snapshot data to
Amazon S3 (p. 642).
You can create a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.
Console
To create a DB snapshot
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of DB instances, choose the DB instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.
The Snapshots page appears, with the new DB snapshot's status shown as Creating. After its status is
Available, you can see its creation time.
AWS CLI
When you create a DB snapshot using the AWS CLI, you need to identify which DB instance you are going
to back up, and then give your DB snapshot a name so you can restore from it later. You can do this by
using the AWS CLI create-db-snapshot command with the following parameters:
• --db-instance-identifier
• --db-snapshot-identifier
In this example, you create a DB snapshot called mydbsnapshot for a DB instance called
mydbinstance.
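The example can be sketched as follows:

```shell
# Create a manual DB snapshot of mydbinstance named mydbsnapshot.
aws rds create-db-snapshot \
    --db-instance-identifier mydbinstance \
    --db-snapshot-identifier mydbsnapshot
```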
RDS API
When you create a DB snapshot using the Amazon RDS API, you need to identify which DB instance you
are going to back up, and then give your DB snapshot a name so you can restore from it later. You can do
this by using the Amazon RDS API CreateDBSnapshot command with the following parameters:
• DBInstanceIdentifier
• DBSnapshotIdentifier
Restoring from a DB snapshot
You can use the restored DB instance as soon as its status is available. The DB instance continues to
load data in the background. This is known as lazy loading.
If you access data that hasn't been loaded yet, the DB instance immediately downloads the requested
data from Amazon S3, and then continues loading the rest of the data in the background. For more
information, see Amazon EBS snapshots.
To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform
operations that involve full-table scans, such as SELECT *. This allows Amazon RDS to download all of
the backed-up table data from S3.
You can restore a DB instance and use a different storage type than the source DB snapshot. In this case,
the restoration process is slower because of the additional work required to migrate the data to the new
storage type. If you restore to or from magnetic storage, the migration process is the slowest. That's
because magnetic storage doesn't have the IOPS capability of Provisioned IOPS or General Purpose (SSD)
storage.
You can use AWS CloudFormation to restore a DB instance from a DB instance snapshot. For more
information, see AWS::RDS::DBInstance in the AWS CloudFormation User Guide.
Note
You can't restore a DB instance from a DB snapshot that is both shared and encrypted. Instead,
you can make a copy of the DB snapshot and restore the DB instance from the copy. For more
information, see Copying a DB snapshot (p. 619).
The default DB parameter group is associated with the restored instance, unless you choose a different
one. No custom parameter settings are available in the default parameter group.
You can specify the parameter group when you restore the DB instance.
For more information about DB parameter groups, see Working with parameter groups (p. 347).
You can associate a VPC security group with the restored DB instance in the following ways:
• If you're using the Amazon RDS console, you can specify a custom VPC security group to associate with the instance or create a new VPC security group.
• If you're using the AWS CLI, you can specify a custom VPC security group to associate with the instance
by including the --vpc-security-group-ids option in the restore-db-instance-from-db-
snapshot command.
• If you're using the Amazon RDS API, you can include the
VpcSecurityGroupIds.VpcSecurityGroupId.N parameter in the
RestoreDBInstanceFromDBSnapshot action.
As soon as the restore is complete and your new DB instance is available, you can also change the
VPC settings by modifying the DB instance. For more information, see Modifying an Amazon RDS DB
instance (p. 401).
The exception is when the source DB instance is associated with an option group that contains a
persistent or permanent option. For example, if the source DB instance uses Oracle Transparent Data
Encryption (TDE), the restored DB instance must use an option group that has the TDE option.
If you restore a DB instance into a different VPC, you must do one of the following to assign a DB option
group:
• Assign the default option group for that VPC group to the instance.
• Assign another option group that is linked to that VPC.
• Create a new option group and assign it to the DB instance. With persistent or permanent options,
such as Oracle TDE, you must create a new option group that includes the persistent or permanent
option.
For more information about DB option groups, see Working with option groups (p. 331).
For more information, see Copying tags to DB instance snapshots (p. 465).
For Microsoft SQL Server, you can change the edition of the DB engine when you restore a snapshot if the following requirements are met:
• The DB snapshot must have enough storage allocated for the new edition.
• Only the following edition changes are supported:
• From Standard Edition to Enterprise Edition
• From Web Edition to Standard Edition or Enterprise Edition
• From Express Edition to Web Edition, Standard Edition, or Enterprise Edition
If you want to change from one edition to a new edition that isn't supported by restoring a snapshot,
you can try using the native backup and restore feature. SQL Server verifies whether your database is
compatible with the new edition based on what SQL Server features you have enabled on the database.
For more information, see Importing and exporting SQL Server databases using native backup and
restore (p. 1419).
If you restore a snapshot of a CDB instance, you can change the PDB name. You can't change the CDB
name, which is always RDSCDB. This CDB name is the same for all RDS instances that use a single-tenant
architecture. For more information, see Backing up and restoring a CDB (p. 1844).
Before you restore a DB snapshot, you can upgrade it to a later release. For more information, see
Upgrading an Oracle DB snapshot (p. 2111).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, for DB instance identifier, enter the name for your restored DB
instance.
6. Specify other settings, such as allocated storage size.
For information about each setting, see Settings for DB instances (p. 308).
7. Choose Restore DB instance.
AWS CLI
To restore a DB instance from a DB snapshot, use the AWS CLI command restore-db-instance-from-db-
snapshot.
In this example, you restore from a previously created DB snapshot named mydbsnapshot. You restore
to a new DB instance named mynewdbinstance. This example also sets the allocated storage size.
You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).
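The example can be sketched as follows; the allocated storage value is illustrative:

```shell
# Restore mydbsnapshot to a new instance and set the allocated storage size.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-snapshot-identifier mydbsnapshot \
    --allocated-storage 100
```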
RDS API
To restore a DB instance from a DB snapshot, call the Amazon RDS API function
RestoreDBInstanceFromDBSnapshot with the following parameters:
• DBInstanceIdentifier
• DBSnapshotIdentifier
Copying a DB snapshot
With Amazon RDS, you can copy automated backups or manual DB snapshots. After you copy a
snapshot, the copy is a manual snapshot. You can make multiple copies of an automated backup or
manual snapshot, but each copy must have a unique identifier.
You can copy a snapshot within the same AWS Region, you can copy a snapshot across AWS Regions, and
you can copy shared snapshots.
Limitations
The following are some limitations when you copy snapshots:
• You can't copy a snapshot to or from the China (Beijing) Region or the China (Ningxia) Region.
• You can copy a snapshot between AWS GovCloud (US-East) and AWS GovCloud (US-West). However,
you can't copy a snapshot between these GovCloud (US) Regions and Regions that aren't GovCloud
(US) Regions.
• If you delete a source snapshot before the target snapshot becomes available, the snapshot copy
might fail. Verify that the target snapshot has a status of AVAILABLE before you delete a source
snapshot.
• You can have up to 20 snapshot copy requests in progress to a single destination Region per account.
• When you request multiple snapshot copies for the same source DB instance, they're queued internally.
The copies requested later won't start until the previous snapshot copies are completed. For more
information, see Why is my EC2 AMI or EBS snapshot creation slow? in the AWS Knowledge Center.
• Depending on the AWS Regions involved and the amount of data to be copied, a cross-Region
snapshot copy can take hours to complete. In some cases, there might be a large number of cross-
Region snapshot copy requests from a given source Region. In such cases, Amazon RDS might put
new cross-Region copy requests from that source Region into a queue until some in-progress copies
complete. No progress information is displayed about copy requests while they are in the queue.
Progress information is displayed when the copy starts.
• If a copy is still pending when you start another copy, the second copy doesn't start until the first copy
finishes.
Snapshot retention
Amazon RDS deletes automated backups in several situations:
• At the end of their retention period.
• When you disable automated backups for a DB instance.
• When you delete a DB instance.
If you want to keep an automated backup for a longer period, copy it to create a manual snapshot, which
is retained until you delete it. Amazon RDS storage costs might apply to manual snapshots if they exceed
your default storage space.
For more information about backup storage costs, see Amazon RDS pricing.
You can copy a shared DB snapshot across AWS Regions if the snapshot is unencrypted. However, if the
shared DB snapshot is encrypted, you can only copy it in the same Region.
Note
Copying shared incremental snapshots in the same AWS Region is supported when they're
unencrypted, or encrypted using the same KMS key as the initial full snapshot. If you use a
different KMS key to encrypt subsequent snapshots when copying them, those shared snapshots
are full snapshots. For more information, see Incremental snapshot copying (p. 620).
Handling encryption
You can copy a snapshot that has been encrypted using a KMS key. If you copy an encrypted snapshot,
the copy of the snapshot must also be encrypted. If you copy an encrypted snapshot within the same
AWS Region, you can encrypt the copy with the same KMS key as the original snapshot. Or you can
specify a different KMS key.
If you copy an encrypted snapshot across Regions, you must specify a KMS key valid in the destination
AWS Region. It can be a Region-specific KMS key, or a multi-Region key. For more information on multi-
Region KMS keys, see Using multi-Region keys in AWS KMS.
The source snapshot remains encrypted throughout the copy process. For more information, see
Limitations of Amazon RDS encrypted DB instances (p. 2589).
You can also encrypt a copy of an unencrypted snapshot. This way, you can quickly add encryption to
a previously unencrypted DB instance. To do this, you create a snapshot of your DB instance when you
are ready to encrypt it. You then create a copy of that snapshot and specify a KMS key to encrypt that
snapshot copy. You can then restore an encrypted DB instance from the encrypted snapshot.
Incremental snapshot copying
Whether a snapshot copy is incremental is determined by the most recently completed snapshot copy. If the most recent snapshot copy was deleted, the next copy is a full copy, not an incremental copy.
When you copy a snapshot across AWS accounts, the copy is an incremental copy only if all of the
following conditions are met:
• A different snapshot of the same source DB instance was previously copied to the destination account.
• The most recent snapshot copy still exists in the destination account.
• All copies of the snapshot in the destination account are either unencrypted, or were encrypted using
the same KMS key.
• If the source DB instance is a Multi-AZ instance, it hasn't failed over to another AZ since the last
snapshot was taken from it.
The following examples illustrate the difference between full and incremental snapshots. They apply to
both shared and unshared snapshots.
Snapshot  KMS key  Copy type
S1  K1  Full
S2 K1 Incremental of S1
S3 K1 Incremental of S2
S4 K1 Incremental of S3
Note
In these examples, snapshots S2, S3, and S4 are incremental only if the previous snapshot still
exists.
The same applies to copies. Snapshot copies S3C and S4C are incremental only if the previous
copy still exists.
For information on copying incremental snapshots across AWS Regions, see Full and incremental
copies (p. 624).
Certain conditions in the requester's IAM policy can cause the copy request to fail. The following examples assume that you're copying the DB snapshot from US East (Ohio) to US East (N. Virginia). Each example shows a condition in the requester's IAM policy that causes the request to fail:
...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
    "StringEquals": {
        "aws:RequestedRegion": "us-east-1"
    }
}
The request fails because the policy doesn't allow access to the source Region. For a successful request,
specify both the source and destination Regions.
...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
    "StringEquals": {
        "aws:RequestedRegion": [
            "us-east-1",
            "us-east-2"
        ]
    }
}
...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "arn:aws:rds:us-east-1:123456789012:snapshot:target-snapshot"
...
For a successful request, specify both the source and target snapshots.
...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": [
"arn:aws:rds:us-east-1:123456789012:snapshot:target-snapshot",
"arn:aws:rds:us-east-2:123456789012:snapshot:source-snapshot"
]
...
...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
"Bool": {"aws:ViaAWSService": "false"}
}
Communication with the source Region is made by RDS on the requester's behalf. For a successful
request, don't deny calls made by AWS services.
• The requester's policy has a condition for aws:SourceVpc or aws:SourceVpce.
These requests might fail because when RDS makes the call to the remote Region, it isn't from the
specified VPC or VPC endpoint.
If you need to use one of the previous conditions that would cause a request to fail, you can include a
second statement with aws:CalledVia in your policy to make the request succeed. For example, you
can use aws:CalledVia with aws:SourceVpce as shown here:
...
{
    "Effect": "Allow",
    "Action": "rds:CopyDBSnapshot",
    "Resource": "*",
    "Condition": {
        "ForAnyValue:StringEquals": {
            "aws:SourceVpce": "vpce-1a2b3c4d"
        }
    }
},
{
    "Effect": "Allow",
    "Action": [
        "rds:CopyDBSnapshot"
    ],
    "Resource": "*",
    "Condition": {
        "ForAnyValue:StringEquals": {
            "aws:CalledVia": [
                "rds.amazonaws.com"
            ]
        }
    }
}
For more information, see Policies and permissions in IAM in the IAM User Guide.
RDS doesn't have access to DB snapshots that weren't authorized previously by a CopyDBSnapshot
request. The authorization is revoked when copying completes.
RDS uses the service-linked role to verify the authorization in the source Region. If you delete the
service-linked role during the copy process, the copy fails.
For more information, see Using service-linked roles in the IAM User Guide.
To use the global endpoint, make sure that it's enabled in both Regions involved in the operation. Set the global endpoint to Valid in all AWS Regions in the AWS STS account settings.
For more information, see Managing AWS STS in an AWS Region in the IAM User Guide.
In some cases, there might be a large number of cross-Region snapshot copy requests from a given
source AWS Region. In such cases, Amazon RDS might put new cross-Region copy requests from that
source AWS Region into a queue until some in-progress copies complete. No progress information is
displayed about copy requests while they are in the queue. Progress information is displayed when the
copying starts.
Incremental snapshot copying across AWS Regions is supported for both unencrypted and encrypted
snapshots.
When you copy a snapshot across AWS Regions, the copy is an incremental copy if the following
conditions are met:
For Oracle databases, you can use the AWS CLI or RDS API to copy the custom DB option group from a
snapshot that has been shared with your AWS account. You can only copy option groups within the same
AWS Region. The option group isn't copied if it has already been copied to the destination account and
no changes have been made to it since being copied. If the source option group has been copied before,
but has changed since being copied, RDS copies the new version to the destination account. Default
option groups aren't copied.
When you copy a snapshot across Regions, you can specify a new option group for the snapshot. We
recommend that you prepare the new option group before you copy the snapshot. In the destination
AWS Region, create an option group with the same settings as the original DB instance. If one already
exists in the new AWS Region, you can use that one.
In some cases, you might copy a snapshot and not specify a new option group for the snapshot. In these
cases, when you restore the snapshot the DB instance gets the default option group. To give the new DB
instance the same options as the original, do the following:
1. In the destination AWS Region, create an option group with the same settings as the original DB
instance. If one already exists in the new AWS Region, you can use that one.
2. After you restore the snapshot in the destination AWS Region, modify the new DB instance and add
the new or existing option group from the previous step.
A DB parameter group isn't copied when you copy a snapshot across AWS Regions. To give the restored DB instance the same parameters as the original, do the following:
1. In the destination AWS Region, create a DB parameter group with the same settings as the original DB instance. If one already exists in the new AWS Region, you can use that one.
2. After you restore the snapshot in the destination AWS Region, modify the new DB instance and add
the new or existing parameter group from the previous step.
Copying a DB snapshot
Use the procedures in this topic to copy a DB snapshot. For an overview of copying a snapshot, see Copying a DB snapshot (p. 619).
For each AWS account, you can copy up to 20 DB snapshots at a time from one AWS Region to another.
If you copy a DB snapshot to another AWS Region, you create a manual DB snapshot that is retained in
that AWS Region. Copying a DB snapshot out of the source AWS Region incurs Amazon RDS data transfer
charges.
For more information about data transfer pricing, see Amazon RDS pricing.
After the DB snapshot copy has been created in the new AWS Region, the DB snapshot copy behaves the
same as all other DB snapshots in that AWS Region.
You can copy a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.
Console
The following procedure copies an encrypted or unencrypted DB snapshot, in the same AWS Region or
across Regions, by using the AWS Management Console.
To copy a DB snapshot
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the DB snapshot that you want to copy.
4. For Actions, choose Copy snapshot.
5. For Target option group (optional), choose a new option group if you want.
Specify this option if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a nondefault option group.
If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server,
you must specify this option when copying across Regions. For more information, see Option group
considerations (p. 624).
6. (Optional) To copy the DB snapshot to a different AWS Region, for Destination Region, choose the
new AWS Region.
Note
The destination AWS Region must have the same database engine version available as the
source AWS Region.
7. For New DB snapshot identifier, type the name of the DB snapshot copy.
You can make multiple copies of an automated backup or manual snapshot, but each copy must
have a unique identifier.
8. (Optional) Select Copy Tags to copy tags and values from the snapshot to the copy of the snapshot.
9. (Optional) For Encryption, do the following:
a. Choose Enable Encryption if the DB snapshot isn't encrypted but you want to encrypt the copy.
Note
If the DB snapshot is encrypted, you must encrypt the copy, so the check box is already
selected.
b. For AWS KMS key, specify the KMS key identifier to use to encrypt the DB snapshot copy.
10. Choose Copy snapshot.
AWS CLI
You can copy a DB snapshot by using the AWS CLI command copy-db-snapshot. If you are copying the
snapshot to a new AWS Region, run the command in the new AWS Region.
The following options are used to copy a DB snapshot. Not all options are required for all scenarios. Use
the descriptions and the examples that follow to determine which options to use.
627
Amazon Relational Database Service User Guide
Copying a DB snapshot
• --copy-tags – Include the copy tags option to copy tags and values from the snapshot to the copy of
the snapshot.
• --option-group-name – The option group to associate with the copy of the snapshot.
Specify this option if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a non-default option group.
If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server,
you must specify this option when copying across Regions. For more information, see Option group
considerations (p. 624).
• --kms-key-id – The KMS key identifier for an encrypted DB snapshot. The KMS key identifier is the
Amazon Resource Name (ARN), key identifier, or key alias for the KMS key.
• If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this
parameter to encrypt the copy with a new KMS key. If you don't specify a value for this parameter,
then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.
• If you copy an encrypted DB snapshot that is shared from another AWS account, then you must
specify a value for this parameter.
• If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.
• If you copy an encrypted snapshot to a different AWS Region, then you must specify a KMS key for
the destination AWS Region. KMS keys are specific to the AWS Region that they are created in, and
you cannot use encryption keys from one AWS Region in another AWS Region.
The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the same
AWS Region as the source snapshot. When the copy is made, the DB option group and tags on the
original snapshot are copied to the snapshot copy.
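A sketch of this command, using the placeholder snapshot names that appear in this guide's other examples:

```shell
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier mysql-instance1-snapshot-20130805 \
    --target-db-snapshot-identifier mydbsnapshotcopy \
    --copy-tags
```

For Windows, replace each trailing backslash (\) with a caret (^).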
The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the AWS
Region in which the command is run.
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mysql-instance1-snapshot-20130805 \
    --target-db-snapshot-identifier mydbsnapshotcopy
The --source-region parameter is required when you're copying an encrypted snapshot between the
AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the
AWS Region of the source DB instance.
If --source-region isn't specified, specify a --pre-signed-url value. A presigned URL is a URL that
contains a Signature Version 4 signed request for the copy-db-snapshot command that's called in the
source AWS Region. To learn more about the pre-signed-url option, see copy-db-snapshot in the
AWS CLI Command Reference.
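As a sketch, an encrypted cross-Region copy run in the destination Region (us-east-1 here) might look like the following; the source snapshot ARN, KMS key alias, and option group name are placeholders:

```shell
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20161115 \
    --target-db-snapshot-identifier mydbsnapshotcopy \
    --kms-key-id my-us-east-1-key \
    --option-group-name custom-option-group-name \
    --region us-east-1
```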
RDS API
You can copy a DB snapshot by using the Amazon RDS API operation CopyDBSnapshot. If you are
copying the snapshot to a new AWS Region, perform the action in the new AWS Region.
The following parameters are used to copy a DB snapshot. Not all parameters are required for all
scenarios. Use the descriptions and the examples that follow to determine which parameters to use.
• SourceDBSnapshotIdentifier – The identifier for the source DB snapshot.
• If you are copying from a shared manual DB snapshot, this parameter must be the Amazon Resource
Name (ARN) of the shared DB snapshot.
• If you are copying an encrypted snapshot this parameter must be in the ARN format for the
source AWS Region, and must match the SourceDBSnapshotIdentifier in the PreSignedUrl
parameter.
• TargetDBSnapshotIdentifier – The identifier for the new copy of the DB snapshot.
• CopyOptionGroup – Set this parameter to true to copy the option group from a shared snapshot to
the copy of the snapshot. The default is false.
• CopyTags – Set this parameter to true to copy tags and values from the snapshot to the copy of the
snapshot. The default is false.
• OptionGroupName – The option group to associate with the copy of the snapshot.
Specify this parameter if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a non-default option group.
If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server, you
must specify this parameter when copying across Regions. For more information, see Option group
considerations (p. 624).
• KmsKeyId – The KMS key identifier for an encrypted DB snapshot. The KMS key identifier is the
Amazon Resource Name (ARN), key identifier, or key alias for the KMS key.
• If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this
parameter to encrypt the copy with a new KMS key. If you don't specify a value for this parameter,
then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.
• If you copy an encrypted DB snapshot that is shared from another AWS account, then you must
specify a value for this parameter.
• If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.
• If you copy an encrypted snapshot to a different AWS Region, then you must specify a KMS key for
the destination AWS Region. KMS keys are specific to the AWS Region that they are created in, and
you cannot use encryption keys from one AWS Region in another AWS Region.
• PreSignedUrl – The URL that contains a Signature Version 4 signed request for the
CopyDBSnapshot API operation in the source AWS Region that contains the source DB snapshot to
copy.
Specify this parameter when you copy an encrypted DB snapshot from another AWS Region by using
the Amazon RDS API. You can specify the source Region option instead of this parameter when you
copy an encrypted DB snapshot from another AWS Region by using the AWS CLI.
The presigned URL must be a valid request for the CopyDBSnapshot API operation that can be run in
the source AWS Region containing the encrypted DB snapshot to be copied. The presigned URL request
must contain the following parameter values:
• DestinationRegion – The AWS Region that the encrypted DB snapshot will be copied to. This
AWS Region is the same one where the CopyDBSnapshot operation is called that contains this
presigned URL.
For example, suppose that you copy an encrypted DB snapshot from the us-west-2 Region to the us-
east-1 Region. You then call the CopyDBSnapshot operation in the us-east-1 Region and provide a
presigned URL that contains a call to the CopyDBSnapshot operation in the us-west-2 Region. For
this example, the DestinationRegion in the presigned URL must be set to the us-east-1 Region.
• KmsKeyId – The KMS key identifier for the key to use to encrypt the copy of the DB snapshot in the
destination AWS Region. This is the same identifier for both the CopyDBSnapshot operation that is
called in the destination AWS Region, and the operation contained in the presigned URL.
• SourceDBSnapshotIdentifier – The DB snapshot identifier for the encrypted snapshot to be
copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS
Region. For example, if you are copying an encrypted DB snapshot from the us-west-2 Region, your
SourceDBSnapshotIdentifier looks similar to
arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20161115.
For more information on Signature Version 4 signed requests, see the following:
• Authenticating requests: Using query parameters (AWS signature version 4) in the Amazon Simple
Storage Service API Reference
• Signature version 4 signing process in the AWS General Reference
The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the same
AWS Region as the source snapshot. When the copy is made, all tags on the original snapshot are copied
to the snapshot copy.
https://rds.us-west-1.amazonaws.com/
?Action=CopyDBSnapshot
&CopyTags=true
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=mysql-instance1-snapshot-20130805
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2013-09-09
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20140429/us-west-1/rds/aws4_request
&X-Amz-Date=20140429T175351Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=9164337efa99caf850e874a1cb7ef62f3cea29d0b448b9e0e7c53b288ddffed2
The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the US
West (N. California) Region.
https://rds.us-west-1.amazonaws.com/
?Action=CopyDBSnapshot
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=arn%3Aaws%3Ards%3Aus-east-1%3A123456789012%3Asnapshot%3Amysql-
instance1-snapshot-20130805
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2013-09-09
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20140429/us-west-1/rds/aws4_request
&X-Amz-Date=20140429T175351Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=9164337efa99caf850e874a1cb7ef62f3cea29d0b448b9e0e7c53b288ddffed2
The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the US East
(N. Virginia) Region.
https://rds.us-east-1.amazonaws.com/
?Action=CopyDBSnapshot
&KmsKeyId=my-us-east-1-key
&OptionGroupName=custom-option-group-name
&PreSignedUrl=https%253A%252F%252Frds.us-west-2.amazonaws.com%252F
%253FAction%253DCopyDBSnapshot
%2526DestinationRegion%253Dus-east-1
%2526KmsKeyId%253Dmy-us-east-1-key
%2526SourceDBSnapshotIdentifier%253Darn%25253Aaws%25253Ards%25253Aus-
west-2%25253A123456789012%25253Asnapshot%25253Amysql-instance1-snapshot-20161115
%2526SignatureMethod%253DHmacSHA256
%2526SignatureVersion%253D4
%2526Version%253D2014-10-31
%2526X-Amz-Algorithm%253DAWS4-HMAC-SHA256
%2526X-Amz-Credential%253DAKIADQKE4SARGYLE%252F20161117%252Fus-west-2%252Frds
%252Faws4_request
%2526X-Amz-Date%253D20161117T215409Z
%2526X-Amz-Expires%253D3600
%2526X-Amz-SignedHeaders%253Dcontent-type%253Bhost%253Buser-agent%253Bx-amz-
content-sha256%253Bx-amz-date
%2526X-Amz-Signature
%253D255a0f17b4e717d3b67fad163c3ec26573b882c03a65523522cf890a67fca613
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=arn%3Aaws%3Ards%3Aus-west-2%3A123456789012%3Asnapshot
%3Amysql-instance1-snapshot-20161115
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2014-10-31
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
&X-Amz-Date=20161117T221704Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=da4f2da66739d2e722c85fcfd225dc27bba7e2b8dbea8d8612434378e52adccf
Sharing a DB snapshot
Using Amazon RDS, you can share a manual DB snapshot in the following ways:
• Sharing a manual DB snapshot, whether encrypted or unencrypted, enables authorized AWS accounts
to copy the snapshot.
• Sharing an unencrypted manual DB snapshot enables authorized AWS accounts to directly restore a
DB instance from the snapshot instead of taking a copy of it and restoring from that. However, you
can't restore a DB instance from a DB snapshot that is both shared and encrypted. Instead, you can
make a copy of the DB snapshot and restore the DB instance from the copy.
Note
To share an automated DB snapshot, create a manual DB snapshot by copying the automated
snapshot, and then share that copy. This process also applies to AWS Backup–generated
resources.
For more information on copying a snapshot, see Copying a DB snapshot (p. 619). For more information
on restoring a DB instance from a DB snapshot, see Restoring from a DB snapshot (p. 615).
The following limitations apply when sharing manual snapshots with other AWS accounts:
• When you restore a DB instance from a shared snapshot using the AWS Command Line Interface (AWS
CLI) or Amazon RDS API, you must specify the Amazon Resource Name (ARN) of the shared snapshot
as the snapshot identifier.
• You can't share a DB snapshot that uses an option group with permanent or persistent options, except
for Oracle DB instances that have the Timezone or OLS option (or both).
A permanent option can't be removed from an option group. Option groups with persistent options
can't be removed from a DB instance once the option group has been assigned to the DB instance.
The following table lists permanent and persistent options and their related DB engines.
For Oracle DB instances, you can copy shared DB snapshots that have the Timezone or OLS option
(or both). To do so, specify a target option group that includes these options when you copy the DB
snapshot. The OLS option is permanent and persistent only for Oracle DB instances running Oracle
version 12.2 or higher. For more information about these options, see Oracle time zone (p. 2087) and
Oracle Label Security (p. 2049).
When a snapshot is shared publicly, it gives all AWS accounts permission both to copy the snapshot and
to create DB instances from it.
You aren't billed for the backup storage of public snapshots owned by other accounts. You're billed only
for snapshots that you own.
If you copy a public snapshot, you own the copy. You're billed for the backup storage of your snapshot
copy. If you create a DB instance from a public snapshot, you're billed for that DB instance. For Amazon
RDS pricing information, see the Amazon RDS product page.
You can delete only the public snapshots that you own. To delete a shared or public snapshot, make sure
to log into the AWS account that owns the snapshot.
The public snapshots appear. You can see which account owns a public snapshot in the Owner
column.
Note
You might have to modify the page preferences, by selecting the gear icon at the upper
right of the Public snapshots list, to see this column.
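From the AWS CLI, a sketch of the equivalent query uses describe-db-snapshots with the public snapshot type:

```shell
aws rds describe-db-snapshots \
    --snapshot-type public \
    --include-public
```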
The output returned is similar to the following example if you have public snapshots.
"DBSnapshotArn": "arn:aws:rds:us-east-1:123456789012:snapshot:mysnapshot1",
"DBSnapshotArn": "arn:aws:rds:us-east-1:123456789012:snapshot:mysnapshot2",
Note
You might see duplicate entries for DBSnapshotIdentifier or
SourceDBSnapshotIdentifier.
To share an encrypted DB snapshot, take the following steps:
1. Share the AWS KMS key that was used to encrypt the snapshot with any accounts that you want to be
able to access the snapshot.
You can share KMS keys with another AWS account by adding the other account to the KMS key policy.
For details on updating a key policy, see Key policies in the AWS KMS Developer Guide. For an example
of creating a key policy, see Allowing access to an AWS KMS key (p. 635) later in this topic.
2. Use the AWS Management Console, AWS CLI, or Amazon RDS API to share the encrypted snapshot
with the other accounts.
To allow another AWS account access to a KMS key, update the key policy for the KMS key. You update it
with the Amazon Resource Name (ARN) of the AWS account that you are sharing to as Principal in the
KMS key policy. Then you allow the kms:CreateGrant action.
After you have given an AWS account access to your KMS key, that AWS account must have an AWS
Identity and Access Management (IAM) user or role to copy your encrypted snapshot, and must create
one if it doesn't already have one. In addition, that AWS account must attach an IAM policy to the user
or role that allows it to copy an encrypted DB snapshot using your KMS key. The principal must be an
IAM user or role and can't be the root AWS account identity, because of AWS KMS security restrictions.
In the following key policy example, user 111122223333 is the owner of the KMS key, and user
444455556666 is the account that the key is being shared with. This updated key policy gives the
AWS account access to the KMS key by including the ARN for the root AWS account identity for user
444455556666 as a Principal for the policy, and by allowing the kms:CreateGrant action.
{
"Id": "key-policy-1",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:CreateGrant",
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*",
"Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}
}
]
}
After the external AWS account has access to your KMS key, the owner of that AWS account can create
a policy that allows an IAM user in that account to copy a snapshot encrypted with that KMS key.
The following example shows a policy that can be attached to an IAM user for AWS account
444455556666 that enables the IAM user to copy a shared snapshot from AWS account 111122223333
that has been encrypted with the KMS key c989c1dd-a3f2-4a5d-8d96-e793d082ab26 in the us-
west-2 region.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUseOfTheKey",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey",
"kms:CreateGrant",
"kms:RetireGrant"
],
"Resource": ["arn:aws:kms:us-west-2:111122223333:key/c989c1dd-a3f2-4a5d-8d96-
e793d082ab26"]
},
{
"Sid": "AllowAttachmentOfPersistentResources",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": ["arn:aws:kms:us-west-2:111122223333:key/c989c1dd-a3f2-4a5d-8d96-
e793d082ab26"],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true
}
}
}
]
}
For details on updating a key policy, see Key policies in the AWS KMS Developer Guide.
Sharing a snapshot
You can share a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.
Console
Using the Amazon RDS console, you can share a manual DB snapshot with up to 20 AWS accounts. You
can also use the console to stop sharing a manual snapshot with one or more accounts.
To share a DB snapshot
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the manual snapshot that you want to share.
4. For Actions, choose Share snapshot.
5. Choose one of the following options for DB snapshot visibility.
• If the source is unencrypted, choose Public to permit all AWS accounts to restore a DB instance
from your manual DB snapshot, or choose Private to permit only AWS accounts that you specify
to restore a DB instance from your manual DB snapshot.
Warning
If you set DB snapshot visibility to Public, all AWS accounts can restore a DB instance
from your manual DB snapshot and have access to your data. Do not share any manual
DB snapshots that contain private information as Public.
• If the source is encrypted, DB snapshot visibility is set as Private because encrypted snapshots
can't be shared as public.
6. For AWS Account ID, type the AWS account identifier for an account that you want to permit
to restore a DB instance from your manual snapshot, and then choose Add. Repeat to include
additional AWS account identifiers, up to 20 AWS accounts.
If you make an error when adding an AWS account identifier to the list of permitted accounts, you
can delete it from the list by choosing Delete at the right of the incorrect AWS account identifier.
7. After you have added identifiers for all of the AWS accounts that you want to permit to restore the
manual snapshot, choose Save to save your changes.
To stop sharing a manual DB snapshot with an AWS account
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the manual snapshot that you want to stop sharing.
4. Choose Actions, and then choose Share snapshot.
5. To remove permission for an AWS account, choose Delete for the AWS account identifier for that
account from the list of authorized accounts.
AWS CLI
To share a DB snapshot, use the aws rds modify-db-snapshot-attribute command. Use the --
values-to-add parameter to add a list of the IDs for the AWS accounts that are authorized to restore
the manual snapshot.
The following example enables AWS account identifier 123456789012 to restore the DB snapshot
named db7-snapshot.
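A sketch of this command:

```shell
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier db7-snapshot \
    --attribute-name restore \
    --values-to-add 123456789012
```

For Windows, replace each trailing backslash (\) with a caret (^).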
The following example enables two AWS account identifiers, 111122223333 and 444455556666, to
restore the DB snapshot named manual-snapshot1.
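A sketch of this command, passing both account IDs as a space-separated list:

```shell
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier manual-snapshot1 \
    --attribute-name restore \
    --values-to-add 111122223333 444455556666
```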
Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).
To remove an AWS account identifier from the list, use the --values-to-remove parameter.
The following example prevents AWS account ID 444455556666 from restoring the snapshot.
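A sketch of this command:

```shell
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier manual-snapshot1 \
    --attribute-name restore \
    --values-to-remove 444455556666
```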
To list the AWS accounts enabled to restore a snapshot, use the describe-db-snapshot-attributes
AWS CLI command.
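For example, using the snapshot from the previous examples:

```shell
aws rds describe-db-snapshot-attributes \
    --db-snapshot-identifier manual-snapshot1
```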
RDS API
You can also share a manual DB snapshot with other AWS accounts by using the Amazon RDS API. To do
so, call the ModifyDBSnapshotAttribute operation. Specify restore for AttributeName, and use
the ValuesToAdd parameter to add a list of the IDs for the AWS accounts that are authorized to restore
the manual snapshot.
To make a manual snapshot public and restorable by all AWS accounts, use the value all. However,
take care not to add the all value for any manual snapshots that contain private information that you
don't want to be available to all AWS accounts. Also, don't specify all for encrypted snapshots, because
making such snapshots public isn't supported.
To remove sharing permission for an AWS account, use the ModifyDBSnapshotAttribute operation
with AttributeName set to restore and the ValuesToRemove parameter. To mark a manual
snapshot as private, remove the value all from the values list for the restore attribute.
To list all of the AWS accounts permitted to restore a snapshot, use the
DescribeDBSnapshotAttributes API operation.
Exporting DB snapshot data to Amazon S3
When you export a DB snapshot, Amazon RDS extracts data from the snapshot and stores it in an
Amazon S3 bucket. The data is stored in an Apache Parquet format that is compressed and consistent.
You can export all types of DB snapshots—including manual snapshots, automated system snapshots,
and snapshots created by the AWS Backup service. By default, all data in the snapshot is exported.
However, you can choose to export specific sets of databases, schemas, or tables.
After the data is exported, you can analyze the exported data directly through tools like Amazon Athena
or Amazon Redshift Spectrum. For more information on using Athena to read Parquet data, see Parquet
SerDe in the Amazon Athena User Guide. For more information on using Redshift Spectrum to read
Parquet data, see COPY from columnar data formats in the Amazon Redshift Database Developer Guide.
Topics
• Region and version availability (p. 642)
• Limitations (p. 642)
• Overview of exporting snapshot data (p. 643)
• Setting up access to an Amazon S3 bucket (p. 644)
• Exporting a DB snapshot to an Amazon S3 bucket (p. 647)
• Monitoring snapshot exports (p. 650)
• Canceling a snapshot export task (p. 651)
• Failure messages for Amazon S3 export tasks (p. 652)
• Troubleshooting PostgreSQL permissions errors (p. 653)
• File naming convention (p. 653)
• Data conversion when exporting to an Amazon S3 bucket (p. 654)
Limitations
Exporting DB snapshot data to Amazon S3 has the following limitations:
• You can't run multiple export tasks for the same DB snapshot simultaneously. This applies to both full
and partial exports.
• Exporting snapshots from DB instances that use magnetic storage isn't supported.
• The following characters in the S3 file path are converted to underscores (_) during export:
\ ` " (space)
• If a database, schema, or table has characters in its name other than the following, partial export isn't
supported. However, you can export the entire DB snapshot.
• Latin letters (A–Z)
• Digits (0–9)
• Dollar symbol ($)
• Underscore (_)
• Spaces ( ) and certain characters aren't supported in database table column names. Tables with the
following characters in column names are skipped during export:
, ; { } ( ) \n \t = (space)
• Tables with slashes (/) in their names are skipped during export.
• RDS for PostgreSQL temporary and unlogged tables are skipped during export.
• If the data contains a large object, such as a BLOB or CLOB, that is close to or greater than 500 MB,
then the export fails.
• If a table contains a large row that is close to or greater than 2 GB, then the table is skipped during
export.
• We strongly recommend that you use a unique name for each export task. If you don't use a unique
task name, you might receive an error message indicating that the export task already exists.
A bucket is a container for Amazon S3 objects or files. To provide the information to access a bucket,
take the following steps:
a. Identify the S3 bucket where the snapshot is to be exported to. The S3 bucket must be in the
same AWS Region as the snapshot. For more information, see Identifying the Amazon S3 bucket
for export (p. 644).
b. Create an AWS Identity and Access Management (IAM) role that grants the snapshot export task
access to the S3 bucket. For more information, see Providing access to an Amazon S3 bucket
using an IAM role (p. 644).
3. Create a symmetric encryption AWS KMS key for the server-side encryption. The KMS key is used by
the snapshot export task to set up AWS KMS server-side encryption when writing the export data
to S3. The KMS key policy must include both the kms:Encrypt and kms:Decrypt permissions. For
more information on using KMS keys in Amazon RDS, see AWS KMS key management (p. 2589).
If you have a deny statement in your KMS key policy, make sure to explicitly exclude the AWS service
principal export.rds.amazonaws.com.
You can use a KMS key within your AWS account, or you can use a cross-account KMS key. For more
information, see Using a cross-account AWS KMS key for encrypting Amazon S3 exports (p. 646).
4. Export the snapshot to Amazon S3 using the console or the start-export-task CLI command.
For more information, see Exporting a DB snapshot to an Amazon S3 bucket (p. 647).
5. To access your exported data in the Amazon S3 bucket, see Uploading, downloading, and managing
objects in the Amazon Simple Storage Service User Guide.
Topics
• Identifying the Amazon S3 bucket for export (p. 644)
• Providing access to an Amazon S3 bucket using an IAM role (p. 644)
• Using a cross-account Amazon S3 bucket (p. 646)
• Using a cross-account AWS KMS key for encrypting Amazon S3 exports (p. 646)
For more information about working with Amazon S3 buckets, see the following in the Amazon Simple
Storage Service User Guide:
To grant this permission, create an IAM policy that provides access to the bucket, then create an IAM role
and attach the policy to the role. You later assign the IAM role to your snapshot export task.
Important
If you plan to use the AWS Management Console to export your snapshot, you can choose to
create the IAM policy and the role automatically when you export the snapshot. For instructions,
see Exporting a DB snapshot to an Amazon S3 bucket (p. 647).
1. Create an IAM policy. This policy provides the bucket and object permissions that allow your
snapshot export task to access Amazon S3.
In the policy, include the following required actions to allow the transfer of files from Amazon RDS
to an S3 bucket:
• s3:PutObject*
• s3:GetObject*
• s3:ListBucket
• s3:DeleteObject*
• s3:GetBucketLocation
In the policy, include the following resources to identify the S3 bucket and objects in the bucket. The
following list of resources shows the Amazon Resource Name (ARN) format for accessing Amazon S3.
• arn:aws:s3:::your-s3-bucket
• arn:aws:s3:::your-s3-bucket/*
For more information on creating an IAM policy for Amazon RDS, see Creating and using an IAM
policy for IAM database access (p. 2646). See also Tutorial: Create and attach your first customer
managed policy in the IAM User Guide.
The following AWS CLI command creates an IAM policy named ExportPolicy with these options. It
grants access to a bucket named your-s3-bucket.
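A sketch of this command; the policy document uses the actions and resource ARNs listed above, and your-s3-bucket is a placeholder:

```shell
aws iam create-policy \
    --policy-name ExportPolicy \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ExportPolicy",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject*",
                    "s3:ListBucket",
                    "s3:GetObject*",
                    "s3:DeleteObject*",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::your-s3-bucket",
                    "arn:aws:s3:::your-s3-bucket/*"
                ]
            }
        ]
    }'
```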
Note
After you create the policy, note the ARN of the policy. You need the ARN for a subsequent
step when you attach the policy to an IAM role.
2. Create an IAM role, so that Amazon RDS can assume this IAM role on your behalf to access your
Amazon S3 buckets. For more information, see Creating a role to delegate permissions to an IAM
user in the IAM User Guide.
The following example shows how to use the AWS CLI to create a role named rds-s3-export-role.
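A sketch of this command; the trust policy allows the RDS export service principal (export.rds.amazonaws.com, noted earlier in this topic) to assume the role:

```shell
aws iam create-role \
    --role-name rds-s3-export-role \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "export.rds.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }
        ]
    }'
```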
3. Attach the IAM policy that you created to the IAM role that you created.
The following AWS CLI command attaches the policy created earlier to the role named rds-s3-
export-role. Replace your-policy-arn with the policy ARN that you noted in an earlier step.
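A sketch of this command:

```shell
aws iam attach-role-policy \
    --policy-arn your-policy-arn \
    --role-name rds-s3-export-role
```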
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:role/Admin"
},
"Action": [
"s3:PutObject*",
"s3:ListBucket",
"s3:GetObject*",
"s3:DeleteObject*",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::mycrossaccountbucket",
"arn:aws:s3:::mycrossaccountbucket/*"
]
}
]
}
The following example gives ExampleRole and ExampleUser in the external account
444455556666 permissions in the local account 123456789012.
{
"Sid": "Allow an external account to use this KMS key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::444455556666:role/ExampleRole",
"arn:aws:iam::444455556666:user/ExampleUser"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:CreateGrant",
"kms:DescribeKey",
"kms:RetireGrant"
],
"Resource": "*"
}
The following example IAM policy allows the principal to use the KMS key in account 123456789012
for cryptographic operations. To give this permission to ExampleRole and ExampleUser in
account 444455556666, attach the policy to them in that account.
{
    "Sid": "Allow use of KMS key in account 123456789012",
    "Effect": "Allow",
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:DescribeKey",
        "kms:RetireGrant"
    ],
    "Resource": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
You can export a DB snapshot to Amazon S3 using the AWS Management Console, the AWS CLI, or the
RDS API.
If you use a Lambda function to export a snapshot, add the kms:DescribeKey action to the Lambda
function policy. For more information, see AWS Lambda permissions.
Console
The Export to Amazon S3 console option appears only for snapshots that can be exported to Amazon
S3. A snapshot might not be available for export, for example, because its DB engine or engine
version isn't supported for export, or because an export of the snapshot is already in progress.
To export a DB snapshot
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. From the tabs, choose the type of snapshot that you want to export.
4. In the list of snapshots, choose the snapshot that you want to export.
5. For Actions, choose Export to Amazon S3.
To assign the exported data to a folder path in the S3 bucket, enter the optional path for S3 prefix.
9. For IAM role, either choose a role that grants you write access to your chosen S3 bucket, or create a
new role.
• If you created a role by following the steps in Providing access to an Amazon S3 bucket using an
IAM role (p. 644), choose that role.
• If you didn't create a role that grants you write access to your chosen S3 bucket, then choose
Create a new role to create the role automatically. Next, enter a name for the role in IAM role
name.
10. For AWS KMS key, enter the ARN for the key to use for encrypting the exported data.
11. Choose Export to Amazon S3.
AWS CLI
To export a DB snapshot to Amazon S3 using the AWS CLI, use the start-export-task command with the
following required options:
• --export-task-identifier
• --source-arn
• --s3-bucket-name
• --iam-role-arn
• --kms-key-id
In the following examples, the snapshot export task is named my-snapshot-export, which exports a
snapshot to an S3 bucket named my-export-bucket.
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
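The call can be sketched as follows; the values (my-snapshot-export, my-export-bucket, iam-role, my-key, and the source ARN) are the placeholders that appear in the sample response:

```shell
# Start a snapshot export task; the identifiers match the sample response below.
aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn iam-role \
    --kms-key-id my-key
```

The command returns a description of the new export task, like the JSON that follows.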
{
    "Status": "STARTING",
    "IamRoleArn": "iam-role",
    "ExportTime": "2019-08-12T01:23:53.109Z",
    "S3Bucket": "my-export-bucket",
    "PercentProgress": 0,
    "KmsKeyId": "my-key",
    "ExportTaskIdentifier": "my-snapshot-export",
    "TotalExtractedDataInGB": 0,
    "TaskStartTime": "2019-11-13T19:46:00.173Z",
    "SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name"
}
To provide a folder path in the S3 bucket for the snapshot export, include the --s3-prefix option in
the start-export-task command.
RDS API
To export a DB snapshot to Amazon S3 using the Amazon RDS API, use the StartExportTask operation
with the following required parameters:
• ExportTaskIdentifier
• SourceArn
• S3BucketName
• IamRoleArn
• KmsKeyId
Monitoring snapshot exports
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. To view the list of snapshot exports, choose the Exports in Amazon S3 tab.
4. To view information about a specific snapshot export, choose the export task.
AWS CLI
To monitor DB snapshot exports using the AWS CLI, use the describe-export-tasks command.
The following example shows how to display current information about all of your snapshot exports.
Example
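Calling the command with no filters returns every export task, as in the sample response that follows:

```shell
# List all of your snapshot export tasks.
aws rds describe-export-tasks
```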
{
    "ExportTasks": [
        {
            "Status": "CANCELED",
            "TaskEndTime": "2019-11-01T17:36:46.961Z",
            "S3Prefix": "something",
            "ExportTime": "2019-10-24T20:23:48.364Z",
            "S3Bucket": "examplebucket",
            "PercentProgress": 0,
            "KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/K7MDENG/bPxRfiCYEXAMPLEKEY",
            "ExportTaskIdentifier": "anewtest",
            "IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
            "TotalExtractedDataInGB": 0,
            "TaskStartTime": "2019-10-25T19:10:58.885Z",
            "SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:parameter-groups-test"
        },
        {
            "Status": "COMPLETE",
            "TaskEndTime": "2019-10-31T21:37:28.312Z",
            "WarningMessage": "{\"skippedTables\":[],\"skippedObjectives\":[],\"general\":[{\"reason\":\"FAILED_TO_EXTRACT_TABLES_LIST_FOR_DATABASE\"}]}",
            "S3Prefix": "",
            "ExportTime": "2019-10-31T06:44:53.452Z",
            "S3Bucket": "examplebucket1",
            "PercentProgress": 100,
            "KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/2Zp9Utk/h3yCo8nvbEXAMPLEKEY",
            "ExportTaskIdentifier": "thursday-events-test",
            "IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
            "TotalExtractedDataInGB": 263,
            "TaskStartTime": "2019-10-31T20:58:06.998Z",
            "SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:rds:example-1-2019-10-31-06-44"
        },
        {
            "Status": "FAILED",
            "TaskEndTime": "2019-10-31T02:12:36.409Z",
            "FailureCause": "The S3 bucket edgcuc-export isn't located in the current AWS Region. Please, review your S3 bucket name and retry the export.",
            "S3Prefix": "",
            "ExportTime": "2019-10-30T06:45:04.526Z",
            "S3Bucket": "examplebucket2",
            "PercentProgress": 0,
            "KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/2Zp9Utk/h3yCo8nvbEXAMPLEKEY",
            "ExportTaskIdentifier": "wednesday-afternoon-test",
            "IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
            "TotalExtractedDataInGB": 0,
            "TaskStartTime": "2019-10-30T22:43:40.034Z",
            "SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:rds:example-1-2019-10-30-06-45"
        }
    ]
}
RDS API
To display information about DB snapshot exports using the Amazon RDS API, use the
DescribeExportTasks operation.
To track completion of the export workflow or to initiate another workflow, you can subscribe to Amazon
Simple Notification Service topics. For more information on Amazon SNS, see Working with Amazon RDS
event notification (p. 855).
Canceling a snapshot export task
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Exports in Amazon S3 tab.
4. Choose the snapshot export task that you want to cancel.
5. Choose Cancel.
6. Choose Cancel export task on the confirmation page.
AWS CLI
To cancel a snapshot export task using the AWS CLI, use the cancel-export-task command. The command
requires the --export-task-identifier option.
Example
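A sketch of the call, assuming a task named my-snapshot-export (a placeholder, not a value fixed by this guide):

```shell
# Cancel the in-progress snapshot export task named my-snapshot-export.
aws rds cancel-export-task \
    --export-task-identifier my-snapshot-export
```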
RDS API
To cancel a snapshot export task using the Amazon RDS API, use the CancelExportTask operation with
the ExportTaskIdentifier parameter.
The following failure messages can occur for snapshot export tasks:

Error message: An unknown internal error occurred.
Description: The task has failed because of an unknown error, exception, or failure.

Error message: An unknown internal error occurred writing the export task's metadata to the S3 bucket [bucket name].
Description: The task has failed because of an unknown error, exception, or failure.

Error message: The RDS export failed to write the export task's metadata because it can't assume the IAM role [role ARN].
Description: The export task assumes your IAM role to validate whether it is allowed to write metadata to your S3 bucket. If the task can't assume your IAM role, it fails.

Error message: The RDS export failed to write the export task's metadata to the S3 bucket [bucket name] using the IAM role [role ARN] with the KMS key [key ID]. Error code: [error code]
Description: One or more permissions are missing, so the export task can't access the S3 bucket. This failure message is raised when receiving one of the following error codes:
• AWSSecurityTokenServiceException with the error code AccessDenied
• AmazonS3Exception with the error code NoSuchBucket, AccessDenied, KMS.KMSInvalidStateException, 403 Forbidden, or KMS.DisabledException

Error message: The IAM role [role ARN] isn't authorized to call [S3 action] on the S3 bucket [bucket name]. Review your permissions and retry the export.
Description: The IAM policy is misconfigured. Permission for the specific S3 action on the S3 bucket is missing, which causes the export task to fail.

Error message: KMS key check failed. Check the credentials on your KMS key and try again.
Description: The KMS key credential check failed.

Error message: The S3 bucket [bucket name] isn't located in the current AWS Region. Review your S3 bucket name and retry the export.
Description: The S3 bucket is in the wrong AWS Region.
For more information on superuser privileges, see Master user account privileges (p. 2682).
Exported data is stored in your S3 bucket under a base prefix with the following structure:
export_identifier/database_name/schema_name.table_name/
For example:
export-1234567890123-459/rdststdb/rdststdb.DataInsert_7ADB5D19965123A2/
There are two conventions for how files are named. The current convention is the following:
partition_index/part-00000-random_uuid.format-based_extension
For example:
1/part-00000-c5a881bb-58ff-4ee6-1111-b41ecff340a3-c000.gz.parquet
2/part-00000-d7a881cc-88cc-5ab7-2222-c41ecab340a4-c000.gz.parquet
3/part-00000-f5a991ab-59aa-7fa6-3333-d41eccd340a7-c000.gz.parquet
The older convention is the following:
part-partition_index-random_uuid.format-based_extension
For example:
part-00000-c5a881bb-58ff-4ee6-1111-b41ecff340a3-c000.gz.parquet
part-00001-d7a881cc-88cc-5ab7-2222-c41ecab340a4-c000.gz.parquet
part-00002-f5a991ab-59aa-7fa6-3333-d41eccd340a7-c000.gz.parquet
The file naming convention is subject to change. Therefore, when reading target tables, we recommend
that you read everything inside the base prefix for the table.
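One way to follow this recommendation is to list or copy the entire base prefix rather than individual part files; the bucket and prefix here reuse the placeholder names shown earlier:

```shell
# List every file under the table's base prefix, regardless of which
# file naming convention the export used.
aws s3 ls --recursive \
    s3://my-export-bucket/export-1234567890123-459/rdststdb/rdststdb.DataInsert_7ADB5D19965123A2/
```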
Data in Parquet files is stored using the following primitive types:
• BOOLEAN
• INT32
• INT64
• INT96
• FLOAT
• DOUBLE
• BYTE_ARRAY – A variable-length byte array, also known as binary
• FIXED_LEN_BYTE_ARRAY – A fixed-length byte array used when the values have a constant size
The Parquet data types are few to reduce the complexity of reading and writing the format. Parquet
provides logical types for extending primitive types. A logical type is implemented as an annotation with
the data in a LogicalType metadata field. The logical type annotation explains how to interpret the
primitive type.
When the STRING logical type annotates a BYTE_ARRAY type, it indicates that the byte array should be
interpreted as a UTF-8 encoded character string. After an export task completes, Amazon RDS notifies
you if any string conversion occurred. The underlying data exported is always the same as the data from
the source. However, due to the encoding difference in UTF-8, some characters might appear different
from the source when read in tools such as Athena.
For more information, see Parquet logical type definitions in the Parquet documentation.
Topics
• MySQL and MariaDB data type mapping to Parquet (p. 654)
• PostgreSQL data type mapping to Parquet (p. 657)
MySQL and MariaDB data type mapping to Parquet

Source data type       Parquet primitive type   Logical type annotation   Conversion notes
BIGINT                 INT64
BIT                    BYTE_ARRAY
DOUBLE                 DOUBLE
FLOAT                  DOUBLE
INT                    INT32
MEDIUMINT              INT32
SMALLINT               INT32
TINYINT                INT32
BINARY                 BYTE_ARRAY
BLOB                   BYTE_ARRAY
CHAR                   BYTE_ARRAY
LINESTRING             BYTE_ARRAY
LONGBLOB               BYTE_ARRAY
MEDIUMBLOB             BYTE_ARRAY
MULTILINESTRING        BYTE_ARRAY
TINYBLOB               BYTE_ARRAY
VARBINARY              BYTE_ARRAY
YEAR                   INT32
GEOMETRY               BYTE_ARRAY
GEOMETRYCOLLECTION     BYTE_ARRAY
MULTIPOINT             BYTE_ARRAY
MULTIPOLYGON           BYTE_ARRAY
POINT                  BYTE_ARRAY
POLYGON                BYTE_ARRAY
PostgreSQL data type mapping to Parquet

PostgreSQL data type   Parquet primitive type   Logical type annotation   Mapping notes
BIGINT                 INT64
BIGSERIAL              INT64
    This conversion is to avoid complications due to data precision and data values that are not a number (NaN).
INTEGER                INT32
REAL                   FLOAT
SERIAL                 INT32
    This conversion is to avoid complications due to data precision, data values that are not a number (NaN), and time data values.
BYTEA                  BINARY
BOOLEAN                BOOLEAN
Restoring a DB instance to a specified time
When you restore a DB instance to a point in time, you can choose the default virtual private cloud (VPC)
security group. Or you can apply a custom VPC security group to your DB instance.
Restored DB instances are automatically associated with the default DB parameter and option groups.
However, you can apply a custom parameter group and option group by specifying them during a
restore.
If the source DB instance has resource tags, RDS adds the latest tags to the restored DB instance.
RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. To see the latest
restorable time for a DB instance, use the AWS CLI describe-db-instances command and look at the value
returned in the LatestRestorableTime field for the DB instance. To see the latest restorable time for
each DB instance in the Amazon RDS console, choose Automated backups.
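The CLI check described above can be sketched as follows, with mydbinstance as a placeholder instance identifier:

```shell
# Show the latest time to which the DB instance can be restored.
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].LatestRestorableTime'
```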
You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.
Note
We recommend that you restore to the same or similar DB instance size—and IOPS if using
Provisioned IOPS storage—as the source DB instance. You might get an error if, for example, you
choose a DB instance size with an incompatible IOPS value.
Some of the database engines used by Amazon RDS have special considerations when restoring from a
point in time:
• When you restore an Oracle DB instance to a point in time, you can specify a different Oracle DB
engine, license model, and DBName (SID) to be used by the new DB instance.
• When you restore a Microsoft SQL Server DB instance to a point in time, each database within that
instance is restored to a point in time within 1 second of each other database within the instance.
Transactions that span multiple databases within the instance might be restored inconsistently.
• For a SQL Server DB instance, the OFFLINE, EMERGENCY, and SINGLE_USER modes aren't supported.
Setting any database into one of these modes causes the latest restorable time to stop moving ahead
for the whole instance.
• Some actions, such as changing the recovery model of a SQL Server database, can break the sequence
of logs that are used for point-in-time recovery. In some cases, Amazon RDS can detect this issue
and the latest restorable time is prevented from moving forward. In other cases, such as when a SQL
Server database uses the BULK_LOGGED recovery model, the break in log sequence isn't detected. It
might not be possible to restore a SQL Server DB instance to a point in time if there is a break in the
log sequence. For these reasons, Amazon RDS doesn't support changing the recovery model of SQL
Server databases.
You can also use AWS Backup to manage backups of Amazon RDS DB instances. If your DB instance
is associated with a backup plan in AWS Backup, that backup plan is used for point-in-time recovery.
Backups that were created with AWS Backup have names ending in awsbackup:AWS-Backup-job-
number. For information about AWS Backup, see the AWS Backup Developer Guide.
Note
Information in this topic applies to Amazon RDS. For information on restoring an Amazon
Aurora DB cluster, see Restoring a DB cluster to a specified time.
You can restore a DB instance to a point in time using the AWS Management Console, the AWS CLI, or
the RDS API.
Note
You can't reduce the amount of storage when you restore a DB instance. When you increase the
allocated storage, it must be by at least 10 percent. If you try to increase the value by less than
10 percent, you get an error. You can't increase the allocated storage when restoring RDS for
SQL Server DB instances.
Console
To restore a DB instance to a specified time
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
If you chose Custom, enter the date and time to which you want to restore the instance.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored DB instance. The name must be
unique.
7. Choose other options as needed, such as DB instance class, storage, and whether you want to use
storage autoscaling.
For information about each setting, see Settings for DB instances (p. 308).
8. Choose Restore to point in time.
AWS CLI
To restore a DB instance to a specified time, use the AWS CLI command restore-db-instance-to-point-in-
time to create a new DB instance. This example also sets the allocated storage size and enables storage
autoscaling.
Resource tagging is supported for this operation. When you use the --tags option, the source DB
instance tags are ignored and the provided ones are used. Otherwise, the latest tags from the source
instance are used.
You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
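A sketch of the call described above; the instance identifiers, restore time, and storage values are illustrative placeholders:

```shell
# Restore to a point in time, set the allocated storage size, and enable
# storage autoscaling by setting a maximum storage threshold.
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier sourcedbinstance \
    --target-db-instance-identifier targetdbinstance \
    --restore-time 2022-07-14T23:45:00.000Z \
    --allocated-storage 100 \
    --max-allocated-storage 1000
```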
RDS API
To restore a DB instance to a specified time, call the Amazon RDS API
RestoreDBInstanceToPointInTime operation with the following parameters:
• SourceDBInstanceIdentifier
• TargetDBInstanceIdentifier
• RestoreTime
Deleting a DB snapshot
You can delete DB snapshots managed by Amazon RDS when you no longer need them.
Note
To delete backups managed by AWS Backup, use the AWS Backup console. For information
about AWS Backup, see the AWS Backup Developer Guide.
Deleting a DB snapshot
You can delete a manual, shared, or public DB snapshot using the AWS Management Console, the AWS
CLI, or the RDS API.
To delete a shared or public snapshot, you must sign in to the AWS account that owns the snapshot.
If you have automated DB snapshots that you want to delete without deleting the DB instance, change
the backup retention period for the DB instance to 0. The automated snapshots are deleted when
the change is applied. You can apply the change immediately if you don't want to wait until the next
maintenance period. After the change is complete, you can then re-enable automatic backups by setting
the backup retention period to a number greater than 0. For information about modifying a DB instance,
see Modifying an Amazon RDS DB instance (p. 401).
Retained automated backups and manual snapshots incur billing charges until they're deleted. For more
information, see Retention costs (p. 596).
If you deleted a DB instance, you can delete its automated DB snapshots by removing the automated
backups for the DB instance. For information about automated backups, see Working with
backups (p. 591).
Console
To delete a DB snapshot
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
AWS CLI
You can delete a DB snapshot by using the AWS CLI command delete-db-snapshot.
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
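A sketch of the call, assuming a snapshot named mydbsnapshot (a placeholder):

```shell
# Delete the manual DB snapshot named mydbsnapshot.
aws rds delete-db-snapshot \
    --db-snapshot-identifier mydbsnapshot
```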
RDS API
You can delete a DB snapshot by using the Amazon RDS API operation DeleteDBSnapshot.
Tutorial: Restore a DB instance from a DB snapshot
When you restore the DB instance, you provide the name of the DB snapshot to restore from. You then
provide a name for the new DB instance that's created from the restore operation.
For more detailed information on restoring DB instances from snapshots, see Restoring from a DB
snapshot (p. 615).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. Under DB instance settings, use the default settings for DB engine and License model (for Oracle
or Microsoft SQL Server).
6. Under Settings, for DB instance identifier enter the unique name that you want to use for the
restored DB instance, for example mynewdbinstance.
If you're restoring from a DB instance that you deleted after you made the DB snapshot, you can use
the name of that DB instance.
7. Under Availability & durability, choose whether to create a standby instance in another Availability
Zone.
For this tutorial, choose Burstable classes (includes t classes), and then choose db.t3.small.
10. For Encryption, use the default settings.
If the source DB instance for the snapshot was encrypted, the restored DB instance is also encrypted.
You can't make it unencrypted.
11. Expand Additional configuration at the bottom of the page.
The Databases page displays the restored DB instance, with a status of Creating.
Backing up and restoring a Multi-AZ DB cluster
Topics
• Creating a Multi-AZ DB cluster snapshot (p. 669)
• Restoring from a snapshot to a Multi-AZ DB cluster (p. 671)
• Restoring from a Multi-AZ DB cluster snapshot to a DB instance (p. 673)
• Restoring a Multi-AZ DB cluster to a specified time (p. 675)
In addition, the following topics apply to both DB instances and Multi-AZ DB clusters:
Creating a Multi-AZ DB cluster snapshot
You can create a Multi-AZ DB cluster snapshot using the AWS Management Console, the AWS CLI, or the
RDS API.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list, choose the Multi-AZ DB cluster for which you want to take a snapshot.
4. For Actions, choose Take snapshot.
The Snapshots page appears, with the new Multi-AZ DB cluster snapshot's status shown as Creating.
After its status is Available, you can see its creation time.
AWS CLI
You can create a Multi-AZ DB cluster snapshot by using the AWS CLI create-db-cluster-snapshot
command with the following options:
• --db-cluster-identifier
• --db-cluster-snapshot-identifier
In this example, you create a Multi-AZ DB cluster snapshot called mymultiazdbclustersnapshot for a
DB cluster called mymultiazdbcluster.
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
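The example described above can be sketched as follows, using the cluster and snapshot names from the text:

```shell
# Take a snapshot of the Multi-AZ DB cluster mymultiazdbcluster.
aws rds create-db-cluster-snapshot \
    --db-cluster-identifier mymultiazdbcluster \
    --db-cluster-snapshot-identifier mymultiazdbclustersnapshot
```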
RDS API
You can create a Multi-AZ DB cluster snapshot by using the Amazon RDS API CreateDBClusterSnapshot
operation with the following parameters:
• DBClusterIdentifier
• DBClusterSnapshotIdentifier
Restoring from a snapshot to a Multi-AZ DB cluster
For information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
Tip
You can migrate a Single-AZ deployment or a Multi-AZ DB instance deployment to a Multi-AZ
DB cluster deployment by restoring a snapshot.
Console
To restore a snapshot to a Multi-AZ DB cluster
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, in Availability and durability, choose Multi-AZ DB cluster.
6. For DB cluster identifier, enter the name for your restored Multi-AZ DB cluster.
7. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
8. Choose Restore DB instance.
AWS CLI
To restore a snapshot to a Multi-AZ DB cluster, use the AWS CLI command restore-db-cluster-from-
snapshot.
In the following example, you restore from a previously created snapshot named mysnapshot. You
restore to a new Multi-AZ DB cluster named mynewmultiazdbcluster. You also specify the DB
instance class used by the DB instances in the Multi-AZ DB cluster. Specify either mysql or postgres for
the DB engine.
For the --snapshot-identifier option, you can use either the name or the Amazon Resource Name
(ARN) to specify a DB cluster snapshot. However, you can use only the ARN to specify a DB snapshot.
For the --db-cluster-instance-class option, specify the DB instance class for the new Multi-
AZ DB cluster. Multi-AZ DB clusters only support specific DB instance classes, such as the db.m6gd
and db.r6gd DB instance classes. For more information about DB instance classes, see DB instance
classes (p. 11).
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
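A sketch of the restore call; the cluster and snapshot names come from the text above, while the mysql engine and the db.r6gd.xlarge instance class are illustrative assumptions:

```shell
# Restore the snapshot mysnapshot to a new Multi-AZ DB cluster.
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier mynewmultiazdbcluster \
    --snapshot-identifier mysnapshot \
    --engine mysql \
    --db-cluster-instance-class db.r6gd.xlarge
```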
After you restore the DB cluster, you can add the Multi-AZ DB cluster to the security group associated
with the DB cluster or DB instance that you used to create the snapshot, if applicable. Completing this
action provides the same functionality as the previous DB cluster or DB instance.
RDS API
To restore a snapshot to a Multi-AZ DB cluster, call the RDS API operation
RestoreDBClusterFromSnapshot with the following parameters:
• DBClusterIdentifier
• SnapshotIdentifier
• Engine
After you restore the DB cluster, you can add the Multi-AZ DB cluster to the security group associated
with the DB cluster or DB instance that you used to create the snapshot, if applicable. Completing this
action provides the same functionality as the previous DB cluster or DB instance.
Restoring from a Multi-AZ DB cluster snapshot to a DB instance
Use the AWS Management Console, the AWS CLI, or the RDS API to restore a Multi-AZ DB cluster
snapshot to a Single-AZ deployment or Multi-AZ DB instance deployment.
Console
To restore a Multi-AZ DB cluster snapshot to a Single-AZ deployment or Multi-AZ DB
instance deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Multi-AZ DB cluster snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, in Availability and durability, choose one of the following:
• Single DB instance – Restores the snapshot to one DB instance with no standby DB instance.
• Multi-AZ DB instance – Restores the snapshot to a Multi-AZ DB instance deployment with one
primary DB instance and one standby DB instance.
6. For DB instance identifier, enter the name for your restored DB instance.
7. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
8. Choose Restore DB instance.
AWS CLI
To restore a Multi-AZ DB cluster snapshot to a DB instance deployment, use the AWS CLI command
restore-db-instance-from-db-snapshot.
In the following example, you restore from a previously created Multi-AZ DB cluster snapshot named
myclustersnapshot. You restore to a new Multi-AZ DB instance deployment with a primary DB
instance named mynewdbinstance. For the --db-cluster-snapshot-identifier option, specify
the name of the Multi-AZ DB cluster snapshot.
For the --db-instance-class option, specify the DB instance class for the new DB instance
deployment. For more information about DB instance classes, see DB instance classes (p. 11).
Example
For Windows, replace the Linux line-continuation character (\) at the end of each line with a caret (^).
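A sketch of the restore call; the instance and snapshot names come from the text above, while the db.r6g.large instance class and the --multi-az flag (for a Multi-AZ DB instance deployment) are illustrative assumptions:

```shell
# Restore the Multi-AZ DB cluster snapshot to a Multi-AZ DB instance deployment.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-cluster-snapshot-identifier myclustersnapshot \
    --db-instance-class db.r6g.large \
    --multi-az
```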
After you restore the DB instance, you can add it to the security group associated with the Multi-AZ DB
cluster that you used to create the snapshot, if applicable. Completing this action provides the same
functionality as the previous Multi-AZ DB cluster.
RDS API
To restore a Multi-AZ DB cluster snapshot to a DB instance deployment, call the RDS API operation
RestoreDBInstanceFromDBSnapshot with the following parameters:
• DBInstanceIdentifier
• DBClusterSnapshotIdentifier
• Engine
After you restore the DB instance, you can add it to the security group associated with the Multi-AZ DB
cluster that you used to create the snapshot, if applicable. Completing this action provides the same
functionality as the previous Multi-AZ DB cluster.
Restoring a Multi-AZ DB cluster to a specified time
RDS uploads transaction logs for Multi-AZ DB clusters to Amazon S3 continuously. You can restore
to any point in time within your backup retention period. To see the earliest restorable time for a
Multi-AZ DB cluster, use the AWS CLI describe-db-clusters command. Look at the value returned in the
EarliestRestorableTime field for the DB cluster. To see the latest restorable time for a Multi-AZ DB
cluster, look at the value returned in the LatestRestorableTime field for the DB cluster.
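The check described above can be sketched as follows, with mymultiazdbcluster as a placeholder cluster identifier:

```shell
# Show the earliest and latest restorable times for the Multi-AZ DB cluster.
aws rds describe-db-clusters \
    --db-cluster-identifier mymultiazdbcluster \
    --query 'DBClusters[0].[EarliestRestorableTime,LatestRestorableTime]'
```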
When you restore a Multi-AZ DB cluster to a point in time, you can choose the default VPC security group
for your Multi-AZ DB cluster. Or you can apply a custom VPC security group to your Multi-AZ DB cluster.
Restored Multi-AZ DB clusters are automatically associated with the default DB cluster parameter group.
However, you can apply a custom DB cluster parameter group by specifying it during a restore.
If the source DB cluster has resource tags, RDS adds the latest tags to the restored DB cluster.
Note
We recommend that you restore to the same or similar Multi-AZ DB cluster size as the source DB
cluster. We also recommend that you restore with the same or similar IOPS value if you're using
Provisioned IOPS storage. You might get an error if, for example, you choose a DB cluster size
with an incompatible IOPS value.
You can restore a Multi-AZ DB cluster to a point in time using the AWS Management Console, the AWS
CLI, or the RDS API.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to restore.
4. For Actions, choose Restore to point in time.
If you chose Custom, enter the date and time to which you want to restore the Multi-AZ DB cluster.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB cluster identifier, enter the name for your restored Multi-AZ DB cluster.
7. In Availability and durability, choose Multi-AZ DB cluster.
Currently, Multi-AZ DB clusters only support db.m6gd and db.r6gd DB instance classes. For more
information about DB instance classes, see DB instance classes (p. 11).
9. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
10. Choose Restore to point in time.
AWS CLI
To restore a Multi-AZ DB cluster to a specified time, use the AWS CLI command restore-db-cluster-to-
point-in-time to create a new Multi-AZ DB cluster.
Currently, Multi-AZ DB clusters only support db.m6gd and db.r6gd DB instance classes. For more
information about DB instance classes, see DB instance classes (p. 11).
Example
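A representative invocation of restore-db-cluster-to-point-in-time (the cluster identifiers, timestamp, and instance class are placeholders; on Windows, replace each trailing backslash with a caret):

```shell
# Restore a Multi-AZ DB cluster to a specific point in time,
# creating a new cluster from the source cluster's backups.
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier mymultiazdbcluster \
    --db-cluster-identifier mymultiazdbcluster-restored \
    --restore-to-time 2022-10-14T23:45:00.000Z \
    --db-cluster-instance-class db.r6gd.xlarge
```

To restore to the latest restorable time instead of a specific time, use the --use-latest-restorable-time option in place of --restore-to-time.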
RDS API
To restore a DB cluster to a specified time, call the Amazon RDS API RestoreDBClusterToPointInTime
operation with the following parameters:
• SourceDBClusterIdentifier
• DBClusterIdentifier
• RestoreToTime
Topics
• Overview of monitoring metrics in Amazon RDS (p. 679)
• Viewing instance status and recommendations (p. 683)
• Viewing metrics in the Amazon RDS console (p. 696)
• Viewing combined metrics in the Amazon RDS console (p. 699)
• Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706)
• Monitoring DB load with Performance Insights on Amazon RDS (p. 720)
• Analyzing performance anomalies with Amazon DevOps Guru for Amazon RDS (p. 789)
• Monitoring OS metrics with Enhanced Monitoring (p. 797)
• Metrics reference for Amazon RDS (p. 806)
Overview of monitoring
Topics
• Monitoring plan (p. 679)
• Performance baseline (p. 679)
• Performance guidelines (p. 679)
• Monitoring tools (p. 680)
Monitoring plan
Before you start monitoring Amazon RDS, create a monitoring plan. This plan should answer the
following questions:
• What are your monitoring goals?
• What resources will you monitor?
• How often will you monitor these resources?
• What monitoring tools will you use?
• Who will perform the monitoring tasks?
• Who should be notified when something goes wrong?
Performance baseline
To achieve your monitoring goals, you need to establish a baseline. To do this, measure performance
under different load conditions at various times in your Amazon RDS environment. You can monitor
metrics such as the following:
• Network throughput
• Client connections
• I/O for read, write, or metadata operations
• Burst credit balances for your DB instances
We recommend that you store historical performance data for Amazon RDS. Using the stored data, you
can compare current performance against past trends. You can also distinguish normal performance
patterns from anomalies, and devise techniques to address issues.
Performance guidelines
In general, acceptable values for performance metrics depend on what your application is doing relative
to your baseline. Investigate consistent or trending variances from your baseline. The following metrics
are often the source of performance issues:
• High CPU or RAM consumption – High values for CPU or RAM consumption might be appropriate,
if they're in keeping with your goals for your application (like throughput or concurrency) and are
expected.
• Disk space consumption – Investigate disk space consumption if space used is consistently at or above
85 percent of the total disk space. See if it is possible to delete data from the instance or archive data
to a different system to free up space.
• Network traffic – For network traffic, talk with your system administrator to understand what
expected throughput is for your domain network and internet connection. Investigate network traffic if
throughput is consistently lower than expected.
• Database connections – If you see high numbers of user connections and also decreases in instance
performance and response time, consider constraining database connections. The best number of
user connections for your DB instance varies based on your instance class and the complexity of the
operations being performed. To determine the number of database connections, associate your DB
instance with a parameter group where the User Connections parameter is set to a value other
than 0 (unlimited). You can either use an existing parameter group or create a new one. For more
information, see Working with parameter groups (p. 347).
• IOPS metrics – The expected values for IOPS metrics depend on disk specification and server
configuration, so use your baseline to know what is typical. Investigate if values are consistently
different than your baseline. For best IOPS performance, make sure that your typical working set fits
into memory to minimize read and write operations.
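As a minimal sketch of the 85 percent disk-space guideline above (the usage numbers are placeholders, and the calculation uses integer shell arithmetic):

```shell
# Hypothetical check: flag a DB instance whose used storage is at or
# above 85 percent of its allocated storage.
used_gib=172
allocated_gib=200

pct=$(( used_gib * 100 / allocated_gib ))
if [ "$pct" -ge 85 ]; then
    echo "storage at ${pct}% - investigate deleting or archiving data, or scale up storage"
fi
```

In practice, you would derive the used amount from the instance's allocated storage and the CloudWatch FreeStorageSpace metric rather than hardcoding values.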
When performance falls outside your established baseline, you might need to make changes to optimize
your database availability for your workload. For example, you might need to change the instance class
of your DB instance. Or you might need to change the number of DB instances and read replicas that are
available for clients.
Monitoring tools
Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon
RDS and your other AWS solutions. AWS provides various monitoring tools to watch Amazon RDS, report
when something is wrong, and take automatic actions when appropriate.
Topics
• Automated monitoring tools (p. 680)
• Manual monitoring tools (p. 681)
Automated monitoring tools
Topics
• Amazon RDS instance status and recommendations (p. 680)
• Amazon CloudWatch metrics for Amazon RDS (p. 681)
• Amazon RDS Performance Insights and operating-system monitoring (p. 681)
• Integrated services (p. 681)
• Amazon RDS instance status — View details about the current status of your instance by using the
Amazon RDS console, the AWS CLI, or the RDS API.
• Amazon RDS recommendations — Respond to automated recommendations for database resources,
such as DB instances, read replicas, and DB parameter groups. For more information, see Viewing
Amazon RDS recommendations (p. 688).
• Amazon CloudWatch – This service monitors your AWS resources and the applications you run on AWS
in real time. You can use the following Amazon CloudWatch features with Amazon RDS:
• Amazon CloudWatch metrics – Amazon RDS automatically sends metrics to CloudWatch every
minute for each active database. You don't get additional charges for Amazon RDS metrics
in CloudWatch. For more information, see Monitoring Amazon RDS metrics with Amazon
CloudWatch (p. 706).
• Amazon CloudWatch alarms – You can watch a single Amazon RDS metric over a specific time
period. You can then perform one or more actions based on the value of the metric relative to a
threshold that you set. For more information, see Monitoring Amazon RDS metrics with Amazon
CloudWatch (p. 706).
• Amazon RDS Performance Insights – Assess the load on your database, and determine when and
where to take action. For more information, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720).
• Amazon RDS Enhanced Monitoring – Look at metrics in real time for the operating system. For more
information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).
Integrated services
The following AWS services are integrated with Amazon RDS:
• Amazon EventBridge is a serverless event bus service that makes it easy to connect your
applications with data from a variety of sources. For more information, see Monitoring Amazon RDS
events (p. 850).
• Amazon CloudWatch Logs lets you monitor, store, and access your log files from Amazon RDS
instances, CloudTrail, and other sources. For more information, see Monitoring Amazon RDS log
files (p. 895).
• AWS CloudTrail captures API calls and related events made by or on behalf of your AWS account and
delivers the log files to an Amazon S3 bucket that you specify. For more information, see Monitoring
Amazon RDS API calls in AWS CloudTrail (p. 940).
• Database Activity Streams is an Amazon RDS feature that provides a near-real-time stream of the
activity in your Oracle DB instance. For more information, see Monitoring Amazon RDS with Database
Activity Streams (p. 944).
Manual monitoring tools
• From the Amazon RDS console, you can monitor the following items for your resources:
• The number of connections to a DB instance
• The amount of read and write operations to a DB instance
• The amount of storage that a DB instance is currently using
• The amount of memory and CPU being used for a DB instance
• AWS Trusted Advisor provides checks for your database resources, such as idle DB instances, security group access risk, and automated backups. For more information on these checks, see Trusted Advisor best practices (checks).
• CloudWatch home page shows:
• Current alarms and status
• Graphs of alarms and resources
• Service health status
Viewing instance status and recommendations
Topics
• Viewing Amazon RDS DB instance status (p. 684)
• Viewing Amazon RDS recommendations (p. 688)
Viewing Amazon RDS DB instance status
Find the possible status values for DB instances in the following table. This table also shows whether you
will be billed for the DB instance and storage, billed only for storage, or not billed. For all DB instance
statuses, you are always billed for backup usage.
Configuring-log-exports (Billed): Publishing log files to Amazon CloudWatch Logs is being enabled or disabled for this DB instance.
Converting-to-vpc (Billed): The DB instance is being converted from a DB instance that is not in an Amazon Virtual Private Cloud (Amazon VPC) to a DB instance that is in an Amazon VPC.
Delete-precheck (Not billed): Amazon RDS is validating that read replicas are healthy and are safe to delete.
Failed (Not billed): The DB instance has failed and Amazon RDS can't recover it. Perform a point-in-time restore to the latest restorable time of the DB instance to recover the data.
Inaccessible-encryption-credentials (Not billed): The AWS KMS key used to encrypt or decrypt the DB instance can't be accessed or recovered.
Inaccessible-encryption-credentials-recoverable (Billed for storage): The KMS key used to encrypt or decrypt the DB instance can't be accessed. However, if the KMS key is active, restarting the DB instance can recover it.
Incompatible-option-group (Billed): Amazon RDS attempted to apply an option group change but can't do so, and Amazon RDS can't roll back to the previous option group state. For more information, check the Recent Events list for the DB instance. This status can occur if, for example, the option group contains an option such as TDE and the DB instance doesn't contain encrypted information.
Incompatible-parameters (Billed): Amazon RDS can't start the DB instance because the parameters specified in the DB instance's DB parameter group aren't compatible with the DB instance. Revert the parameter changes or make them compatible with the DB instance to regain access to your DB instance. For more information about the incompatible parameters, check the Recent Events list for the DB instance.
Incompatible-restore (Not billed): Amazon RDS can't do a point-in-time restore. Common causes for this status include using temp tables, using MyISAM tables with MySQL, or using Aria tables with MariaDB.
Insufficient-capacity (Not billed): Amazon RDS can't create your instance because sufficient capacity isn't currently available. To create your DB instance in the same AZ with the same instance type, delete your DB instance, wait a few hours, and try to create again. Alternatively, create a new instance using a different instance class or AZ.
Moving-to-vpc (Billed): The DB instance is being moved to a new Amazon Virtual Private Cloud (Amazon VPC).
Resetting-master-credentials (Billed): The master credentials for the DB instance are being reset because of a customer request to reset them.
Storage-full (Billed): The DB instance has reached its storage capacity allocation. This is a critical status, and we recommend that you fix this issue immediately. To do so, scale up your storage by modifying the DB instance. To avoid this situation, set Amazon CloudWatch alarms to warn you when storage space is getting low.
Storage-optimization (Billed): Amazon RDS is optimizing the storage of your DB instance. The DB instance is fully operational. The storage optimization process is usually short, but can sometimes take up to and even beyond 24 hours.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
The Databases page appears with the list of DB instances. For each DB instance, the status value is displayed.
CLI
To view a DB instance and its status information by using the AWS CLI, use the describe-db-instances command. For example, the following AWS CLI command lists information for all the DB instances.
To view a specific DB instance and its status, call the describe-db-instances command with the --db-instance-identifier option:
To view just the status of all the DB instances, use the following query in AWS CLI.
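A sketch of the commands described above (the instance identifier is a placeholder):

```shell
# List information for all DB instances.
aws rds describe-db-instances

# View a specific DB instance and its status.
aws rds describe-db-instances --db-instance-identifier mydbinstance

# Show just the identifier and status of each DB instance.
aws rds describe-db-instances \
    --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceStatus]' \
    --output table
```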
API
To view the status of the DB instance using the Amazon RDS API, call the DescribeDBInstances operation.
Viewing Amazon RDS recommendations
DB instance isn't a Multi-AZ DB instance: Your DB instance isn't using the Multi-AZ deployment. We recommend that you use a Multi-AZ deployment. Multi-AZ deployments enhance the availability and durability of the DB instance. For more information, see Amazon RDS Multi-AZ.
Automated backups disabled: Your DB instance has automated backups disabled. We recommend that you enable automated backups on your DB instance. Automated backups enable point-in-time recovery of your DB instance. You receive backup storage up to the storage size of your DB instance at no additional charge. For more information, see Working with backups (p. 591).
Nondefault custom memory parameters: Your DB parameter group sets memory parameters that diverge too much from the default values. Settings that diverge too much from the default values can cause poor performance and errors. We recommend setting custom memory parameters to their default values in the DB parameter group used by the DB instance. For more information, see Working with parameter groups (p. 347).
General logging is enabled for a MySQL DB instance: Your DB instance has the general logging turned on. Turning on general logging increases the amount of I/O operations and allocated storage space, which can lead to contention and performance degradation. Evaluate your required general logging usage. For more information, see Managing table-based MySQL logs (p. 920).
Maximum InnoDB open files setting is misconfigured for a MySQL DB instance: Your DB instance has a low value for the maximum number of files InnoDB can open at one time. We recommend that you set the innodb_open_files parameter to a minimum value of 65. For more information, see innodb_open_files.
Read replica is open in writable mode for a MySQL DB instance: Your DB instance has the read replica in writable mode, which allows updates from clients. We recommend that you don't change MySQL read replicas to writable mode for a long duration. This setting can cause replication errors and data consistency issues. For more information, see Best practices for configuring parameters for Amazon RDS for MySQL, part 2: Parameters related to replication on the AWS Database Blog.
Found an unsafe durability parameter value for a MySQL DB instance: The synchronization of the binary log to disk isn't enforced before the acknowledgement of the transactions commit in your DB instance. If there is a power failure or the operating system crashes, the committed transactions can be lost. We recommend that you set the sync_binlog parameter to 1. For more information, see Best practices for configuring parameters for Amazon RDS for MySQL, part 2: Parameters related to replication on the AWS Database Blog.
Query cache enabled for a MySQL DB instance: Your DB parameter group has the query cache parameter enabled. The query cache can cause the DB instance to appear to stall when changes require the cache to be purged. Most workloads don't benefit from a query cache. The query cache was removed from MySQL version 8.0. We recommend that you disable the query cache parameter. For more information, see Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance on the AWS Database Blog.
Index only scan plan type is disabled for a PostgreSQL DB instance: The query planner or optimizer can't use the index only scan plan when it is disabled. We recommend that you set the parameter enable_indexonlyscan to ON. For more information, see enable_indexonlyscan (boolean).
Index scan plan type is disabled for a PostgreSQL DB instance: The query planner or optimizer can't use the index scan plan type when it is disabled. We recommend that you set the parameter enable_indexscan to ON. For more information, see enable_indexscan (boolean).
Logging to table: Your DB parameter group sets logging output to TABLE. Setting logging output to TABLE uses more storage than setting this parameter to FILE. To avoid reaching the storage limit, we recommend setting the logging output parameter to FILE. For more information, see MySQL database log files (p. 915).
Amazon RDS generates recommendations for a resource when the resource is created or modified.
Amazon RDS also periodically scans your resources and generates recommendations.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Recommendations.
3. Choose one of the following views of your recommendations:
• Active – Shows the current recommendations that you can apply, dismiss, or schedule.
• Dismissed – Shows the recommendations that have been dismissed. When you choose Dismissed,
you can apply these dismissed recommendations.
• Scheduled – Shows the recommendations that are scheduled but not yet applied. These
recommendations will be applied in the next scheduled maintenance window.
• Applied – Shows the recommendations that are currently applied.
From any list of recommendations, you can open a section to view the recommendations in that
section.
To configure preferences for displaying recommendations in each section, choose the Preferences
icon.
From the Preferences window that appears, you can set display options. These options include the
visible columns and the number of recommendations to display on the page.
4. (Optional) Respond to your active recommendations as follows:
a. Choose Active and open one or more sections to view the recommendations in them.
b. Choose one or more recommendations and choose Apply now (to apply them immediately),
Schedule (to apply them in the next maintenance window), or Dismiss.
If the Apply now button appears for a recommendation but is unavailable (grayed out), the DB
instance is not available. You can apply recommendations immediately only if the DB instance
status is available. For example, you can't apply recommendations immediately to the DB
instance if its status is modifying. In this case, wait for the DB instance to be available and then
apply the recommendation.
If the Apply now button doesn't appear for a recommendation, you can't apply the
recommendation using the Recommendations page. You can modify the DB instance to apply
the recommendation manually.
For more information about modifying a DB instance, see Modifying an Amazon RDS DB
instance (p. 401).
Note
When you choose Apply now, a brief DB instance outage might result.
Viewing metrics in the Amazon RDS console
• CloudWatch – Shows the Amazon CloudWatch metrics for RDS that you can access in the RDS console.
You can also access these metrics in the CloudWatch console. Each metric includes a graph that
shows the metric monitored over a specific time span. For a list of CloudWatch metrics, see Amazon
CloudWatch metrics for Amazon RDS (p. 806).
• Enhanced monitoring – Shows a summary of operating-system metrics when your RDS DB instance
has turned on Enhanced Monitoring. RDS delivers the metrics from Enhanced Monitoring to
your Amazon CloudWatch Logs account. Each OS metric includes a graph showing the metric
monitored over a specific time span. For an overview, see Monitoring OS metrics with Enhanced
Monitoring (p. 797). For a list of Enhanced Monitoring metrics, see OS metrics in Enhanced
Monitoring (p. 837).
• OS Process list – Shows details for each process running in your DB instance.
• Performance Insights – Opens the Amazon RDS Performance Insights dashboard for a DB instance.
For an overview of Performance Insights, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720). For a list of Performance Insights metrics, see Amazon CloudWatch metrics for
Performance Insights (p. 813).
Amazon RDS now provides a consolidated view of Performance Insights and CloudWatch metrics in the
Performance Insights dashboard. Performance Insights must be turned on for your DB instance to use
this view. You can choose the new monitoring view in the Monitoring tab or Performance Insights in
the navigation pane. To view the instructions for choosing this view, see Viewing combined metrics in the
Amazon RDS console (p. 699).
If you want to continue with the legacy monitoring view, continue with this procedure.
Note
The legacy monitoring view will be discontinued on December 15, 2023.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that you want to monitor.
The database page appears. The following example shows an Oracle database named orclb.
The monitoring section appears. By default, CloudWatch metrics are shown. For descriptions of
these metrics, see Amazon CloudWatch metrics for Amazon RDS (p. 806).
The following example shows Enhanced Monitoring metrics. For descriptions of these metrics, see
OS metrics in Enhanced Monitoring (p. 837).
Note
Currently, viewing OS metrics for a Multi-AZ standby replica is not supported for MariaDB
DB instances.
Tip
To choose the time range of the metrics represented by the graphs, you can use the time
range list.
To bring up a more detailed view, you can choose any graph. You can also apply metric-
specific filters to the data.
Viewing combined metrics in the Amazon RDS console
You can choose the new monitoring view in the Monitoring tab or Performance Insights in the
navigation pane. When you navigate to the Performance Insights page, you see the options to choose
between the new monitoring view and legacy view. The option you choose is saved as the default view.
Performance Insights must be turned on for your DB instance to view the combined metrics in the
Performance Insights dashboard. For more information about turning on Performance Insights, see
Turning Performance Insights on and off (p. 727).
Note
We recommend that you choose the new monitoring view. You can continue to use the legacy
monitoring view until it is discontinued on December 15, 2023.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Databases.
3. Choose the DB instance that you want to monitor.
4. Choose the Monitoring tab. A banner appears with the option to choose the new monitoring view. The following example shows the banner to choose the new monitoring view.
5. Choose Go to new monitoring view to open the Performance Insights dashboard with Performance
Insights and CloudWatch metrics for your DB instance.
6. (Optional) If Performance Insights is turned off for your DB instance, a banner appears with the
option to modify your DB cluster and turn on Performance Insights.
The following example shows the banner to modify the DB cluster in the Monitoring tab.
Choose Modify to modify your DB cluster and turn on Performance Insights. For more information
about turning on Performance Insights, see Turning Performance Insights on and off (p. 727).
Choosing the new monitoring view with Performance Insights in the navigation pane
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance to open a window that has the monitoring view options.
The following example shows the window with the monitoring view options.
4. Choose the Performance Insights and CloudWatch metrics view (New) option, and then choose
Continue.
You can now view the Performance Insights dashboard that shows both Performance Insights and
CloudWatch metrics for your DB instance. The following example shows the Performance Insights
and CloudWatch metrics in the dashboard.
Choosing the legacy view with Performance Insights in the navigation pane
To choose the legacy monitoring view with Performance Insights in the navigation pane:
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Choose the settings icon on the Performance Insights dashboard.
You can now see the Settings window that shows the option to choose the legacy Performance
Insights view.
The following example shows the window with the option for the legacy monitoring view.
A warning message appears. Any dashboard configurations that you saved won't be available in this
view.
6. Choose Confirm to continue to the legacy Performance Insights view.
You can now view the Performance Insights dashboard that shows only Performance Insights
metrics for the DB instance.
You can create a custom dashboard by selecting Performance Insights and CloudWatch metrics for your
DB instance. You can use this custom dashboard for other DB instances of the same database engine
type in your AWS account.
Note
The customized dashboard supports up to 50 metrics.
Use the widget settings menu to edit or delete the dashboard, and move or resize the widget window.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Scroll down to the Metrics tab in the window.
5. Select the custom dashboard from the drop-down list. The following example shows the custom
dashboard creation.
6. Choose Add widget to open the Add widget window. You can open and view the available operating
system (OS) metrics, database metrics, and CloudWatch metrics in the window.
The following example shows the Add widget window with the metrics.
7. Select the metrics that you want to view in the dashboard and choose Add widget. You can use the
search field to find a specific metric.
8. (Optional) If you want to modify or delete your dashboard, choose the settings icon on the upper
right of the widget, and then select one of the following actions in the menu.
• Edit – Modify the metrics list in the window. Choose Update widget after you select the metrics
for your dashboard.
• Delete – Deletes the widget. Choose Delete in the confirmation window.
To choose the preconfigured dashboard with Performance Insights in the navigation pane:
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Scroll down to the Metrics tab in the window.
5. Select a preconfigured dashboard from the drop-down list.
You can view the metrics for the DB instance in the dashboard. The following example shows a
preconfigured metrics dashboard.
Monitoring RDS with CloudWatch
Topics
• Overview of Amazon RDS and Amazon CloudWatch (p. 707)
• Viewing DB instance metrics in the CloudWatch console and AWS CLI (p. 708)
• Creating CloudWatch alarms to monitor Amazon RDS (p. 713)
• Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica lag (p. 713)
Overview of Amazon RDS and Amazon CloudWatch
As shown in the following diagram, you can set up alarms for your CloudWatch metrics. For example,
you might create an alarm that signals when the CPU utilization for an instance is over 70%. You can
configure Amazon Simple Notification Service to email you when the threshold is passed.
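A sketch of that alarm with the AWS CLI (the alarm name, instance identifier, and SNS topic ARN are placeholders):

```shell
# Hypothetical alarm: notify an SNS topic when average CPU utilization
# stays above 70 percent for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name my-instance-high-cpu \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=my-instance \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 70 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
```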
Amazon RDS publishes the following types of metrics to Amazon CloudWatch:
• Amazon RDS metrics
For a table of these metrics, see Amazon CloudWatch metrics for Amazon RDS (p. 806).
• Performance Insights metrics
For a table of these metrics, see Amazon CloudWatch metrics for Performance Insights (p. 813) and
Performance Insights counter metrics (p. 814).
• Enhanced Monitoring metrics
For a table of these metrics, see OS metrics in Enhanced Monitoring (p. 837).
• Usage metrics for the Amazon RDS service quotas in your AWS account
For a table of these metrics, see Amazon CloudWatch usage metrics for Amazon RDS (p. 812). For
more information about Amazon RDS quotas, see Quotas and constraints for Amazon RDS (p. 2720).
For more information about CloudWatch, see What is Amazon CloudWatch? in the Amazon CloudWatch
User Guide. For more information about CloudWatch metrics retention, see Metrics retention.
When you use Amazon RDS resources, Amazon RDS sends metrics and dimensions to Amazon
CloudWatch every minute. You can use the following procedures to view the metrics for Amazon RDS in
the CloudWatch console and CLI.
Console
Metrics are grouped first by the service namespace, and then by the various dimension combinations
within each namespace.
1. Sign in to the AWS Management Console and open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
2. If necessary, change the AWS Region. From the navigation bar, choose the AWS Region where your
AWS resources are. For more information, see Regions and endpoints.
3. In the navigation pane, choose Metrics and then All metrics.
The page displays the Amazon RDS dimensions. For descriptions of these dimensions, see Amazon
CloudWatch dimensions for Amazon RDS (p. 813).
The following example filters on the db.t3.medium class and graphs the CPUUtilization metric.
AWS CLI
To obtain metric information by using the AWS CLI, use the CloudWatch command list-metrics. In
the following example, you list all metrics in the AWS/RDS namespace.
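A sketch of that command:

```shell
# List every CloudWatch metric published under the AWS/RDS namespace.
aws cloudwatch list-metrics --namespace AWS/RDS
```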
To obtain metric statistics, use the command get-metric-statistics. The following command gets
CPUUtilization statistics for instance my-instance over the specific 24-hour period, with a 5-minute
granularity.
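A sketch of such a call follows (Linux/macOS line continuations). The time range is an assumption chosen to match the sample output below.

```shell
# Get the Minimum CPUUtilization statistic for my-instance over a
# 24-hour window at 5-minute (300-second) granularity.
# The start and end times are assumed values matching the sample output.
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=my-instance \
    --statistics Minimum \
    --start-time 2021-12-15T00:00:00Z \
    --end-time 2021-12-16T00:00:00Z \
    --period 300
```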
Example
The command returns output similar to the following:
{
"Datapoints": [
{
"Timestamp": "2021-12-15T18:00:00Z",
"Minimum": 8.7,
"Unit": "Percent"
},
{
"Timestamp": "2021-12-15T23:54:00Z",
"Minimum": 8.12486458559024,
"Unit": "Percent"
},
{
"Timestamp": "2021-12-15T17:24:00Z",
"Minimum": 8.841666666666667,
"Unit": "Percent"
}, ...
{
"Timestamp": "2021-12-15T22:48:00Z",
"Minimum": 8.366248354248954,
"Unit": "Percent"
}
],
"Label": "CPUUtilization"
}
Creating CloudWatch alarms
For more information, see Getting statistics for a metric in the Amazon CloudWatch User Guide.
Alarms invoke actions for sustained state changes only. CloudWatch alarms don't invoke actions simply
because they are in a particular state. The state must have changed and have been maintained for a
specified number of time periods.
You can use the DB_PERF_INSIGHTS metric math function in the CloudWatch console to query Amazon
RDS for Performance Insights counter metrics. The DB_PERF_INSIGHTS function also includes the
DBLoad metric at sub-minute intervals. You can set CloudWatch alarms on these metrics.
For more details on how to create an alarm, see Create an alarm on Performance Insights counter
metrics from an AWS database.
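As a sketch, a metric math expression using this function might look like the following. The resource identifier is a placeholder in the same format as the examples later in this guide.

```
DB_PERF_INSIGHTS('RDS', 'db-ABC1DEFGHIJKL2MNOPQRSTUV3W', 'db.load.avg')
```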
• Call put-metric-alarm. For more information, see AWS CLI Command Reference.
• Call PutMetricAlarm. For more information, see Amazon CloudWatch API Reference
For more information about setting up Amazon SNS topics and creating alarms, see Using Amazon
CloudWatch alarms.
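For illustration, the following is a hedged sketch of a put-metric-alarm call on an RDS metric. The alarm name, instance identifier, and SNS topic ARN are placeholder values.

```shell
# Alarm when average CPUUtilization for my-instance stays above 80%
# for three consecutive 5-minute periods.
# The SNS topic ARN is a placeholder value.
aws cloudwatch put-metric-alarm \
    --alarm-name rds-cpu-high \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=my-instance \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-west-2:123456789012:my-topic
```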
1. Sign in to the AWS Management Console and open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Alarms, All alarms.
3. Choose Create alarm.
4. On the Specify metric and conditions page, choose Select metric.
5. In the search box, enter the name of your Multi-AZ DB cluster and press Enter.
The following image shows the Select metric page with a Multi-AZ DB cluster named rds-cluster
entered.
Tutorial: Creating a CloudWatch alarm for DB cluster replica lag
The following image shows the Select metric page with the DB instances selected for the
ReplicaLag metric.
This alarm considers the replica lag for all three of the DB instances in the Multi-AZ DB cluster. The
alarm responds when any DB instance exceeds the threshold. It uses a math expression that returns
the maximum value of the three metrics. Start by sorting by metric name, and then choose all three
ReplicaLag metrics.
8. From Add math, choose All functions, MAX.
9. Choose the Graphed metrics tab, and edit the details for Expression1 to MAX([m1,m2,m3]).
10. For all three ReplicaLag metrics, change the Period to 1 minute.
11. Clear selection from all metrics except for Expression1.
The Select metric page should look similar to the following image.
The Specify metric and conditions page should look similar to the following image.
18. Preview the alarm that you're about to create on the Preview and create page, and then choose
Create alarm.
Monitoring DB load with Performance Insights
Topics
• Overview of Performance Insights on Amazon RDS (p. 720)
• Turning Performance Insights on and off (p. 727)
• Turning on the Performance Schema for Performance Insights on Amazon RDS for MariaDB or
MySQL (p. 731)
• Configuring access policies for Performance Insights (p. 734)
• Analyzing metrics with the Performance Insights dashboard (p. 738)
• Retrieving metrics with the Performance Insights API (p. 769)
• Logging Performance Insights calls using AWS CloudTrail (p. 786)
You can find an overview of Performance Insights for Amazon RDS in the following video.
Topics
• Database load (p. 720)
• Maximum CPU (p. 724)
• Amazon RDS DB engine, Region, and instance class support for Performance Insights (p. 724)
• Pricing and data retention for Performance Insights (p. 726)
Database load
Database load (DB load) measures the level of session activity in your database. The key metric in
Performance Insights is DBLoad, which is collected every second.
Topics
• Active sessions (p. 721)
• Average active sessions (p. 721)
• Average active executions (p. 721)
• Dimensions (p. 722)
Overview of Performance Insights
Active sessions
A database session represents an application's dialogue with a relational database. An active session is a
connection that has submitted work to the DB engine and is waiting for a response.
A session is active when it's either running on CPU or waiting for a resource to become available so that it
can proceed. For example, an active session might wait for a page (or block) to be read into memory, and
then consume CPU while it reads data from the page.
Every second, Performance Insights samples the number of sessions concurrently running a query. For
each active session, Performance Insights collects the following data:
• SQL statement
• Session state (running on CPU or waiting)
• Host
• User running the SQL
Performance Insights calculates the average active sessions (AAS) by dividing the total number of sessions by the number of samples for a specific time period. For example, consider 5 consecutive samples of a running query taken at 1-second intervals.
In the preceding example, the DB load for the time interval was 2 AAS. This measurement means that, on
average, 2 sessions were active at any given time during the interval when the 5 samples were taken.
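The calculation can be sketched in shell arithmetic. The sample counts below are hypothetical values consistent with the 2-AAS result.

```shell
# Hypothetical counts of active sessions in 5 consecutive 1-second samples
samples=(3 2 2 2 1)
total=0
for s in "${samples[@]}"; do
  total=$((total + s))
done
# AAS = total sessions observed / number of samples = 10 / 5 = 2
aas=$((total / ${#samples[@]}))
echo "AAS = $aas"
```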
An analogy for DB load is worker activity in a warehouse. Suppose that the warehouse employs 100
workers. If 1 order comes in, 1 worker fulfills the order while 99 workers are idle. If 100 orders come
in, all 100 workers fulfill orders simultaneously. If every 15 minutes a manager writes down how many
workers are simultaneously active, adds these numbers at the end of the day, and then divides the total
by the number of samples, the manager calculates the average number of workers active at any given
time. If the average was 50 workers yesterday and 75 workers today, then the average activity level in
the warehouse increased. Similarly, DB load increases as database session activity increases.
In most cases, the average active sessions (AAS) and average active executions (AAE) for a query are approximately the same. However, because the inputs to the calculations are different data sources, the calculations often vary slightly.
Dimensions
The db.load metric is different from the other time-series metrics because you can break it into
subcomponents called dimensions. You can think of dimensions as "slice by" categories for the different
characteristics of the DBLoad metric.
When you are diagnosing performance issues, the following dimensions are often the most useful:
Topics
• Wait events (p. 722)
• Top SQL (p. 723)
• Plans (p. 723)
For a complete list of dimensions for the Amazon RDS engines, see DB load sliced by
dimensions (p. 743).
Wait events
A wait event causes a SQL statement to wait for a specific event to happen before it can continue
running. Wait events are an important dimension, or category, for DB load because they indicate where
work is impeded.
Every active session is either running on the CPU or waiting. For example, sessions consume CPU when
they search memory for a buffer, perform a calculation, or run procedural code. When sessions aren't
consuming CPU, they might be waiting for a memory buffer to become free, a data file to be read, or a
log to be written to. The more time that a session waits for resources, the less time it runs on the CPU.
When you tune a database, you often try to find out the resources that sessions are waiting for. For
example, two or three wait events might account for 90 percent of DB load. This measure means that, on
average, active sessions are spending most of their time waiting for a small number of resources. If you
can find out the cause of these waits, you can attempt a solution.
Consider the analogy of a warehouse worker. An order comes in for a book. The worker might be delayed in fulfilling the order. For example, a different worker might currently be restocking the shelves, or a trolley might not be available. Or the system used to enter the order status might be slow. The longer the
worker waits, the longer it takes to fulfill the order. Waiting is a natural part of the warehouse workflow,
but if wait time becomes excessive, productivity decreases. In the same way, repeated or lengthy session
waits can degrade database performance. For more information, see Tuning with wait events for Aurora
PostgreSQL and Tuning with wait events for Aurora MySQL in the Amazon Aurora User Guide.
• For information about all MariaDB and MySQL wait events, see Wait Event Summary Tables in the
MySQL documentation.
• For information about all PostgreSQL wait events, see The Statistics Collector > Wait Event tables in
the PostgreSQL documentation.
• For information about all Oracle wait events, see Descriptions of Wait Events in the Oracle
documentation.
• For information about all SQL Server wait events, see Types of Waits in the SQL Server
documentation.
Note
For Oracle, background processes sometimes do work without an associated SQL statement. In
these cases, Performance Insights reports the type of background process concatenated with a
colon and the wait class associated with that background process. Types of background process
include LGWR, ARC0, PMON, and so on.
For example, when the archiver is performing I/O, the Performance Insights report for it is
similar to ARC1:System I/O. Occasionally, the background process type is also missing, and
Performance Insights only reports the wait class, for example :System I/O.
Top SQL
Where wait events show bottlenecks, top SQL shows which queries are contributing the most to DB
load. For example, many queries might be currently running on the database, but a single query might
consume 99 percent of the DB load. In this case, the high load might indicate a problem with the query.
By default, the Performance Insights console displays top SQL queries that are contributing to the
database load. The console also shows relevant statistics for each statement. To diagnose performance
problems for a specific statement, you can examine its execution plan.
Plans
An execution plan, also called simply a plan, is a sequence of steps that access data. For example, a plan
for joining tables t1 and t2 might loop through all rows in t1 and compare each row to a row in t2. In a
relational database, an optimizer is built-in code that determines the most efficient plan for a SQL query.
For Oracle DB instances, Performance Insights collects execution plans automatically. To diagnose SQL
performance problems, examine the captured plans for high-resource Oracle SQL queries. The plans
show how Oracle Database has parsed and run queries.
To learn how to analyze DB load using plans, see Analyzing Oracle execution plans using the
Performance Insights dashboard (p. 766).
Plan capture
Every five minutes, Performance Insights identifies the most resource-intensive Oracle queries and
captures their plans. Thus, you don't need to manually collect and manage a huge number of plans.
Instead, you can use the Top SQL tab to focus on the plans for the most problematic queries.
Note
Performance Insights doesn't capture plans for queries whose text exceeds the maximum
collectable query text limit. For more information, see Accessing more SQL text in the
Performance Insights dashboard (p. 761).
The retention period for execution plans is the same as for your Performance Insights data. The retention
setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24
months. For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).
Digest queries
The Top SQL tab shows digest queries by default. A digest query doesn't itself have a plan, but all queries that use literal values have plans. For example, a digest query might include the text WHERE `email`=?. The digest might contain two queries, one with the text WHERE `email`='user1@example.com' and another with WHERE `email`='user2@example.com'. Each of these literal queries might include multiple plans.
If you select a digest query, the console shows all plans for child statements of the selected digest. Thus,
you don't need to look through all the child statements to find the plan. You might see plans that aren’t
in the displayed list of top 10 child statements. The console shows plans for all child queries for which
plans have been collected, regardless of whether the queries are in the top 10.
Maximum CPU
In the dashboard, the Database load chart collects, aggregates, and displays session information. To see
whether active sessions are exceeding the maximum CPU, look at their relationship to the Max vCPU line.
The Max vCPU value is determined by the number of vCPU (virtual CPU) cores for your DB instance.
One process can run on a vCPU at a time. If the number of processes exceeds the number of vCPUs, the processes start queuing. When queuing increases, performance is affected. If the DB load is often above the Max vCPU line, and the primary wait state is CPU, the CPU is overloaded. In this case,
you might want to throttle connections to the instance, tune any SQL queries with a high CPU load, or
consider a larger instance class. High and consistent instances of any wait state indicate that there might
be bottlenecks or resource contention issues to resolve. This can be true even if the DB load doesn't cross
the Max vCPU line.
Amazon RDS for MariaDB
For more information on version and Region availability of Performance Insights with RDS for MariaDB, see Performance Insights (p. 150).
Performance Insights isn't supported for the following instance classes:
• db.t2.micro
• db.t2.small
• db.t3.micro
• db.t3.small
• db.t4g.micro

RDS for MySQL
For more information on version and Region availability of Performance Insights with RDS for MySQL, see Performance Insights (p. 150).
Performance Insights isn't supported for the following instance classes:
• db.t2.micro
• db.t2.small
• db.t3.micro
• db.t3.small
• db.t4g.micro
• db.t4g.small
Amazon RDS DB engine, Region, and instance class support for Performance
Insights features
The following table lists the Amazon RDS DB engines that support Performance Insights features.
In the RDS console, you can choose any of the following retention periods for your Performance Insights
data:
• Default (7 days)
• n months, where n is a number from 1–24
Turning Performance Insights on and off
To learn how to set a retention period using the AWS CLI, see AWS CLI (p. 729).
The Performance Insights agent consumes limited CPU and memory on the DB host. When the DB load is
high, the agent limits the performance impact by collecting data less frequently.
Console
In the console, you can turn Performance Insights on or off when you create or modify a DB instance or
Multi-AZ DB cluster.
• To create a DB instance, follow the instructions for your DB engine in Creating an Amazon RDS DB
instance (p. 300).
• To create a Multi-AZ DB cluster, follow the instructions for your DB engine in Creating a Multi-AZ DB
cluster (p. 508).
If you choose Enable Performance Insights, you have the following options:
• Retention – The amount of time to retain Performance Insights data. The retention setting in the
free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months.
For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).
• AWS KMS key – Specify your AWS KMS key. Performance Insights encrypts all potentially sensitive
data using your KMS key. Data is encrypted in flight and at rest. For more information, see Configuring
an AWS KMS policy for Performance Insights (p. 736).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Databases.
3. Choose a DB instance or Multi-AZ DB cluster, and choose Modify.
4. In the Performance Insights section, choose either Enable Performance Insights or Disable
Performance Insights.
If you choose Enable Performance Insights, you have the following options:
• Retention – The amount of time to retain Performance Insights data. The retention setting in the
free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months.
For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).
• AWS KMS key – Specify your KMS key. Performance Insights encrypts all potentially sensitive data
using your KMS key. Data is encrypted in flight and at rest. For more information, see Encrypting
Amazon RDS resources (p. 2586).
5. Choose Continue.
6. For Scheduling of Modifications, choose Apply immediately. If you choose Apply during the next
scheduled maintenance window, your instance ignores this setting and turns on Performance
Insights immediately.
7. Choose Modify instance.
AWS CLI
When you use the create-db-instance AWS CLI command, turn on Performance Insights by specifying --enable-performance-insights. Or turn off Performance Insights by specifying --no-enable-performance-insights.
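A sketch of enabling Performance Insights at creation time follows. All identifiers, sizes, and credentials here are placeholder values; burstable classes such as db.t3.micro don't support Performance Insights, so a supported class is assumed.

```shell
# Create a MySQL DB instance with Performance Insights enabled.
# Identifier, class, storage, and credentials are placeholder values.
aws rds create-db-instance \
    --db-instance-identifier sample-db-instance \
    --db-instance-class db.m5.large \
    --engine mysql \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password choose_a_password \
    --enable-performance-insights
```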
You can also specify these values using the following AWS CLI commands:
• create-db-instance-read-replica
• modify-db-instance
• restore-db-instance-from-s3
• create-db-cluster (Multi-AZ DB cluster)
• modify-db-cluster (Multi-AZ DB cluster)
The following procedure describes how to turn Performance Insights on or off for an existing DB instance
using the AWS CLI.
• Call the modify-db-instance AWS CLI command and supply the following values:
  • --db-instance-identifier – The name of the DB instance
  • --enable-performance-insights
When you turn on Performance Insights in the CLI, you can optionally specify the number of days to
retain Performance Insights data with the --performance-insights-retention-period option.
You can specify 7, month * 31 (where month is a number from 1–23), or 731. For example, if you want to
retain your performance data for 3 months, specify 93, which is 3 * 31. The default is 7 days. For more
information about retention periods, see Pricing and data retention for Performance Insights (p. 726).
The following example turns on Performance Insights for sample-db-instance and specifies that
Performance Insights data is retained for 93 days (3 months).
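The example can be sketched as follows (Linux/macOS line continuations):

```shell
# Turn on Performance Insights with a 93-day (3-month) retention period
aws rds modify-db-instance \
    --db-instance-identifier sample-db-instance \
    --enable-performance-insights \
    --performance-insights-retention-period 93
```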
If you specify a retention period such as 94 days, which isn't a valid value, RDS issues an error.
RDS API
When you create a new DB instance using the CreateDBInstance Amazon RDS API operation,
turn on Performance Insights by setting EnablePerformanceInsights to True. To turn off
Performance Insights, set EnablePerformanceInsights to False.
You can also specify the EnablePerformanceInsights value using the following API operations:
• ModifyDBInstance
• CreateDBInstanceReadReplica
• RestoreDBInstanceFromS3
• CreateDBCluster (Multi-AZ DB cluster)
• ModifyDBCluster (Multi-AZ DB cluster)
When you turn on Performance Insights, you can optionally specify the amount of time, in days, to
retain Performance Insights data with the PerformanceInsightsRetentionPeriod parameter. You
can specify 7, month * 31 (where month is a number from 1–23), or 731. For example, if you want to
retain your performance data for 3 months, specify 93, which is 3 * 31. The default is 7 days. For more
information about retention periods, see Pricing and data retention for Performance Insights (p. 726).
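The valid-value rule can be sketched as a small check. This is a hypothetical helper for illustration, not part of the API.

```shell
# Valid retention periods: 7, month * 31 (month = 1..23), or 731 days.
is_valid_retention() {
  local p=$1
  if [ "$p" -eq 7 ] || [ "$p" -eq 731 ]; then
    return 0
  fi
  if [ $((p % 31)) -eq 0 ] && [ $((p / 31)) -ge 1 ] && [ $((p / 31)) -le 23 ]; then
    return 0
  fi
  return 1
}

is_valid_retention 93 && echo "93 is valid"    # 3 * 31
is_valid_retention 94 || echo "94 is invalid"  # not a multiple of 31
```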
Turning on the Performance Schema for MariaDB or MySQL
Topics
• Overview of the Performance Schema (p. 731)
• Performance Insights and the Performance Schema (p. 731)
• Automatic management of the Performance Schema by Performance Insights (p. 732)
• Effect of a reboot on the Performance Schema (p. 732)
• Determining whether Performance Insights is managing the Performance Schema (p. 733)
• Configuring the Performance Schema for automatic management (p. 733)
The Performance Schema monitors events such as the following:
• Function calls
• Waits for the operating system
• Stages of SQL execution
• Groups of SQL statements
For automatic management of the Performance Schema, the following conditions must be true:
If you change the performance_schema parameter value manually, and then later want to
change to automatic management, see Configuring the Performance Schema for automatic
management (p. 733).
Important
When Performance Insights turns on the Performance Schema, it doesn't change the parameter
group values. However, the values are changed on the DB instances that are running. The only
way to see the changed values is to run the SHOW GLOBAL VARIABLES command.
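For example, you can check the running value with the MySQL client. The endpoint and user name below are placeholder values.

```shell
# Check the performance_schema value on the running instance;
# the endpoint and user are placeholder values.
mysql -h sample-db-instance.123456789012.us-west-2.rds.amazonaws.com \
      -u admin -p \
      -e "SHOW GLOBAL VARIABLES LIKE 'performance_schema';"
```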
Performance Schema
To turn this feature on or off, you don't need to reboot the DB instance.
If the Performance Schema isn't currently turned on, and you turn on Performance Insights without
rebooting the DB instance, the Performance Schema won't be turned on.
performance_schema value    Source    Performance Insights manages the Performance Schema
0                           system    Yes
0 or 1                      user      No
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Parameter groups.
3. Select the name of the parameter group for your DB instance.
4. Enter performance_schema in the search bar.
5. Check whether Source is the system default and Values is 0. If so, Performance Insights is
managing the Performance Schema automatically. If not, Performance Insights isn't managing the
Performance Schema automatically.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Parameter groups.
3. Select the name of the parameter group for your DB instance or Multi-AZ DB cluster.
4. Enter performance_schema in the search bar.
5. Select the performance_schema parameter.
6. Choose Edit parameters.
7. Select the performance_schema parameter.
8. In Values, choose 0.
9. Choose Reset and then Reset parameters.
10. Reboot the DB instance or Multi-AZ DB cluster.
Important
Whenever you turn the Performance Schema on or off, make sure to reboot the DB instance
or Multi-AZ DB cluster.
For more information about modifying instance parameters, see Modifying parameters in a DB
parameter group (p. 352). For more information about the dashboard, see Analyzing metrics with
the Performance Insights dashboard (p. 738). For more information about the MySQL performance
schema, see MySQL 8.0 Reference Manual.
If you specified a customer managed key when you turned on Performance Insights, make sure that users
in your account have the kms:Decrypt and kms:GenerateDataKey permissions on the KMS key.
For more information, see AWS managed policy: AmazonRDSPerformanceInsightsReadOnly (p. 2630).
For more information, see AWS managed policy: AmazonRDSPerformanceInsightsFullAccess (p. 2630).
Performance Insights policies
You can grant access to Performance Insights by creating or modifying a user-managed IAM policy. When you attach the policy to an IAM permission set or role, the recipient can use Performance Insights.
You can now attach the policy to a permission set or role. The following procedure assumes that you
already have a user available for this purpose.
• Choose the AWS managed key.
Amazon RDS uses the AWS managed key for your new DB instance. Amazon RDS creates an AWS managed key for your AWS account. Your AWS account has a different AWS managed key for Amazon RDS for each AWS Region.
• Choose a customer managed key.
If you specify a customer managed key, users in your account that call the Performance Insights API
need the kms:Decrypt and kms:GenerateDataKey permissions on the KMS key. You can configure
these permissions through IAM policies. However, we recommend that you manage these permissions
through your KMS key policy. For more information, see Using key policies in AWS KMS.
Example
The following example shows how to add statements to your KMS key policy. These statements allow
access to Performance Insights. Depending on how you use the KMS key, you might want to change
some restrictions. Before adding statements to your policy, remove all comments.
{
    "Version" : "2012-10-17",
    "Id" : "your-policy",
    "Statement" : [ {
        //This represents a statement that currently exists in your policy.
    }
    ....,
    //Starting here, add new statement to your policy for Performance Insights.
    //We recommend that you add one new statement for every RDS instance
    {
        "Sid" : "Allow viewing RDS Performance Insights",
        "Effect": "Allow",
        "Principal": {
            "AWS": [
                //One or more principals allowed to access Performance Insights
                "arn:aws:iam::444455556666:role/Role1"
            ]
        },
        "Action": [
            "kms:Decrypt",
            "kms:GenerateDataKey"
        ],
        "Resource": "*",
        "Condition" : {
            "StringEquals" : {
                //Restrict access to only RDS APIs (including Performance Insights).
                //Replace region with your AWS Region.
                //For example, specify us-west-2.
                "kms:ViaService" : "rds.region.amazonaws.com"
            },
            "ForAnyValue:StringEquals": {
                //Restrict access to only data encrypted by Performance Insights.
                "kms:EncryptionContext:aws:pi:service": "rds",
                "kms:EncryptionContext:service": "pi"
            }
        }
    } ]
}
The Performance Insights API provides the following operations.
• DescribeDimensionKeys
• GetDimensionKeyDetails
• GetResourceMetadata
• GetResourceMetrics
• ListAvailableResourceDimensions
• ListAvailableResourceMetrics
You can use the following API requests to get sensitive data.
• DescribeDimensionKeys
• GetDimensionKeyDetails
• GetResourceMetrics
When you use the API to get sensitive data, Performance Insights uses the caller's credentials. This check ensures that access to sensitive data is limited to those with access to the KMS key.
When calling these APIs, you need permissions to call the API through the IAM policy and permissions to invoke the kms:Decrypt action through the AWS KMS key policy.
The GetResourceMetrics API can return both sensitive and non-sensitive data. The request
parameters determine whether the response should include sensitive data. The API returns sensitive data
when the request includes a sensitive dimension in either the filter or group-by parameters.
For more information about the dimensions that you can use with the GetResourceMetrics API, see
DimensionGroup.
Examples
The following example requests the sensitive data for the db.user group:
POST / HTTP/1.1
Host: <Hostname>
Accept-Encoding: identity
X-Amz-Target: PerformanceInsightsv20180227.GetResourceMetrics
Content-Type: application/x-amz-json-1.1
User-Agent: <UserAgentString>
X-Amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>,
Signature=<Signature>
Content-Length: <PayloadSizeBytes>
{
"ServiceType": "RDS",
"Identifier": "db-ABC1DEFGHIJKL2MNOPQRSTUV3W",
"MetricQueries": [
{
"Metric": "db.load.avg",
"GroupBy": {
"Group": "db.user",
"Limit": 2
}
}
],
"StartTime": 1693872000,
"EndTime": 1694044800,
"PeriodInSeconds": 86400
}
Example
The following example requests the non-sensitive data for the db.load.avg metric:
POST / HTTP/1.1
Host: <Hostname>
Accept-Encoding: identity
X-Amz-Target: PerformanceInsightsv20180227.GetResourceMetrics
Content-Type: application/x-amz-json-1.1
User-Agent: <UserAgentString>
X-Amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>,
Signature=<Signature>
Content-Length: <PayloadSizeBytes>
{
"ServiceType": "RDS",
"Identifier": "db-ABC1DEFGHIJKL2MNOPQRSTUV3W",
"MetricQueries": [
{
"Metric": "db.load.avg"
}
],
"StartTime": 1693872000,
"EndTime": 1694044800,
"PeriodInSeconds": 86400
}
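The same non-sensitive query can be sketched with the AWS CLI. The resource identifier and timestamps are taken from the request above.

```shell
# CLI equivalent of the non-sensitive db.load.avg request above
aws pi get-resource-metrics \
    --service-type RDS \
    --identifier db-ABC1DEFGHIJKL2MNOPQRSTUV3W \
    --metric-queries '[{"Metric": "db.load.avg"}]' \
    --start-time 1693872000 \
    --end-time 1694044800 \
    --period-in-seconds 86400
```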
Analyzing metrics with the Performance Insights dashboard
Topics
• Time range filter (p. 739)
• Counter metrics chart (p. 741)
• Database load chart (p. 743)
• Top dimensions table (p. 745)
You can select an absolute range with a beginning and ending date and time. The following example
shows the time range beginning at midnight on 4/11/22 and ending at 11:59 PM on 4/14/22.
The Counter metrics chart displays data for performance counters. The default metrics depend on the
DB engine:
To change the performance counters, choose Manage Metrics. You can select multiple OS metrics or
Database metrics, as shown in the following screenshot. To see details for any metric, hover over the
metric name.
For descriptions of the counter metrics that you can add for each DB engine, see Performance Insights
counter metrics (p. 814).
You can choose to display load as active sessions grouped by any supported dimensions. The following
table shows which dimensions are supported for the different engines.
Dimension      Oracle    MariaDB and MySQL    PostgreSQL    SQL Server
Plans          Yes       No                   No            No
Application    No        No                   Yes           No
To see details about a DB load item within a dimension, hover over the item name. The following image
shows details for a SQL statement.
To see details for any item for the selected time period in the legend, hover over that item.
Top session types – The type of the current session (PostgreSQL only)
To learn how to analyze queries by using the Top SQL tab, see Overview of the Top SQL tab (p. 756).
• Select the Performance Insights and CloudWatch metrics view (New) option and choose
Continue to view Performance Insights and CloudWatch metrics.
• Select the Performance Insights view option and choose Continue for the legacy monitoring
view. Then, continue with this procedure.
Note
This view will be discontinued on December 15, 2023.
For DB instances with Performance Insights turned on, you can also access the dashboard by
choosing the Sessions item in the list of DB instances. Under Current activity, the Sessions item
shows the database load in average active sessions over the last five minutes. The bar graphically
shows the load. When the bar is empty, the DB instance is idle. As the load increases, the bar fills
with blue. When the load passes the number of virtual CPUs (vCPUs) on the DB instance class, the
bar turns red, indicating a potential bottleneck.
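The idle/blue/red thresholds described above can be sketched as a simple function. This is an illustration of the thresholds only; the function and label names are hypothetical, and the actual console rendering is internal to RDS:

```python
def session_bar_state(avg_active_sessions: float, vcpus: int) -> str:
    """Classify DB load the way the Sessions bar does: empty when idle,
    blue while load fits within the vCPU count, red when it exceeds it."""
    if avg_active_sessions == 0:
        return "idle"                   # empty bar
    if avg_active_sessions <= vcpus:
        return "normal"                 # bar fills with blue
    return "potential bottleneck"       # bar turns red
```

For example, on a 4-vCPU instance class, a load of 6 average active sessions would put the bar in the red state.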
5. (Optional) Choose the date or time range in the upper right and specify a different relative or
absolute time interval. You can now specify a time period and generate a database performance
analysis report. The report provides the identified insights and recommendations. For more
information, see Analyzing database performance for a period of time (p. 750).
6. (Optional) To zoom in on a portion of the DB load chart, choose the start time and drag to the end
of the time period you want.
When you release the mouse, the DB load chart zooms in on the selected time range, and the Top
dimensions table is recalculated.
The Performance Insights dashboard automatically refreshes with new data. The refresh rate
depends on the amount of data displayed.
DB load grouped by waits and top SQL queries is the default Performance Insights dashboard view.
This combination typically provides the most insight into performance issues. DB load grouped by waits
shows if there are any resource or concurrency bottlenecks in the database. In this case, the SQL tab of
the top load items table shows which queries are driving that load.
1. Review the Database load chart and see if there are any incidents of database load exceeding the Max
CPU line.
2. If there is, look at the Database load chart and identify which wait state or states are primarily
responsible.
3. Identify the digest queries causing the load by seeing which queries on the SQL tab of the top
load items table contribute most to those wait states. You can identify these by the DB Load by
Wait column.
4. Choose one of these digest queries in the SQL tab to expand it and see the child queries that it is
composed of.
For example, in the dashboard following, log file sync waits account for most of the DB load. The LGWR
all worker groups wait is also high. The Top SQL chart shows what is causing the log file sync waits:
frequent COMMIT statements. In this case, committing less frequently will reduce DB load.
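The "commit less frequently" remedy usually means batching work into fewer, larger transactions, so each log flush covers many rows instead of one. A minimal sketch using Python's DB-API with SQLite standing in for any engine (the table and row data are hypothetical):

```python
import sqlite3  # stands in for any DB-API driver

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")

# Instead of committing after every row (one log flush per row)...
# ...batch many rows into a single transaction and commit once.
rows = [(i,) for i in range(1000)]
conn.executemany("INSERT INTO orders (id) VALUES (?)", rows)
conn.commit()   # one commit -> far fewer log-file-sync waits

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
# count == 1000
```

The same pattern applies to an Oracle client: move the COMMIT outside the per-row loop.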
To use this feature, you must be using the paid tier retention period. For more information, see Pricing
and data retention for Performance Insights (p. 726).
The report is available to select and view in the Performance analysis reports - new tab. It
contains the insights, related metrics, and recommendations to resolve the performance issue. The
report is available to view for the duration of the Performance Insights retention period.
The report is deleted if the start time of the report analysis period is outside of the retention period. You
can also delete the report before the retention period ends.
To detect the performance issues and generate the analysis report for your DB instance, you must turn
on Performance Insights. For more information about turning on Performance Insights, see Turning
Performance Insights on and off (p. 727).
For the region, DB engine, and instance class support information for this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725).
The analysis period can range from 5 minutes to 6 days. There must be at least 24 hours of performance
data before the analysis start time.
The fields to set the time period and add one or more tags to the performance analysis report are
displayed.
5. Choose the time period. If you set a time period in the Relative range or Absolute range in the
upper right, you can only enter or select an analysis report date and time within that time period.
If you select an analysis period outside of it, an error message is displayed.
The Performance analysis period box displays the selected time period, and the DB load chart
highlights it.
• Choose the Start date, Start time, End date, and End time in the Performance analysis period
box.
6. (Optional) Enter a Key and, optionally, a Value to add a tag for the report.
A banner displays a message indicating whether the report generation succeeded or failed. The
message also provides a link to view the report.
The following example shows the banner with the report creation successful message.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance for which you want to view the analysis report.
All the analysis reports for the different time periods are displayed.
5. Choose the ID of the report that you want to view.
If the report identified more than one insight, the DB load chart displays the entire analysis
period by default. If the report identified only one insight, the DB load chart displays that
insight by default.
The dashboard also lists the tags for the report in the Tags section.
The following example shows the entire analysis period for the report.
6. If more than one insight is identified in the report, choose the insight that you want to view
in the Database load insights list.
The dashboard displays the insight message, DB load chart highlighting the time period of the
insight, analysis and recommendations, and the list of report tags.
You need permissions to add the tags. For more information about the access policies for Performance
Insights, see Configuring access policies for Performance Insights (p. 734).
To add one or more tags while creating a report, see step 6 in the procedure Analyzing database
performance for a period of time (p. 751).
The following example provides the option to add a new tag for the selected report.
The list of tags for the report is displayed in the Tags section on the dashboard. If you want to
remove a tag from the report, choose Remove next to the tag.
To delete a report
6. (Optional) Choose the ID of the report that you want to delete.
A confirmation window is displayed. The report is deleted after you choose Confirm.
A confirmation window is displayed. The report is deleted after you choose Confirm.
Topics
• Overview of the Top SQL tab (p. 756)
• Accessing more SQL text in the Performance Insights dashboard (p. 761)
• Viewing SQL statistics in the Performance Insights dashboard (p. 763)
Topics
• SQL text (p. 756)
• SQL statistics (p. 757)
• Load by waits (AAS) (p. 758)
• SQL information (p. 759)
• Preferences (p. 759)
SQL text
By default, each row in the Top SQL table shows 500 bytes of text for each statement.
To learn how to see more than the default 500 bytes of SQL text, see Accessing more SQL text in the
Performance Insights dashboard (p. 761).
A SQL digest is a composite of multiple actual queries that are structurally similar but might have
different literal values. The digest replaces hardcoded values with a question mark. For example, a
digest might be SELECT * FROM emp WHERE lname= ?. This digest might include child queries that
differ only in the literal value, such as SELECT * FROM emp WHERE lname = 'Smith'.
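The grouping behavior of a digest can be illustrated with a naive normalizer. The regular expressions below are an assumption for illustration only; real engines tokenize with the SQL parser, not regexes, and the sample statements are hypothetical:

```python
import re

def to_digest(sql: str) -> str:
    """Replace string and numeric literals with '?' (simplified sketch;
    real engines derive the digest from the parsed statement)."""
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals
    return sql

queries = [
    "SELECT * FROM emp WHERE lname = 'Smith'",
    "SELECT * FROM emp WHERE lname = 'Jones'",
]
# Both statements collapse to the same digest form.
digests = {to_digest(q) for q in queries}
```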
To see the literal SQL statements in a digest, select the query, and then choose the plus symbol (+). In
the following example, the selected query is a digest.
Note
A SQL digest groups similar SQL statements, but doesn't redact sensitive information.
Performance Insights can show Oracle SQL text as Unknown. The text has this status in the following
situations:
• An Oracle database user other than SYS is active but not currently executing SQL. For example, when
a parallel query completes, the query coordinator waits for helper processes to send their session
statistics. For the duration of the wait, the query text shows Unknown.
• For an RDS for Oracle instance on Standard Edition 2, Oracle Resource Manager limits the number of
parallel threads. The background process doing this work causes the query text to show as Unknown.
SQL statistics
SQL statistics are performance-related metrics about SQL queries. For example, Performance Insights
might show executions per second or rows processed per second. Performance Insights collects statistics
for only the most common queries. Typically, these match the top queries by load shown in the
Performance Insights dashboard.
Every line in the Top SQL table shows relevant statistics for the SQL statement or digest, as shown in the
following example.
Performance Insights can report 0.00 and - (unknown) for SQL statistics. This situation occurs under the
following conditions:
• Only one sample exists. For example, Performance Insights calculates rates of change for RDS
PostgreSQL queries based on multiple samples from the pg_stat_statements view. When a
workload runs for a short time, Performance Insights might collect only one sample, which means that
it can't calculate a rate of change. The unknown value is represented with a dash (-).
• Two samples have the same values. Performance Insights can't calculate a rate of change because no
change has occurred, so it reports the rate as 0.00.
• An RDS PostgreSQL statement lacks a valid identifier. PostgreSQL creates an identifier for a statement
only after parsing and analysis. Thus, a statement can exist in the PostgreSQL internal in-memory
structures with no identifier. Because Performance Insights samples internal in-memory structures
once per second, low-latency queries might appear for only a single sample. If the query identifier isn't
available for this sample, Performance Insights can't associate this statement with its statistics. The
unknown value is represented with a dash (-).
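The first two conditions can be illustrated with a small rate calculation. The function is hypothetical; Performance Insights performs this computation internally:

```python
def rate_of_change(samples):
    """Compute a per-second rate from (timestamp, counter) samples,
    mirroring the conditions above (illustrative only)."""
    if len(samples) < 2:
        return "-"      # only one sample: rate unknown, shown as a dash
    (t0, v0), (t1, v1) = samples[-2], samples[-1]
    if v1 == v0:
        return 0.0      # two samples with the same value: rate is 0.00
    return (v1 - v0) / (t1 - t0)

print(rate_of_change([(0, 100)]))            # one sample
print(rate_of_change([(0, 100), (1, 100)]))  # no change
print(rate_of_change([(0, 100), (2, 160)]))  # 60 more calls over 2 seconds
```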
For a description of the SQL statistics for the Amazon RDS engines, see SQL statistics for Performance
Insights (p. 830).
In Top SQL, the Load by waits (AAS) column illustrates the percentage of the database load associated
with each top load item. This column reflects the load for that item by whatever grouping is currently
selected in the DB Load Chart. For more information about Average active sessions (AAS), see Average
active sessions (p. 721).
For example, you might group the DB load chart by wait states. You examine SQL queries in the top load
items table. In this case, the DB Load by Waits bar is sized, segmented, and color-coded to show how
much of a given wait state that query is contributing to. It also shows which wait states are affecting the
selected query.
SQL information
In the Top SQL table, you can open a statement to view its information. The information appears in the
bottom pane.
The following types of identifiers (IDs) are associated with SQL statements:
• Support SQL ID – A hash value of the SQL ID. This value is only for referencing a SQL ID when you are
working with AWS Support. AWS Support doesn't have access to your actual SQL IDs and SQL text.
• Support Digest ID – A hash value of the digest ID. This value is only for referencing a digest ID when
you are working with AWS Support. AWS Support doesn't have access to your actual digest IDs and
SQL text.
Preferences
You can control the statistics displayed in the Top SQL tab by choosing the Preferences icon.
When you choose the Preferences icon, the Preferences window opens. The following screenshot is an
example of the Preferences window.
To enable the statistics that you want to appear in the Top SQL tab, select them, scroll to the
bottom of the window, and then choose Continue.
For more information about per-second or per-call statistics for the Amazon RDS engines, see the
engine-specific SQL statistics section in SQL statistics for Performance Insights (p. 830).
When a SQL statement exceeds 500 bytes, you can view more text in the SQL text section below the
Top SQL table. In this case, the maximum length for the text displayed in SQL text is 4 KB. The
console introduces this limit; the text is also subject to the limits set by the database engine. To save the text shown
in SQL text, choose Download.
Topics
• Text size limits for Amazon RDS engines (p. 761)
• Setting the SQL text limit for Amazon RDS for PostgreSQL DB instances (p. 761)
• Viewing and downloading SQL text in the Performance Insights dashboard (p. 762)
When you download SQL text, the database engine determines its maximum length. You can download
SQL text up to the following per-engine limits.
The SQL text section of the Performance Insights console displays up to the maximum that the engine
returns. For example, if MySQL returns at most 1 KB to Performance Insights, it can only collect and
show 1 KB, even if the original query is larger. Thus, when you view the query in SQL text or download it,
Performance Insights returns the same number of bytes.
If you use the AWS CLI or API, Performance Insights doesn't have the 4 KB limit enforced by the
console. DescribeDimensionKeys and GetResourceMetrics return at most 500 bytes.
GetDimensionKeyDetails returns the full query, but the size is subject to the engine limit.
Setting the SQL text limit for Amazon RDS for PostgreSQL DB instances
Amazon RDS for PostgreSQL handles text differently. You can set the text size limit with the DB instance
parameter track_activity_query_size. This parameter has the following characteristics:
On Amazon RDS for PostgreSQL version 9.6, the default setting for the
track_activity_query_size parameter is 1,024 bytes. On Amazon RDS for PostgreSQL version
10 or higher, the default is 4,096 bytes.
Maximum text size
The limit for track_activity_query_size is 102,400 bytes for Amazon RDS for PostgreSQL
version 12 and lower. The maximum is 1 MB for version 13 and higher.
If the engine returns 1 MB to Performance Insights, the console displays only the first 4 KB. If you
download the query, you get the full 1 MB. In this case, viewing and downloading return different
numbers of bytes. For more information about the track_activity_query_size DB instance
parameter, see Run-time Statistics in the PostgreSQL documentation.
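The viewing-versus-downloading difference can be sketched as two truncation limits. The constants mirror the limits described above; the function names are illustrative:

```python
ENGINE_RETURN_LIMIT = 1_048_576   # e.g., 1 MB for RDS for PostgreSQL 13+
CONSOLE_DISPLAY_LIMIT = 4_096     # the console shows at most 4 KB

def console_view(sql_text: str) -> str:
    """What the SQL text section displays (illustrative sketch)."""
    return sql_text[:CONSOLE_DISPLAY_LIMIT]

def download(sql_text: str) -> str:
    """What Download returns: everything the engine gave Performance
    Insights, up to the engine limit (illustrative sketch)."""
    return sql_text[:ENGINE_RETURN_LIMIT]

big_query = "x" * 10_000  # a 10 KB statement
# console_view truncates to 4 KB; download keeps all 10 KB.
```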
To increase the SQL text size, increase the track_activity_query_size limit. To modify the
parameter, change the parameter setting in the parameter group that is associated with the Amazon RDS
for PostgreSQL DB instance.
To change the setting when the instance uses the default parameter group
1. Create a new DB instance parameter group for the appropriate DB engine and DB engine version.
2. Set the parameter in the new parameter group.
3. Associate the new parameter group with the DB instance.
For information about setting a DB instance parameter, see Modifying parameters in a DB parameter
group (p. 352).
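Step 2 of the procedure can also be performed programmatically. The following is a hedged sketch of building the parameter entry that you would pass to boto3's rds.modify_db_parameter_group (or the equivalent aws rds modify-db-parameter-group CLI command); the helper function and group name are illustrative, and the actual call requires credentials and an existing parameter group, so it is not shown:

```python
def build_param_update(name: str, value: str) -> dict:
    """Build one entry for the Parameters list of
    modify-db-parameter-group (helper name is hypothetical)."""
    return {
        "ParameterName": name,
        "ParameterValue": value,
        # track_activity_query_size can only be set at server start,
        # so the change applies at the next reboot.
        "ApplyMethod": "pending-reboot",
    }

# 102,400 bytes: the maximum for RDS for PostgreSQL version 12 and lower.
update = build_param_update("track_activity_query_size", "102400")
```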
In the Performance Insights dashboard, you can view or download SQL text.
SQL statements with text larger than 500 bytes look similar to the following image.
The Performance Insights dashboard can display up to 4,096 bytes for each SQL statement.
7. (Optional) Choose Copy to copy the displayed SQL statement, or choose Download to download the
SQL statement to view the SQL text up to the DB engine limit.
Note
To copy or download the SQL statement, disable pop-up blockers.
6. Choose which statistics to display by choosing the gear icon in the upper-right corner of the
chart. For descriptions of the SQL statistics for the Amazon RDS engines, see SQL statistics for
Performance Insights (p. 830).
The following example shows the statistics preferences for Oracle DB instances.
The following example shows the preferences for MariaDB and MySQL DB instances.
With the plan feature of Performance Insights, you can do the following:
• Find out which plans are used by the top SQL queries.
For example, you might find out that most of the DB load is generated by queries using plan A and
plan B, with only a small percentage using plan C.
• Compare different plans for the same query.
In the preceding example, three queries are identical except for the product ID. Two queries use plan A,
but one query uses plan B. To see the difference between the two plans, you can use Performance Insights.
• Find out when a query switched to a new plan.
You might see that a query used plan A and then switched to plan B at a certain time. Was there a
change in the database at this point? For example, if a table is empty, the optimizer might choose a
full table scan. If the table is loaded with a million rows, the optimizer might switch to an index range
scan.
• Drill down to the specific steps of a plan with the highest cost.
For example, the plan for a long-running query might show a missing join condition in an equijoin.
This missing condition forces a Cartesian join, which joins all rows of the two tables.
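The effect of a missing join condition can be demonstrated with two list comprehensions standing in for the join. The toy data is hypothetical and purely illustrative:

```python
orders = [(1, "order-A"), (2, "order-B"), (3, "order-C")]
customers = [(1, "Ann"), (2, "Bob"), (3, "Cal")]

# Equijoin: rows pair up on the join key (the customer id).
equijoin = [(o, c) for o in orders for c in customers if o[0] == c[0]]

# Missing join condition: every row pairs with every row -- a
# Cartesian join of the two tables (rows multiply: 3 x 3 = 9).
cartesian = [(o, c) for o in orders for c in customers]
```

On production-sized tables, that multiplication is why a dropped join condition can dominate the DB load.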
You can perform the preceding tasks by using the plan capture feature of Performance Insights. Just as
you can slice Oracle queries by wait events and top SQL, you can slice them by the plan dimension.
For the region, DB engine, and instance class support information for this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725).
The Average active sessions chart shows the plans used by your top SQL statements. The plan hash
values appear to the right of the color-coded squares. Each hash value uniquely identifies a plan.
In the following example, the top SQL digest has two plans. You can tell that it's a digest by the
question mark in the statement.
In the following example, the SELECT statement is a digest query. The component queries in the
digest use two different plans. The colors of the plans correspond to the database load chart. The
total number of plans in the digest is shown in the second column.
7. Scroll down and choose two Plans to compare from Plans for digest query list.
You can view either one or two plans for a query at a time. The following screenshot compares the
two plans in the digest, with hash 2032253151 and hash 1117438016. In the following example,
62% of the average active sessions running this digest query are using the plan on the left, whereas
38% are using the plan on the right.
In this example, the plans differ in an important way. Step 2 in plan 2032253151 uses an index scan,
whereas plan 1117438016 uses a full table scan. For a table with a large number of rows, a query of
a single row is almost always faster with an index scan.
8. (Optional) Choose Copy to copy the plan to the clipboard, or Download to save the plan to your
hard drive.
Performance Insights offers a domain-specific view of database load measured as average active
sessions (AAS). This metric appears to API consumers as a two-dimensional time-series dataset. The time
dimension of the data provides DB load data for each time point in the queried time range. Each time
point decomposes overall load in relation to the requested dimensions, such as SQL, Wait-event, User,
or Host, measured at that time point.
Amazon RDS Performance Insights monitors your Amazon RDS DB instance so that you can analyze
and troubleshoot database performance. One way to view Performance Insights data is in the AWS
Management Console. Performance Insights also provides a public API so that you can query your own
data. You can use the API to retrieve the same data that the console displays and to manage
performance analysis reports.
To use the Performance Insights API, enable Performance Insights on one of your Amazon RDS DB
instances. For information about enabling Performance Insights, see Turning Performance Insights
on and off (p. 727). For more information about the Performance Insights API, see the Amazon RDS
Performance Insights API Reference.
CreatePerformanceAnalysisReport
aws pi create-performance-analysis-report – Creates a performance analysis report for a specific
time period for the DB instance. The result is AnalysisReportId, which is the unique identifier
of the report.
DeletePerformanceAnalysisReport
aws pi delete-performance-analysis-report – Deletes a performance analysis report.
ListAvailableResourceDimensions
aws pi list-available-resource-dimensions – Retrieves the dimensions that can be queried for each
specified metric type on a specified instance.
Topics
• AWS CLI for Performance Insights (p. 771)
• Retrieving time-series metrics (p. 771)
• AWS CLI examples for Performance Insights (p. 773)
aws pi help
If you don't have the AWS CLI installed, see Installing the AWS Command Line Interface in the AWS CLI
User Guide for information about installing it.
For example, the AWS Management Console uses GetResourceMetrics to populate the Counter
Metrics chart and the Database Load chart, as seen in the following image.
All metrics returned by GetResourceMetrics are standard time-series metrics, with the exception of
db.load. This metric is displayed in the Database Load chart. The db.load metric is different from
the other time-series metrics because you can break it into subcomponents called dimensions. In the
previous image, db.load is broken down and grouped by the wait states that make up the db.load.
Note
GetResourceMetrics can also return the db.sampleload metric, but the db.load metric is
appropriate in most cases.
For information about the counter metrics returned by GetResourceMetrics, see Performance
Insights counter metrics (p. 814).
• Average – The average value for the metric over a period of time. Append .avg to the metric name.
• Minimum – The minimum value for the metric over a period of time. Append .min to the metric name.
• Maximum – The maximum value for the metric over a period of time. Append .max to the metric
name.
• Sum – The sum of the metric values over a period of time. Append .sum to the metric name.
• Sample count – The number of times the metric was collected over a period of time. Append
.sample_count to the metric name.
For example, assume that a metric is collected for 300 seconds (5 minutes), and that the metric is
collected one time each minute. The values for each minute are 1, 2, 3, 4, and 5. In this case, the
following calculations are returned:
• Average – 3
• Minimum – 1
• Maximum – 5
• Sum – 15
• Sample count – 5
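You can reproduce this arithmetic directly; the dictionary keys below match the suffixes listed above:

```python
values = [1, 2, 3, 4, 5]  # one sample per minute over 300 seconds

stats = {
    "avg": sum(values) / len(values),   # .avg suffix
    "min": min(values),                 # .min suffix
    "max": max(values),                 # .max suffix
    "sum": sum(values),                 # .sum suffix
    "sample_count": len(values),        # .sample_count suffix
}
# stats == {"avg": 3.0, "min": 1, "max": 5, "sum": 15, "sample_count": 5}
```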
For information about using the get-resource-metrics AWS CLI command, see get-resource-
metrics.
For the --metric-queries option, specify one or more queries that you want to get results for. Each
query consists of a mandatory Metric and optional GroupBy and Filter parameters. The following is
an example of a --metric-queries option specification.
{
"Metric": "string",
"GroupBy": {
"Group": "string",
"Dimensions": ["string", ...],
"Limit": integer
},
"Filter": {"string": "string", ...}
}
Topics
• Retrieving counter metrics (p. 773)
• Retrieving the DB load average for top wait events (p. 776)
• Retrieving the DB load average for top SQL (p. 777)
• Retrieving the DB load average filtered by SQL (p. 780)
• Retrieving the full text of a SQL statement (p. 783)
• Creating a performance analysis report for a time period (p. 783)
• Retrieving a performance analysis report (p. 784)
• Listing all the performance analysis reports for the DB instance (p. 784)
• Deleting a performance analysis report (p. 785)
• Adding a tag to a performance analysis report (p. 785)
• Listing all the tags for a performance analysis report (p. 785)
• Deleting tags from a performance analysis report (p. 786)
The following example shows how to gather the same data that the AWS Management Console uses to
generate the two counter metric charts.
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries '[{"Metric": "os.cpuUtilization.user.avg" },
{"Metric": "os.cpuUtilization.idle.avg"}]'
For Windows:
aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries '[{"Metric": "os.cpuUtilization.user.avg" },
{"Metric": "os.cpuUtilization.idle.avg"}]'
You can also make a command easier to read by specifying a file for the --metric-queries option. The
following example uses a file called query.json for the option. The file has the following contents.
[
{
"Metric": "os.cpuUtilization.user.avg"
},
{
"Metric": "os.cpuUtilization.idle.avg"
}
]
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json
For Windows:
aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json
The preceding example specifies the following values for the options:
The metric name uses dots to classify the metric in a useful category, with the final element being
a function. In the example, the function is avg for each query. As with Amazon CloudWatch, the
supported functions are min, max, total, and avg.
{
"Identifier": "db-XXX",
"AlignedStartTime": 1540857600.0,
"AlignedEndTime": 1540861200.0,
"MetricList": [
{ //A list of key/datapoints
"Key": {
"Metric": "os.cpuUtilization.user.avg" //Metric1
},
"DataPoints": [
//Each list of datapoints has the same timestamps and same number of items
{
"Timestamp": 1540857660.0, //Minute1
"Value": 4.0
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 4.0
},
{
"Timestamp": 1540857780.0, //Minute 3
"Value": 10.0
}
//... 60 datapoints for the os.cpuUtilization.user.avg metric
]
},
{
"Key": {
"Metric": "os.cpuUtilization.idle.avg" //Metric2
},
"DataPoints": [
{
"Timestamp": 1540857660.0, //Minute1
"Value": 12.0
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 13.5
},
//... 60 datapoints for the os.cpuUtilization.idle.avg metric
]
}
] //end of MetricList
} //end of response
The MetricList in the response has a number of entries, each with a Key and a DataPoints entry.
Each DataPoint has a Timestamp and a Value. Each Datapoints list has 60 data points because the
queries are for per-minute data over an hour, with Timestamp1/Minute1, Timestamp2/Minute2, and
so on, up to Timestamp60/Minute60.
Because the query is for two different counter metrics, there are two elements in the response
MetricList.
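The structure described above can be walked with a few lines of code. The following sketch uses a trimmed payload shaped like a GetResourceMetrics response, with the three datapoints taken from the example output; error handling is omitted:

```python
# Trimmed payload shaped like a GetResourceMetrics response.
response = {
    "Identifier": "db-XXX",
    "MetricList": [
        {
            "Key": {"Metric": "os.cpuUtilization.user.avg"},
            "DataPoints": [
                {"Timestamp": 1540857660.0, "Value": 4.0},
                {"Timestamp": 1540857720.0, "Value": 4.0},
                {"Timestamp": 1540857780.0, "Value": 10.0},
            ],
        },
    ],
}

# Average each metric's datapoints over the returned period.
averages = {
    entry["Key"]["Metric"]:
        sum(p["Value"] for p in entry["DataPoints"]) / len(entry["DataPoints"])
    for entry in response["MetricList"]
}
```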
[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.wait_event", "Limit": 7 }
}
]
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json
For Windows:
aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json
The example specifies the metric of db.load.avg and a GroupBy of the top seven wait events.
For details about valid values for this example, see DimensionGroup in the Performance Insights API
Reference.
{
"Identifier": "db-XXX",
"AlignedStartTime": 1540857600.0,
"AlignedEndTime": 1540861200.0,
"MetricList": [
{ //A list of key/datapoints
"Key": {
//A Metric with no dimensions. This is the total db.load.avg
"Metric": "db.load.avg"
},
"DataPoints": [
//Each list of datapoints has the same timestamps and same number of items
{
"Timestamp": 1540857660.0, //Minute1
"Value": 0.5166666666666667
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 0.38333333333333336
},
{
"Timestamp": 1540857780.0, //Minute 3
"Value": 0.26666666666666666
}
//... 60 datapoints for the total db.load.avg key
]
},
{
"Key": {
//Another key. This is db.load.avg broken down by CPU
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.name": "CPU",
"db.wait_event.type": "CPU"
}
},
"DataPoints": [
{
"Timestamp": 1540857660.0, //Minute1
"Value": 0.35
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 0.15
},
//... 60 datapoints for the CPU key
]
},
//... In total we have 8 key/datapoints entries, 1) total, 2-8) Top Wait Events
] //end of MetricList
} //end of response
In this response, there are eight entries in the MetricList. There is one entry for the total
db.load.avg, and seven entries each for the db.load.avg divided according to one of the top seven
wait events. Unlike in the first example, because there was a grouping dimension, there must be one
key for each grouping of the metric. There can't be only one key for each metric, as in the basic counter
metric use case.
• db.sql – The full SQL statement, such as select * from customers where customer_id =
123
• db.sql_tokenized – The tokenized SQL statement, such as select * from customers where
customer_id = ?
When analyzing database performance, it can be useful to treat SQL statements that differ only in
their parameters as one logical item. In that case, you can use db.sql_tokenized when querying. However,
especially when you're interested in explain plans, sometimes it's more useful to examine full SQL
statements with parameters, and to query grouping by db.sql. There is a parent-child relationship
between tokenized and full SQL, with multiple full SQL statements (children) grouped under the same
tokenized SQL (parent).
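The parent-child relationship can be illustrated with a toy tokenizer. This regex-based sketch illustrates only the concept; it is not the actual algorithm that Performance Insights uses:

```python
import re

def tokenize(sql):
    """Toy SQL tokenizer: replace numeric and quoted literals with '?'.
    Illustrative only -- not the Performance Insights algorithm."""
    sql = re.sub(r"'[^']*'", "?", sql)  # quoted string literals
    sql = re.sub(r"\b\d+\b", "?", sql)  # numeric literals
    return sql

# Two full SQL statements (children) map to one tokenized parent.
a = tokenize("select * from customers where customer_id = 123")
b = tokenize("select * from customers where customer_id = 456")
print(a)  # select * from customers where customer_id = ?
```

Because `a` and `b` tokenize identically, their load would be aggregated under one db.sql_tokenized key, while db.sql would keep them as two separate children.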
The command in this example is similar to the command in Retrieving the DB load average for top
wait events (p. 776). However, the query.json file has the following contents.
[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.sql_tokenized", "Limit": 10 }
}
]
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-29T00:00:00Z \
--end-time 2018-10-30T00:00:00Z \
--period-in-seconds 3600 \
--metric-queries file://query.json
For Windows:
aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-29T00:00:00Z ^
--end-time 2018-10-30T00:00:00Z ^
--period-in-seconds 3600 ^
--metric-queries file://query.json
The example specifies the db.load.avg metric and a GroupBy of the top 10 tokenized SQL statements.
For details about valid values for this example, see DimensionGroup in the Performance Insights API
Reference.
{
"AlignedStartTime": 1540771200.0,
"AlignedEndTime": 1540857600.0,
"Identifier": "db-XXX",
This response has 11 entries in the MetricList (1 total, 10 top tokenized SQL), with each entry having
24 per-hour DataPoints.
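These counts follow directly from the request parameters: 24 hours between --start-time and --end-time at a 3600-second period yields 24 datapoints per series, and a Limit of 10 plus the total entry yields 11 entries. A quick arithmetic check:

```python
from datetime import datetime, timezone

# Request parameters from the get-resource-metrics command above.
start = datetime(2018, 10, 29, tzinfo=timezone.utc)
end = datetime(2018, 10, 30, tzinfo=timezone.utc)
period_seconds = 3600

datapoints_per_series = int((end - start).total_seconds()) // period_seconds
entries = 1 + 10  # 1 total db.load.avg + a Limit of 10 tokenized statements
print(datapoints_per_series, entries)  # 24 11
```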
For tokenized SQL, there are three entries in each dimensions list:
In the AWS Management Console, this ID is called the Support ID. It's named this because the ID is
data that AWS Support can examine to help you troubleshoot an issue with your database. AWS takes
the security and privacy of your data extremely seriously, and almost all data is stored encrypted with
your AWS KMS customer master key (CMK). Therefore, nobody inside AWS can look at this data. In
the example preceding, both the tokenized.statement and the tokenized.db_id are stored
encrypted. If you have an issue with your database, AWS Support can help you by referencing the
Support ID.
When querying, it might be convenient to specify a Group in GroupBy. However, for finer-grained
control over the data that's returned, specify the list of dimensions. For example, if all that is needed is
the db.sql_tokenized.statement, then a Dimensions attribute can be added to the query.json file.
[
{
"Metric": "db.load.avg",
"GroupBy": {
"Group": "db.sql_tokenized",
"Dimensions":["db.sql_tokenized.statement"],
"Limit": 10
}
}
]
The preceding image shows that a particular query is selected, and the top average active sessions
stacked area line graph is scoped to that query. Although the query is still for the top seven overall wait
events, the values in the response are filtered to include only the sessions that match the particular
filter.
The corresponding API query in this example is similar to the command in Retrieving the DB load average
for top SQL (p. 777). However, the query.json file has the following contents.
[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.wait_event", "Limit": 5 },
"Filter": { "db.sql_tokenized.id": "AKIAIOSFODNN7EXAMPLE" }
}
]
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json
For Windows:
aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json
{
"Identifier": "db-XXX",
"AlignedStartTime": 1556215200.0,
"MetricList": [
{
"Key": {
"Metric": "db.load.avg"
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 1.4878117913832196
},
{
"Timestamp": 1556222400.0,
"Value": 1.192823803967328
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "io",
"db.wait_event.name": "wait/io/aurora_redo_log_flush"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 1.1360544217687074
},
{
"Timestamp": 1556222400.0,
"Value": 1.058051341890315
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "io",
"db.wait_event.name": "wait/io/table/sql/handler"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.16241496598639457
},
{
"Timestamp": 1556222400.0,
"Value": 0.05163360560093349
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "synch",
"db.wait_event.name": "wait/synch/mutex/innodb/aurora_lock_thread_slot_futex"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.11479591836734694
},
{
"Timestamp": 1556222400.0,
"Value": 0.013127187864644107
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "CPU",
"db.wait_event.name": "CPU"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.05215419501133787
},
{
"Timestamp": 1556222400.0,
"Value": 0.05805134189031505
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "synch",
"db.wait_event.name": "wait/synch/mutex/innodb/lock_wait_mutex"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.017573696145124718
},
{
"Timestamp": 1556222400.0,
"Value": 0.002333722287047841
}
]
}
],
"AlignedEndTime": 1556222400.0
} //end of response
In this response, all values are filtered according to the contribution of tokenized SQL
AKIAIOSFODNN7EXAMPLE specified in the query.json file. The keys might also appear in a different order
than in a query without a filter, because they represent the top five wait events that affected the
filtered SQL.
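One useful property of these grouped responses: at any timestamp, the dimension values can't exceed the total db.load.avg, because each grouped series is a share of the total load. Checking this with the first-timestamp values from the response above:

```python
# Values copied from the Timestamp 1556218800.0 datapoints in the
# preceding response: the total, then the five filtered wait events.
total = 1.4878117913832196
components = [
    1.1360544217687074,    # wait/io/aurora_redo_log_flush
    0.16241496598639457,   # wait/io/table/sql/handler
    0.11479591836734694,   # wait/synch/.../aurora_lock_thread_slot_futex
    0.05215419501133787,   # CPU
    0.017573696145124718,  # wait/synch/mutex/innodb/lock_wait_mutex
]

# The top-N breakdown accounts for most, but not necessarily all, of
# the total; any remainder is load outside the listed wait events.
print(sum(components) <= total)  # True
```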
aws pi get-dimension-key-details \
--service-type RDS \
--identifier db-10BCD2EFGHIJ3KL4M5NO6PQRS5 \
--group db.sql \
--group-identifier my-sql-id \
--requested-dimensions statement
For Windows:
aws pi get-dimension-key-details ^
--service-type RDS ^
--identifier db-10BCD2EFGHIJ3KL4M5NO6PQRS5 ^
--group db.sql ^
--group-identifier my-sql-id ^
--requested-dimensions statement
In this example, the dimension details are available, so Performance Insights retrieves the full text of
the SQL statement without truncating it.
{
"Dimensions":[
{
"Value": "SELECT e.last_name, d.department_name FROM employees e, departments d
WHERE e.department_id=d.department_id",
"Dimension": "db.sql.statement",
"Status": "AVAILABLE"
},
...
]
}
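Because the GetDimensionKeyDetails response reports a Status for each dimension, a caller can keep only the entries whose text is ready to read. A sketch, using a hypothetical second entry that is still processing:

```python
# Sketch: extract only the dimension values that are ready to read.
# GetDimensionKeyDetails reports a Status per dimension; "AVAILABLE"
# means the full text has been retrieved.
response = {
    "Dimensions": [
        {"Value": "SELECT e.last_name, d.department_name FROM employees e,"
                  " departments d WHERE e.department_id=d.department_id",
         "Dimension": "db.sql.statement",
         "Status": "AVAILABLE"},
        {"Value": None, "Dimension": "db.sql.statement",
         "Status": "PROCESSING"},  # hypothetical not-yet-ready entry
    ]
}

ready = [d["Value"] for d in response["Dimensions"]
         if d["Status"] == "AVAILABLE"]
print(len(ready))  # 1
```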
{
"AnalysisReportId": "report-0234d3ed98e28fb17"
}
The response provides the report status, ID, time details, and insights.
{
"AnalysisReport": {
"Status": "Succeeded",
"ServiceType": "RDS",
"Identifier": "db-loadtest-0",
"StartTime": 1680583486.584,
"AnalysisReportId": "report-0d99cc91c4422ee61",
"EndTime": 1680587086.584,
"CreateTime": 1680587087.139,
"Insights": [
... (Condensed for space)
]
}
}
The response lists all the reports with the report ID, status, and time period details.
{
"AnalysisReports": [
{
"Status": "Succeeded",
"EndTime": 1680587086.584,
"CreationTime": 1680587087.139,
"StartTime": 1680583486.584,
"AnalysisReportId": "report-0d99cc91c4422ee61"
},
{
"Status": "Succeeded",
"EndTime": 1681491137.914,
"CreationTime": 1681491145.973,
"StartTime": 1681487537.914,
"AnalysisReportId": "report-002633115cc002233"
},
{
"Status": "Succeeded",
"EndTime": 1681493499.849,
"CreationTime": 1681493507.762,
"StartTime": 1681489899.849,
"AnalysisReportId": "report-043b1e006b47246f9"
},
{
"Status": "InProgress",
"EndTime": 1682979503.0,
"CreationTime": 1682979618.994,
"StartTime": 1682969503.0,
"AnalysisReportId": "report-01ad15f9b88bcbd56"
}
]
}
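When polling this list, you typically act only on reports that have finished and wait on those still marked InProgress. A sketch of filtering the list above by status:

```python
# Report summaries condensed from the preceding list response.
reports = [
    {"Status": "Succeeded", "AnalysisReportId": "report-0d99cc91c4422ee61"},
    {"Status": "Succeeded", "AnalysisReportId": "report-002633115cc002233"},
    {"Status": "Succeeded", "AnalysisReportId": "report-043b1e006b47246f9"},
    {"Status": "InProgress", "AnalysisReportId": "report-01ad15f9b88bcbd56"},
]

done = [r["AnalysisReportId"] for r in reports if r["Status"] == "Succeeded"]
pending = [r["AnalysisReportId"] for r in reports if r["Status"] == "InProgress"]
print(len(done), len(pending))  # 3 1
```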
Amazon Relational Database Service User Guide
Logging Performance Insights calls using AWS CloudTrail
The response lists the value and key for all the tags added to the report:
{
"Tags": [
{
"Value": "test-tag",
"Key": "name"
}
]
}
After the tag is deleted, calling the list-tags-for-resource API doesn't list this tag.
If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket,
including events for Performance Insights. If you don't configure a trail, you can still view the most
recent events in the CloudTrail console in Event history. Using the data collected by CloudTrail, you can
determine the request that was made to Performance Insights, the IP address the request was made
from, who made the request, when it was made, and additional details.
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
In the CloudTrail console, in Event history, you can view, search, and download recent events in your AWS
account. For more information, see Viewing Events with CloudTrail Event History in the AWS CloudTrail
User Guide.
For an ongoing record of events in your AWS account, including events for Performance Insights, create a
trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all AWS Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following topics in the AWS CloudTrail User Guide:
All Performance Insights operations are logged by CloudTrail and are documented in the Performance
Insights API Reference. For example, calls to the DescribeDimensionKeys and GetResourceMetrics
operations generate entries in the CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following example shows a CloudTrail log entry that demonstrates the GetResourceMetrics
operation.
{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::123456789012:user/johndoe",
"accountId": "123456789012",
"accessKeyId": "AKIAI44QH8DHBEXAMPLE",
"userName": "johndoe"
},
"eventTime": "2019-12-18T19:28:46Z",
"eventSource": "pi.amazonaws.com",
"eventName": "GetResourceMetrics",
"awsRegion": "us-east-1",
"sourceIPAddress": "72.21.198.67",
"userAgent": "aws-cli/1.16.240 Python/3.7.4 Darwin/18.7.0 botocore/1.12.230",
"requestParameters": {
"identifier": "db-YTDU5J5V66X7CXSCVDFD2V3SZM",
"metricQueries": [
{
"metric": "os.cpuUtilization.user.avg"
},
{
"metric": "os.cpuUtilization.idle.avg"
}
],
"startTime": "Dec 18, 2019 5:28:46 PM",
"periodInSeconds": 60,
"endTime": "Dec 18, 2019 7:28:46 PM",
"serviceType": "RDS"
},
"responseElements": null,
"requestID": "9ffbe15c-96b5-4fe6-bed9-9fccff1a0525",
"eventID": "08908de0-2431-4e2e-ba7b-f5424f908433",
"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}
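Fields like eventSource, eventName, and userIdentity make these entries easy to filter programmatically. For example, a sketch that keeps only Performance Insights calls from a batch of CloudTrail records (the second record is a hypothetical non-Performance Insights event for contrast):

```python
# Sketch: filter CloudTrail records down to Performance Insights calls.
# Performance Insights events use the event source pi.amazonaws.com.
records = [
    {"eventSource": "pi.amazonaws.com", "eventName": "GetResourceMetrics",
     "userIdentity": {"userName": "johndoe"}},
    {"eventSource": "rds.amazonaws.com", "eventName": "DescribeDBInstances",
     "userIdentity": {"userName": "johndoe"}},  # hypothetical RDS event
]

pi_calls = [r["eventName"] for r in records
            if r["eventSource"] == "pi.amazonaws.com"]
print(pi_calls)  # ['GetResourceMetrics']
```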
Amazon Relational Database Service User Guide
Analyzing performance with DevOps Guru for RDS
DevOps Guru detects, analyzes, and makes recommendations for existing operational issues for all
Amazon RDS DB engines. DevOps Guru for RDS extends this capability by applying machine learning
to Performance Insights metrics for RDS for PostgreSQL databases. These monitoring features allow
DevOps Guru for RDS to detect and diagnose performance bottlenecks and recommend specific
corrective actions. DevOps Guru for RDS can also detect problematic conditions in your RDS for
PostgreSQL database before they occur.
For a deep dive on this subject, see Amazon DevOps Guru for RDS under the hood.
Topics
• Benefits of DevOps Guru for RDS (p. 789)
• How DevOps Guru for RDS works (p. 790)
• Setting up DevOps Guru for RDS (p. 791)
You gain the following advantages from the detailed analysis of DevOps Guru for RDS:
Fast diagnosis
DevOps Guru for RDS continuously monitors and analyzes database telemetry. Performance Insights,
Enhanced Monitoring, and Amazon CloudWatch collect telemetry data for your database instance.
DevOps Guru for RDS uses statistical and machine learning techniques to mine this data and detect
anomalies. To learn more about telemetry data, see Monitoring DB load with Performance Insights
on Amazon RDS and Monitoring OS metrics with Enhanced Monitoring in the Amazon RDS User
Guide.
Fast resolution
Each anomaly identifies the performance issue and suggests avenues of investigation or corrective
actions. For example, DevOps Guru for RDS might recommend that you investigate specific wait
events. Or it might recommend that you tune your application pool settings to limit the number of
database connections. Based on these recommendations, you can resolve performance issues more
quickly than by troubleshooting manually.
Proactive insights
DevOps Guru for RDS uses metrics from your resources to detect potentially problematic behavior
before it becomes a bigger problem. For example, it can detect when your database is using
an increasing number of on-disk temporary tables, which could start to impact performance.
DevOps Guru then provides recommendations to help you address issues before they become bigger
problems.
Deep knowledge of Amazon engineers and machine learning
To detect performance issues and help you resolve bottlenecks, DevOps Guru for RDS relies
on machine learning (ML) and advanced mathematical formulas. Amazon database engineers
contributed to the development of the DevOps Guru for RDS findings, which encapsulate many
years of managing hundreds of thousands of databases. By drawing on this collective knowledge,
DevOps Guru for RDS can teach you best practices.
In DevOps Guru for RDS, an anomaly is a pattern that deviates from what is considered normal
performance for your RDS for PostgreSQL database.
Proactive insights
A proactive insight lets you know about problematic behavior before it occurs. It contains anomalies with
recommendations and related metrics to help you address issues in your RDS for PostgreSQL databases
before they become bigger problems. These insights are published in the DevOps Guru dashboard.
For example, DevOps Guru might detect that your RDS for PostgreSQL database is creating many on-
disk temporary tables. If not addressed, this trend might lead to performance issues. Each proactive
insight includes recommendations for corrective behavior and links to relevant topics in Tuning RDS for
PostgreSQL with Amazon DevOps Guru proactive insights (p. 2353). For more information, see Working
with insights in DevOps Guru in the Amazon DevOps Guru User Guide.
Reactive insights
A reactive insight identifies anomalous behavior as it occurs. If DevOps Guru for RDS finds performance
issues in your RDS for PostgreSQL DB instances, it publishes a reactive insight in the DevOps Guru
dashboard. For more information, see Working with insights in DevOps Guru in the Amazon DevOps Guru
User Guide.
Causal anomalies
A causal anomaly is a top-level anomaly within a reactive insight. Database load (DB load) is the causal
anomaly for DevOps Guru for RDS.
An anomaly measures performance impact by assigning a severity level of High, Medium, or Low. To
learn more, see Key concepts for DevOps Guru for RDS in the Amazon DevOps Guru User Guide.
If DevOps Guru detects a current anomaly on your DB instance, you're alerted in the Databases page
of the RDS console. The console also alerts you to anomalies that occurred in the past 24 hours. To go
to the anomaly page from the RDS console, choose the link in the alert message. The RDS console also
alerts you in the page for your RDS for PostgreSQL DB instance.
Contextual anomalies
A contextual anomaly is a finding within Database load (DB load) that is related to a reactive insight.
Each contextual anomaly describes a specific RDS for PostgreSQL performance issue that requires
investigation. For example, DevOps Guru for RDS might recommend that you consider increasing CPU
capacity or investigate wait events that are contributing to DB load.
Important
We recommend that you test any changes on a test instance before modifying a production
instance. In this way, you understand the impact of the change.
To learn more, see Analyzing anomalies in Amazon RDS in the Amazon DevOps Guru User Guide.
Topics
• Configuring IAM access policies for DevOps Guru for RDS (p. 791)
• Turning on Performance Insights for your RDS for PostgreSQL DB instances (p. 791)
• Turning on DevOps Guru and specifying resource coverage (p. 791)
For more information, see Configuring access policies for Performance Insights (p. 734).
When you create or modify an RDS for PostgreSQL DB instance, you can turn on Performance Insights. For
more information, see Turning Performance Insights on and off (p. 727).
Topics
• Turning on DevOps Guru in the RDS console (p. 792)
• Adding RDS for PostgreSQL resources in the DevOps Guru console (p. 795)
• Adding RDS for PostgreSQL resources using AWS CloudFormation (p. 795)
Topics
• Turning on DevOps Guru when you create an RDS for PostgreSQL database (p. 792)
• Turning on DevOps Guru from the notification banner (p. 793)
• Responding to a permissions error when you turn on DevOps Guru (p. 794)
Turning on DevOps Guru when you create an RDS for PostgreSQL database
The creation workflow includes a setting that turns on DevOps Guru coverage for your database. This
setting is turned on by default when you choose the Production template.
To turn on DevOps Guru when you create an RDS for PostgreSQL database
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Follow the steps in Creating a DB instance (p. 303), up to but not including the step where you
choose monitoring settings.
3. In Monitoring, choose Turn on Performance Insights. For DevOps Guru for RDS to provide detailed
analysis of performance anomalies, Performance Insights must be turned on.
4. Choose Turn on DevOps Guru.
5. Create a tag for your database so that DevOps Guru can monitor it. Do the following:
• In the text field for Tag key, enter a name that begins with Devops-Guru-.
• In the text field for Tag value, enter any value. For example, if you enter rds-database-1 for
the name of your RDS for PostgreSQL database, you can also enter rds-database-1 as the tag
value.
For more information about tags, see "Use tags to identify resources in your DevOps Guru
applications" in the Amazon DevOps Guru User Guide.
6. Complete the remaining steps in Creating a DB instance (p. 303).
If your resources aren't covered by DevOps Guru, Amazon RDS notifies you with a banner in the following
locations:
If you turn on DevOps Guru from the RDS console when you create a database, RDS might display the
following banner about missing permissions.
1. Grant your IAM user or role the AmazonDevOpsGuruConsoleFullAccess managed policy. For
more information, see Configuring IAM access policies for DevOps Guru for RDS (p. 791).
2. Open the RDS console.
3. In the navigation pane, choose Performance Insights.
4. Choose a DB instance in the cluster that you just created.
5. Turn on DevOps Guru for RDS.
6. Choose a tag value. For more information, see "Use tags to identify resources in your DevOps Guru
applications" in the Amazon DevOps Guru User Guide.
• Choose All account resources to analyze all supported resources, including the RDS for PostgreSQL
databases, in your AWS account and Region.
• Choose CloudFormation stacks to analyze the RDS for PostgreSQL databases that are in stacks you
choose. For more information, see Use AWS CloudFormation stacks to identify resources in your
DevOps Guru applications in the Amazon DevOps Guru User Guide.
• Choose Tags to analyze the RDS for PostgreSQL databases that you have tagged. For more
information, see Use tags to identify resources in your DevOps Guru applications in the Amazon
DevOps Guru User Guide.
For more information, see Enable DevOps Guru in the Amazon DevOps Guru User Guide.
1. In the CloudFormation template for your DB instance, define a tag using a key/value pair.
MyDBInstance1:
Type: "AWS::RDS::DBInstance"
Properties:
DBInstanceIdentifier: my-db-instance1
Tags:
- Key: Devops-guru-cfn-default
Value: devopsguru-my-db-instance1
2. In the CloudFormation template for your DevOps Guru stack, specify the same tag in your resource
collection filter.
The following example configures DevOps Guru to provide coverage for the resource with the tag
value my-db-instance1.
DevOpsGuruResourceCollection:
Type: AWS::DevOpsGuru::ResourceCollection
Properties:
ResourceCollectionFilter:
Tags:
- AppBoundaryKey: "Devops-guru-cfn-default"
TagValues:
- "devopsguru-my-db-instance1"
The following example provides coverage for all resources within the application boundary Devops-
guru-cfn-default.
DevOpsGuruResourceCollection:
Type: AWS::DevOpsGuru::ResourceCollection
Properties:
ResourceCollectionFilter:
Tags:
- AppBoundaryKey: "Devops-guru-cfn-default"
TagValues:
- "*"
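The difference between the two filters can be sketched as matching logic: an explicit value covers only resources with that exact tag value, while "*" covers every resource under the application boundary key. This is a hypothetical illustration of the behavior, not the DevOps Guru implementation:

```python
def covered(tag_key, tag_value, boundary_key, tag_values):
    """Return True if a resource tag matches the resource collection
    filter. Illustrative sketch of the exact-value vs "*" behavior."""
    if tag_key != boundary_key:
        return False
    return "*" in tag_values or tag_value in tag_values

# Exact-value filter: only my-db-instance1's tag value matches.
assert covered("Devops-guru-cfn-default", "devopsguru-my-db-instance1",
               "Devops-guru-cfn-default", ["devopsguru-my-db-instance1"])
# Wildcard filter: any value under the boundary key matches.
assert covered("Devops-guru-cfn-default", "devopsguru-other-instance",
               "Devops-guru-cfn-default", ["*"])
```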
Amazon Relational Database Service User Guide
Monitoring the OS with Enhanced Monitoring
Topics
• Overview of Enhanced Monitoring (p. 797)
• Setting up and enabling Enhanced Monitoring (p. 798)
• Viewing OS metrics in the RDS console (p. 802)
• Viewing OS metrics using CloudWatch Logs (p. 805)
RDS delivers the metrics from Enhanced Monitoring into your Amazon CloudWatch Logs account.
You can create metrics filters in CloudWatch from CloudWatch Logs and display the graphs on the
CloudWatch dashboard. You can consume the Enhanced Monitoring JSON output from CloudWatch Logs
in a monitoring system of your choice. For more information, see Enhanced Monitoring in the Amazon
RDS FAQs.
Topics
• Enhanced Monitoring availability (p. 797)
• Differences between CloudWatch and Enhanced Monitoring metrics (p. 797)
• Retention of Enhanced Monitoring metrics (p. 798)
• Cost of Enhanced Monitoring (p. 798)
Enhanced Monitoring is available for the following database engines:
• MariaDB
• Microsoft SQL Server
• MySQL
• Oracle
• PostgreSQL
Enhanced Monitoring is available for all DB instance classes except for the db.m1.small instance class.
Amazon CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance. In
contrast, Enhanced Monitoring gathers its metrics from an agent on the DB instance.
You might find differences between the CloudWatch and Enhanced Monitoring measurements, because
the hypervisor layer performs a small amount of work. The differences can be greater if your DB
instances use smaller instance classes. In this scenario, more virtual machines (VMs) are probably
managed by the hypervisor layer on a single physical instance.
For descriptions of the Enhanced Monitoring metrics, see OS metrics in Enhanced Monitoring (p. 837).
For more information about CloudWatch metrics, see the Amazon CloudWatch User Guide.
To modify the amount of time the metrics are stored in CloudWatch Logs, change the retention for
the RDSOSMetrics log group in the CloudWatch console. For more information, see Change log data
retention in CloudWatch Logs in the Amazon CloudWatch Logs User Guide.
• You are charged for Enhanced Monitoring only if you exceed the free tier provided by Amazon
CloudWatch Logs. Charges are based on CloudWatch Logs data transfer and storage rates.
• The amount of information transferred for an RDS instance is directly proportional to the defined
granularity for the Enhanced Monitoring feature. A smaller monitoring interval results in more
frequent reporting of OS metrics and increases your monitoring cost. To manage costs, set different
granularities for different instances in your accounts.
• Usage costs for Enhanced Monitoring are applied for each DB instance that Enhanced Monitoring is
enabled for. Monitoring a large number of DB instances is more expensive than monitoring only a few.
• DB instances that support a more compute-intensive workload have more OS process activity to report
and higher costs for Enhanced Monitoring.
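The cost relationship is roughly linear in reporting frequency: a 1-second granularity emits about 60 times as many OS metric payloads per hour as a 60-second granularity. A back-of-the-envelope sketch of the relative volume:

```python
def reports_per_hour(interval_seconds):
    # Enhanced Monitoring emits one OS metrics payload per interval.
    return 3600 // interval_seconds

# Relative volume: 1-second vs 60-second granularity for one instance.
ratio = reports_per_hour(1) / reports_per_hour(60)
print(ratio)  # 60.0
```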
Topics
• Creating an IAM role for Enhanced Monitoring (p. 798)
• Turning Enhanced Monitoring on and off (p. 799)
• Protecting against the confused deputy problem (p. 801)
Topics
• Creating the IAM role when you enable Enhanced Monitoring (p. 799)
• Creating the IAM role before you enable Enhanced Monitoring (p. 799)
The user that enables Enhanced Monitoring must be granted the PassRole permission. For more
information, see Example 2 in Granting a user permissions to pass a role to an AWS service in the IAM
User Guide.
The trusted entity for your role is the AWS service monitoring.rds.amazonaws.com.
8. Choose Create role.
Console
You can turn on Enhanced Monitoring when you create a DB instance, Multi-AZ DB cluster, or read
replica, or when you modify a DB instance or Multi-AZ DB cluster. If you modify a DB instance to turn on
Enhanced Monitoring, you don't need to reboot your DB instance for the change to take effect.
You can turn on Enhanced Monitoring in the RDS console when you do one of the following actions in
the Databases page:
The fastest that the RDS console refreshes is every 5 seconds. If you set the granularity to 1 second
in the RDS console, you still see updated metrics only every 5 seconds. You can retrieve 1-second
metric updates by using CloudWatch Logs.
AWS CLI
To turn on Enhanced Monitoring using the AWS CLI, in the following commands, set the --
monitoring-interval option to a value other than 0 and set the --monitoring-role-arn option
to the role you created in Creating an IAM role for Enhanced Monitoring (p. 798).
• create-db-instance
• create-db-instance-read-replica
• modify-db-instance
• create-db-cluster (Multi-AZ DB cluster)
• modify-db-cluster (Multi-AZ DB cluster)
The --monitoring-interval option specifies the interval, in seconds, between points when Enhanced
Monitoring metrics are collected. Valid values for the option are 0, 1, 5, 10, 15, 30, and 60.
To turn off Enhanced Monitoring using the AWS CLI, set the --monitoring-interval option to 0 in
these commands.
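A small guard like the following can catch an invalid interval before calling the CLI; the valid set comes from the option documentation above:

```python
VALID_INTERVALS = {0, 1, 5, 10, 15, 30, 60}

def validate_monitoring_interval(seconds):
    """Raise early if --monitoring-interval would be rejected.
    0 turns Enhanced Monitoring off; any other valid value turns it on."""
    if seconds not in VALID_INTERVALS:
        raise ValueError(f"invalid monitoring interval: {seconds}")
    return seconds

print(validate_monitoring_interval(30))  # 30
```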
Example
For Windows:
aws rds modify-db-instance ^
 --db-instance-identifier mydbinstance ^
 --monitoring-interval 30 ^
 --monitoring-role-arn arn:aws:iam::123456789012:role/emaccess
Example
For Windows:
RDS API
To turn on Enhanced Monitoring using the RDS API, set the MonitoringInterval parameter to a value
other than 0 and set the MonitoringRoleArn parameter to the role you created in Creating an IAM
role for Enhanced Monitoring (p. 798). Set these parameters in the following actions:
• CreateDBInstance
• CreateDBInstanceReadReplica
• ModifyDBInstance
• CreateDBCluster (Multi-AZ DB cluster)
• ModifyDBCluster (Multi-AZ DB cluster)
The MonitoringInterval parameter specifies the interval, in seconds, between points when Enhanced
Monitoring metrics are collected. Valid values are 0, 1, 5, 10, 15, 30, and 60.
To limit the permissions to the resource that Amazon RDS can give another service, we recommend using
the aws:SourceArn and aws:SourceAccount global condition context keys in a trust policy for your
Enhanced Monitoring role. If you use both global condition context keys, they must use the same account
ID.
The most effective way to protect against the confused deputy problem is to use the aws:SourceArn
global condition context key with the full ARN of the resource. For Amazon RDS, set aws:SourceArn to
arn:aws:rds:Region:my-account-id:db:dbname.
The following example uses the aws:SourceArn and aws:SourceAccount global condition context
keys in a trust policy to prevent the confused deputy problem.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "monitoring.rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringLike": {
"aws:SourceArn": "arn:aws:rds:Region:my-account-id:db:dbname"
},
"StringEquals": {
"aws:SourceAccount": "my-account-id"
}
}
}
]
}
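A policy like this can be sanity-checked before you attach it — for example, confirming that both condition keys reference the same account, as the text requires. A hypothetical helper, using placeholder account and Region values:

```python
import json

# Example trust policy with placeholder account ID and Region.
POLICY = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "monitoring.rds.amazonaws.com"},
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringLike": {"aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mydb"},
      "StringEquals": {"aws:SourceAccount": "123456789012"}
    }
  }]
}""")

def accounts_match(policy):
    """Check that aws:SourceArn's account field equals aws:SourceAccount.
    ARN format: arn:partition:service:region:account-id:resource."""
    cond = policy["Statement"][0]["Condition"]
    arn_account = cond["StringLike"]["aws:SourceArn"].split(":")[4]
    return arn_account == cond["StringEquals"]["aws:SourceAccount"]

print(accounts_match(POLICY))  # True
```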
The following example shows the Enhanced Monitoring page. For descriptions of the Enhanced
Monitoring metrics, see OS metrics in Enhanced Monitoring (p. 837).
Some DB instances use more than one disk for the DB instance's data storage volume. On those
DB instances, the Physical Devices graphs show metrics for each one of the disks. For example, the
following graph shows metrics for four disks.
Note
Currently, Physical Devices graphs are not available for Microsoft SQL Server DB instances.
When you are viewing aggregated Disk I/O and File system graphs, the rdsdev device relates to the
/rdsdbdata file system, where all database files and logs are stored. The filesystem device relates to the
/ file system (also known as root), where files related to the operating system are stored.
If the DB instance is a Multi-AZ deployment, you can view the OS metrics for the primary DB instance
and its Multi-AZ standby replica. In the Enhanced monitoring view, choose primary to view the OS
metrics for the primary DB instance, or choose secondary to view the OS metrics for the standby replica.
For more information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
Note
Currently, viewing OS metrics for a Multi-AZ standby replica is not supported for MariaDB DB
instances.
If you want to see details for the processes running on your DB instance, choose OS process list for
Monitoring.
The Enhanced Monitoring metrics shown in the Process list view are organized as follows:
• RDS child processes – Shows a summary of the RDS processes that support the DB instance, for
example mysqld for MySQL DB instances. Process threads appear nested beneath the parent process.
Process threads show only CPU utilization, because the other metrics are the same for all threads of the process.
The console displays a maximum of 100 processes and threads. The results are a combination of
the top CPU consuming and memory consuming processes and threads. If there are more than 50
processes and more than 50 threads, the console displays the top 50 consumers in each category. This
display helps you identify which processes are having the greatest impact on performance.
• RDS processes – Shows a summary of the resources used by the RDS management agent, diagnostics
monitoring processes, and other AWS processes that are required to support RDS DB instances.
• OS processes – Shows a summary of the kernel and system processes, which generally have minimal
impact on performance.
Viewing OS metrics using CloudWatch Logs
The monitoring data that is shown in the RDS console is retrieved from Amazon CloudWatch Logs.
You can also retrieve the metrics for a DB instance as a log stream from CloudWatch Logs. For more
information, see Viewing OS metrics using CloudWatch Logs (p. 805).
Enhanced Monitoring metrics are returned during a reboot of a DB instance because only the database
engine is rebooted. Metrics for the operating system are still reported.
In a Multi-AZ DB instance deployment, log files with -secondary appended to the name are for the
Multi-AZ standby replica.
5. Choose the log stream that you want to view from the list of log streams.
RDS metrics reference
Topics
• Amazon CloudWatch metrics for Amazon RDS (p. 806)
• Amazon CloudWatch dimensions for Amazon RDS (p. 813)
• Amazon CloudWatch metrics for Performance Insights (p. 813)
• Performance Insights counter metrics (p. 814)
• SQL statistics for Performance Insights (p. 830)
• OS metrics in Enhanced Monitoring (p. 837)
Topics
• Amazon CloudWatch instance-level metrics for Amazon RDS (p. 806)
• Amazon CloudWatch usage metrics for Amazon RDS (p. 812)
• BinLogDiskUsage – Binary Log Disk Usage (MB) – The amount of disk space occupied by binary logs. If automatic backups are enabled for MySQL and MariaDB instances, including read replicas, binary logs are created. Applies to: MariaDB, MySQL. Units: Bytes.
• ConnectionAttempts – Connection Attempts (Count) – The number of attempts to connect to an instance, whether successful or not. Units: Count.
• CPUUtilization – CPU Utilization (Percent) – The percentage of CPU utilization. Applies to: All. Units: Percentage.
• CPUCreditUsage – CPU Credit Usage (Count) – (T2 instances) The number of CPU credits spent by the instance for CPU utilization. One CPU credit equals one vCPU running at 100 percent utilization for one minute or an equivalent combination of vCPUs, utilization, and time. For example, you might have one vCPU running at 50 percent utilization for two minutes or two vCPUs running at 25 percent utilization for two minutes. Units: Credits (vCPU-minutes).
• CPUCreditBalance – CPU Credit Balance (Count) – (T2 instances) The number of earned CPU credits that an instance has accrued since it was launched or started. For T2 Standard, the CPUCreditBalance also includes the number of launch credits that have been accrued. Units: Credits (vCPU-minutes).
• DatabaseConnections – DB Connections (Count) – The number of client network connections to the database instance. Applies to: All. Units: Count.
• DiskQueueDepth – Queue Depth (Count) – The number of outstanding I/Os (read/write requests) waiting to access the disk. Applies to: All. Units: Count.
• EBSByteBalance% – EBS Byte Balance (Percent) – The percentage of throughput credits remaining in the burst bucket of your RDS database. This metric is available for basic monitoring only. Applies to: All. Units: Percentage.
• FreeableMemory – Freeable Memory (MB) – The amount of available random access memory. Applies to: All. Units: Bytes.
• FreeStorageSpace – Free Storage Space (MB) – The amount of available storage space. Applies to: All. Units: Bytes.
• MaximumUsedTransactionIDs – Maximum Used Transaction IDs (Count) – The maximum transaction IDs that have been used. Applies to: PostgreSQL. Units: Count.
• NetworkReceiveThroughput – Network Receive Throughput (MB/Second) – The incoming (receive) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication. Applies to: All. Units: Bytes per second.
• NetworkTransmitThroughput – Network Transmit Throughput (MB/Second) – The outgoing (transmit) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication. Applies to: All. Units: Bytes per second.
• OldestReplicationSlotLag – Oldest Replication Slot Lag (MB) – The lagging size of the replica lagging the most in terms of write-ahead log (WAL) data received. Applies to: PostgreSQL. Units: Bytes.
• ReadIOPS – Read IOPS (Count/Second) – The average number of disk read I/O operations per second. Applies to: All. Units: Count per second.
• ReadLatency – Read Latency (Seconds) – The average amount of time taken per disk I/O operation. Applies to: All. Units: Seconds.
• ReadThroughput – Read Throughput (MB/Second) – The average number of bytes read from disk per second. Applies to: All. Units: Bytes per second.
• ReplicationSlotDiskUsage – Replica Slot Disk Usage (MB) – The disk space used by replication slot files. Applies to: PostgreSQL. Units: Bytes.
• SwapUsage – Swap Usage (MB) – The amount of swap space used on the DB instance. Applies to: MariaDB, MySQL, Oracle, PostgreSQL. Units: Bytes.
• TransactionLogsDiskUsage – Transaction Logs Disk Usage (MB) – The disk space used by transaction logs. Applies to: PostgreSQL. Units: Bytes.
• TransactionLogsGeneration – Transaction Logs Generation (MB/Second) – The size of transaction logs generated per second. Applies to: PostgreSQL. Units: Bytes per second.
• WriteIOPS – Write IOPS (Count/Second) – The average number of disk write I/O operations per second. Applies to: All. Units: Count per second.
• WriteLatency – Write Latency (Seconds) – The average amount of time taken per disk I/O operation. Applies to: All. Units: Seconds.
• WriteThroughput – Write Throughput (MB/Second) – The average number of bytes written to disk per second. Applies to: All. Units: Bytes per second.
For more information, see CloudWatch usage metrics in the Amazon CloudWatch User Guide. For more
information about quotas, see Quotas and constraints for Amazon RDS (p. 2720) and Requesting a quota
increase in the Service Quotas User Guide.
• AllocatedStorage – The total storage for all DB instances. The sum excludes temporary migration instances. Units: Gigabytes.
• DBSecurityGroups – The number of security groups in your AWS account. The count excludes the default security group and the default VPC security group. Units: Count.
• DBSubnetGroups – The number of DB subnet groups in your AWS account. The count excludes the default subnet group. Units: Count.
• OptionGroups – The number of option groups in your AWS account. The count excludes the default option groups. Units: Count.
CloudWatch dimensions for RDS
* Amazon RDS doesn't publish units for usage metrics to CloudWatch. The units only appear in the
documentation.
• DatabaseClass – All instances in a database class. For example, you can aggregate metrics for all instances that belong to the database class db.r5.large.
• EngineName – The identified engine name only. For example, you can aggregate metrics for all instances that have the engine name postgres.
• SourceRegion – The specified Region only. For example, you can aggregate metrics for all DB instances in the us-east-1 Region.
Note
These metrics are published to CloudWatch only if there is load on the DB instance.
You can examine these metrics using the CloudWatch console, the AWS CLI, or the CloudWatch API.
For example, you can get the statistics for the DBLoad metric by running the get-metric-statistics
command.
aws cloudwatch get-metric-statistics \
--region us-west-2 \
--namespace AWS/RDS \
--metric-name DBLoad \
--period 60 \
--statistics Average \
--start-time 1532035185 \
--end-time 1532036185 \
--dimensions Name=DBInstanceIdentifier,Value=db-loadtest-0
{
"Datapoints": [
{
"Timestamp": "2021-07-19T21:30:00Z",
"Unit": "None",
"Average": 2.1
},
{
"Timestamp": "2021-07-19T21:34:00Z",
"Unit": "None",
"Average": 1.7
},
{
"Timestamp": "2021-07-19T21:35:00Z",
"Unit": "None",
"Average": 2.8
},
{
"Timestamp": "2021-07-19T21:31:00Z",
"Unit": "None",
"Average": 1.5
},
{
"Timestamp": "2021-07-19T21:32:00Z",
"Unit": "None",
"Average": 1.8
},
{
"Timestamp": "2021-07-19T21:29:00Z",
"Unit": "None",
"Average": 3.0
},
{
"Timestamp": "2021-07-19T21:33:00Z",
"Unit": "None",
"Average": 2.4
}
],
"Label": "DBLoad"
}
For more information about CloudWatch, see What is Amazon CloudWatch? in the Amazon CloudWatch
User Guide.
Counter metrics for Performance Insights
The counter metrics are collected one time each minute. The OS metrics collection depends on whether
Enhanced Monitoring is turned on or off. If Enhanced Monitoring is turned off, the OS metrics are
collected one time each minute. If Enhanced Monitoring is turned on, the OS metrics are collected
for the selected time period. For more information about turning Enhanced Monitoring on or off, see
Turning Enhanced Monitoring on and off (p. 799).
Topics
• Performance Insights operating system counters (p. 815)
• Performance Insights counters for Amazon RDS for MariaDB and MySQL (p. 820)
• Performance Insights counters for Amazon RDS for Microsoft SQL Server (p. 825)
• Performance Insights counters for Amazon RDS for Oracle (p. 826)
• Performance Insights counters for Amazon RDS for PostgreSQL (p. 828)
You can use the ListAvailableResourceMetrics API to get the list of available counter metrics for your DB instance. For more information, see ListAvailableResourceMetrics in the Amazon RDS Performance Insights API Reference.
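As a sketch, the equivalent AWS CLI call is list-available-resource-metrics in the pi command group. The resource identifier below is a placeholder; substitute the DbiResourceId of your own DB instance:

```shell
# List the database (counter) and OS metrics available for a DB instance.
# The --identifier value is a placeholder DbiResourceId, not the DB instance name.
aws pi list-available-resource-metrics \
    --service-type RDS \
    --identifier db-ABCDEFGHIJKLMNOPQRSTU1VW2X \
    --metric-types db os
```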
Topics
• Native counters for RDS for MariaDB and RDS for MySQL (p. 820)
• Non-native counters for Amazon RDS for MariaDB and MySQL (p. 823)
Native counters for RDS for MariaDB and RDS for MySQL
Native metrics are defined by the database engine and not by Amazon RDS. For definitions of these
native metrics, see Server status variables in the MySQL documentation.
• innodb_buffer_pool_hit_rate (Cache) – db.Cache.innoDB_buffer_pool_hit_rate – The percentage of reads that InnoDB could satisfy from the buffer pool. Calculated as: 100 * innodb_buffer_pool_read_requests / (innodb_buffer_pool_read_requests + innodb_buffer_pool_reads).
• innodb_datafile_writes_to_disk (I/O) – db.IO.innoDB_datafile_writes_to_disk – The number of InnoDB data file writes to disk, excluding double write and redo logging write operations. Calculated as: Innodb_data_writes - Innodb_log_writes - Innodb_dblwr_writes.
The counter metric CPU used by this session represents the amount of CPU used by Oracle sessions. You can compare CPU demand to the CPU used by this session counter metric. When demand for CPU is higher than CPU used, sessions are waiting for CPU time.
• Bytes sent via SQL*Net to client (User) – Bytes per second – db.User.bytes sent via SQL*Net to client
• Table scan rows gotten (SQL) – Rows per second – db.SQL.table scan rows gotten
• DB block gets from cache (Cache) – Gets per second – db.Cache.db block gets from cache
Topics
• Native counters for Amazon RDS for PostgreSQL (p. 828)
• Non-native counters for Amazon RDS for PostgreSQL (p. 829)
• checkpoint_sync_latency (Checkpoint) – db.Checkpoint.checkpoint_sync_latency – The total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk. Calculated as: checkpoint_sync_time / (checkpoints_timed + checkpoints_req).
• checkpoint_write_latency (Checkpoint) – db.Checkpoint.checkpoint_write_latency – The total amount of time that has been spent in the portion of checkpoint processing where files are written to disk. Calculated as: checkpoint_write_time / (checkpoints_timed + checkpoints_req).
SQL statistics for Performance Insights
A SQL digest is a composite of all queries having a given pattern but not necessarily having the same literal values. The digest replaces literal values with a question mark, for example, SELECT * FROM emp WHERE lname = ?. The child queries of this digest differ only in their literal values.
For the region, DB engine, and instance class support information for this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725)
Topics
• SQL statistics for MariaDB and MySQL (p. 830)
• SQL statistics for Oracle (p. 832)
• SQL statistics for SQL Server (p. 834)
• SQL statistics for RDS PostgreSQL (p. 835)
Topics
• Digest statistics for MariaDB and MySQL (p. 830)
• Per-second statistics for MariaDB and MySQL (p. 831)
• Per-call statistics for MariaDB and MySQL (p. 831)
The digest table doesn't have an eviction policy. When the table is full, the AWS Management Console
shows the following message:
Performance Insights is unable to collect SQL Digest statistics on new queries because the
table events_statements_summary_by_digest is full.
Please truncate events_statements_summary_by_digest table to clear the issue. Check the
User Guide for more details.
In this situation, MariaDB and MySQL don't track SQL queries. To address this issue, Performance Insights automatically truncates the digest table when certain conditions are met.
For automatic management, the performance_schema parameter must be set to 0 and the
Source must not be set to user. If Performance Insights isn't managing the Performance Schema
automatically, see Turning on the Performance Schema for Performance Insights on Amazon RDS for
MariaDB or MySQL (p. 731).
In the AWS CLI, check the source of a parameter value by running the describe-db-parameters command.
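For example, the following sketch (the parameter group name is a placeholder) returns the performance_schema parameter along with its Source value:

```shell
# Inspect the performance_schema parameter; the parameter group name is a placeholder.
# If Source is "user", Performance Insights can't manage the Performance Schema automatically.
aws rds describe-db-parameters \
    --db-parameter-group-name my-mysql-param-group \
    --query 'Parameters[?ParameterName==`performance_schema`].[ParameterName,ParameterValue,Source]'
```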
• db.sql_tokenized.stats.sum_select_range_check_per_sec – Select range check per second
• db.sql_tokenized.stats.sum_sort_merge_passes_per_sec – Sort merge passes per second
• db.sql_tokenized.stats.sum_created_tmp_disk_tables_per_sec – Created temporary disk tables per second
• db.sql_tokenized.stats.sum_created_tmp_tables_per_sec – Created temporary tables per second
• db.sql_tokenized.stats.sum_select_range_check_per_call – Select range check per call
• db.sql_tokenized.stats.sum_sort_merge_passes_per_call – Sort merge passes per call
• db.sql_tokenized.stats.sum_created_tmp_disk_tables_per_call – Created temporary disk tables per call
• db.sql_tokenized.stats.sum_created_tmp_tables_per_call – Created temporary tables per call
If the ID is 0 at the digest level, Oracle Database has determined that this statement is not suitable for
reuse. In this case, the child SQL statements could belong to different digests. However, the statements
are grouped together under the digest_text for the first SQL statement collected.
Topics
• Per-second statistics for Oracle (p. 832)
• Per-call statistics for Oracle (p. 833)
The following metrics provide per-second statistics for an Oracle SQL digest query.
• db.sql_tokenized.stats.physical_read_requests_per_sec – Physical reads per second
• db.sql_tokenized.stats.physical_write_requests_per_sec – Physical writes per second
The following metrics provide per-call statistics for an Oracle SQL digest query.
• db.sql_tokenized.stats.physical_read_requests_per_exec – Physical reads per execution
• db.sql_tokenized.stats.physical_write_requests_per_exec – Physical writes per execution
SQL Server returns NULL values for query_hash for some statements, for example ALTER INDEX, CHECKPOINT, UPDATE STATISTICS, COMMIT TRANSACTION, FETCH NEXT FROM cursor, some INSERT statements, SELECT @<variable>, conditional statements, and executable stored procedures. In this case, the sql_handle value is displayed as the ID at the digest level for that statement.
Topics
• Per-second statistics for SQL Server (p. 834)
• Per-call statistics for SQL Server (p. 834)
The following metrics provide per-second statistics for a SQL Server SQL digest query.
The following metrics provide per-call statistics for a SQL Server SQL digest query.
Following, you can find information about digest-level statistics for RDS for PostgreSQL.
Topics
• Digest statistics for RDS PostgreSQL (p. 835)
• Per-second digest statistics for RDS PostgreSQL (p. 836)
• Per-call digest statistics for RDS PostgreSQL (p. 836)
parameter group associated with your DB instance. When you change this parameter, a DB
instance reboot is required.
For more information about these metrics, see pg_stat_statements in the PostgreSQL documentation.
OS metrics in Enhanced Monitoring
Topics
• OS metrics for MariaDB, MySQL, Oracle, and PostgreSQL (p. 837)
• OS metrics for Microsoft SQL Server (p. 842)
• instanceResourceID (Not applicable) – An immutable identifier for the DB instance that is unique to an AWS Region, also used as the log stream identifier.
• uptime (Not applicable) – The amount of time that the DB instance has been active.
cpuUtilization
• guest (CPU Guest) – The percentage of CPU in use by guest programs.
• steal (CPU Steal) – The percentage of CPU in use by other virtual machines.
• total (CPU Total) – The total percentage of the CPU in use. This value includes the nice value.
• wait (CPU Wait) – The percentage of CPU unused while waiting for I/O access.
diskIO
• avgQueueLen (Avg Queue Size) – The number of requests waiting in the I/O device's queue.
• readLatency (Read Latency) – The elapsed time between the submission of a read I/O request and its completion, in milliseconds.
• readThroughput (Read Throughput) – The amount of network throughput used by requests to the DB cluster, in bytes per second.
• rrqmPS (Rrqms) – The number of merged read requests queued per second.
• util (Disk I/O Util) – The percentage of CPU time during which requests were issued.
• writeThroughput (Write Throughput) – The amount of network throughput used by responses from the DB cluster, in bytes per second.
• wrqmPS (Wrqms) – The number of merged write requests queued per second.
physicalDeviceIO
• avgQueueLen (Physical Devices Avg Queue Size) – The number of requests waiting in the I/O device's queue.
• rrqmPS (Physical Devices Rrqms) – The number of merged read requests queued per second.
• util (Physical Devices Disk I/O Util) – The percentage of CPU time during which requests were issued.
• wrqmPS (Physical Devices Wrqms) – The number of merged write requests queued per second.
fileSys
• maxFiles (Max Inodes) – The maximum number of files that can be created for the file system.
• total (Total Filesystem) – The total amount of disk space available for the file system, in kilobytes.
• used (Used Filesystem) – The amount of disk space used by files in the file system, in kilobytes.
• usedFilePercent (Used Inodes) – The percentage of available files in use.
loadAverageMinute
• fifteen (Load Avg 15 min) – The number of processes requesting CPU time over the last 15 minutes.
• five (Load Avg 5 min) – The number of processes requesting CPU time over the last 5 minutes.
• one (Load Avg 1 min) – The number of processes requesting CPU time over the last minute.
memory
• buffers (Buffered Memory) – The amount of memory used for buffering I/O requests prior to writing to the storage device, in kilobytes.
• cached (Cached Memory) – The amount of memory used for caching file system–based I/O.
• dirty (Dirty Memory) – The amount of memory pages in RAM that have been modified but not written to their related data block in storage, in kilobytes.
• hugePagesFree (Huge Pages Free) – The number of free huge pages. Huge pages are a feature of the Linux kernel.
• hugePagesRsvd (Huge Pages Rsvd) – The number of committed huge pages.
• hugePagesSize (Huge Pages Size) – The size for each huge pages unit, in kilobytes.
• hugePagesSurp (Huge Pages Surp) – The number of available surplus huge pages over the total.
• hugePagesTotal (Huge Pages Total) – The total number of huge pages.
• pageTables (Page Tables) – The amount of memory used by page tables, in kilobytes.
• writeback (Writeback Memory) – The amount of dirty pages in RAM that are still being written to the backing storage, in kilobytes.
network
• interface (Not applicable) – The identifier for the network interface being used for the DB instance.

• parentID (Not applicable) – The process identifier for the parent process of the process.
• swap out (Swaps out) – The amount of memory, in kilobytes, swapped out to disk.
• zombie (Tasks Zombie) – The number of child tasks that are inactive with an active parent task.
General
• engine (Not applicable) – The database engine for the DB instance.
• instanceResourceID (Not applicable) – An immutable identifier for the DB instance that is unique to an AWS Region, also used as the log stream identifier.
• numVCPUs (Not applicable) – The number of virtual CPUs for the DB instance.
• timestamp (Not applicable) – The time at which the metrics were taken.
• uptime (Not applicable) – The amount of time that the DB instance has been active.
• kernTotKb (Total Kernel Memory) – The sum of the memory in the paged and nonpaged kernel pools, in kilobytes.
• kernPagedKb (Paged Kernel Memory) – The amount of memory in the paged kernel pool, in kilobytes.
network
• interface (Not applicable) – The identifier for the network interface being used for the DB instance.

• pid (Not applicable) – The identifier of the process. This value is not present for processes that are owned by Amazon RDS.
• ppid (Not applicable) – The process identifier for the parent of this process. This value is only present for child processes.
• tid (Not applicable) – The thread identifier. This value is only present for threads. The owning process can be identified by using the pid value.
• workingSetKb (Not applicable) – The amount of memory in the private working set plus the amount of memory that is in use by the process and can be shared with other processes, in kilobytes.
• workingSetPrivKb (Not applicable) – The amount of memory that is in use by a process, but can't be shared with other processes, in kilobytes.
• workingSetShareableKb (Not applicable) – The amount of memory that is in use by a process and can be shared with other processes, in kilobytes.
• virtKb (Not applicable) – The amount of virtual address space the process is using, in kilobytes. Use of virtual address space doesn't necessarily imply corresponding use of either disk or main memory pages.
system
• handles (Handles) – The number of handles that the system is using.
Viewing logs, events, and streams
in the Amazon RDS console
• Reliability
• Availability
• Performance
• Security
Monitoring metrics in an Amazon RDS instance (p. 678) explains how to monitor your instance using
metrics. A complete solution must also monitor database events, log files, and activity streams. AWS
provides you with the following monitoring tools:
• Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications
with data from a variety of sources. EventBridge delivers a stream of real-time data from your own
applications, Software-as-a-Service (SaaS) applications, and AWS services. EventBridge routes that
data to targets such as AWS Lambda. This way, you can monitor events that happen in services and
build event-driven architectures. For more information, see the Amazon EventBridge User Guide.
• Amazon CloudWatch Logs provides a way to monitor, store, and access your log files from Amazon
RDS instances, AWS CloudTrail, and other sources. Amazon CloudWatch Logs can monitor information
in the log files and notify you when certain thresholds are met. You can also archive your log data in
highly durable storage. For more information, see the Amazon CloudWatch Logs User Guide.
• AWS CloudTrail captures API calls and related events made by or on behalf of your AWS account.
CloudTrail delivers the log files to an Amazon S3 bucket that you specify. You can identify which users
and accounts called AWS, the source IP address from which the calls were made, and when the calls
occurred. For more information, see the AWS CloudTrail User Guide.
• Database Activity Streams is an Amazon RDS feature that provides a near real-time stream of the
activity in your DB instance. Amazon RDS pushes activities to an Amazon Kinesis data stream. The
Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon
Kinesis Data Firehose and AWS Lambda to consume the stream and store the data.
Topics
• Viewing logs, events, and streams in the Amazon RDS console (p. 846)
• Monitoring Amazon RDS events (p. 850)
• Monitoring Amazon RDS log files (p. 895)
• Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940)
• Monitoring Amazon RDS with Database Activity Streams (p. 944)
The Logs & events tab for your RDS DB instance shows the following information:
• Amazon CloudWatch alarms – Shows any metric alarms that you have configured for the DB instance.
If you haven't configured alarms, you can create them in the RDS console. For more information, see
Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706).
• Recent events – Shows a summary of events (environment changes) for your RDS DB instance. For more information, see Viewing Amazon RDS events (p. 852).
• Logs – Shows database log files generated by a DB instance. For more information, see Monitoring
Amazon RDS log files (p. 895).
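If you prefer the AWS CLI, an alarm like the ones shown on this tab can be created with the put-metric-alarm command. The following is a sketch only; the alarm name, instance identifier, and threshold are hypothetical:

```shell
# Hypothetical alarm: fire when CPUUtilization for "mydbinstance" averages
# above 80 percent for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name my-rds-high-cpu \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold
```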
To view logs, events, and streams for your DB instance in the RDS console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that you want to monitor.
The database page appears. The following example shows an Oracle database named orclb.
5. Choose Configuration.
The following example shows the status of the database activity streams for your DB instance.
Monitoring RDS events
Topics
• Overview of events for Amazon RDS (p. 850)
• Viewing Amazon RDS events (p. 852)
• Working with Amazon RDS event notification (p. 855)
• Creating a rule that triggers on an Amazon RDS event (p. 870)
• Amazon RDS event categories and event messages (p. 874)
• DB instances
For a list of DB parameter group events, see DB parameter group events (p. 889).
• DB security groups
For a list of DB security group events, see DB security group events (p. 890).
• DB snapshots
For a list of DB snapshot events, see DB snapshot events.
• RDS Proxy
For a list of RDS Proxy events, see RDS Proxy events (p. 891).
• Blue/green deployment events
For a list of blue/green deployment events, see Blue/green deployment events (p. 892).
Overview of events for Amazon RDS
Viewing Amazon RDS events
• Resource name
• Resource type
• Time of the event
• Message summary of the event
Access the events through the AWS Management Console, which shows events from the past 24 hours.
You can also retrieve events by using the describe-events AWS CLI command, or the DescribeEvents RDS
API operation. If you use the AWS CLI or the RDS API to view events, you can retrieve events for up to the
past 14 days.
Note
If you need to store events for longer periods of time, you can send Amazon RDS events to
CloudWatch Events. For more information, see Creating a rule that triggers on an Amazon RDS
event (p. 870).
For descriptions of the Amazon RDS events, see Amazon RDS event categories and event
messages (p. 874).
To access detailed information about events using AWS CloudTrail, including request parameters, see
CloudTrail events (p. 940).
Console
To view all Amazon RDS events for the past 24 hours
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Events.
The following example shows a list of events filtered by the characters stopped.
AWS CLI
To view all events generated in the last hour, call describe-events with no parameters.
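For example:

```shell
# Return events from the last hour for all RDS resources in the default Region
aws rds describe-events
```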
The following sample output shows that a DB instance has been stopped.
{
"Events": [
{
"EventCategories": [
"notification"
],
"SourceType": "db-instance",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:testinst",
"Date": "2022-04-22T21:31:00.681Z",
"Message": "DB instance stopped",
"SourceIdentifier": "testinst"
}
]
}
To view all Amazon RDS events for the past 10080 minutes (7 days), call the describe-events AWS CLI
command and set the --duration parameter to 10080.
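For example:

```shell
# Return events from the past 7 days (10,080 minutes)
aws rds describe-events --duration 10080
```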
The following example shows the events in the specified time range for DB instance test-instance.
{
"Events": [
{
"SourceType": "db-instance",
"SourceIdentifier": "test-instance",
"EventCategories": [
"backup"
],
"Message": "Backing up DB instance",
"Date": "2022-03-13T23:09:23.983Z",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
},
{
"SourceType": "db-instance",
"SourceIdentifier": "test-instance",
"EventCategories": [
"backup"
],
"Message": "Finished DB Instance backup",
"Date": "2022-03-13T23:15:13.049Z",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
}
]
}
API
You can view all Amazon RDS instance events for the past 14 days by calling the DescribeEvents RDS API
operation and setting the Duration parameter to 20160.
Working with Amazon RDS event notification
Topics
• Overview of Amazon RDS event notification (p. 855)
• Granting permissions to publish notifications to an Amazon SNS topic (p. 859)
• Subscribing to Amazon RDS event notification (p. 860)
• Amazon RDS event notification tags and attributes (p. 863)
• Listing Amazon RDS event notification subscriptions (p. 864)
• Modifying an Amazon RDS event notification subscription (p. 865)
• Adding a source identifier to an Amazon RDS event notification subscription (p. 866)
• Removing a source identifier from an Amazon RDS event notification subscription (p. 867)
• Listing the Amazon RDS event notification categories (p. 868)
• Deleting an Amazon RDS event notification subscription (p. 869)
Topics
• RDS resources eligible for event subscription (p. 855)
• Basic process for subscribing to Amazon RDS event notifications (p. 856)
• Delivery of RDS event notifications (p. 856)
• Billing for Amazon RDS event notifications (p. 856)
• Examples of Amazon RDS events (p. 856)
• DB instance
• DB snapshot
• DB parameter group
• DB security group
• RDS Proxy
• Custom engine version
For example, if you subscribe to the backup category for a given DB instance, you're notified whenever
a backup-related event occurs that affects the DB instance. If you subscribe to a configuration change
category for a DB instance, you're notified when the DB instance is changed. You also receive notification
when an event notification subscription changes.
You might want to create several different subscriptions. For example, you might create one subscription
that receives all event notifications for all DB instances and another subscription that includes only
critical events for a subset of the DB instances. For the second subscription, specify one or more DB
instances in the filter.
1. You create an Amazon RDS event notification subscription by using the Amazon RDS console, AWS CLI,
or API.
Amazon RDS uses the ARN of an Amazon SNS topic to identify each subscription. The Amazon RDS
console creates the ARN for you when you create the subscription. If you use the AWS CLI or the RDS
API, you create the ARN yourself by using the Amazon SNS console, the AWS CLI, or the Amazon SNS API.
2. Amazon RDS sends an approval email or SMS message to the addresses you submitted with your
subscription.
3. You confirm your subscription by choosing the link in the notification you received.
4. The Amazon RDS console updates the My Event Subscriptions section with the status of your
subscription.
5. Amazon RDS begins sending the notifications to the addresses that you provided when you created
the subscription.
To learn about identity and access management when using Amazon SNS, see Identity and access
management in Amazon SNS in the Amazon Simple Notification Service Developer Guide.
You can use AWS Lambda to process event notifications from a DB instance. For more information, see
Using AWS Lambda with Amazon RDS in the AWS Lambda Developer Guide.
When Amazon SNS sends a notification to a subscribed HTTP or HTTPS endpoint, the POST message
sent to the endpoint has a message body that contains a JSON document. For more information, see
Amazon SNS message and JSON formats in the Amazon Simple Notification Service Developer Guide.
You can configure SNS to notify you with text messages. For more information, see Mobile text
messaging (SMS) in the Amazon Simple Notification Service Developer Guide.
To turn off notifications without deleting a subscription, choose No for Enabled in the Amazon RDS
console. Or you can set the Enabled parameter to false using the AWS CLI or Amazon RDS API.
Topics
• Example of a DB instance event (p. 857)
• Example of a DB parameter group event (p. 857)
• Example of a DB snapshot event (p. 858)
The following is an example of a DB instance event in JSON format. The event shows that RDS
performed a multi-AZ failover for the instance named my-db-instance. The event ID is RDS-
EVENT-0049.
{
"version": "0",
"id": "68f6e973-1a0c-d37b-f2f2-94a7f62ffd4e",
"detail-type": "RDS DB Instance Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-09-27T22:36:43Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:db:my-db-instance"
],
"detail": {
"EventCategories": [
"failover"
],
"SourceType": "DB_INSTANCE",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:my-db-instance",
"Date": "2018-09-27T22:36:43.292Z",
"Message": "A Multi-AZ failover has completed.",
"SourceIdentifier": "rds:my-db-instance",
"EventID": "RDS-EVENT-0049"
}
}
The following is an example of a DB parameter group event in JSON format. The event shows that the
parameter time_zone was updated in parameter group my-db-param-group. The event ID is RDS-
EVENT-0037.
{
"version": "0",
"id": "844e2571-85d4-695f-b930-0153b71dcb42",
"detail-type": "RDS DB Parameter Group Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-10-06T12:26:13Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group"
],
"detail": {
"EventCategories": [
"configuration change"
],
"SourceType": "DB_PARAM",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group",
"Date": "2018-10-06T12:26:13.882Z",
"Message": "Updated parameter time_zone to UTC with apply method immediate",
"SourceIdentifier": "rds:my-db-param-group",
"EventID": "RDS-EVENT-0037"
}
}
The following is an example of a DB snapshot event in JSON format. The event shows the deletion of the
snapshot named my-db-snapshot. The event ID is RDS-EVENT-0041.
{
"version": "0",
"id": "844e2571-85d4-695f-b930-0153b71dcb42",
"detail-type": "RDS DB Snapshot Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-10-06T12:26:13Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot"
],
"detail": {
"EventCategories": [
"deletion"
],
"SourceType": "SNAPSHOT",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot",
"Date": "2018-10-06T12:26:13.882Z",
"Message": "Deleted manual snapshot",
"SourceIdentifier": "rds:my-db-snapshot",
"EventID": "RDS-EVENT-0041"
}
}
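All three examples share the same envelope: a detail-type string plus a detail object that carries EventCategories and EventID. A consumer can therefore route on those fields. The following sketch is illustrative only and assumes just the shapes shown above; it is not part of the RDS API:

```python
# Condensed from the DB snapshot example above.
event = {
    "detail-type": "RDS DB Snapshot Event",
    "detail": {
        "EventCategories": ["deletion"],
        "EventID": "RDS-EVENT-0041",
        "SourceIdentifier": "rds:my-db-snapshot",
        "Message": "Deleted manual snapshot",
    },
}

def classify(event):
    """Return a short routing key for an RDS event, built from its
    detail-type and first event category."""
    category = event["detail"]["EventCategories"][0]
    return f"{event['detail-type']}/{category}"

print(classify(event))  # prints RDS DB Snapshot Event/deletion
```

A dispatcher could map such keys to handlers, for example to alert on deletion events but ignore routine configuration changes.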
By default, an Amazon SNS topic has a policy allowing all Amazon RDS resources within the same
account to publish notifications to it. You can attach a custom policy to allow cross-account notifications,
or to restrict access to certain resources.
The following is an example of an IAM policy that you attach to the destination Amazon SNS topic. It
restricts the topic to DB instances with names that match the specified prefix. To use this policy, specify
the following values:
• Resource – The Amazon Resource Name (ARN) for your Amazon SNS topic
• SourceARN – Your RDS resource ARN
• SourceAccount – Your AWS account ID
To see a list of resource types and their ARNs, see Resources Defined by Amazon RDS in the Service
Authorization Reference.
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.rds.amazonaws.com"
},
"Action": [
"sns:Publish"
],
"Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
"Condition": {
"ArnLike": {
"aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
},
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
}
}
]
}
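The ArnLike condition in the policy above uses shell-style wildcards (* and ?). Its effect can be approximated locally with Python's fnmatch, as in this sketch; the approximation ignores ARN-segment edge cases and is only meant to show which source ARNs the condition admits:

```python
from fnmatch import fnmatch

# The pattern from the aws:SourceArn condition in the policy above.
PATTERN = "arn:aws:rds:us-east-1:123456789012:db:prefix-*"

def source_arn_allowed(arn, pattern=PATTERN):
    """Approximate the ArnLike check with shell-style wildcard matching."""
    return fnmatch(arn, pattern)

# A DB instance whose name starts with the prefix matches:
print(source_arn_allowed("arn:aws:rds:us-east-1:123456789012:db:prefix-db1"))
# Any other instance name does not:
print(source_arn_allowed("arn:aws:rds:us-east-1:123456789012:db:other-db"))
```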
You can specify the type of source you want to be notified of and the Amazon RDS source that triggers
the event:
Source type
The type of source. For example, Source type might be Instances. You must choose a source type.
Resources to include
The Amazon RDS resources that are generating the events. For example, you might choose Select
specific instances and then myDBInstance1.
The following table explains the result when you specify or don't specify Resources to include.
Resources to include | Result | Example
Specified | RDS notifies you about all events for the specified resource only. | If your Source type is Instances and your resource is myDBInstance1, RDS notifies you about all events for myDBInstance1 only.
Not specified | RDS notifies you about the events for the specified source type for all your Amazon RDS resources. | If your Source type is Instances, RDS notifies you about all instance-related events in your account.
An Amazon SNS topic subscriber receives every message published to the topic by default. To receive
only a subset of the messages, the subscriber must assign a filter policy to the topic subscription. For
more information about SNS message filtering, see Amazon SNS message filtering in the Amazon Simple
Notification Service Developer Guide.
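A filter policy matches message attributes against lists of allowed values. As a rough local model of the idea (exact string matching only; real SNS policies also support prefix, anything-but, and numeric operators, none of which are modeled here):

```python
def matches_filter_policy(attributes, policy):
    """Return True if every policy key is present in the message
    attributes with one of the allowed values (exact match only)."""
    return all(attributes.get(key) in allowed
               for key, allowed in policy.items())

# Hypothetical policy: deliver only two specific event IDs.
policy = {"EventID": ["RDS-EVENT-0006", "RDS-EVENT-0004"]}

attrs_match = {"EventID": "RDS-EVENT-0006", "SourceIdentifier": "mydb"}
attrs_other = {"EventID": "RDS-EVENT-0049", "SourceIdentifier": "mydb"}

print(matches_filter_policy(attrs_match, policy))  # prints True
print(matches_filter_policy(attrs_other, policy))  # prints False
```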
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions.
3. In the Event subscriptions pane, choose Create event subscription.
4. Enter your subscription details as follows:
• Choose New email topic. Enter a name for your email topic and a list of recipients. We
recommend that you configure the event subscription to use the same email address as
your primary account contact. Recommendations, service events, and personal health
messages are sent through different channels, so subscribing the same email address
ensures that all of these messages are consolidated in one location.
• Choose Amazon Resource Name (ARN). Then choose an existing Amazon SNS ARN for an
Amazon SNS topic.
If you want to use a topic that has been enabled for server-side encryption (SSE), grant
Amazon RDS the necessary permissions to access the AWS KMS key. For more information, see
Enable compatibility between event sources from AWS services and encrypted topics in the
Amazon Simple Notification Service Developer Guide.
c. For Source type, choose a source type. For example, choose Instances or Parameter groups.
d. Choose the event categories and resources that you want to receive event notifications for.
The following example configures event notifications for the DB instance named testinst.
e. Choose Create.
The Amazon RDS console indicates that the subscription is being created.
AWS CLI
To subscribe to RDS event notification, use the AWS CLI create-event-subscription command.
Include the following required parameters:
• --subscription-name
• --sns-topic-arn
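As an illustrative sketch, the required parameters can be assembled into a CLI invocation programmatically. The subscription name and topic ARN below are placeholders, and the optional flags mirror commonly used create-event-subscription options:

```python
def build_create_event_subscription(subscription_name, sns_topic_arn,
                                    source_type=None, event_categories=None):
    """Assemble an 'aws rds create-event-subscription' command line.

    Only --subscription-name and --sns-topic-arn are required;
    the optional arguments mirror commonly used CLI flags.
    """
    cmd = ["aws", "rds", "create-event-subscription",
           "--subscription-name", subscription_name,
           "--sns-topic-arn", sns_topic_arn]
    if source_type:
        cmd += ["--source-type", source_type]
    if event_categories:
        cmd += ["--event-categories"] + list(event_categories)
    return cmd

print(" ".join(build_create_event_subscription(
    "my-subscription",
    "arn:aws:sns:us-east-1:123456789012:my-topic",
    source_type="db-instance",
    event_categories=["backup", "failure"])))
```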
Example
For Windows:
API
To subscribe to Amazon RDS event notification, call the Amazon RDS API function
CreateEventSubscription. Include the following required parameters:
• SubscriptionName
• SnsTopicArn
By default, Amazon SNS and Amazon EventBridge receive every message sent to them. SNS and
EventBridge can filter the messages and send notifications to the preferred communication mode,
such as an email, a text message, or a call to an HTTP endpoint.
Note
The notification sent in an email or a text message doesn't include event tags.
The following table shows the message attributes for RDS events sent to the topic subscriber.

Attribute | Description
EventID | Identifier for the RDS event message, for example, RDS-EVENT-0006.
The RDS tags provide data about the resource that was affected by the service event. RDS adds the
current state of the tags in the message body when the notification is sent to SNS or EventBridge.
For more information about filtering message attributes for SNS, see Amazon SNS message filtering in
the Amazon Simple Notification Service Developer Guide.
For more information about filtering event tags for EventBridge, see Content filtering in Amazon
EventBridge event patterns in the Amazon EventBridge User Guide.
For more information about filtering payload-based tags for SNS, see https://aws.amazon.com/blogs/
compute/introducing-payload-based-message-filtering-for-amazon-sns/.
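EventBridge content filtering supports, among other operators, exact and prefix matching on string fields. The following toy evaluator models only those two forms to show the idea; it is not the EventBridge implementation:

```python
def field_matches(value, conditions):
    """Match one event field against a list of pattern conditions.

    Supports the two simplest EventBridge forms: a literal string
    (exact match) or {"prefix": "..."} (prefix match).
    """
    for cond in conditions:
        if isinstance(cond, dict) and "prefix" in cond:
            if isinstance(value, str) and value.startswith(cond["prefix"]):
                return True
        elif value == cond:
            return True
    return False

# Hypothetical event detail and pattern.
detail = {"EventID": "RDS-EVENT-0087", "SourceType": "DB_INSTANCE"}
pattern = {"EventID": [{"prefix": "RDS-EVENT"}], "SourceType": ["DB_INSTANCE"]}

matched = all(field_matches(detail.get(k), v) for k, v in pattern.items())
print(matched)  # prints True
```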
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions. The Event subscriptions pane shows all your
event notification subscriptions.
AWS CLI
To list your current Amazon RDS event notification subscriptions, use the AWS CLI describe-event-
subscriptions command.
Example
API
To list your current Amazon RDS event notification subscriptions, call the Amazon RDS API
DescribeEventSubscriptions action.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions.
3. In the Event subscriptions pane, choose the subscription that you want to modify and choose Edit.
4. Make your changes to the subscription in either the Target or Source section.
5. Choose Edit. The Amazon RDS console indicates that the subscription is being modified.
AWS CLI
To modify an Amazon RDS event notification subscription, use the AWS CLI modify-event-
subscription command. Include the following required parameter:
• --subscription-name
Example
For Windows:
API
To modify an Amazon RDS event notification subscription, call the Amazon RDS API operation ModifyEventSubscription.
Include the following required parameter:
• SubscriptionName
Console
You can easily add or remove source identifiers using the Amazon RDS console by selecting or
deselecting them when modifying a subscription. For more information, see Modifying an Amazon RDS
event notification subscription (p. 865).
AWS CLI
To add a source identifier to an Amazon RDS event notification subscription, use the AWS CLI add-
source-identifier-to-subscription command. Include the following required parameters:
• --subscription-name
• --source-identifier
Example
The following example adds the source identifier mysqldb to the myrdseventsubscription
subscription.
For Windows:
API
To add a source identifier to an Amazon RDS event notification subscription, call the Amazon RDS API
AddSourceIdentifierToSubscription. Include the following required parameters:
• SubscriptionName
• SourceIdentifier
Console
You can easily add or remove source identifiers using the Amazon RDS console by selecting or
deselecting them when modifying a subscription. For more information, see Modifying an Amazon RDS
event notification subscription (p. 865).
AWS CLI
To remove a source identifier from an Amazon RDS event notification subscription, use the AWS CLI
remove-source-identifier-from-subscription command. Include the following required
parameters:
• --subscription-name
• --source-identifier
Example
The following example removes the source identifier mysqldb from the myrdseventsubscription
subscription.
For Windows:
API
To remove a source identifier from an Amazon RDS event notification subscription, use the Amazon
RDS API RemoveSourceIdentifierFromSubscription command. Include the following required
parameters:
• SubscriptionName
• SourceIdentifier
Console
When you create or modify an event notification subscription, the event categories are displayed in
the Amazon RDS console. For more information, see Modifying an Amazon RDS event notification
subscription (p. 865).
AWS CLI
To list the Amazon RDS event notification categories, use the AWS CLI describe-event-categories
command. This command has no required parameters.
Example
API
To list the Amazon RDS event notification categories, use the Amazon RDS API
DescribeEventCategories command. This command has no required parameters.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose DB Event Subscriptions.
3. In the My DB Event Subscriptions pane, choose the subscription that you want to delete.
4. Choose Delete.
5. The Amazon RDS console indicates that the subscription is being deleted.
AWS CLI
To delete an Amazon RDS event notification subscription, use the AWS CLI delete-event-
subscription command. Include the following required parameter:
• --subscription-name
Example
API
To delete an Amazon RDS event notification subscription, use the RDS API DeleteEventSubscription
command. Include the following required parameter:
• SubscriptionName
Creating a rule that triggers on an Amazon RDS event
Topics
• Creating rules to send Amazon RDS events to CloudWatch Events (p. 870)
• Tutorial: Log DB instance state changes using Amazon EventBridge (p. 871)
• To create an IAM role automatically, choose Create a new role for this specific resource.
• To use an IAM role that you created before, choose Use existing role.
8. Optionally, repeat steps 5-7 to add another target for this rule.
9. Choose Configure details. For Rule definition, type a name and description for the rule.
For more information, see Creating a CloudWatch Events Rule That Triggers on an Event in the Amazon
CloudWatch User Guide.
Topics
• Step 1: Create an AWS Lambda function (p. 871)
• Step 2: Create a rule (p. 872)
• Step 3: Test the rule (p. 872)
a. Enter a name and description for the Lambda function. For example, name the function
RDSInstanceStateChange.
b. For Runtime, choose Node.js 16.x.
c. For Architecture, choose x86_64.
d. For Execution role, do either of the following:
console.log('Loading function');
d. Choose Deploy.
You are redirected to the Amazon CloudWatch console. If you are not redirected, choose View the
metrics in CloudWatch.
6. In All metrics, choose the name of the rule that you created.
{
"version": "0",
"id": "12a345b6-78c9-01d2-34e5-123f4ghi5j6k",
"detail-type": "RDS DB Instance Event",
"source": "aws.rds",
"account": "111111111111",
"time": "2021-03-19T19:34:09Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:111111111111:db:testdb"
],
"detail": {
"EventCategories": [
"notification"
],
"SourceType": "DB_INSTANCE",
"SourceArn": "arn:aws:rds:us-east-1:111111111111:db:testdb",
"Date": "2021-03-19T19:34:09.293Z",
"Message": "DB instance stopped",
"SourceIdentifier": "testdb",
"EventID": "RDS-EVENT-0087"
}
}
For more examples of RDS events in JSON format, see Overview of events for Amazon RDS (p. 850).
10. (Optional) When you're finished, you can open the Amazon RDS console and start the instance that
you stopped.
Amazon RDS event categories and event messages
Topics
• DB cluster events (p. 874)
• DB instance events (p. 876)
• DB parameter group events (p. 889)
• DB security group events (p. 890)
• DB snapshot events (p. 890)
• DB cluster snapshot events (p. 891)
• RDS Proxy events (p. 891)
• Blue/green deployment events (p. 892)
• Custom engine version events (p. 893)
DB cluster events
The following table shows the event category and a list of events when a DB cluster is the source type.
For more information about Multi-AZ DB cluster deployments, see Multi-AZ DB cluster
deployments (p. 499).
Category | Event ID | Message | Notes
global failover | RDS-EVENT-0182 | Old primary DB cluster name in Region name successfully shut down. | This event is for a switchover operation (previously called "managed planned failover").
global failover | RDS-EVENT-0183 | Waiting for data synchronization across global cluster members. Current lags behind primary DB cluster: reason. | This event is for a switchover operation (previously called "managed planned failover"). A replication lag is occurring during the synchronization phase of the global database failover.
global failover | RDS-EVENT-0184 | New primary DB cluster name in Region name was successfully promoted. | This event is for a switchover operation (previously called "managed planned failover").
DB instance events
The following table shows the event category and a list of events when a DB instance is the source type.
Category | Event ID | Message | Notes
failure | RDS-EVENT-0031 | DB instance put into name state. RDS recommends that you initiate a point-in-time-restore. | The DB instance has failed due to an incompatible configuration or an underlying storage issue. Begin a point-in-time-restore for the DB instance.
failure | RDS-EVENT-0035 | Database instance put into state. message. | The DB instance has invalid parameters. For example, if the DB instance could not start because a memory-related parameter is set too high for this instance class, your action would be to modify the memory parameter and reboot the DB instance.
failure | RDS-EVENT-0081 | Amazon RDS has been unable to create credentials for name option. This is due to the name IAM role not being configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details. | The IAM role that you use to access your Amazon S3 bucket for SQL Server native backup and restore is configured incorrectly. For more information, see Setting up for native backup and restore (p. 1421).
failure | RDS-EVENT-0279 | The promotion of the RDS Custom read replica failed. message | The message includes details about the failure.
failure | RDS-EVENT-0281 | RDS Custom couldn't modify the DB instance because the pre-check failed. message | The message includes details about the failure.
failure | RDS-EVENT-0285 | RDS Custom couldn't create a final snapshot for the DB instance because message. | The message includes details about the failure.
low storage | RDS-EVENT-0007 | Allocated storage has been exhausted. Allocate additional storage to resolve. | The allocated storage for the DB instance has been consumed. To resolve this issue, allocate additional storage for the DB instance. For more information, see the RDS FAQ. You can monitor the storage space for a DB instance using the Free Storage Space metric.
low storage | RDS-EVENT-0089 | The free storage capacity for DB instance: name is low at percentage of the provisioned storage [Provisioned Storage: size, Free Storage: size]. You may want to increase the provisioned storage to address this issue. | The DB instance has consumed more than 90% of its allocated storage. You can monitor the storage space for a DB instance using the Free Storage Space metric.
low storage | RDS-EVENT-0227 | Your Aurora cluster's storage is dangerously low with only amount terabytes remaining. Please take measures to reduce the storage load on your cluster. | The Aurora storage subsystem is running low on space.
maintenance, notification | RDS-EVENT-0191 | A new version of the time zone file is available for update. | If you update your RDS for Oracle DB engine, Amazon RDS generates this event if you haven't chosen a time zone file upgrade and the database doesn't use the latest DST time zone file available on the instance. For more information, see Oracle time zone file autoupgrade (p. 2091).
maintenance, notification | RDS-EVENT-0192 | The update of your time zone file has started. | The upgrade of your Oracle time zone file has begun. For more information, see Oracle time zone file autoupgrade (p. 2091).
maintenance, notification | RDS-EVENT-0194 | The update of your time zone file has finished. | The update of your Oracle time zone file has completed. For more information, see Oracle time zone file autoupgrade (p. 2091).
notification | RDS-EVENT-0064 | The TDE encryption key was rotated successfully. | For information about recommended best practices, see Amazon RDS basic operational guidelines (p. 286).
notification | RDS-EVENT-0189 | The gp2 burst balance credits for the RDS database instance are low. To resolve this issue, reduce IOPS usage or modify your storage settings to enable higher performance. | The gp2 burst balance credits for the RDS database instance are low. To resolve this issue, reduce IOPS usage or modify your storage settings to enable higher performance. For more information, see I/O credits and burst performance in the Amazon Elastic Compute Cloud User Guide.
• Troubleshooting a
MariaDB read replica
problem (p. 1327)
• Troubleshooting a SQL
Server read replica
problem (p. 1449)
• Troubleshooting a
MySQL read replica
problem (p. 1718)
• Troubleshooting RDS for
Oracle replicas (p. 1988)
read replica | RDS-EVENT-0046 | Replication for the Read Replica resumed. | This message appears when you first create a read replica, or as a monitoring message confirming that replication is functioning properly. If this message follows an RDS-EVENT-0045 notification, then replication has resumed following an error or after replication was stopped.
recovery | RDS-EVENT-0052 | Multi-AZ instance recovery started. | Recovery time will vary with the amount of data to be recovered.
DB snapshot events
The following table shows the event category and a list of events when a DB snapshot is the source type.
For more information about blue/green deployments, see Using Amazon RDS Blue/Green Deployments
for database updates (p. 566).
failure | RDS-EVENT-0198 | Creation failed for custom engine version name. message | The message includes details about the failure, such as missing files.
Monitoring RDS logs
You can access database logs using the AWS Management Console, the AWS Command Line Interface
(AWS CLI), or the Amazon RDS API. You can't view, watch, or download transaction logs.
Topics
• Viewing and listing database log files (p. 895)
• Downloading a database log file (p. 896)
• Watching a database log file (p. 897)
• Publishing database logs to Amazon CloudWatch Logs (p. 898)
• Reading log file contents using REST (p. 900)
• MariaDB database log files (p. 902)
• Microsoft SQL Server database log files (p. 911)
• MySQL database log files (p. 915)
• Oracle database log files (p. 924)
• RDS for PostgreSQL database log files (p. 931)
Console
AWS CLI
To list the available database log files for a DB instance, use the AWS CLI describe-db-log-files
command.
The following example returns a list of log files for a DB instance named my-db-instance.
Downloading a database log file
Example
RDS API
To list the available database log files for a DB instance, use the Amazon RDS API
DescribeDBLogFiles action.
Console
To download a database log file
AWS CLI
To download a database log file, use the AWS CLI command download-db-log-file-portion. By
default, this command downloads only the latest portion of a log file. However, you can download an
entire file by specifying the parameter --starting-token 0.
The following example shows how to download the entire contents of a log file called log/ERROR.4 and
store it in a local file called errorlog.txt.
Example
Watching a database log file
For Windows:
RDS API
To download a database log file, use the Amazon RDS API DownloadDBLogFilePortion action.
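DownloadDBLogFilePortion is paginated: each response carries a Marker to pass into the next call and an AdditionalDataPending flag. The retrieval loop can be sketched against a local stub that mimics that contract; the stub below stands in for the real service call:

```python
def fake_download_portion(pages, marker="0"):
    """Stub that mimics DownloadDBLogFilePortion's pagination contract.

    pages is a list of log-file chunks; marker is the index of the next
    chunk, encoded as a string like the real API's opaque token.
    """
    i = int(marker)
    return {
        "LogFileData": pages[i],
        "Marker": str(i + 1),
        "AdditionalDataPending": i + 1 < len(pages),
    }

def download_full_log(pages):
    """Fetch every portion, following Marker until no data is pending."""
    marker, chunks = "0", []
    while True:
        resp = fake_download_portion(pages, marker)
        chunks.append(resp["LogFileData"])
        if not resp["AdditionalDataPending"]:
            return "".join(chunks)
        marker = resp["Marker"]

log = download_full_log(["line 1\n", "line 2\n", "line 3\n"])
print(log, end="")
```

With the real API, fake_download_portion would be replaced by a call that passes DBInstanceIdentifier, LogFileName, and the returned Marker.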
5. In the Logs section, choose a log file, and then choose Watch.
RDS shows the tail of the log, as in the following MySQL example.
Publishing to CloudWatch Logs
Topics
• Overview of RDS integration with CloudWatch Logs (p. 898)
• Deciding which logs to publish to CloudWatch Logs (p. 899)
• Specifying the logs to publish to CloudWatch Logs (p. 899)
• Searching and filtering your logs in CloudWatch Logs (p. 899)
Amazon RDS continuously streams your DB instance log records to a log group. For example, there is
a log group named /aws/rds/instance/instance_name/log_type for each type of log that you publish.
This log group is in the same AWS Region as the database instance that generates the log.
AWS retains log data published to CloudWatch Logs for an indefinite time period unless you specify a
retention period. For more information, see Change log data retention in CloudWatch Logs.
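The log group naming convention described above can be captured in a small helper; the convention comes from the paragraph, while the helper itself is illustrative:

```python
def rds_log_group(instance_name, log_type):
    """Build the CloudWatch Logs log group name that RDS uses for a
    published DB instance log: /aws/rds/instance/<name>/<log_type>."""
    return f"/aws/rds/instance/{instance_name}/{log_type}"

print(rds_log_group("my-db-instance", "error"))
# prints /aws/rds/instance/my-db-instance/error
print(rds_log_group("my-db-instance", "slowquery"))
# prints /aws/rds/instance/my-db-instance/slowquery
```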
• the section called “Publishing MariaDB logs to Amazon CloudWatch Logs” (p. 904)
• the section called “Publishing MySQL logs to Amazon CloudWatch Logs” (p. 918)
• the section called “Publishing Oracle logs to Amazon CloudWatch Logs” (p. 927)
• the section called “Publishing PostgreSQL logs to Amazon CloudWatch Logs” (p. 936)
• the section called “Publishing SQL Server logs to Amazon CloudWatch Logs” (p. 911)
The following example specifies the audit log, error logs, general log, and slow query log.
Reading log file contents using REST
For more information, see Searching and filtering log data in the Amazon CloudWatch Logs User Guide.
For a blog tutorial explaining how to monitor RDS logs, see Build proactive database monitoring for
Amazon RDS with Amazon CloudWatch Logs, AWS Lambda, and Amazon SNS.
• DBInstanceIdentifier – The name of the DB instance that contains the log file that you want to
download.
• LogFileName – The name of the log file to be downloaded.
The response contains the contents of the requested log file, as a stream.
The following example downloads the log file named log/ERROR.6 for the DB instance named sample-sql
in the us-west-2 region.
X-Amz-Signature: 353a4f14b3f250142d9afc34f9f9948154d46ce7d4ec091d0cdabbcf8b40c558
If you specify a nonexistent DB instance, the response consists of the following error:
MariaDB database log files
You can monitor the MariaDB logs directly through the Amazon RDS console, Amazon RDS API, AWS
CLI, or AWS SDKs. You can also access MariaDB logs by directing the logs to a database table in the
main database and querying that table. You can use the mysqlbinlog utility to download a binary log.
For more information about viewing, downloading, and watching file-based database logs, see
Monitoring Amazon RDS log files (p. 895).
Topics
• Accessing MariaDB error logs (p. 902)
• Accessing the MariaDB slow query and general logs (p. 902)
• Publishing MariaDB logs to Amazon CloudWatch Logs (p. 904)
• Log file size (p. 906)
• Managing table-based MariaDB logs (p. 906)
• Binary logging format (p. 907)
• Accessing MariaDB binary logs (p. 908)
• Binary log annotation (p. 909)
Accessing MariaDB error logs
MariaDB writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance
can go hours or days without new entries being written to the error log. If you see no recent entries, it's
because the server did not encounter an error that resulted in a log entry.
You can control MariaDB logging by using the parameters in this list:
• slow_query_log: To create the slow query log, set to 1. The default is 0.
• general_log: To create the general log, set to 1. The default is 0.
• long_query_time: To prevent fast-running queries from being logged in the slow query log, specify
a value for the shortest query run time to be logged, in seconds. The default is 10 seconds; the
minimum is 0. If log_output = FILE, you can specify a floating point value that goes to microsecond
resolution. If log_output = TABLE, you must specify an integer value with second resolution. Only
queries whose run time exceeds the long_query_time value are logged. For example, setting
long_query_time to 0.1 prevents any query that runs for less than 100 milliseconds from being
logged.
• log_queries_not_using_indexes: To log all queries that do not use an index to the slow query
log, set this parameter to 1. The default is 0. Queries that do not use an index are logged even if their
run time is less than the value of the long_query_time parameter.
• log_output option: You can specify one of the following options for the log_output parameter:
• TABLE (default) – Write general queries to the mysql.general_log table, and slow queries to the
mysql.slow_log table.
• FILE – Write both general and slow query logs to the file system. Log files are rotated hourly.
• NONE – Disable logging.
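These parameters are set in a custom DB parameter group. As a sketch (the parameter group name is a placeholder), the slow query log could be enabled with file output as follows:

```shell
# Sketch: enable the slow query log, log queries over 100 ms,
# and write logs to the file system.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-params \
    --parameters \
    "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
    "ParameterName=long_query_time,ParameterValue=0.1,ApplyMethod=immediate" \
    "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"
```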
When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This
measure is a precaution to reduce the possibility of a large log file either blocking database use or
affecting performance. FILE and TABLE logging approach rotation and deletion as follows:
• When FILE logging is enabled, log files are examined every hour and log files older than 24 hours
are deleted. In some cases, the remaining combined log file size after the deletion might exceed
the threshold of 2 percent of a DB instance's allocated space. In these cases, the largest log files are
deleted until the log file size no longer exceeds the threshold.
• When TABLE logging is enabled, in some cases log tables are rotated every 24 hours. This rotation
occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It
also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB
instance is greater than 90 percent of the DB instance's allocated storage space, the thresholds for log
rotation are reduced. Log tables are then rotated if the space used by the table logs is more than 10
percent of the allocated storage space. They're also rotated if the size of all logs combined is greater
than 5 GB.
When log tables are rotated, the current log table is copied to a backup log table and the entries in
the current log table are removed. If the backup log table already exists, then it is deleted before the
current log table is copied to the backup. You can query the backup log table if needed. The backup
log table for the mysql.general_log table is named mysql.general_log_backup. The backup
log table for the mysql.slow_log table is named mysql.slow_log_backup.
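The backup tables can be read with ordinary queries. For example, from a client connected to the instance:

```sql
-- Sketch: inspect the slowest statements preserved by the last rotation.
SELECT start_time, user_host, query_time, sql_text
FROM mysql.slow_log_backup
ORDER BY query_time DESC
LIMIT 10;
```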
Amazon RDS records both TABLE and FILE log rotation in an Amazon RDS event and sends you a
notification.
To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs,
set the log_output parameter to FILE. Like the MariaDB error log, these log files are rotated hourly. The
log files that were generated during the previous 24 hours are retained.
For more information about the slow query and general logs, go to the following topics in the MariaDB
documentation:
Amazon RDS publishes each MariaDB database log as a separate database stream in the log group. For
example, suppose that you configure the export function to include the slow query log. Then slow query
data is stored in a slow query log stream in the /aws/rds/instance/my_instance/slowquery log
group.
The error log is enabled by default. The following table summarizes the requirements for the other
MariaDB logs.
Log Requirement
Console
AWS CLI
You can publish MariaDB logs with the AWS CLI. You can call the modify-db-instance command
with the following parameters:
• --db-instance-identifier
• --cloudwatch-logs-export-configuration
Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.
You can also publish MariaDB logs by calling the following AWS CLI commands:
• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time
Run one of these AWS CLI commands with the following options:
• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine
Other options might be required depending on the AWS CLI command you run.
Example
The following example modifies an existing MariaDB DB instance to publish log files to CloudWatch Logs.
The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is
EnableLogTypes, and its value is an array of strings with any combination of audit, error, general,
and slowquery.
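For Linux, macOS, or Unix, the command takes roughly the following shape (the instance identifier is a placeholder):

```shell
# Sketch: publish all four MariaDB log types to CloudWatch Logs.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration \
    '{"EnableLogTypes":["audit","error","general","slowquery"]}'
```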
For Windows:
Example
The following command creates a MariaDB DB instance and publishes log files to CloudWatch Logs.
The --enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of audit, error, general, and slowquery.
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
    --db-instance-class db.m4.large \
    --engine mariadb
For Windows:
RDS API
You can publish MariaDB logs with the RDS API. You can call the ModifyDBInstance operation with the
following parameters:
• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration
Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.
You can also publish MariaDB logs by calling the following RDS API operations:
• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime
Run one of these RDS API operations with the following parameters:
• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass
Other parameters might be required depending on the RDS API operation you run.
Managing table-based MariaDB logs
You can direct the general and slow query logs to tables on the DB instance by setting the log_output
server parameter to TABLE. General queries are then logged to the mysql.general_log table, and
slow queries are logged to the mysql.slow_log table. You can query the tables to access the log
information. Enabling this logging increases the amount of data written to the database, which can
degrade performance.
Both the general log and the slow query log are disabled by default. To enable logging to tables, you
must also set the general_log and slow_query_log server parameters to 1.
Log tables keep growing until the respective logging activities are turned off by resetting the appropriate
parameter to 0. A large amount of data often accumulates over time, which can use up a considerable
percentage of your allocated storage space. Amazon RDS does not allow you to truncate the log tables,
but you can move their contents. Rotating a table saves its contents to a backup table and then creates
a new empty log table. You can manually rotate the log tables with the following command line
procedures, where the command prompt is indicated by PROMPT>:
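The rotation procedures are the standard Amazon RDS stored procedures for this task:

```sql
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```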
To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in
succession.
Binary logging format
If you plan to use replication, the binary logging format is important. This is because it determines
the record of data changes that is recorded in the source and sent to the replication targets. For
information about the advantages and disadvantages of different binary logging formats for replication,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.
Important
Setting the binary logging format to row-based can result in very large binary log files. Large
binary log files reduce the amount of storage available for a DB instance. They also can increase
the amount of time to perform a restore operation of a DB instance.
Statement-based replication can cause inconsistencies between the source DB instance and a
read replica. For more information, see Unsafe statements for statement-based replication in
the MariaDB documentation.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group that is used by the DB instance that you want to modify.
You can't modify a default parameter group. If the DB instance is using a default parameter group,
create a new parameter group and associate it with the DB instance.
For more information on DB parameter groups, see Working with parameter groups (p. 347).
4. For Parameter group actions, choose Edit.
5. Set the binlog_format parameter to the binary logging format of your choice (ROW, STATEMENT,
or MIXED).
6. Choose Save changes to save the updates to the DB parameter group.
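Alternatively, the same parameter change can be sketched with the AWS CLI (the parameter group name is a placeholder):

```shell
# Sketch: set the binary logging format to MIXED.
# binlog_format is dynamic, so ApplyMethod=immediate takes effect
# without a reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-params \
    --parameters \
    "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=immediate"
```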
Accessing MariaDB binary logs
To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:
For more information about mysqlbinlog options, go to mysqlbinlog options in the MariaDB
documentation.
mysqlbinlog \
--read-from-remote-server \
--host=mariadbinstance1.1234abcd.region.rds.amazonaws.com \
--port=3306 \
--user ReplUser \
--password <password> \
--result-file=/tmp/binlog.txt
For Windows:
mysqlbinlog ^
--read-from-remote-server ^
--host=mariadbinstance1.1234abcd.region.rds.amazonaws.com ^
--port=3306 ^
--user ReplUser ^
--password <password> ^
--result-file=/tmp/binlog.txt
Amazon RDS normally purges a binary log as soon as possible. However, the binary log must still be
available on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to
retain binary logs, use the mysql.rds_set_configuration stored procedure. Specify a period with
enough time for you to download the logs. After you set the retention period, monitor storage usage for
the DB instance to ensure that the retained binary logs don't take up too much storage.
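For example, from a client connected to the instance, a 24-hour retention period can be set as follows:

```sql
-- Sketch: retain binary logs on the instance for 24 hours.
CALL mysql.rds_set_configuration('binlog retention hours', 24);
```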
To display the current setting, use the mysql.rds_show_configuration stored procedure:
call mysql.rds_show_configuration;
Binary log annotation
You can enable binary log annotations globally by creating a custom parameter group and
setting the binlog_annotate_row_events parameter to 1. You can also enable annotations
at the session level by calling SET SESSION binlog_annotate_row_events = 1. Use the
replicate_annotate_row_events parameter to replicate binary log annotations to the replica
instance if binary logging is enabled on it. No special privileges are required to use these settings.
The following is an example of a row-based transaction in MariaDB. The use of row-based logging is
triggered by setting the transaction isolation level to read-committed.
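Reconstructed from the binary log entries that follow (the square table definition is an assumption), the transaction can be sketched as:

```sql
-- Sketch: a transaction logged row-based because the session
-- isolation level is READ COMMITTED.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;
```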
Without annotations, the binary log entries for the transaction look like the following:
BEGIN
/*!*/;
# at 1163
# at 1209
#150922 7:55:57 server id 1855786460 end_log_pos 1209 Table_map: `test`.`square`
mapped to number 76
#150922 7:55:57 server id 1855786460 end_log_pos 1247 Write_rows: table id 76
flags: STMT_END_F
### INSERT INTO `test`.`square`
### SET
### @1=5
### @2=25
# at 1247
#150922 7:56:01 server id 1855786460 end_log_pos 1274 Xid = 62
COMMIT/*!*/;
The following statement enables session-level annotations for this same transaction, and disables them
after committing the transaction:
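A sketch of such a statement sequence, based on the annotated entries that follow:

```sql
-- Sketch: annotate the binary log for one transaction only.
SET SESSION binlog_annotate_row_events = 1;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;
SET SESSION binlog_annotate_row_events = 0;
```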
With annotations, the binary log entries for the transaction look like the following:
BEGIN
/*!*/;
# at 423
# at 483
# at 529
#150922 8:04:24 server id 1855786460 end_log_pos 483 Annotate_rows:
#Q> INSERT INTO square(x, y) VALUES(5, 5 * 5)
#150922 8:04:24 server id 1855786460 end_log_pos 529 Table_map: `test`.`square` mapped
to number 76
#150922 8:04:24 server id 1855786460 end_log_pos 567 Write_rows: table id 76 flags:
STMT_END_F
### INSERT INTO `test`.`square`
### SET
### @1=5
### @2=25
# at 567
#150922 8:04:26 server id 1855786460 end_log_pos 594 Xid = 88
COMMIT/*!*/;
Microsoft SQL Server database log files
Topics
• Retention schedule (p. 911)
• Viewing the SQL Server error log by using the rds_read_error_log procedure (p. 911)
• Publishing SQL Server logs to Amazon CloudWatch Logs (p. 911)
Retention schedule
Log files are rotated each day and whenever your DB instance is restarted. The following is the retention
schedule for Microsoft SQL Server logs on Amazon RDS.
Error logs A maximum of 30 error logs are retained. Amazon RDS might delete error
logs older than 7 days.
Agent logs A maximum of 10 agent logs are retained. Amazon RDS might delete agent
logs older than 7 days.
Trace files Trace files are retained according to the trace file retention period of your DB
instance. The default trace file retention period is 7 days. To modify the trace
file retention period for your DB instance, see Setting the retention period
for trace and dump files (p. 1621).
Dump files Dump files are retained according to the dump file retention period of your
DB instance. The default dump file retention period is 7 days. To modify the
dump file retention period for your DB instance, see Setting the retention
period for trace and dump files (p. 1621).
With CloudWatch Logs, you can do the following:
• Store logs in highly durable storage space with a retention period that you define.
• Search and filter log data.
• Share log data between accounts.
Amazon RDS publishes each SQL Server database log as a separate database stream in the log group.
For example, if you publish error logs, error data is stored in an error log stream in the /aws/rds/
instance/my_instance/error log group.
For Multi-AZ DB instances, Amazon RDS publishes the database log as two separate streams in the log
group. For example, if you publish the error logs, the error data is stored in the error log streams /aws/
rds/instance/my_instance.node1/error and /aws/rds/instance/my_instance.node2/
error respectively. The log streams don't change during a failover, and the error log stream of each node
can contain error logs from either the primary or the secondary instance.
Note
Publishing SQL Server logs to CloudWatch Logs isn't enabled by default. Publishing trace and
dump files isn't supported. Publishing SQL Server logs to CloudWatch Logs is supported in all
AWS Regions except Asia Pacific (Hong Kong).
Console
To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console
AWS CLI
To publish SQL Server logs, you can use the modify-db-instance command with the following
parameters:
• --db-instance-identifier
• --cloudwatch-logs-export-configuration
Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.
You can also publish SQL Server logs using the following commands:
• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-to-point-in-time
Example
The following example creates a SQL Server DB instance with CloudWatch Logs publishing enabled.
The --enable-cloudwatch-logs-exports value is a JSON array of strings that can include error,
agent, or both.
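For Linux, macOS, or Unix, such a command might look like the following sketch. The identifier, class, credentials, and storage values are placeholders, and other required options are omitted:

```shell
# Sketch: create a SQL Server DB instance with error and agent log
# export enabled. All identifying values are placeholder assumptions.
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine sqlserver-se \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password mypassword \
    --enable-cloudwatch-logs-exports '["error","agent"]'
```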
For Windows:
Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).
Example
The following example modifies an existing SQL Server DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings that can include error, agent, or both.
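For Linux, macOS, or Unix, the command takes roughly this shape (the instance identifier is a placeholder):

```shell
# Sketch: publish both SQL Server log types to CloudWatch Logs.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration \
    '{"EnableLogTypes":["error","agent"]}'
```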
For Windows:
Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).
Example
The following example modifies an existing SQL Server DB instance to disable publishing agent log files
to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The
key for this object is DisableLogTypes, and its value is an array of strings that can include error,
agent, or both.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["agent"]}'
For Windows:
Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).
MySQL database log files
For more information about viewing, downloading, and watching file-based database logs, see
Monitoring Amazon RDS log files (p. 895).
Topics
• Overview of RDS for MySQL database logs (p. 915)
• Publishing MySQL logs to Amazon CloudWatch Logs (p. 918)
• Managing table-based MySQL logs (p. 920)
• Configuring MySQL binary logging (p. 921)
• Accessing MySQL binary logs (p. 922)
RDS for MySQL generates the following types of logs:
• Error log
• Slow query log
• General log
• Audit log
The RDS for MySQL error log is generated by default. You can generate the slow query and general logs
by setting parameters in your DB parameter group.
Topics
• RDS for MySQL error logs (p. 915)
• RDS for MySQL slow query and general logs (p. 916)
• MySQL audit log (p. 916)
• Log rotation and retention for RDS for MySQL (p. 916)
• Size limits on redo logs (p. 917)
• Size limits on BLOBs written to the redo log (p. 917)
RDS for MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB
instance can go hours or days without new entries being written to the error log. If you see no recent
entries, it's because the server didn't encounter an error that would result in a log entry.
By design, the error logs are filtered so that only unexpected events, such as errors, are shown. However,
the error logs also contain some additional database information, such as query progress, which isn't
shown. Therefore, even without any actual errors, the size of the error logs might increase because of
ongoing database activities. And while you might see a certain size in bytes or kilobytes for the error logs
in the AWS Management Console, they might have 0 bytes when you download them.
RDS for MySQL writes mysql-error.log to disk every 5 minutes. It appends the contents of the log to
mysql-error-running.log.
RDS for MySQL rotates the mysql-error-running.log file every hour. It retains the logs generated
during the last two weeks.
Note
The log retention period is different between Amazon RDS and Aurora.
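To see which error log files RDS currently retains for an instance, you can sketch a listing with describe-db-log-files (the instance name is a placeholder):

```shell
# Sketch: list retained MySQL error log files with sizes and
# last-written timestamps.
aws rds describe-db-log-files \
    --db-instance-identifier mydbinstance \
    --filename-contains error
```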
You can control RDS for MySQL logging by using the parameters in this list:
For more information about the slow query and general logs, go to the following topics in the MySQL
documentation:
• The MySQL slow query log, error log, and the general log file sizes are constrained to no more than
2 percent of the allocated storage space for a DB instance. To maintain this threshold, logs are
automatically rotated every hour. MySQL removes log files more than two weeks old. If the combined
log file size exceeds the threshold after removing old log files, then the oldest log files are deleted
until the log file size no longer exceeds the threshold.
• When FILE logging is enabled, log files are examined every hour and log files more than two weeks
old are deleted. In some cases, the remaining combined log file size after the deletion might exceed
the threshold of 2 percent of a DB instance's allocated space. In these cases, the oldest log files are
deleted until the log file size no longer exceeds the threshold.
• When TABLE logging is enabled, in some cases log tables are rotated every 24 hours. This rotation
occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It
also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB
instance is greater than 90 percent of the DB instance's allocated storage space, then the thresholds
for log rotation are reduced. Log tables are then rotated if the space used by the table logs is more
than 10 percent of the allocated storage space. They're also rotated if the size of all logs combined
is greater than 5 GB. You can subscribe to the low_free_storage event to be notified when log
tables are rotated to free up space. For more information, see Working with Amazon RDS event
notification (p. 855).
When log tables are rotated, the current log table is first copied to a backup log table. Then the entries
in the current log table are removed. If the backup log table already exists, then it is deleted before the
current log table is copied to the backup. You can query the backup log table if needed. The backup
log table for the mysql.general_log table is named mysql.general_log_backup. The backup
log table for the mysql.slow_log table is named mysql.slow_log_backup.
To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs,
set the log_output parameter to FILE. Like the MySQL error log, these log files are rotated hourly. The
log files that were generated during the previous two weeks are retained. Note that the retention period
is different between Amazon RDS and Aurora.
For RDS for MySQL version 8.0.30 and higher, the innodb_redo_log_capacity parameter
is used instead of the innodb_log_file_size parameter. The default value of the
innodb_redo_log_capacity parameter is 256 MB. For more information, see Changes in MySQL
8.0.30 in the MySQL documentation.
Amazon RDS publishes each MySQL database log as a separate database stream in the log group. For
example, if you configure the export function to include the slow query log, slow query data is stored in
a slow query log stream in the /aws/rds/instance/my_instance/slowquery log group.
The error log is enabled by default. The following table summarizes the requirements for the other
MySQL logs.
Log Requirement
Console
AWS CLI
You can publish MySQL logs with the AWS CLI. You can call the modify-db-instance command with
the following parameters:
• --db-instance-identifier
• --cloudwatch-logs-export-configuration
Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.
You can also publish MySQL logs by calling the following AWS CLI commands:
• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time
Run one of these AWS CLI commands with the following options:
• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine
Other options might be required depending on the AWS CLI command you run.
Example
The following example modifies an existing MySQL DB instance to publish log files to CloudWatch Logs.
The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is
EnableLogTypes, and its value is an array of strings with any combination of audit, error, general,
and slowquery.
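For Linux, macOS, or Unix, the command takes roughly the following shape (the instance identifier is a placeholder):

```shell
# Sketch: publish all four MySQL log types to CloudWatch Logs.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration \
    '{"EnableLogTypes":["audit","error","general","slowquery"]}'
```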
For Windows:
Example
The following example creates a MySQL DB instance and publishes log files to CloudWatch Logs. The
--enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of audit, error, general, and slowquery.
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
    --db-instance-class db.m4.large \
    --engine MySQL
For Windows:
RDS API
You can publish MySQL logs with the RDS API. You can call the ModifyDBInstance operation with the
following parameters:
• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration
Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.
You can also publish MySQL logs by calling the following RDS API operations:
• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime
Run one of these RDS API operations with the following parameters:
• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass
Other parameters might be required depending on the RDS API operation you run.
Managing table-based MySQL logs
Both the general log and the slow query log are disabled by default. To enable logging to tables, you
must also set the general_log and slow_query_log server parameters to 1.
Log tables keep growing until the respective logging activities are turned off by resetting the appropriate
parameter to 0. A large amount of data often accumulates over time, which can use up a considerable
percentage of your allocated storage space. Amazon RDS doesn't allow you to truncate the log tables,
but you can move their contents. Rotating a table saves its contents to a backup table and then creates
a new empty log table. You can manually rotate the log tables with the following command line
procedures, where the command prompt is indicated by PROMPT>:
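The rotation procedures are the standard Amazon RDS stored procedures for this task:

```sql
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```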
To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in
succession.
The MySQL binary log contains the following information:
• Events that describe database changes such as table creation or row modifications
• Information about the duration of each statement that updated data
• Events for statements that could have updated data but didn't
The binary log records statements that are sent during replication. It is also required for some
recovery operations. For more information, see The Binary Log and Binary Log Overview in the MySQL
documentation.
The automated backups feature determines whether binary logging is turned on or off for MySQL. You
have the following options:
MySQL on Amazon RDS supports the row-based, statement-based, and mixed binary logging formats. We
recommend mixed unless you need a specific binlog format. For details on the different MySQL binary
log formats, see Binary logging formats in the MySQL documentation.
If you plan to use replication, the binary logging format is important because it determines the record of
data changes that is recorded in the source and sent to the replication targets. For information about the
advantages and disadvantages of different binary logging formats for replication, see Advantages and
disadvantages of statement-based and row-based replication in the MySQL documentation.
Important
Setting the binary logging format to row-based can result in very large binary log files. Large
binary log files reduce the amount of storage available for a DB instance and can increase the
amount of time to perform a restore operation of a DB instance.
Statement-based replication can cause inconsistencies between the source DB instance and a
read replica. For more information, see Determination of safe and unsafe statements in binary
logging in the MySQL documentation.
Enabling binary logging increases the number of write disk I/O operations to the DB instance.
You can monitor IOPS usage with the WriteIOPS CloudWatch metric.
You can't modify a default parameter group. If the DB instance is using a default parameter group,
create a new parameter group and associate it with the DB instance.
For more information on parameter groups, see Working with parameter groups (p. 347).
4. From Parameter group actions, choose Edit.
5. Set the binlog_format parameter to the binary logging format of your choice (ROW, STATEMENT,
or MIXED).
You can turn off binary logging by setting the backup retention period of a DB instance to zero, but
this disables daily automated backups. We recommend that you don't disable backups. For more
information about the Backup retention period setting, see Settings for DB instances (p. 402).
6. Choose Save changes to save the updates to the DB parameter group.
Because the binlog_format parameter is dynamic, you don't need to reboot the DB instance for the
changes to apply.
Important
Changing a DB parameter group affects all DB instances that use that parameter group. If you
want to specify different binary logging formats for different MySQL DB instances in an AWS
Region, the DB instances must use different DB parameter groups. These parameter groups
identify different logging formats. Assign the appropriate DB parameter group to each DB instance.
Accessing MySQL binary logs
To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:
• --read-from-remote-server – Required.
• --host – The DNS name from the endpoint of the instance.
• --port – The port used by the instance.
• --user – A MySQL user that has been granted the REPLICATION SLAVE permission.
• --password – The password for the MySQL user, or omit a password value so that the utility prompts
you for a password.
• --raw – Download the file in binary format.
• --result-file – The local file to receive the raw output.
• --stop-never – Stream the binary log files.
• --verbose – When you use the ROW binlog format, include this option to see the row events as
pseudo-SQL statements. For more information on the --verbose option, see mysqlbinlog row event
display in the MySQL documentation.
• Specify the names of one or more binary log files. To get a list of the available logs, use the SQL
command SHOW BINARY LOGS.
For more information about mysqlbinlog options, see mysqlbinlog — Utility for processing binary log
files in the MySQL documentation.
mysqlbinlog \
--read-from-remote-server \
--host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com \
--port=3306 \
--user ReplUser \
--password \
--raw \
--verbose \
--result-file=/tmp/ \
binlog.00098
For Windows:
mysqlbinlog ^
--read-from-remote-server ^
--host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com ^
--port=3306 ^
--user ReplUser ^
--password ^
--raw ^
--verbose ^
--result-file=/tmp/ ^
binlog.00098
Amazon RDS normally purges a binary log as soon as possible, but the binary log must still be available
on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to retain binary
logs, use the mysql.rds_set_configuration (p. 1758) stored procedure and specify a period with enough
time for you to download the logs. After you set the retention period, monitor storage usage for the DB
instance to ensure that the retained binary logs don't take up too much storage.
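For example, the following call (the 24-hour value is illustrative; choose a period long enough for you to download the logs) retains binary logs for 24 hours:

```sql
call mysql.rds_set_configuration('binlog retention hours', 24);
```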
To display the current setting, use the mysql.rds_show_configuration (p. 1760) stored procedure.
call mysql.rds_show_configuration;
Oracle database log files
The Oracle audit files provided are the standard Oracle auditing files. Amazon RDS supports the Oracle
fine-grained auditing (FGA) feature. However, log access doesn't provide access to FGA events that are
stored in the SYS.FGA_LOG$ table and that are accessible through the DBA_FGA_AUDIT_TRAIL view.
The DescribeDBLogFiles API operation that lists the Oracle log files that are available for a
DB instance ignores the MaxRecords parameter and returns up to 1,000 records. The call returns
LastWritten as a POSIX date in milliseconds.
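Because LastWritten is expressed in milliseconds, divide by 1,000 before converting it to a calendar date. The following is a minimal sketch using GNU date (the timestamp value is illustrative):

```shell
# LastWritten value from DescribeDBLogFiles (illustrative)
last_written_ms=1600000000000

# Convert milliseconds to seconds, then format as a UTC timestamp
date -u -d "@$(( last_written_ms / 1000 ))" +"%Y-%m-%dT%H:%M:%SZ"
# → 2020-09-13T12:26:40Z
```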
Topics
• Retention schedule (p. 924)
• Working with Oracle trace files (p. 924)
• Publishing Oracle logs to Amazon CloudWatch Logs (p. 927)
• Previous methods for accessing alert logs and listener logs (p. 930)
Retention schedule
The Oracle database engine might rotate log files if they get very large. To retain audit or trace files,
download them. If you store the files locally, you reduce your Amazon RDS storage costs and make more
space available for your data.
The following table shows the retention schedule for Oracle alert logs, audit files, and trace files on
Amazon RDS.
• Alert logs – The text alert log is rotated daily with 30-day retention managed by Amazon RDS. The XML alert log is retained for at least seven days. You can access this log by using the ALERTLOG view.
• Audit files – The default retention period for audit files is seven days. Amazon RDS might delete audit files older than seven days.
• Trace files – The default retention period for trace files is seven days. Amazon RDS might delete trace files older than seven days.
• Listener logs – The default retention period for the listener logs is seven days. Amazon RDS might delete listener logs older than seven days.
Note
Audit files and trace files share the same retention configuration.
Listing files
You can use either of two procedures to allow access to any file in the background_dump_dest
path. The first procedure refreshes a view containing a listing of all files currently in
background_dump_dest.
EXEC rdsadmin.manage_tracefiles.refresh_tracefile_listing;
After the view is refreshed, query the following view to access the results.
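Based on the rdsadmin.tracefile_listing view referenced elsewhere in this section, the query might look like the following sketch:

```sql
SELECT * FROM rdsadmin.tracefile_listing;
```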
An alternative to the previous process is to use FROM table to stream nonrelational data in a table-like
format to list database directory contents.
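A sketch of this approach, assuming the rdsadmin.rds_file_util.listdir table function and the BDUMP directory name:

```sql
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP')) ORDER BY mtime;
```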
On a read replica, get the name of the BDUMP directory by querying V$DATABASE.DB_UNIQUE_NAME.
If the unique name is DATABASE_B, then the BDUMP directory is BDUMP_B. The following
example queries the BDUMP name on a replica and then uses this name to query the contents of
alert_DATABASE.log.2020-06-23.
BDUMP_VARIABLE
--------------
BDUMP_B
You can use many standard methods to trace individual sessions connected to an Oracle DB instance in
Amazon RDS. To enable tracing for a session, you can run subprograms in PL/SQL packages supplied by
Oracle, such as DBMS_SESSION and DBMS_MONITOR. For more information, see Enabling tracing for a
session in the Oracle documentation.
For example, you can use the rdsadmin.tracefile_listing view mentioned preceding to list all
of the trace files on the system. You can then set the tracefile_table view to point to the intended
trace file using the following procedure.
EXEC
rdsadmin.manage_tracefiles.set_tracefile_table_location('CUST01_ora_3260_SYSTEMSTATE.trc');
The following example creates an external table in the current schema with the location set to the file
provided. You can retrieve the contents into a local file using a SQL query.
SPOOL /tmp/tracefile.txt
SELECT * FROM tracefile_table;
SPOOL OFF;
The following example shows the current trace file retention period, and then sets a new trace file
retention period.
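A sketch of what these calls might look like, assuming the rdsadmin.rdsadmin_util.show_configuration and set_configuration procedures (the 30-day value is illustrative):

```sql
-- Show the current tracefile retention setting
EXEC rdsadmin.rdsadmin_util.show_configuration;

-- Set tracefile retention to 30 days (43,200 minutes)
EXEC rdsadmin.rdsadmin_util.set_configuration(
  name  => 'tracefile retention',
  value => 43200);
```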
In addition to the periodic purge process, you can manually remove files from the
background_dump_dest. The following example shows how to purge all files older than five minutes.
EXEC rdsadmin.manage_tracefiles.purge_tracefiles(5);
You can also purge all files that match a specific pattern (if you do, don't include the file extension, such
as .trc). The following example shows how to purge all files that start with SCHPOC1_ora_5935.
EXEC rdsadmin.manage_tracefiles.purge_tracefiles('SCHPOC1_ora_5935');
Amazon RDS publishes each Oracle database log as a separate log stream in the log group. For
example, if you configure the export function to include the audit log, audit data is stored in an audit
log stream in the /aws/rds/instance/my_instance/audit log group. RDS for Oracle supports the
following logs:
• Alert log
• Trace log
• Audit log
• Listener log
• Oracle Management Agent log
This Oracle Management Agent log consists of the log groups shown in the following list (log file – log group):
• emctl.log – oemagent-emctl
• emdctlj.log – oemagent-emdctlj
• gcagent.log – oemagent-gcagent
• gcagent_errors.log – oemagent-gcagent-errors
• emagent.nohup – oemagent-emagent-nohup
• secure.log – oemagent-secure
For more information, see Locating Management Agent Log and Trace Files in the Oracle documentation.
Console
To publish Oracle DB logs to CloudWatch Logs from the AWS Management Console
4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.
AWS CLI
To publish Oracle logs, you can use the modify-db-instance command with the following
parameters:
• --db-instance-identifier
• --cloudwatch-logs-export-configuration
Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.
You can also publish Oracle logs using the following commands:
• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time
Example
The following example creates an Oracle DB instance with CloudWatch Logs publishing enabled. The
--enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of alert, audit, listener, and trace.
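For Linux, macOS, or Unix, the command might look like the following sketch, assuming the --enable-cloudwatch-logs-exports option of create-db-instance (the instance identifier, engine edition, class, and credentials are placeholders):

```shell
aws rds create-db-instance \
    --db-instance-identifier my-oracle-instance \
    --engine oracle-ee \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password mypassword \
    --enable-cloudwatch-logs-exports '["alert","audit","listener","trace"]'
```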
For Windows:
Example
The following example modifies an existing Oracle DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings with any combination of alert, audit,
listener, and trace.
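For Linux, macOS, or Unix, the command might look like the following sketch (the instance identifier is a placeholder):

```shell
aws rds modify-db-instance \
    --db-instance-identifier my-oracle-instance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["alert","audit","listener","trace"]}'
```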
For Windows:
Example
The following example modifies an existing Oracle DB instance to disable publishing audit and listener
log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON
object. The key for this object is DisableLogTypes, and its value is an array of strings with any
combination of alert, audit, listener, and trace.
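For Linux, macOS, or Unix, the command might look like the following sketch (the instance identifier is a placeholder):

```shell
aws rds modify-db-instance \
    --db-instance-identifier my-oracle-instance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["audit","listener"]}'
```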
For Windows:
RDS API
You can publish Oracle DB logs with the RDS API. You can call the ModifyDBInstance action with the
following parameters:
• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration
Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.
You can also publish Oracle logs by calling the following RDS API operations:
• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime
Run one of these RDS API operations with the following parameters:
• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass
Other parameters might be required depending on the RDS operation that you run.
The listenerlog view contains entries for Oracle Database version 12.1.0.2 and earlier. To access the
listener log for these database versions, use the following query.
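A sketch of such a query, assuming the listenerlog view exposes its entries in a message_text column:

```sql
SELECT message_text FROM listenerlog;
```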
For Oracle Database versions 12.2.0.1 and later, access the listener log using Amazon CloudWatch Logs.
Note
Oracle rotates the alert and listener logs when they exceed 10 MB, at which point they are
unavailable from Amazon RDS views.
PostgreSQL database log files
For more information about how you can view, download, and watch file-based database logs, see
Monitoring Amazon RDS log files (p. 895). To learn more about PostgreSQL logs, see Working with
Amazon RDS and Aurora PostgreSQL logs: Part 1 and Working with Amazon RDS and Aurora PostgreSQL
logs: Part 2.
In addition to the standard PostgreSQL logs discussed in this topic, RDS for PostgreSQL also supports
the PostgreSQL Audit extension (pgAudit). Most regulated industries and government agencies need
to maintain an audit log or audit trail of changes made to data to comply with legal requirements. For
information about installing and using pgAudit, see Using pgAudit to log database activity (p. 2362).
Topics
• Parameters that affect logging behavior (p. 931)
• Turning on query logging for your RDS for PostgreSQL DB instance (p. 933)
• Publishing PostgreSQL logs to Amazon CloudWatch Logs (p. 936)
• log_destination (default: stderr) – Sets the output format for the log. The default is stderr, but you can also specify comma-separated value (CSV) by adding csvlog to the setting. For more information, see Setting the log destination (stderr, csvlog) (p. 933).
• log_filename (default: postgresql.log.%Y-%m-%d-%H) – Specifies the pattern for the log file name. In addition to the default, this parameter supports postgresql.log.%Y-%m-%d for the filename pattern.
• log_line_prefix (default: %t:%r:%u@%d:[%p]:) – Defines the prefix for each log line that gets written to stderr, to note the time (%t), remote host (%r), user (%u), database (%d), and process ID (%p). You can't modify this parameter.
• rds.log_retention_period (default: 4320) – PostgreSQL logs that are older than the specified number of minutes are deleted. The default value of 4320 minutes deletes log files after 3 days. For more information, see Setting the log retention period (p. 932).
To identify application issues, you can look for query failures, login failures, deadlocks, and fatal server
errors in the log. For example, suppose that you converted a legacy application from Oracle to Amazon
RDS PostgreSQL, but not all queries converted correctly. These incorrectly formatted queries generate
error messages that you can find in the logs to help identify problems. For more information about
logging queries, see Turning on query logging for your RDS for PostgreSQL DB instance (p. 933).
In the following topics, you can find information about how to set various parameters that control the
basic details for your PostgreSQL logs.
Topics
• Setting the log retention period (p. 932)
• Setting log file rotation (p. 932)
• Setting the log destination (stderr, csvlog) (p. 933)
• Understanding the log_line_prefix parameter (p. 933)
We recommend that you have your logs routinely published to Amazon CloudWatch Logs so that you can
view and analyze system data long after the logs have been removed from your RDS for PostgreSQL DB
instance. For more information, see Publishing PostgreSQL logs to Amazon CloudWatch Logs (p. 936).
Log files can also be rotated according to their size, as specified in the log_rotation_size parameter.
This parameter specifies that the log should be rotated when it reaches the specified size (in kilobytes).
For an RDS for PostgreSQL DB instance, log_rotation_size is unset by default, that is, there is no value specified. However, you can set the parameter to a value from 0 to 2097151 kB (kilobytes).
The log file names are based on the file name pattern specified in the log_filename parameter. The
available settings for this parameter are as follows:
• postgresql.log.%Y-%m-%d – Default format for the log file name. Includes the year, month, and
date in the name of the log file.
• postgresql.log.%Y-%m-%d-%H – Includes the hour in the log file name format.
RDS for PostgreSQL can also generate the logs in csvlog format. The csvlog is useful for analyzing
the log data as comma-separated values (CSV) data. For example, suppose that you use the log_fdw
extension to work with your logs as foreign tables. The foreign table created on stderr log files
contains a single column with log event data. By adding csvlog to the log_destination parameter,
you get the log file in the CSV format with demarcations for the multiple columns of the foreign table.
You can now sort and analyze your logs more easily. To learn how to use the log_fdw with csvlog, see
Using the log_fdw extension to access the DB log using SQL (p. 2401).
If you specify csvlog for this parameter, be aware that both stderr and csvlog files are
generated. Be sure to monitor the storage consumed by the logs, taking into account the
rds.log_retention_period and other settings that affect log storage and turnover. Using stderr
and csvlog more than doubles the storage consumed by the logs.
If you add csvlog to log_destination and you want to revert to stderr alone, you need to reset
the parameter. To do so, open the Amazon RDS console and then open the custom DB parameter group
for your instance. Choose the log_destination parameter, choose Edit parameter, and then choose
Reset.
For more information about configuring logging, see Working with Amazon RDS and Aurora PostgreSQL
logs: Part 1.
%t:%r:%u@%d:[%p]:
You can't change this setting. Each log entry sent to stderr includes the following information.
• log_min_duration_statement – Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries.
• log_statement_sample_rate – The percentage of statements exceeding the time specified in log_min_duration_sample to be logged, expressed as a floating point value between 0.0 and 1.0.
Following, you can find reference information about the log_statement and log_min_duration
parameters.
log_statement
This parameter specifies the type of SQL statements that should get sent to the log. The default value
is none. If you change this parameter to all, ddl, or mod, be sure to apply recommended actions
to mitigate the risk of exposing passwords in the logs. For more information, see Mitigating risk of
password exposure when using query logging (p. 936).
ddl
Logs all data definition language (DDL) statements, such as CREATE, ALTER, DROP, and so on.
mod
Logs all DDL statements and data manipulation language (DML) statements, such as INSERT,
UPDATE, and DELETE, which modify the data.
all
Logs all SQL statements.
none
No SQL statements get logged. We recommend this setting to avoid the risk of exposing passwords
in the logs.
log_min_duration_statement
Any SQL statement that runs for at least the specified amount of time gets logged. By default,
this parameter isn't set. Turning on this parameter can help you find unoptimized queries.
-1 to 2147483647
The number of milliseconds (ms) of runtime over which a statement gets logged.
These steps assume that your RDS for PostgreSQL DB instance uses a custom DB parameter group.
1. Set the log_statement parameter to all. The following example shows the information that is
written to the postgresql.log file with this parameter setting.
2. Set the log_min_duration_statement parameter. The following example shows the information
that is written to the postgresql.log file when the parameter is set to 1.
Queries that exceed the duration specified in the log_min_duration_statement parameter are
logged. The following shows an example. You can view the log file for your RDS for PostgreSQL DB
instance in the Amazon RDS Console.
We recommend that you keep log_statement set to none to avoid exposing passwords. If you set
log_statement to all, ddl, or mod, we recommend that you take one or more of the following steps.
• For the client, encrypt sensitive information. For more information, see Encryption Options in the
PostgreSQL documentation. Use the ENCRYPTED (and UNENCRYPTED) options of the CREATE and
ALTER statements. For more information, see CREATE USER in the PostgreSQL documentation.
• For your RDS for PostgreSQL DB instance, set up and use the PostgreSQL Auditing (pgAudit) extension.
This extension redacts sensitive information in CREATE and ALTER statements sent to the log. For
more information, see Using pgAudit to log database activity (p. 2362).
• Restrict access to the CloudWatch logs.
• Use stronger authentication mechanisms such as IAM.
All currently available RDS for PostgreSQL versions support publishing log files to CloudWatch Logs. For
more information, see Amazon RDS for PostgreSQL updates in the Amazon RDS for PostgreSQL Release
Notes.
To work with CloudWatch Logs, configure your RDS for PostgreSQL DB instance to publish log data to a
log group.
You can publish the following log types to CloudWatch Logs for RDS for PostgreSQL:
• PostgreSQL log
• Upgrade log
After you complete the configuration, Amazon RDS publishes the log events to log streams within a
CloudWatch log group. For example, the PostgreSQL log data is stored within the log group /aws/rds/
instance/my_instance/postgresql. To view your logs, open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
Console
The Log exports section is available only for PostgreSQL versions that support publishing to
CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.
AWS CLI
You can publish PostgreSQL logs with the AWS CLI. You can call the modify-db-instance command
with the following parameters.
• --db-instance-identifier
• --cloudwatch-logs-export-configuration
Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.
You can also publish PostgreSQL logs by calling the following CLI commands:
• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-to-point-in-time
Run one of these CLI commands with the following options:
• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine
Other options might be required depending on the CLI command you run.
The following example modifies an existing PostgreSQL DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings with any combination of postgresql and
upgrade.
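For Linux, macOS, or Unix, the command might look like the following sketch (the instance identifier is a placeholder):

```shell
aws rds modify-db-instance \
    --db-instance-identifier my-postgres-instance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql","upgrade"]}'
```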
For Windows:
The following example creates a PostgreSQL DB instance and publishes log files to CloudWatch Logs.
The --enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of postgresql and upgrade.
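For Linux, macOS, or Unix, the command might look like the following sketch (the instance identifier, class, and credentials are placeholders):

```shell
aws rds create-db-instance \
    --db-instance-identifier my-postgres-instance \
    --engine postgres \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password mypassword \
    --enable-cloudwatch-logs-exports '["postgresql","upgrade"]'
```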
For Windows:
RDS API
You can publish PostgreSQL logs with the RDS API. You can call the ModifyDBInstance action with the
following parameters:
• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration
Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.
You can also publish PostgreSQL logs by calling the following RDS API operations:
• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceToPointInTime
Run one of these RDS API operations with the following parameters:
• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass
Other parameters might be required depending on the operation that you run.
Monitoring RDS API calls in CloudTrail
Topics
• CloudTrail integration with Amazon RDS (p. 940)
• Amazon RDS log file entries (p. 940)
CloudTrail events
CloudTrail captures API calls for Amazon RDS as events. An event represents a single request from any
source and includes information about the requested action, the date and time of the action, request
parameters, and so on. Events include calls from the Amazon RDS console and from code calls to the
Amazon RDS API operations.
Amazon RDS activity is recorded in a CloudTrail event in Event history. You can use the CloudTrail
console to view the last 90 days of recorded API activity and events in an AWS Region. For more
information, see Viewing events with CloudTrail event history.
CloudTrail trails
For an ongoing record of events in your AWS account, including events for Amazon RDS, create a trail.
A trail is a configuration that enables delivery of events to a specified Amazon S3 bucket. CloudTrail
typically delivers log files within 15 minutes of account activity.
Note
If you don't configure a trail, you can still view the most recent events in the CloudTrail console
in Event history.
You can create two types of trails for an AWS account: a trail that applies to all Regions, or a trail that
applies to one Region. By default, when you create a trail in the console, the trail applies to all Regions.
Additionally, you can configure other AWS services to further analyze and act upon the event data
collected in CloudTrail logs. For more information, see the AWS CloudTrail User Guide.
Amazon RDS log file entries
The following example shows a CloudTrail log entry that demonstrates the CreateDBInstance action.
{
"eventVersion": "1.04",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::123456789012:user/johndoe",
"accountId": "123456789012",
"accessKeyId": "AKIAI44QH8DHBEXAMPLE",
"userName": "johndoe"
},
"eventTime": "2018-07-30T22:14:06Z",
"eventSource": "rds.amazonaws.com",
"eventName": "CreateDBInstance",
"awsRegion": "us-east-1",
"sourceIPAddress": "192.0.2.0",
"userAgent": "aws-cli/1.15.42 Python/3.6.1 Darwin/17.7.0 botocore/1.10.42",
"requestParameters": {
"enableCloudwatchLogsExports": [
"audit",
"error",
"general",
"slowquery"
],
"dBInstanceIdentifier": "test-instance",
"engine": "mysql",
"masterUsername": "myawsuser",
"allocatedStorage": 20,
"dBInstanceClass": "db.m1.small",
"masterUserPassword": "****"
},
"responseElements": {
"dBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance",
"storageEncrypted": false,
"preferredBackupWindow": "10:27-10:57",
"preferredMaintenanceWindow": "sat:05:47-sat:06:17",
"backupRetentionPeriod": 1,
"allocatedStorage": 20,
"storageType": "standard",
"engineVersion": "8.0.28",
"dbInstancePort": 0,
"optionGroupMemberships": [
{
"status": "in-sync",
"optionGroupName": "default:mysql-8-0"
}
],
"dBParameterGroups": [
{
"dBParameterGroupName": "default.mysql8.0",
"parameterApplyStatus": "in-sync"
}
],
"monitoringInterval": 0,
"dBInstanceClass": "db.m1.small",
"readReplicaDBInstanceIdentifiers": [],
"dBSubnetGroup": {
"dBSubnetGroupName": "default",
"dBSubnetGroupDescription": "default",
"subnets": [
{
"subnetAvailabilityZone": {"name": "us-east-1b"},
"subnetIdentifier": "subnet-cbfff283",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1e"},
"subnetIdentifier": "subnet-d7c825e8",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1f"},
"subnetIdentifier": "subnet-6746046b",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1c"},
"subnetIdentifier": "subnet-bac383e0",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1d"},
"subnetIdentifier": "subnet-42599426",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1a"},
"subnetIdentifier": "subnet-da327bf6",
"subnetStatus": "Active"
}
],
"vpcId": "vpc-136a4c6a",
"subnetGroupStatus": "Complete"
},
"masterUsername": "myawsuser",
"multiAZ": false,
"autoMinorVersionUpgrade": true,
"engine": "mysql",
"cACertificateIdentifier": "rds-ca-2015",
"dbiResourceId": "db-ETDZIIXHEWY5N7GXVC4SH7H5IA",
"dBSecurityGroups": [],
"pendingModifiedValues": {
"masterUserPassword": "****",
"pendingCloudwatchLogsExports": {
"logTypesToEnable": [
"audit",
"error",
"general",
"slowquery"
]
}
},
"dBInstanceStatus": "creating",
"publiclyAccessible": true,
"domainMemberships": [],
"copyTagsToSnapshot": false,
"dBInstanceIdentifier": "test-instance",
"licenseModel": "general-public-license",
"iAMDatabaseAuthenticationEnabled": false,
"performanceInsightsEnabled": false,
"vpcSecurityGroups": [
{
"status": "active",
"vpcSecurityGroupId": "sg-f839b688"
}
]
},
"requestID": "daf2e3f5-96a3-4df7-a026-863f96db793e",
"eventID": "797163d3-5726-441d-80a7-6eeb7464acd4",
"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}
As shown in the userIdentity element in the preceding example, every event or log entry contains
information about who generated the request. The identity information helps you determine the
following:
• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
For more information about the userIdentity, see the CloudTrail userIdentity element. For more
information about CreateDBInstance and other Amazon RDS actions, see the Amazon RDS API
Reference.
Monitoring RDS with Database Activity Streams
Topics
• Overview of Database Activity Streams (p. 944)
• Configuring unified auditing for Oracle Database (p. 948)
• Configuring auditing policy for Microsoft SQL Server (p. 949)
• Starting a database activity stream (p. 950)
• Modifying a database activity stream (p. 951)
• Getting the status of a database activity stream (p. 953)
• Stopping a database activity stream (p. 954)
• Monitoring database activity streams (p. 955)
• Managing access to database activity streams (p. 975)
Security threats are both external and internal. To protect against internal threats, you can control
administrator access to data streams by configuring the Database Activity Streams feature. Amazon RDS
DBAs don't have access to the collection, transmission, storage, and processing of the streams.
Topics
• How database activity streams work (p. 944)
• Auditing in Oracle Database and Microsoft SQL Server Database (p. 945)
• Asynchronous mode for database activity streams (p. 947)
• Requirements and limitations for database activity streams (p. 947)
• Region and version availability (p. 947)
• Supported DB instance classes for database activity streams (p. 947)
You can configure applications for compliance management to consume database activity streams. These
applications can use the stream to generate alerts and audit activity on your database.
Amazon RDS supports database activity streams in Multi-AZ deployments. In this case, database activity
streams audit both the primary and standby instances.
Overview
Topics
• Unified auditing in Oracle Database (p. 945)
• Auditing in Microsoft SQL Server (p. 945)
• Non-native audit fields for Oracle Database and SQL Server (p. 946)
• DB parameter group override (p. 946)
An Oracle database writes audit records, including SYS audit records, to the unified audit trail. For
example, if an error occurs during an INSERT statement, standard auditing indicates the error number
and the SQL that was run. The audit trail resides in a read-only table in the AUDSYS schema. To access
these records, query the UNIFIED_AUDIT_TRAIL data dictionary view.
1. Create an Oracle Database audit policy by using the CREATE AUDIT POLICY command.
Only activities that match the Oracle Database audit policies are captured and sent to the Amazon
Kinesis data stream. When database activity streams are enabled, an Oracle database administrator
can't alter the audit policy or remove audit logs.
To learn more about unified audit policies, see About Auditing Activities with Unified Audit Policies and
AUDIT in the Oracle Database Security Guide.
• Server audit – The SQL Server audit collects a single instance of server-level or database-level actions and groups of actions to monitor. The server-level audits RDS_DAS_AUDIT and RDS_DAS_AUDIT_CHANGES are managed by RDS.
• Server audit specification – The server audit specification records the server-level events. You can
modify the RDS_DAS_SERVER_AUDIT_SPEC specification. This specification is linked to the server
audit RDS_DAS_AUDIT. The RDS_DAS_CHANGES_AUDIT_SPEC specification is managed by RDS.
• Database audit specification – The database audit specification records the database-level events. You can create a database audit specification such as RDS_DAS_DB_<name> and link it to the RDS_DAS_AUDIT server audit.
You can configure database activity streams by using the console or CLI. Typically, you configure
database activity streams as follows:
1. (Optional) Create a database audit specification with the CREATE DATABASE AUDIT
SPECIFICATION command and link it to RDS_DAS_AUDIT server audit.
2. (Optional) Modify the server audit specification with the ALTER SERVER AUDIT SPECIFICATION
command and define the policies.
3. Activate the database and server audit policies. For example:
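A sketch of the activation statements, assuming a database audit specification named RDS_DAS_DB_testDB (a hypothetical name that follows the RDS_DAS_DB_<name> convention):

```sql
-- Turn on the database-level audit specification
ALTER DATABASE AUDIT SPECIFICATION [RDS_DAS_DB_testDB] WITH (STATE = ON);

-- Turn on the server-level audit specification
ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC] WITH (STATE = ON);
```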
Only activities that match the server and database audit policies are captured and sent to the Amazon
Kinesis data stream. When database activity streams are enabled and the policies are locked, a
database administrator can't alter the audit policy or remove audit logs.
Important
If the database audit specification for a specific database is enabled and the policy is in a
locked state, then the database can't be dropped.
For more information about SQL Server auditing, see SQL Server Audit Components in the Microsoft SQL
Server documentation.
The events are represented in the stream as JSON objects. A JSON object contains a
DatabaseActivityMonitoringRecord, which contains a databaseActivityEventList array.
Predefined fields in the array include class, clientApplication, and command.
By default, an activity stream doesn't include engine-native audit fields. You can configure Amazon RDS
for Oracle and SQL Server so that it includes these extra fields in the engineNativeAuditFields
JSON object.
In Oracle Database, most events in the unified audit trail map to fields in the RDS database activity
stream. For example, the UNIFIED_AUDIT_TRAIL.SQL_TEXT field in unified auditing maps to the
commandText field in a database activity stream. However, Oracle Database audit fields such as
OS_USERNAME don't map to predefined fields in a database activity stream.
In SQL Server, most of the event fields recorded by SQLAudit map to fields in the RDS
database activity stream. For example, the code field from sys.fn_get_audit_file in the audit maps
to the commandText field in a database activity stream. However, SQL Server database audit fields, such
as permission_bitmask, don't map to predefined fields in a database activity stream.
• If you activate an activity stream, RDS for Oracle ignores the auditing parameters in the parameter
group.
946
Amazon Relational Database Service User Guide
Overview
• If you deactivate an activity stream, RDS for Oracle stops ignoring the auditing parameters.
The database activity stream for SQL Server is independent of any parameters you set in the SQL Audit
option.
If an error occurs in the background task, Amazon RDS generates an event. This event indicates the
beginning and end of any time windows where activity stream event records might have been lost.
Asynchronous mode favors database performance over the accuracy of the activity stream.
• db.m4.*large
• db.m5.*large
• db.m5d.*large
• db.m6i.*large
• db.r4.*large
• db.r5.*large
• db.r5.*large.tpc*.mem*x
• db.r5b.*large
• db.r5b.*large.tpc*.mem*x
• db.r5d.*large
• db.r6i.*large
• db.x2idn.*large
• db.x2iedn.*large
• db.x2iezn.*large
• db.z1d.*large
For RDS for SQL Server, you can use database activity streams with the following DB instance classes:
• db.m4.*large
• db.m5.*large
• db.m5d.*large
• db.m6i.*large
• db.r4.*large
• db.r5.*large
• db.r5b.*large
• db.r5d.*large
• db.r6i.*large
• db.x1e.*large
• db.z1d.*large
For more information about instance class types, see DB instance classes (p. 11).
In this case, create new policies with the CREATE AUDIT POLICY command, then activate them with
the AUDIT POLICY command. The following example creates and activates a policy to monitor users
with specific privileges and roles (the policy and role names are illustrative):

CREATE AUDIT POLICY table_pol
PRIVILEGES CREATE ANY TABLE, DROP ANY TABLE
ROLES emp_admin;

AUDIT POLICY table_pol;
For complete instructions, see Configuring Audit Policies in the Oracle Database documentation.
• Unified auditing is configured for your Oracle database.
When you activate a database activity stream, RDS for Oracle automatically clears existing audit data.
It also revokes audit trail privileges. RDS for Oracle can no longer do the following:
• Purge unified audit trail records.
Configuring SQL Server auditing
The default server policy monitors only failed logins and changes to any database or server audit
specifications for database activity streams.
Limitations for the audit and audit specifications include the following:
• You can't modify the server or database audit specifications when the database activity stream is in a
locked state.
• You can't modify the server audit RDS_DAS_AUDIT specification.
• You can't modify the SQL Server audit RDS_DAS_CHANGES or its related server audit specification
RDS_DAS_CHANGES_AUDIT_SPEC.
• When creating a database audit specification, you must use the format RDS_DAS_DB_<name>, for
example, RDS_DAS_DB_databaseActions.
Important
For smaller instance classes, we recommend that you audit only the data that you require
rather than all activity. This helps reduce the performance impact of Database Activity Streams
on these instance classes.
The following sample code modifies the server audit specification RDS_DAS_SERVER_AUDIT_SPEC and
audits any logout and successful login actions:

ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
WITH (STATE = OFF);
ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
ADD (LOGOUT_GROUP),
ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON);
The following sample code creates a database audit specification RDS_DAS_DB_database_spec and
attaches it to the server audit RDS_DAS_AUDIT:
USE testDB;
CREATE DATABASE AUDIT SPECIFICATION [RDS_DAS_DB_database_spec]
FOR SERVER AUDIT [RDS_DAS_AUDIT]
ADD ( INSERT, UPDATE, DELETE
ON testTable BY testUser )
WITH (STATE = ON);
Starting a database activity stream
After the audit specifications are configured, make sure that the specifications
RDS_DAS_SERVER_AUDIT_SPEC and RDS_DAS_DB_<name> are set to a state of ON. Now they can send
the audit data to your database activity stream.
Console
To start a database activity stream
The Start database activity stream: name window appears, where name is your RDS instance.
5. Enter the following settings:
• For AWS KMS key, choose a key from the list of AWS KMS keys.
Amazon RDS uses the KMS key to encrypt the key that in turn encrypts database activity. Choose
a KMS key other than the default key. For more information about encryption keys and AWS KMS,
see What is AWS Key Management Service? in the AWS Key Management Service Developer Guide.
• For Database activity events, choose Enable engine-native audit fields to include the
engine-specific audit fields.
• Choose Immediately.
When you choose Immediately, the RDS instance restarts right away. If you choose During the
next maintenance window, the RDS instance doesn't restart right away. In this case, the database
activity stream doesn't start until the next maintenance window.
6. Choose Start database activity stream.
The status for the database shows that the activity stream is starting.
Note
If you get the error You can't start a database activity stream in
this configuration, check Supported DB instance classes for database activity
streams (p. 947) to see whether your RDS instance is using a supported instance class.
AWS CLI
To start database activity streams for a DB instance, configure the database using the start-activity-
stream AWS CLI command.
• --resource-arn arn – Specifies the Amazon Resource Name (ARN) of the DB instance.
• --kms-key-id key – Specifies the KMS key identifier for encrypting messages in the database
activity stream. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the
AWS KMS key.
• --engine-native-audit-fields-included – Includes engine-specific auditing fields in the data
stream. To exclude these fields, specify --no-engine-native-audit-fields-included (default).
The following example starts a database activity stream for a DB instance in asynchronous mode:

aws rds start-activity-stream \
    --mode async \
    --kms-key-id my-kms-key-arn \
    --resource-arn my-instance-arn \
    --apply-immediately
RDS API
To start database activity streams for a DB instance, configure the instance using the StartActivityStream
operation.
• Region
• KmsKeyId
• ResourceArn
• Mode
• EngineNativeAuditFieldsIncluded
Modifying a database activity stream
Console
The Modify database activity stream: name window appears, where name is your RDS instance.
4. Choose either of the following options:
Locked (default)
When you lock your audit policy, it becomes read-only. You can't edit your audit policy unless
you unlock the policy or stop the activity stream.
Unlocked
When you unlock your audit policy, it becomes read/write. You can edit your audit policy while
the activity stream is started.
5. Choose Modify DB activity stream.
The status for the Amazon RDS database shows Configuring activity stream.
6. (Optional) Choose the DB instance link. Then choose the Configuration tab.
The Audit policy status field shows one of the following values:
• Locked
• Unlocked
• Locking policy
• Unlocking policy
AWS CLI
To modify the activity stream state for the database instance, use the modify-activity-stream AWS CLI
command.
• --resource-arn my-instance-ARN – (Required) The Amazon Resource Name (ARN) of your RDS
database instance.
• --audit-policy-state – (Optional) The new state of the audit policy for the database
activity stream on your instance: locked or unlocked.
The following example unlocks the audit policy for the activity stream started on my-instance-ARN:

aws rds modify-activity-stream \
    --resource-arn my-instance-ARN \
    --audit-policy-state unlocked
The following example describes the instance my-instance. The partial sample output shows that the
audit policy is unlocked.
{
"DBInstances": [
{
...
"Engine": "oracle-ee",
...
"ActivityStreamStatus": "started",
"ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
"ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
"ActivityStreamMode": "async",
"ActivityStreamEngineNativeAuditFieldsIncluded": true,
"ActivityStreamPolicyStatus": "unlocked",
...
}
]
}
RDS API
To modify the policy state of your database activity stream, use the ModifyActivityStream operation.
• AuditPolicyState
• ResourceArn
Console
To get the status of a database activity stream
AWS CLI
You can get the activity stream configuration for a database instance as the response to a describe-db-
instances CLI request.
The following example shows a JSON response. The following fields are shown:
• ActivityStreamKinesisStreamName
• ActivityStreamKmsKeyId
• ActivityStreamStatus
• ActivityStreamMode
• ActivityStreamPolicyStatus
{
"DBInstances": [
{
...
"Engine": "oracle-ee",
...
"ActivityStreamStatus": "starting",
"ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
"ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
"ActivityStreamMode": "async",
"ActivityStreamEngineNativeAuditFieldsIncluded": true,
"ActivityStreamPolicyStatus": "locked",
...
}
]
}
RDS API
You can get the activity stream configuration for a database as the response to a DescribeDBInstances
operation.
If you delete your Amazon RDS database instance, the activity stream is stopped and the underlying
Amazon Kinesis stream is deleted automatically.
Console
To turn off an activity stream
a. Choose Immediately.
When you choose Immediately, the RDS instance restarts right away. If you choose During
the next maintenance window, the RDS instance doesn't restart right away. In this case, the
database activity stream doesn't stop until the next maintenance window.
b. Choose Continue.
AWS CLI
To stop database activity streams for your database, configure the DB instance using the AWS CLI
command stop-activity-stream. Identify the AWS Region for the DB instance using the --region
parameter. The --apply-immediately parameter is optional.
The following example stops the activity stream for a DB instance:

aws rds stop-activity-stream \
    --resource-arn my-instance-arn \
    --region us-east-1 \
    --apply-immediately
RDS API
To stop database activity streams for your database, configure the DB instance using the
StopActivityStream operation. Identify the AWS Region for the DB instance using the Region parameter.
The ApplyImmediately parameter is optional.
• Amazon RDS creates the Kinesis stream automatically with a 24-hour retention period.
• Amazon RDS scales the Kinesis stream if necessary.
• If you stop the database activity stream or delete the DB instance, Amazon RDS deletes the Kinesis
stream.
Monitoring activity streams
The following categories of activity are monitored and put in the activity stream audit log:
• SQL commands – All SQL commands are audited, including prepared statements, built-in functions,
and PL/SQL functions. Calls to stored procedures are audited, as are any SQL statements issued
inside stored procedures or functions.
• Other database information – Activity monitored includes the full SQL statement, the row count
of affected rows from DML commands, accessed objects, and the unique database name. Database
activity streams also monitor the bind variables and stored procedure parameters.
Important
The full SQL text of each statement is visible in the activity stream audit log, including any
sensitive data. However, database user passwords are redacted when Oracle can determine them
from the context.
• Connection information – Activity monitored includes session and network information, the server
process ID, and exit codes.
If an activity stream has a failure while monitoring your DB instance, you are notified through RDS
events.
Topics
• Accessing an activity stream from Kinesis (p. 956)
• Audit log contents and examples (p. 957)
• databaseActivityEventList JSON array (p. 968)
• Processing a database activity stream using the AWS SDK (p. 975)
You can access your Kinesis stream either from the RDS console or the Kinesis console.
An activity stream's name includes the prefix aws-rds-das-db- followed by the resource ID of the
database. The following is an example.
aws-rds-das-db-NHVOV4PCLWHGF52NP
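Because the name is just the fixed prefix plus the resource ID, a consumer can derive it directly; a minimal sketch:

```python
def kinesis_stream_name(resource_id: str) -> str:
    """Build the activity stream's Kinesis stream name from a DB
    resource ID: the fixed aws-rds-das-db- prefix plus the ID."""
    return "aws-rds-das-db-" + resource_id
```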
To use the Amazon RDS console to find the resource ID for the database, choose your DB instance
from the list of databases, and then choose the Configuration tab.
To use the AWS CLI to find the full Kinesis stream name for an activity stream, use a describe-
db-instances CLI request and note the value of ActivityStreamKinesisStreamName in the
response.
3. Choose Monitoring to begin observing the database activity.
For more information about using Amazon Kinesis, see What Is Amazon Kinesis Data Streams?.
Topics
• Examples of an audit log for an activity stream (p. 957)
• DatabaseActivityMonitoringRecords JSON object (p. 966)
• databaseActivityEvents JSON Object (p. 966)
The following activity event record shows a login with the use of a CONNECT SQL statement (command)
by a JDBC Thin Client (clientApplication) for your Oracle DB.
{
"class": "Standard",
"clientApplication": "JDBC Thin Client",
"command": "LOGON",
"commandText": null,
"dbid": "0123456789",
"databaseName": "ORCL",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:15:36.233787",
"netProtocol": "tcp",
"objectName": null,
"objectType": null,
"paramList": [],
"pid": 17904,
"remoteHost": "123.456.789.012",
"remotePort": "25440",
"rowCount": null,
"serverHost": "987.654.321.098",
"serverType": "oracle",
"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
"serviceName": "oracle-ee",
"sessionId": 987654321,
"startTime": null,
"statementId": 1,
"substatementId": null,
"transactionId": "0000000000000000",
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": "CREATE SESSION",
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DBID": 123456789,
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((ADDRESS
\u003d(PROTOCOL\u003dtcp)(HOST\u003d205.251.233.183)(PORT\u003d25440))));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "a1b2c3d4e5f6.amazon.com",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "unknown",
"OS_USERNAME": "sumepate",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,
"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,
"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 1,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5124715
}
}
The following activity event record shows a login failure for your SQL Server DB.
{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "LOGIN",
"clientApplication": "Microsoft SQL Server Management Studio",
"command": "LOGIN FAILED",
"commandText": "Login failed for user 'test'. Reason: Password did not match
that for the login provided. [CLIENT: local-machine]",
"databaseName": "",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2022-10-06 21:34:42.7113072+00",
"netProtocol": null,
"objectName": "",
"objectType": "LOGIN",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 0,
"startTime": null,
"statementId": "0x1eb0d1808d34a94b9d3dcf5432750f02",
"substatementId": 1,
"transactionId": "0",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,
"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 0,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "",
"target_server_principal_name": "",
"schema_name": "",
"is_column_permission": false,
"object_id": 0,
"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "<action_info xmlns=\"http://
schemas.microsoft.com/sqlserver/2008/sqlaudit_data\"><pooled_connection>0</
pooled_connection><error>0x00004818</error><state>8</state><address>local machine</
address><PasswordFirstNibbleHash>B</PasswordFirstNibbleHash></action_info>",
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000000",
"data_sensitivity_information": "",
"session_server_principal_name": "",
"connection_id": "98B4F537-0F82-49E3-AB08-B9D33B5893EF",
"audit_schema_version": 1,
"database_principal_id": 0,
"server_principal_sid": null,
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}
Note
If engine-native audit fields aren't enabled, then the last field in the JSON document is
"engineNativeAuditFields": { }.
The following example shows a CREATE TABLE event for your Oracle database.
{
"class": "Standard",
"clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
"command": "CREATE TABLE",
"commandText": "CREATE TABLE persons(\n person_id NUMBER GENERATED BY DEFAULT AS
IDENTITY,\n first_name VARCHAR2(50) NOT NULL,\n last_name VARCHAR2(50) NOT NULL,\n
PRIMARY KEY(person_id)\n)",
"dbid": "0123456789",
"databaseName": "ORCL",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:22:49.535239",
"netProtocol": "beq",
"objectName": "PERSONS",
"objectType": "TEST",
"paramList": [],
"pid": 17687,
"remoteHost": "123.456.789.0",
"remotePort": null,
"rowCount": null,
"serverHost": "987.654.321.01",
"serverType": "oracle",
"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
"serviceName": "oracle-ee",
"sessionId": 1234567890,
"startTime": null,
"statementId": 43,
"substatementId": null,
"transactionId": "090011007F0D0000",
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": "CREATE SEQUENCE, CREATE TABLE",
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL
\u003dbeq)(HOST\u003d123.456.789.0)));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "ip-10-13-0-122",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "pts/1",
"OS_USERNAME": "rdsdb",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,
"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,
"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 12,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5133083
}
}
The following example shows a CREATE TABLE event for your SQL Server database.
{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "SCHEMA",
"clientApplication": "Microsoft SQL Server Management Studio - Query",
"command": "ALTER",
"commandText": "Create table [testDB].[dbo].[TestTable2](\r\ntextA
varchar(6000),\r\n textB varchar(6000)\r\n)",
"databaseName": "testDB",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 1,
"logTime": "2022-10-06 21:44:38.4120677+00",
"netProtocol": null,
"objectName": "dbo",
"objectType": "SCHEMA",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 84,
"startTime": null,
"statementId": "0x5178d33d56e95e419558b9607158a5bd",
"substatementId": 1,
"transactionId": "4561864",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,
"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 2,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "dbo",
"target_server_principal_name": "",
"schema_name": "",
"is_column_permission": false,
"object_id": 1,
"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "",
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000000",
"data_sensitivity_information": "",
"session_server_principal_name": "test",
"connection_id": "EE1FE3FD-EF2C-41FD-AF45-9051E0CD983A",
"audit_schema_version": 1,
"database_principal_id": 1,
"server_principal_sid":
"0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}
The following example shows a SELECT event for your Oracle DB.
{
"class": "Standard",
"clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
"command": "SELECT",
"commandText": "select count(*) from persons",
"databaseName": "1234567890",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:25:18.850375",
"netProtocol": "beq",
"objectName": "PERSONS",
"objectType": "TEST",
"paramList": [],
"pid": 17687,
"remoteHost": "123.456.789.0",
"remotePort": null,
"rowCount": null,
"serverHost": "987.654.321.09",
"serverType": "oracle",
"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
"serviceName": "oracle-ee",
"sessionId": 1080639707,
"startTime": null,
"statementId": 44,
"substatementId": null,
"transactionId": null,
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": null,
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL
\u003dbeq)(HOST\u003d123.456.789.0)));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "ip-12-34-5-678",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "pts/1",
"OS_USERNAME": "rdsdb",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,
"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,
"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 13,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5136972
}
}
The following example shows a SELECT event for your SQL Server DB.
{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "TABLE",
"clientApplication": "Microsoft SQL Server Management Studio - Query",
"command": "SELECT",
"commandText": "select * from [testDB].[dbo].[TestTable]",
"databaseName": "testDB",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 1,
"logTime": "2022-10-06 21:24:59.9422268+00",
"netProtocol": null,
"objectName": "TestTable",
"objectType": "TABLE",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 62,
"startTime": null,
"statementId": "0x03baed90412f564fad640ebe51f89b99",
"substatementId": 1,
"transactionId": "4532935",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,
"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 2,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "dbo",
"target_server_principal_name": "",
"schema_name": "dbo",
"is_column_permission": true,
"object_id": 581577110,
"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "",
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000001",
"data_sensitivity_information": "",
"session_server_principal_name": "test",
"connection_id": "AD3A5084-FB83-45C1-8334-E923459A8109",
"audit_schema_version": 1,
"database_principal_id": 1,
"server_principal_sid":
"0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}
databaseActivityEvents (p. 966) – string – A JSON object that contains the activity events.
Each event in the audit log is wrapped inside a record in JSON format. This record contains the following
fields.
type
This field represents the type of JSON record. The value is DatabaseActivityMonitoringRecords.
version
This field represents the version of the database activity stream data protocol or contract. It
defines which fields are available.
databaseActivityEvents
An encrypted string representing one or more activity events. It's represented as a base64 byte
array. When you decrypt the string, the result is a record in JSON format with fields as shown in the
examples in this section.
key
The encrypted data key used to encrypt the databaseActivityEvents string. This is the same
AWS KMS key that you provided when you started the database activity stream.
{
"type":"DatabaseActivityMonitoringRecords",
"version":"1.3",
"databaseActivityEvents":"encrypted audit records",
"key":"encrypted key"
}

{
"type":"DatabaseActivityMonitoringRecords",
"version":"1.4",
"databaseActivityEvents":"encrypted audit records",
"key":"encrypted key"
}
Take the following steps to decrypt the contents of the databaseActivityEvents field:
1. Decrypt the value in the key JSON field using the KMS key that you provided when starting the
database activity stream. Doing so returns the data encryption key in clear text.
2. Base64-decode the value in the databaseActivityEvents JSON field to obtain the ciphertext, in
binary format, of the audit payload.
3. Decrypt the binary ciphertext with the data encryption key that you decrypted in step 1.
4. Decompress the decrypted payload.
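The local steps 2-4 can be sketched in Python. The decrypt_fn callback and the gzip framing used in step 4 are assumptions for illustration only; actual decryption of the ciphertext is done with the data key obtained in step 1 (for example, through the AWS Encryption SDK) and is not shown here.

```python
import base64
import json
import zlib

def decode_activity_events(stream_record, decrypt_fn):
    """Apply steps 2-4 to one stream record.

    decrypt_fn is a placeholder for step 3: it must decrypt the binary
    ciphertext using the data key from step 1. It is not an AWS API;
    supply your own implementation.
    """
    # Step 2: base64-decode databaseActivityEvents to get the ciphertext.
    ciphertext = base64.b64decode(stream_record["databaseActivityEvents"])
    # Step 3: decrypt with the plaintext data key (placeholder callback).
    compressed = decrypt_fn(ciphertext)
    # Step 4: decompress the payload (gzip framing is assumed here).
    payload = zlib.decompress(compressed, zlib.MAX_WBITS | 16)
    return json.loads(payload)
```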
• The encrypted payload is in the databaseActivityEvents field.
• The databaseActivityEventList field contains an array of audit records. The type fields in the
array can be record or heartbeat.
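A consumer that only cares about audit data can filter on the type field described above; a minimal sketch (field names follow the examples in this section):

```python
def audit_records(payload):
    """Yield only audit records from a decrypted payload, skipping
    heartbeat messages, which keep the stream alive but carry no
    audit data."""
    for event in payload.get("databaseActivityEventList", []):
        if event.get("type") != "heartbeat":
            yield event
```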
The audit log activity event record is a JSON object that contains the following information.
databaseActivityEventList (p. 968) – string – An array of activity audit records or heartbeat
messages.
When unified auditing is enabled in Oracle Database, the audit records are populated in this new
audit trail. The UNIFIED_AUDIT_TRAIL view displays audit records in tabular form by retrieving
the audit records from the audit trail. When you start a database activity stream, a column in
UNIFIED_AUDIT_TRAIL maps to a field in the databaseActivityEventList array.
Important
The event structure is subject to change. Amazon RDS might add new fields to activity events
in the future. In applications that parse the JSON data, make sure that your code can ignore or
take appropriate actions for unknown field names.
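One defensive approach is to partition each event into fields your code recognizes and fields it doesn't, rather than failing on unknown names; a sketch (the known tuple is an illustrative subset, not a complete field list):

```python
def split_known_fields(event, known=("class", "clientApplication",
                                     "command", "commandText")):
    """Separate fields this code understands from any fields that
    Amazon RDS may add later, instead of raising on unknown names."""
    recognized = {k: event[k] for k in known if k in event}
    unrecognized = {k: v for k, v in event.items() if k not in known}
    return recognized, unrecognized
```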
• Standard
• FineGrainedAudit
• XS
• Database Vault
• Label Security
• RMAN_AUDIT
• Datapump
• Direct path API
endTime – string – This field isn't used for RDS for Oracle and is always null.
ADDITIONAL_INFO
APPLICATION_CONTEXTS
AUDIT_OPTION
AUTHENTICATION_TYPE
CLIENT_IDENTIFIER
CURRENT_USER
DBLINK_INFO
DBPROXY_USERNAME
DIRECT_PATH_NUM_COLUMNS_LOADED
DP_BOOLEAN_PARAMETERS1
DP_TEXT_PARAMETERS1
DV_ACTION_CODE
DV_ACTION_NAME
DV_ACTION_OBJECT_NAME
DV_COMMENT
DV_EXTENDED_ACTION_CODE
DV_FACTOR_CONTEXT
DV_GRANTEE
DV_OBJECT_STATUS
DV_RETURN_CODE
DV_RULE_SET_NAME
ENTRY_ID
EXCLUDED_OBJECT
EXCLUDED_SCHEMA
EXCLUDED_USER
EXECUTION_ID
EXTERNAL_USERID
FGA_POLICY_NAME
GLOBAL_USERID
INSTANCE_ID
KSACL_SERVICE_NAME
KSACL_SOURCE_LOCATION
KSACL_USER_NAME
NEW_NAME
NEW_SCHEMA
OBJECT_EDITION
OBJECT_PRIVILEGES
OLS_GRANTEE
OLS_LABEL_COMPONENT_NAME
OLS_LABEL_COMPONENT_TYPE
OLS_MAX_READ_LABEL
OLS_MAX_WRITE_LABEL
OLS_MIN_WRITE_LABEL
OLS_NEW_VALUE
OLS_OLD_VALUE
OLS_PARENT_GROUP_NAME
OLS_POLICY_NAME
OLS_PRIVILEGES_GRANTED
OLS_PRIVILEGES_USED
OLS_PROGRAM_UNIT_NAME
OLS_STRING_LABEL
errorMessage – string – N/A – This field isn't used for RDS for Oracle and is always null.
rowCount – number – N/A – This field isn't used for RDS for Oracle and is always null.
startTime – string – N/A – This field isn't used for RDS for Oracle and is always null.
substatementId – N/A – N/A – This field isn't used for RDS for Oracle and is always null.
clientApplication – string – sys.fn_get_audit_file.application_name – The application that the client connects with, as reported by the client (SQL Server version 14 and higher). This field is null in SQL Server version 13.
endTime – string – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
engineNativeAuditFields – object – Each field in sys.fn_get_audit_file that is not listed in this column. – By default, this object is empty. When you start the activity stream with the --engine-native-audit-fields-included option, this object includes other native engine audit fields, which are not returned by this JSON map.
Values include:
• 0 – Fail
• 1 – Success
netProtocol – string – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
paramList – string – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
pid – integer – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
remotePort – integer – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
serverVersion – string – Database Host – The database server version, for example, 15.00.4073.23.v1.R1 for SQL Server 2017.
startTime – string – N/A – This field isn't used by Amazon RDS for SQL Server and the value is null.
substatementId – integer – sys.fn_get_audit_file.sequence_number – An identifier to determine the sequence number for a statement. This identifier helps when large records are split into multiple records.
transactionId – integer – sys.fn_get_audit_file.transaction_id – An identifier of a transaction. If there aren't any active transactions, the value is zero.
type – string – Database activity stream generated – The type of event. The values are record or heartbeat.
Managing access to activity streams
You set access to database activity streams using IAM policies. For more information about Amazon RDS
authentication, see Identity and access management for Amazon RDS (p. 2606). For more information
about creating IAM policies, see Creating and using an IAM policy for IAM database access (p. 2646).
To give users fine-grained access to modify activity streams, use the service-specific operation context
keys rds:StartActivityStream and rds:StopActivityStream in an IAM policy. The following
IAM policy example allows a user or role to configure activity streams.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"ConfigureActivityStreams",
            "Effect":"Allow",
            "Action": [
                "rds:StartActivityStream",
                "rds:StopActivityStream"
            ],
            "Resource":"*"
        }
    ]
}
The following IAM policy example allows a user or role to start activity streams.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"AllowStartActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
    ]
}
The following IAM policy example allows a user or role to stop activity streams.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"AllowStopActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
    ]
}
The following IAM policy example prevents a user or role from starting activity streams.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"DenyStartActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
    ]
}
The following IAM policy example prevents a user or role from stopping activity streams.
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"DenyStopActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
    ]
}
For the latest webinars and blogs about RDS Custom, see Amazon RDS Custom resources.
Topics
• Addressing the challenge of database customization (p. 978)
• Management model and benefits for Amazon RDS Custom (p. 979)
• Amazon RDS Custom architecture (p. 981)
• Security in Amazon RDS Custom (p. 988)
• Working with RDS Custom for Oracle (p. 993)
• Working with RDS Custom for SQL Server (p. 1087)
If you need the entire database and operating system to be fully managed by AWS, we recommend
Amazon RDS. If you need administrative rights to the database and underlying operating system to make
dependent applications available, Amazon RDS Custom is the better choice. If you want full management
responsibility and simply need a managed compute service, the best option is self-managing your
commercial databases on Amazon EC2.
To deliver a managed service experience, Amazon RDS doesn't let you access the underlying host.
Amazon RDS also restricts access to some procedures and objects that require high-level privileges.
However, for some applications, you might need to perform operations as a privileged operating system
(OS) user.
Previously, if you needed to customize your application, you had to deploy your database on-premises
or on Amazon EC2. In this case, you bear most or all of the responsibility for database management, as
summarized in the following table.
When you manage database software yourself, you gain more control, but you're also more prone to
user errors. For example, when you make changes manually, you might accidentally cause application
downtime. You might spend hours checking every change to identify and fix an issue. Ideally, you want a
managed database service that automates common DBA tasks, but also supports privileged access to the
database and underlying operating system.
RDS Custom supports only the Oracle Database and Microsoft SQL Server DB engines.
Topics
• Shared responsibility model in RDS Custom (p. 979)
• Support perimeter and unsupported configurations in RDS Custom (p. 981)
• Key benefits of RDS Custom (p. 981)
Shared responsibility model in RDS Custom
With RDS Custom, you take on additional responsibilities beyond what you do in Amazon RDS. The result is that you have more control over
database and DB instance management than you do in Amazon RDS, while still benefiting from RDS
automation.
1. You own part of the process when using an RDS Custom feature.
For example, in RDS Custom for Oracle, you control which Oracle database patches to use and when
to apply them to your DB instances.
2. You are responsible for making sure that any customizations to RDS Custom features work correctly.
To help protect against invalid customization, RDS Custom has automation software that runs outside
of your DB instance. If your underlying Amazon EC2 instance becomes impaired, RDS Custom attempts
to resolve these problems automatically by either rebooting or replacing the EC2 instance. The
only user-visible change is a new IP address. For more information, see Amazon RDS Custom host
replacement (p. 983).
The following table details the shared responsibility model for different features of RDS Custom.
Feature | Amazon EC2 responsibility | Amazon RDS responsibility | RDS Custom for Oracle responsibility | RDS Custom for SQL Server responsibility
You can create an RDS Custom DB instance using Oracle Database or Microsoft SQL Server. When you use Oracle Database, you do the following:
When using RDS Custom, you upload your own database installation files and patches. You create
a custom engine version (CEV) from these files. Then you can create an RDS Custom DB instance by
using this CEV.
• Manage your own licenses.
You bring your own Oracle Database licenses and manage licenses by yourself.
• Automate many of the same administrative tasks as Amazon RDS, including the following:
• Lifecycle management of databases
• Automated backups and point-in-time recovery (PITR)
• Monitoring the health of RDS Custom DB instances and observing changes to the infrastructure,
operating system, and databases
• Notification or taking action to fix issues depending on disruption to the DB instance
• Install third-party applications.
You can install software to run custom applications and agents. Because you have privileged access to
the host, you can modify file systems to support legacy applications.
• Install custom patches.
You can apply custom database patches or modify OS packages on your RDS Custom DB instances.
• Stage an on-premises database before moving it to a fully managed service.
If you manage your own on-premises database, you can stage the database to RDS Custom as-is.
After you familiarize yourself with the cloud environment, you can migrate your database to a fully
managed Amazon RDS DB instance.
• Create your own automation.
You can create, schedule, and run custom automation scripts for reporting, management, or diagnostic
tools.
Topics
• VPC (p. 982)
• RDS Custom automation and monitoring (p. 983)
• Amazon S3 (p. 986)
• AWS CloudTrail (p. 986)
VPC
As in Amazon RDS, your RDS Custom DB instance resides in a virtual private cloud (VPC).
RDS Custom automation and monitoring
The RDS Custom monitoring and recovery features offer similar functionality to Amazon RDS. By
default, RDS Custom is in full automation mode. The automation software has several primary
responsibilities.
An important responsibility of RDS Custom automation is responding to problems with your Amazon
EC2 instance. For various reasons, the host might become impaired or unreachable. RDS Custom resolves
these problems by either rebooting or replacing the Amazon EC2 instance.
Topics
• Amazon RDS Custom host replacement (p. 983)
• RDS Custom support perimeter (p. 985)
1. Stops the Amazon EC2 host.
The EC2 instance performs a normal shutdown and stops running. Any Amazon EBS volumes remain
attached to the instance, and their data persists. Any data stored in the instance store volumes (not
supported on RDS Custom) or RAM of the host computer is gone.
For more information, see Stop and start your instance in the Amazon EC2 User Guide for Linux
Instances.
2. Starts the Amazon EC2 host.
The EC2 instance migrates to new underlying host hardware. In some cases, the RDS Custom DB
instance remains on the original host.
RDS Custom for Oracle retains all database and customer data after the operation, including root volume
data. No user intervention is required. On RDS Custom for SQL Server, database data is retained, but any
data on the C: drive, including operating system and customer data, is lost.
After the replacement process, the Amazon EC2 host has a new public IP address. The host retains the
following:
• Instance ID
• Private IP addresses
• Elastic IP addresses
• Instance metadata
• Data storage volume data
• Root volume data (on RDS Custom for Oracle)
• Before you change your configuration or the operating system, back up your data. If the root volume
or operating system becomes corrupt, host replacement can't repair it. Your only options are restoring
from a DB snapshot or point-in-time recovery.
• Don't manually stop or terminate the physical Amazon EC2 host. Both actions result in the instance
being put outside the RDS Custom support perimeter.
• (RDS Custom for SQL Server) If you attach additional volumes to the Amazon EC2 host, configure
them to remount upon restart. If the host is impaired, RDS Custom might stop and start the host
automatically.
The support perimeter checks that your DB instance conforms to the requirements listed in Fixing
unsupported configurations in RDS Custom for Oracle (p. 1080) and Fixing unsupported configurations
in RDS Custom for SQL Server (p. 1172). If any of these requirements aren't met, RDS Custom considers
your DB instance to be outside of the support perimeter.
Topics
• Unsupported configurations in RDS Custom (p. 985)
• Troubleshooting unsupported configurations (p. 985)
Amazon S3
If you use RDS Custom for Oracle, you upload installation media to a user-created Amazon S3 bucket.
RDS Custom for Oracle uses the media in this bucket to create a custom engine version (CEV). A CEV is a
binary volume snapshot of a database version and Amazon Machine Image (AMI). From the CEV, you can
create an RDS Custom DB instance. For more information, see Working with custom engine versions for
Amazon RDS Custom for Oracle (p. 1015).
For both RDS Custom for Oracle and RDS Custom for SQL Server, RDS Custom automatically creates an
Amazon S3 bucket prefixed with the string do-not-delete-rds-custom-. RDS Custom uses the
do-not-delete-rds-custom- S3 bucket to store the following types of files:
RDS Custom creates the do-not-delete-rds-custom- S3 bucket when you create either of the
following resources:
RDS Custom creates one bucket for each combination of the following:
• AWS account ID
• Engine type (either RDS Custom for Oracle or RDS Custom for SQL Server)
• AWS Region
For example, if you create RDS Custom for Oracle CEVs in a single AWS Region, one
do-not-delete-rds-custom- bucket exists. If you create multiple RDS Custom for SQL Server instances,
and they reside in different AWS Regions, one do-not-delete-rds-custom- bucket exists in each AWS
Region. If you create one RDS Custom for Oracle instance and two RDS Custom for SQL Server instances
in a single AWS Region, two do-not-delete-rds-custom- buckets exist.
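The one-bucket-per-combination rule can be sketched as a small counting function. The function and the tuple format are illustrative only (they are not part of any AWS API); the engine-type strings below are placeholders for the two RDS Custom engine families.

```python
def expected_bucket_count(resources):
    """Count distinct (account ID, engine type, Region) combinations.

    resources is an iterable of (account_id, engine_type, region)
    tuples describing RDS Custom CEVs or DB instances. RDS Custom
    creates one do-not-delete-rds-custom- bucket per combination.
    """
    return len({(acct, engine, region) for acct, engine, region in resources})
```

Using the example from the text, one RDS Custom for Oracle instance and two RDS Custom for SQL Server instances in a single Region imply two buckets.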
AWS CloudTrail
RDS Custom automatically creates an AWS CloudTrail trail whose name begins with
do-not-delete-rds-custom-. The RDS Custom support perimeter relies on the events from CloudTrail
to determine whether your actions affect RDS Custom automation. For more information, see
Troubleshooting unsupported configurations (p. 985).
RDS Custom creates the trail when you create your first DB instance. RDS Custom creates one trail for
each combination of the following:
• AWS account ID
• Engine type (either RDS Custom for Oracle or RDS Custom for SQL Server)
• AWS Region
When you delete an RDS Custom DB instance, the CloudTrail for this instance isn't automatically
removed. In this case, your AWS account continues to be billed for the undeleted CloudTrail. RDS Custom
986
Amazon Relational Database Service User Guide
AWS CloudTrail
is not responsible for the deletion of this resource. To learn how to remove the CloudTrail manually, see
Deleting a trail in the AWS CloudTrail User Guide.
Security in Amazon RDS Custom
Topics
• How RDS Custom securely manages tasks on your behalf (p. 988)
• SSL certificates (p. 989)
• Securing your Amazon S3 bucket against the confused deputy problem (p. 989)
• Rotating RDS Custom for Oracle credentials for compliance programs (p. 990)
A service-linked role is predefined by the service and includes all permissions that the service needs
to call other AWS services on your behalf. For RDS Custom, AWSServiceRoleForRDSCustom is
a service-linked role that is defined according to the principle of least privilege. RDS Custom uses
the permissions in AmazonRDSCustomServiceRolePolicy, which is the policy attached to this
role, to perform most provisioning and all off-host management tasks. For more information, see
AmazonRDSCustomServiceRolePolicy.
When it performs tasks on the host, RDS Custom automation uses credentials from the service-
linked role to run commands using AWS Systems Manager. You can audit the command history
through the Systems Manager command history and AWS CloudTrail. Systems Manager connects
to your RDS Custom DB instance using your networking setup. For more information, see Step 3:
Configure IAM and your Amazon VPC (p. 1003).
Temporary IAM credentials
When provisioning or deleting resources, RDS Custom sometimes uses temporary credentials derived
from the credentials of the calling IAM principal. These IAM credentials are restricted by the IAM
policies attached to that principal and expire after the operation is completed. To learn about the
permissions required for IAM principals who use RDS Custom, see Step 4: Grant required permissions
to your IAM user or role (p. 1012).
Amazon EC2 instance profile
An EC2 instance profile is a container for an IAM role that you can use to pass role information to an
EC2 instance. An EC2 instance underlies an RDS Custom DB instance. You provide an instance profile
when you create an RDS Custom DB instance. RDS Custom uses EC2 instance profile credentials
when it performs host-based management tasks such as backups. For more information, see Create
your IAM role and instance profile manually (p. 1007).
SSH key pair
When RDS Custom creates the EC2 instance that underlies a DB instance, it creates an SSH key pair
on your behalf. The key uses the naming prefix do-not-delete-rds-custom-ssh-privatekey-db-.
AWS Secrets Manager stores this SSH private key as a secret in your AWS account. Amazon RDS
doesn't store, access, or use these credentials. For more information, see Amazon EC2 key pairs and
Linux instances.
988
Amazon Relational Database Service User Guide
SSL certificates
SSL certificates
RDS Custom DB instances don't support managed SSL certificates. If you want to deploy SSL, you can
self-manage SSL certificates in your own wallet and create an SSL listener to secure the connections
between the client and the database, or for database replication. For more information, see Configuring
Transport Layer Security Authentication in the Oracle Database documentation.
Securing your Amazon S3 bucket against the confused deputy problem
You can make these S3 buckets more secure by using the global condition context keys to prevent
the confused deputy problem. For more information, see Preventing cross-service confused deputy
problems (p. 2640).
The following RDS Custom for Oracle example shows the use of the aws:SourceArn and
aws:SourceAccount global condition context keys in an S3 bucket policy. For RDS Custom for Oracle,
make sure to include the Amazon Resource Names (ARNs) for the CEVs and the DB instances. For RDS
Custom for SQL Server, make sure to include the ARN for the DB instances.
...
{
"Sid": "AWSRDSCustomForOracleInstancesObjectLevelAccess",
"Effect": "Allow",
"Principal": {
"Service": "custom.rds.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObjectRetention",
"s3:BypassGovernanceRetention"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-123456789012-us-east-2-c8a6f7/RDSCustomForOracle/Instances/*",
"Condition": {
"ArnLike": {
"aws:SourceArn": [
"arn:aws:rds:us-east-2:123456789012:db:*",
"arn:aws:rds:us-east-2:123456789012:cev:*/*"
]
},
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
}
},
...
Rotating RDS Custom for Oracle credentials for compliance programs
Topics
• Automatic rotation of credentials for predefined users (p. 990)
• Guidelines for rotating user credentials (p. 991)
• Rotating user credentials manually (p. 991)
An exception to the automatic credential rotation is an RDS Custom for Oracle DB instance that
you have manually configured as a standby database. RDS only rotates credentials for read
replicas that you have created using the create-db-instance-read-replica CLI command or the
CreateDBInstanceReadReplica API operation.
• If your DB instance rotates credentials automatically, don't manually change or delete a secret,
password file, or password for users listed in Predefined Oracle users (p. 990). Otherwise, RDS
Custom might place your DB instance outside of the support perimeter, which suspends automatic
rotation.
• The RDS master user is not predefined, so you are responsible for either changing the password
manually or setting up automatic rotation in Secrets Manager. For more information, see Rotate AWS
Secrets Manager secrets.
If your database is in any of the preceding categories, you must rotate your user credentials manually.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In Databases, make sure that RDS isn't currently backing up your DB instance or performing
operations such as configuring high availability.
3. In the database details page, choose Configuration and note the Resource ID for the DB instance. Or
you can use the AWS CLI command describe-db-instances.
4. Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
5. In the search box, enter your DB Resource ID and find the secret in the following form:
do-not-delete-rds-custom-db-resource-id-numeric-string
This secret stores the password for RDSADMIN, SYS, and SYSTEM. The following sample key is for the
DB instance with the DB resource ID db-ABCDEFG12HIJKLNMNOPQRS3TUVWX:
do-not-delete-rds-custom-db-ABCDEFG12HIJKLNMNOPQRS3TUVWX-123456
Important
If your DB instance is a read replica and uses the custom-oracle-ee-cdb engine, two
secrets exist with the suffix db-resource-id-numeric-string, one for the master user
and the other for RDSADMIN, SYS, and SYSTEM. To find the correct secret, run the following
command on the host:
The dbMonitoringUserPassword attribute indicates the secret for RDSADMIN, SYS, and
SYSTEM.
6. If your DB instance exists in an Oracle Data Guard configuration, find the secret in the following
form:
do-not-delete-rds-custom-db-resource-id-numeric-string-dg
This secret stores the password for RDS_DATAGUARD. The following sample key is for the DB
instance with the DB resource ID db-ABCDEFG12HIJKLNMNOPQRS3TUVWX:
do-not-delete-rds-custom-db-ABCDEFG12HIJKLNMNOPQRS3TUVWX-789012-dg
7. For all database users listed in Predefined Oracle users (p. 990), update the passwords by following
the instructions in Modify an AWS Secrets Manager secret.
8. If your database is a standalone database or a source database in an Oracle Data Guard
configuration:
For example, if the new password for RDSADMIN stored in Secrets Manager is pwd-123, run the
following statement:
9. If your DB instance runs Oracle Database 12c Release 1 (12.1) and is managed by Oracle Data Guard,
manually copy the password file (orapw) from the primary DB instance to each standby DB instance.
If your DB instance is hosted in Amazon RDS, the password file location is /rdsdbdata/config/orapw.
For databases that aren't hosted in Amazon RDS, the default location is
$ORACLE_HOME/dbs/orapw$ORACLE_SID on Linux and UNIX and
%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.
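The secret-name patterns described in steps 5 and 6 can be matched programmatically. The helper below is a hypothetical sketch (not an AWS API): given a list of secret names, it keeps the ones that follow the do-not-delete-rds-custom-<resource-id>-<numeric-string> pattern, including the -dg variant used for Oracle Data Guard configurations.

```python
import re

def matching_rds_custom_secrets(secret_names, db_resource_id):
    """Filter secret names down to those for the given DB resource ID.

    Matches do-not-delete-rds-custom-<resource-id>-<numeric-string>,
    optionally followed by -dg for Oracle Data Guard secrets.
    """
    pattern = re.compile(
        rf"^do-not-delete-rds-custom-{re.escape(db_resource_id)}-\d+(-dg)?$"
    )
    return [name for name in secret_names if pattern.match(name)]
```

For example, with the sample DB resource ID from the text, both the standard secret and the -dg secret match, while unrelated secrets are filtered out.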
Working with RDS Custom for Oracle
Topics
• RDS Custom for Oracle workflow (p. 993)
• Database architecture for Amazon RDS Custom for Oracle (p. 997)
• RDS Custom for Oracle requirements and limitations (p. 999)
• Setting up your environment for Amazon RDS Custom for Oracle (p. 1002)
• Working with custom engine versions for Amazon RDS Custom for Oracle (p. 1015)
• Configuring a DB instance for Amazon RDS Custom for Oracle (p. 1035)
• Managing an Amazon RDS Custom for Oracle DB instance (p. 1047)
• Working with Oracle replicas for RDS Custom for Oracle (p. 1060)
• Backing up and restoring an Amazon RDS Custom for Oracle DB instance (p. 1065)
• Migrating an on-premises database to RDS Custom for Oracle (p. 1072)
• Upgrading a DB instance for Amazon RDS Custom for Oracle (p. 1073)
• Troubleshooting DB issues for Amazon RDS Custom for Oracle (p. 1078)
For more information, see Step 3: Upload your installation files to Amazon S3 (p. 1017).
2. Create an RDS Custom for Oracle custom engine version (CEV) from your media.
Choose either the multitenant or non-multitenant architecture. For more information, see Creating a
CEV (p. 1026).
3. Create an RDS Custom for Oracle DB instance from a CEV.
For more information, see Creating an RDS Custom for Oracle DB instance (p. 1035).
4. Connect your application to the DB instance endpoint.
For more information, see Connecting to your RDS Custom DB instance using SSH (p. 1041) and
Connecting to your RDS Custom DB instance using Session Manager (p. 1040).
5. (Optional) Access the host to customize your software.
6. Monitor notifications and messages generated by RDS Custom automation.
For RDS Custom, you supply your own media. When you create a custom engine version, RDS Custom
installs the media that you provide. RDS Custom media contains your database installation files and
patches. This service model is called Bring Your Own Media (BYOM).
CEV manifest
After you download Oracle database installation files from Oracle, you upload them to an Amazon S3
bucket. When you create your CEV, you specify the file names in a JSON document called a CEV manifest.
RDS Custom for Oracle uses the specified files and the AMI to create your CEV.
RDS Custom for Oracle provides JSON manifest templates with our recommended .zip files for each
supported Oracle Database release. For example, the following template is for the 19.17.0.0.0 RU.
{
    "mediaImportTemplateVersion": "2020-08-14",
    "databaseInstallationFileNames": [
        "V982063-01.zip"
    ],
    "opatchFileNames": [
        "p6880880_190000_Linux-x86-64.zip"
    ],
    "psuRuPatchFileNames": [
        "p34419443_190000_Linux-x86-64.zip",
        "p34411846_190000_Linux-x86-64.zip"
    ],
    "otherPatchFileNames": [
        "p28852325_190000_Linux-x86-64.zip",
        "p29997937_190000_Linux-x86-64.zip",
        "p31335037_190000_Linux-x86-64.zip",
        "p32327201_190000_Linux-x86-64.zip",
        "p33613829_190000_Linux-x86-64.zip",
        "p34006614_190000_Linux-x86-64.zip",
        "p34533061_190000_Linux-x86-64.zip",
        "p34533150_190000_Generic.zip",
        "p28730253_190000_Linux-x86-64.zip",
        "p29213893_1917000DBRU_Generic.zip",
        "p33125873_1917000DBRU_Linux-x86-64.zip",
        "p34446152_1917000DBRU_Linux-x86-64.zip"
    ]
}
You can also specify installation parameters in the JSON manifest. For example, you can set nondefault
values for the Oracle base, Oracle home, and the ID and name of the UNIX/Linux user and group. For
more information, see JSON fields in the CEV manifest (p. 1020).
A CEV name takes one of the following forms:
• 19.customized_string
• 18.customized_string
• 12.2.customized_string
• 12.1.customized_string
For customized_string, you can use 1–50 alphanumeric characters, underscores, dashes, and periods.
For example, you might name your CEV 19.my_cev1.
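As an informal check of the naming rule above, the following sketch validates the customized_string portion of a CEV name (1–50 characters drawn from letters, digits, underscores, dashes, and periods). This is an illustrative validator, not an official AWS API, and it does not check the major-version prefix.

```python
import re

# 1-50 alphanumeric characters, underscores, dashes, and periods
_CEV_SUFFIX = re.compile(r"^[A-Za-z0-9._-]{1,50}$")

def is_valid_cev_suffix(customized_string: str) -> bool:
    """Return True if the string is a plausible CEV name suffix."""
    return bool(_CEV_SUFFIX.match(customized_string))
```

For example, the suffix my_cev1 from the name 19.my_cev1 passes, while an empty string or a string with spaces does not.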
Multitenant architecture
The multitenant architecture enables an Oracle database to function as a multitenant container database
(CDB). A CDB includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a
portable collection of schemas and objects that appears to an application as a non-CDB.
When you create a CEV, you can specify either the multitenant or non-multitenant architecture.
You can create an RDS Custom for Oracle CDB only when the CEV that you use to create it specifies the
multitenant architecture. For more information, see Working with custom engine versions for Amazon
RDS Custom for Oracle (p. 1015).
You can either create your RDS Custom for Oracle DB instance with the Oracle Multitenant architecture
(custom-oracle-ee-cdb engine type) or with the traditional non-CDB architecture (custom-oracle-ee
engine type). When you create a container database (CDB), it contains one pluggable database (PDB)
and one PDB seed. You can create additional PDBs manually using Oracle SQL.
To create your RDS Custom for Oracle DB instance, use the create-db-instance command. In this
command, specify which CEV to use. The procedure is similar to creating an Amazon RDS DB instance.
However, some parameters are different. For more information, see Configuring a DB instance for
Amazon RDS Custom for Oracle (p. 1035).
Database connection
Like an Amazon RDS DB instance, an RDS Custom DB instance resides in a virtual private cloud (VPC).
Your application connects to the Oracle database using an Oracle listener.
If your database is a CDB, you can use the listener L_RDSCDB_001 to connect to the CDB root and to a
PDB. If you plug a non-CDB into a CDB, make sure to set USE_SID_AS_SERVICE_LISTENER = ON so
that migrated applications keep the same settings.
When you connect to a non-CDB, the master user is the user for the non-CDB. When you connect to a
CDB, the master user is the user for the PDB. To connect to the CDB root, log in to the host, start a SQL
client, and create an administrative user with SQL commands.
Database architecture for Amazon RDS Custom for Oracle
Topics
• Supported Oracle database architectures (p. 997)
• Supported engine types (p. 997)
• Supported features in the multitenant architecture (p. 997)
The multitenant and non-multitenant architectures are mutually exclusive. If a database isn't a CDB, it's
a non-CDB and so can't contain other databases. In RDS Custom for Oracle, only Oracle Database 19c
supports the multitenant architecture. Thus, if you create instances using previous database releases, you
can create only non-CDBs.
• custom-oracle-ee-cdb
This engine type specifies the multitenant architecture. This option is available only for Oracle
Database 19c. When you create an RDS for Oracle DB instance using the multitenant architecture, your
CDB includes the following containers:
• CDB root (CDB$ROOT)
• PDB seed (PDB$SEED)
• Initial PDB
You can create more PDBs using the Oracle SQL command CREATE PLUGGABLE DATABASE. You can't
use RDS APIs to create or delete PDBs.
• custom-oracle-ee
This engine type specifies the traditional non-CDB architecture. A non-CDB can't contain pluggable
databases (PDBs).
• Backups
• Restoring and point-in-time recovery (PITR) from backups
• Read replicas
• Minor version upgrades
RDS Custom for Oracle requirements and limitations
Topics
• AWS Region and database version support for RDS Custom for Oracle (p. 999)
• Edition and licensing support for RDS Custom for Oracle (p. 999)
• DB instance class support for RDS Custom for Oracle (p. 999)
• General requirements for RDS Custom for Oracle (p. 1000)
• General limitations for RDS Custom for Oracle (p. 1000)
AWS Region and database version support for RDS Custom for
Oracle
Feature availability and support vary across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability of RDS Custom for Oracle, see RDS
Custom (p. 151).
• Use Oracle Software Delivery Cloud to download Oracle installation and patch files. For more
information, see Prerequisites for creating an RDS Custom for Oracle DB instance (p. 1002).
• Use the DB instance classes shown in DB instance class support for RDS Custom for Oracle (p. 999).
The DB instances must run Oracle Linux 7 Update 9.
• Specify the gp2, gp3, or io1 solid state drive (SSD) storage type. The maximum storage limit is 64 TiB.
• Make sure that you have an AWS KMS key to create an RDS Custom DB instance. For more information,
see Step 1: Create or reuse a symmetric encryption AWS KMS key (p. 1003).
• Use only the approved Oracle database installation and patch files. For more information, see Step 2:
Download your database installation files and patches from Oracle Software Delivery Cloud (p. 1016).
• Create an AWS Identity and Access Management (IAM) role and instance profile. For more information,
see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• Make sure to supply a networking configuration that RDS Custom can use to access other AWS
services. For specific requirements, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• Make sure that the combined number of RDS Custom and Amazon RDS DB instances doesn't exceed
your quota limit. For example, if your quota for Amazon RDS is 40 DB instances, you can have 20 RDS
Custom for Oracle DB instances and 20 Amazon RDS DB instances.
• You can't provide your own AMI. You can specify only the default AMI or an AMI that has been
previously used by a CEV.
• You can't modify a CEV to use a different AMI.
• You can't modify the DB instance identifier of an existing RDS Custom for Oracle DB instance.
• You can't specify the multitenant architecture for a database release other than Oracle Database 19c.
• You can't create a CDB instance from a CEV that uses the custom-oracle-ee engine. The CEV must
use custom-oracle-ee-cdb.
• Not all Amazon RDS options are supported. For example, when you create or modify an RDS Custom
for Oracle DB instance, you can't do the following:
• Change the number of CPU cores and threads per core on the DB instance class.
• Turn on storage autoscaling.
• Create a Multi-AZ deployment.
Note
For an alternative HA solution, see the AWS blog article Build high availability for Amazon
RDS Custom for Oracle using read replicas.
• Set backup retention to 0.
• Configure Kerberos authentication.
• Specify your own DB parameter group or option group.
• Turn on Performance Insights.
• Turn on automatic minor version upgrade.
• You can't specify a DB instance storage size greater than the maximum of 64 TiB.
• You can't create multiple Oracle databases on a single RDS Custom for Oracle DB instance.
• You can’t stop your RDS Custom for Oracle DB instance or its underlying Amazon EC2 instance. Billing
for an RDS Custom for Oracle DB instance can't be stopped.
• You can't use automatic shared memory management. RDS Custom for Oracle supports only
automatic memory management. For more information, see Automatic Memory Management in the
Oracle Database Administrator’s Guide.
• Make sure not to change the DB_UNIQUE_NAME for the primary DB instance. Changing the name
causes any restore operation to become stuck.
For limitations specific to modifying an RDS Custom for Oracle DB instance, see Modifying your RDS
Custom for Oracle DB instance (p. 1052). For replication limitations, see General limitations for RDS
Custom for Oracle replication (p. 1062).
Setting up your RDS Custom for Oracle environment
Topics
• Prerequisites for creating an RDS Custom for Oracle DB instance (p. 1002)
• Step 1: Create or reuse a symmetric encryption AWS KMS key (p. 1003)
• Step 2: Download and install the AWS CLI (p. 1003)
• Step 3: Configure IAM and your Amazon VPC (p. 1003)
• Step 4: Grant required permissions to your IAM user or role (p. 1012)
• You have access to My Oracle Support and Oracle Software Delivery Cloud to download the supported
list of installation files and patches for the Enterprise Edition of any of the following Oracle Database
releases:
• Oracle Database 19c
• Oracle Database 18c
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1)
If you use an unknown patch, custom engine version (CEV) creation fails. In this case, contact the RDS
Custom support team and ask them to add the missing patch.
For more information, see Step 2: Download your database installation files and patches from Oracle
Software Delivery Cloud (p. 1016).
• You have access to Amazon S3. This service is required for the following reasons:
• You upload your Oracle installation files to S3 buckets. You use the uploaded installation files to
create your RDS Custom CEV.
• RDS Custom for Oracle uses scripts downloaded from internally defined S3 buckets to perform
actions on your DB instances. These scripts are necessary for onboarding and RDS Custom
automation.
• RDS Custom for Oracle uploads certain files to S3 buckets located in your customer
account. These buckets use the following naming format: do-not-delete-rds-
custom-account_id-region-six_character_alphanumeric_string. For example, you might
have a bucket named do-not-delete-rds-custom-123456789012-us-east-1-12a3b4.
For more information, see Step 3: Upload your installation files to Amazon S3 (p. 1017) and Creating a
CEV (p. 1026).
• You supply your own virtual private cloud (VPC) and security group configuration. For more
information, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• The AWS Identity and Access Management (IAM) user that creates a CEV or RDS Custom DB instance
has the required permissions for IAM, CloudTrail, and Amazon S3.
For more information, see Step 4: Grant required permissions to your IAM user or role (p. 1012).
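The reserved bucket naming format described above can be recognized programmatically, for example when auditing buckets that must not be deleted. The following is an illustrative sketch only; the helper name is ours, and the pattern is inferred from the example bucket name shown above:

```python
import re

# Pattern inferred from the documented format:
# do-not-delete-rds-custom-<12-digit account ID>-<region>-<6 alphanumeric chars>
RDS_CUSTOM_BUCKET = re.compile(
    r"^do-not-delete-rds-custom-\d{12}-[a-z0-9-]+-[a-z0-9]{6}$"
)

def is_rds_custom_bucket(name: str) -> bool:
    """Return True if a bucket name matches the RDS Custom reserved format."""
    return bool(RDS_CUSTOM_BUCKET.match(name))

print(is_rds_custom_bucket("do-not-delete-rds-custom-123456789012-us-east-1-12a3b4"))
```

A check like this can help prevent cleanup scripts from accidentally removing buckets that RDS Custom automation depends on.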
For each task, the following sections describe the requirements and limitations specific to the task.
For example, when you create your RDS Custom DB for Oracle instance, use either the db.m5 or db.r5
instance classes running Oracle Linux 7 Update 9. For general requirements that apply to RDS Custom,
see RDS Custom for Oracle requirements and limitations (p. 999).
• If you have an existing customer managed KMS key in your AWS account, you can use it with RDS
Custom. No further action is necessary.
• If you already created a customer managed symmetric encryption KMS key for a different RDS Custom
engine, you can reuse the same KMS key. No further action is necessary.
• If you don't have an existing customer managed symmetric encryption KMS key in your account, create
a KMS key by following the instructions in Creating keys in the AWS Key Management Service Developer
Guide.
• If you're creating a CEV or RDS Custom DB instance, and your KMS key is in a different AWS account,
make sure to use the AWS CLI. You can't use the AWS console with cross-account KMS keys.
Important
RDS Custom doesn't support AWS managed KMS keys.
Make sure that your symmetric encryption key grants access to the kms:Decrypt and
kms:GenerateDataKey operations to the AWS Identity and Access Management (IAM) role in your IAM
instance profile. If you have a new symmetric encryption key in your account, no changes are required.
Otherwise, make sure that your symmetric encryption key's policy grants access to these operations.
For more information, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
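For illustration, a key policy statement granting these two operations to the instance profile role could be generated as follows. This is a sketch: the Sid and the role name are placeholders, and in a key policy "Resource": "*" refers to the key itself.

```python
import json

def kms_access_statement(account_id: str, role_name: str) -> dict:
    """Build a key policy statement granting Decrypt and GenerateDataKey to a role."""
    return {
        "Sid": "AllowRdsCustomInstanceProfileRole",  # illustrative Sid
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/{role_name}"},
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "*",  # in a key policy, "*" means this key
    }

print(json.dumps(
    kms_access_statement("123456789012", "AWSRDSCustomInstanceRole-us-east-1"),
    indent=2,
))
```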
For more information about configuring IAM for RDS Custom for Oracle, see Step 3: Configure IAM and
your Amazon VPC (p. 1003).
For information about downloading and installing the AWS CLI, see Installing or updating the latest
version of the AWS CLI.
Skip this step if either of the following is true:
• You plan to access RDS Custom only from the AWS Management Console.
• You have already downloaded the AWS CLI for Amazon RDS or a different RDS Custom DB engine.
Your RDS Custom DB instance is in a virtual private cloud (VPC) based on the Amazon VPC service, just
like an Amazon EC2 instance or Amazon RDS instance. You provide and configure your own VPC. Thus,
you have full control over your instance networking setup.
You can configure your IAM identity and virtual private cloud (VPC) using either of the following
techniques:
• Configure IAM and your VPC using AWS CloudFormation (p. 1004) (recommended)
• Follow the procedures in Create your IAM role and instance profile manually (p. 1007) and Configure
your VPC manually (p. 1011)
We strongly recommend that you configure your RDS Custom for Oracle environment using AWS
CloudFormation. This technique is the easiest and least error-prone.
Unlike RDS Custom for SQL Server, RDS Custom for Oracle doesn't create an access control list or
security groups. You must attach your own security group, subnets, and route tables.
1. Open the context (right-click) menu for the link custom-oracle-iam.zip and choose Save Link As.
2. Save the file to your computer.
3. Repeat the previous steps for the link custom-vpc.zip.
If you already configured your VPC for RDS Custom, skip this step.
When you use the CloudFormation template for IAM, it creates the following required resources:
a. Select the I acknowledge that AWS CloudFormation might create IAM resources with custom
names check box.
b. Choose Submit.
CloudFormation creates the IAM roles that RDS Custom for Oracle requires. In the left panel, when
custom-oracle-iam shows CREATE_COMPLETE, proceed to the next step.
7. In the left panel, choose custom-oracle-iam. In the right panel, do the following:
When you create your RDS Custom DB instance, you need to supply the instance profile ID.
If you've already configured your VPC for a different RDS Custom engine and want to reuse the existing
VPC, skip this step. This section assumes the following:
• You've already used CloudFormation to create your IAM instance profile and role.
• You know your route table ID.
For a DB instance to be private, it must be in a private subnet. For a subnet to be private, it must
not be associated with a route table that has a default internet gateway. For more information, see
Configure route tables in the Amazon VPC User Guide.
When you use the CloudFormation template for your VPC, it creates the following required resources:
• A private VPC
• A subnet group named rds-custom-private
• VPC endpoints that use the naming format vpce-string
CloudFormation configures your private VPC. In the left panel, when custom-vpc shows
CREATE_COMPLETE, proceed to the next step.
7. (Optional) Review the details of your VPC. In the Stacks pane, choose custom-vpc. In the right pane,
do the following:
The following steps show how to create the instance profile and role and then add the role to your
profile.
To create the RDS Custom instance profile and add the necessary role to it
1. Create the IAM role that uses the naming format AWSRDSCustomInstanceRole-region with a
trust policy that Amazon EC2 can use to assume this role.
2. Add an access policy to AWSRDSCustomInstanceRole-region.
3. Create an IAM instance profile for RDS Custom that uses the naming format
AWSRDSCustomInstanceProfile-region.
4. Add the AWSRDSCustomInstanceRole-region IAM role to the instance profile.
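The role and instance profile names in the steps above follow a fixed per-Region format. As a small illustration (the helper function is hypothetical, not part of any AWS SDK):

```python
def rds_custom_iam_names(region: str) -> dict[str, str]:
    """Compose the IAM role and instance profile names for a given Region."""
    return {
        "role": f"AWSRDSCustomInstanceRole-{region}",
        "instance_profile": f"AWSRDSCustomInstanceProfile-{region}",
    }

print(rds_custom_iam_names("us-east-1"))
```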
The following example creates the access policy named AWSRDSCustomIamRolePolicy, and adds it
to the IAM role AWSRDSCustomInstanceRole-region. This example assumes that you have set the
following environment variables:
$REGION
Set this variable to the AWS Region in which you plan to create your DB instance.
$ACCOUNT_ID
Set this variable to the ID of the AWS account in which you plan to create your DB instance.
$KMS_KEY
Set this variable to the Amazon Resource Name (ARN) of the AWS KMS key that you want to use for
your RDS Custom DB instances. To specify more than one KMS key, add it to the Resources section
of statement ID (Sid) 11.
"logs:DescribeLogGroups",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": [
"arn:aws:logs:'$REGION':*:log-group:rds-custom-instance*"
]
},
{
"Sid": "4",
"Effect": "Allow",
"Action": [
"s3:putObject",
"s3:getObject",
"s3:getObjectVersion"
],
"Resource": [
"arn:aws:s3:::do-not-delete-rds-custom-*/*"
]
},
{
"Sid": "5",
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData"
],
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"cloudwatch:namespace": [
"RDSCustomForOracle/Agent"
]
}
}
},
{
"Sid": "6",
"Effect": "Allow",
"Action": [
"events:PutEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "7",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": [
"arn:aws:secretsmanager:'$REGION':'$ACCOUNT_ID':secret:do-not-delete-rds-
custom-*"
]
},
{
"Sid": "8",
"Effect": "Allow",
"Action": [
"s3:ListBucketVersions"
],
"Resource": [
"arn:aws:s3:::do-not-delete-rds-custom-*"
]
},
{
"Sid": "9",
"Effect": "Allow",
"Action": "ec2:CreateSnapshots",
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:volume/*"
],
"Condition": {
"StringEquals": {
"ec2:ResourceTag/AWSRDSCustom": "custom-oracle"
}
}
},
{
"Sid": "10",
"Effect": "Allow",
"Action": "ec2:CreateSnapshots",
"Resource": [
"arn:aws:ec2:*::snapshot/*"
]
},
{
"Sid": "11",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:'$REGION':'$ACCOUNT_ID':key/'$KMS_KEY'"
]
},
{
"Sid": "12",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*",
"Condition": {
"StringLike": {
"ec2:CreateAction": [
"CreateSnapshots"
]
}
}
}
]
}'
An instance profile is a container that includes a single IAM role. RDS Custom uses the instance profile to
pass the role to the instance.
If you use the CLI to create a role, you create the role and instance profile as separate actions, with
potentially different names. Create your IAM instance profile as follows, naming it using the format
AWSRDSCustomInstanceProfile-region. The following example assumes that you have set the
environment variable $REGION to the AWS Region in which you want to create your DB instance.
aws iam create-instance-profile \
    --instance-profile-name AWSRDSCustomInstanceProfile-$REGION
Add your IAM role to the instance profile that you previously created. The following example assumes
that you have set the environment variable $REGION to the AWS Region in which you want to create
your DB instance.
Topics
• Create VPC endpoints for dependent AWS services (p. 1011)
• Configure the instance metadata service (p. 1012)
RDS Custom sends communication from your DB instance to other AWS services. To make sure that RDS
Custom can communicate, it validates network connectivity to the following AWS services:
• Amazon CloudWatch
• Amazon CloudWatch Logs
• Amazon CloudWatch Events
• Amazon EC2
• Amazon EventBridge
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager
If RDS Custom can't communicate with the necessary services, it publishes the following event:
Database instance in incompatible-network. SSM Agent connection not available. Amazon RDS
can't connect to the dependent AWS services.
To avoid incompatible-network errors, make sure that VPC components involved in communication
between your RDS Custom DB instance and AWS services satisfy the following requirements:
• The DB instance can make outbound connections on port 443 to other AWS services.
• The VPC allows incoming responses to requests originating from your RDS Custom DB instance.
• RDS Custom can correctly resolve the domain names of endpoints for each AWS service.
RDS Custom relies on AWS Systems Manager connectivity for its automation. For information about how
to configure VPC endpoints, see Creating VPC endpoints for Systems Manager. For the list of endpoints
in each Region, see AWS Systems Manager endpoints and quotas in the Amazon Web Services General
Reference.
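The VPC interface endpoint service names for the services listed above follow the com.amazonaws.region.service convention. The sketch below composes those names for one Region; the exact set of endpoints you need should be taken from the Systems Manager documentation linked above, and the short service names here are assumptions based on the service list:

```python
# Service short names assumed from the list of dependent AWS services above.
SERVICES = ["ssm", "ssmmessages", "ec2", "ec2messages", "logs",
            "monitoring", "events", "secretsmanager", "s3"]

def endpoint_service_names(region: str) -> list[str]:
    """Compose VPC endpoint service names for the given Region."""
    return [f"com.amazonaws.{region}.{svc}" for svc in SERVICES]

for name in endpoint_service_names("us-east-1"):
    print(name)
```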
If you already configured a VPC for a different RDS Custom DB engine, you can reuse that VPC and skip
this process.
• Access the instance metadata service using Instance Metadata Service Version 2 (IMDSv2).
• Allow outbound communications through port 80 (HTTP) to the IMDS link IP address.
• Request instance metadata from http://169.254.169.254, the IMDSv2 link.
For more information, see Use IMDSv2 in the Amazon EC2 User Guide for Linux Instances.
RDS Custom for Oracle automation uses IMDSv2 by default, by setting HttpTokens=enabled on the
underlying Amazon EC2 instance. However, you can use IMDSv1 if you want. For more information, see
Configure the instance metadata options in the Amazon EC2 User Guide for Linux Instances.
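As an illustration of the IMDSv2 token flow, the two requests can be constructed as follows. This sketch only builds the requests; actually sending them succeeds only from inside an EC2 instance, where the link-local address is reachable:

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254"

def imds_v2_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """Build the IMDSv2 session-token request (a PUT with a TTL header)."""
    return urllib.request.Request(
        f"{IMDS_BASE}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def imds_v2_metadata_request(token: str, path: str) -> urllib.request.Request:
    """Build a metadata GET request that presents the session token."""
    return urllib.request.Request(
        f"{IMDS_BASE}/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```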
Topics
• IAM permissions required for Amazon S3 and AWS KMS (p. 1012)
• IAM permissions required for creating a CEV (p. 1013)
• IAM permissions required for creating a DB instance from a CEV (p. 1013)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateS3Bucket",
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:PutBucketPolicy",
"s3:PutBucketObjectLockConfiguration",
"s3:PutBucketVersioning"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-*"
},
{
"Sid": "CreateKmsGrant",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:DescribeKey"
],
"Resource": "*"
}
]
}
For more information about the kms:CreateGrant permission, see AWS KMS key
management (p. 2589).
s3:GetObjectAcl
s3:GetObject
s3:GetObjectTagging
s3:ListBucket
mediaimport:CreateDatabaseBinarySnapshot
The following sample JSON policy grants the additional permissions necessary to access bucket my-
custom-installation-files and its contents.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessToS3MediaBucket",
"Effect": "Allow",
"Action": [
"s3:GetObjectAcl",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-custom-installation-files",
"arn:aws:s3:::my-custom-installation-files/*"
]
},
{
"Sid": "PermissionForByom",
"Effect": "Allow",
"Action": [
"mediaimport:CreateDatabaseBinarySnapshot"
],
"Resource": "*"
}
]
}
You can grant similar permissions for Amazon S3 to caller accounts using an S3 bucket policy.
iam:SimulatePrincipalPolicy
cloudtrail:CreateTrail
cloudtrail:StartLogging
The following sample JSON policy grants the permissions necessary to validate an IAM role and log
information to an AWS CloudTrail.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ValidateIamRole",
"Effect": "Allow",
"Action": "iam:SimulatePrincipalPolicy",
"Resource": "*"
},
{
"Sid": "CreateCloudTrail",
"Effect": "Allow",
"Action": [
"cloudtrail:CreateTrail",
"cloudtrail:StartLogging"
],
"Resource": "arn:aws:cloudtrail:*:*:trail/do-not-delete-rds-custom-*"
}
]
}
Working with CEVs for RDS Custom for Oracle
Topics
• Preparing to create a CEV (p. 1015)
• Creating a CEV (p. 1026)
• Modifying CEV status (p. 1030)
• Viewing CEV details (p. 1031)
• Deleting a CEV (p. 1033)
For example, you can use the April 2021 RU/RUR for Oracle Database 19c, or any valid combination
of installation files and patches. For more information on the versions and Regions supported by RDS
Custom for Oracle, see RDS Custom with RDS for Oracle.
Topics
• Step 1 (Optional): Download the manifest templates (p. 1015)
• Step 2: Download your database installation files and patches from Oracle Software Delivery
Cloud (p. 1016)
• Step 3: Upload your installation files to Amazon S3 (p. 1017)
• Step 4 (Optional): Share your installation media in S3 across AWS accounts (p. 1018)
• Step 5: Prepare the CEV manifest (p. 1020)
• Step 6 (Optional): Validate the CEV manifest (p. 1026)
• Step 7: Add necessary IAM permissions (p. 1026)
1. Identify the Oracle database installation files that you want to include in your CEV.
2. Download the installation files.
3. Create a JSON manifest that lists the installation files.
RDS Custom for Oracle provides JSON manifest templates with our recommended .zip files for each
supported Oracle Database release. For example, the following template is for the 19.17.0.0.0 RU.
{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p34419443_190000_Linux-x86-64.zip",
"p34411846_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
"p28852325_190000_Linux-x86-64.zip",
"p29997937_190000_Linux-x86-64.zip",
"p31335037_190000_Linux-x86-64.zip",
"p32327201_190000_Linux-x86-64.zip",
"p33613829_190000_Linux-x86-64.zip",
"p34006614_190000_Linux-x86-64.zip",
"p34533061_190000_Linux-x86-64.zip",
"p34533150_190000_Generic.zip",
"p28730253_190000_Linux-x86-64.zip",
"p29213893_1917000DBRU_Generic.zip",
"p33125873_1917000DBRU_Linux-x86-64.zip",
"p34446152_1917000DBRU_Linux-x86-64.zip"
]
}
Each template has an associated readme that includes instructions for downloading the patches, URLs
for the .zip files, and file checksums. You can use these templates as they are or modify them with
your own patches. To review the templates, download custom-oracle-manifest.zip to your local disk
and then open it with a file archiving application. For more information, see Step 5: Prepare the CEV
manifest (p. 1020).
Step 2: Download your database installation files and patches from Oracle
Software Delivery Cloud
When you have identified the installation files that you want for your CEV, download them to your local
system. The Oracle Database installation files and patches are hosted on Oracle Software Delivery Cloud.
Each CEV requires a base release, such as Oracle Database 19c or Oracle Database 12c Release 2 (12.2),
and an optional list of patches.
• Choose DLP: Oracle Database Enterprise Edition 19.3.0.0.0 ( Oracle Database Enterprise Edition ).
• Choose DLP: Oracle Database 12c Enterprise Edition 18.0.0.0.0 ( Oracle Database Enterprise
Edition ).
• Choose DLP: Oracle Database 12c Enterprise Edition 12.2.0.1.0 ( Oracle Database Enterprise
Edition ).
• Choose DLP: Oracle Database 12c Enterprise Edition 12.1.0.2.0 ( Oracle Database Enterprise
Edition ).
4. Choose Continue.
12.1
• V46095-01_1of2.zip (checksum 31FDC2AF41687B4E547A3A18F796424D8C1AF36406D2160F65B0AF6A9CD4735)
• V46095-01_2of2.zip (checksum 03DA14F5E875304B28F0F3BB02AF0EC33227885B99C9865DF70749D1E220ACC)
10. Download your desired Oracle patches from updates.oracle.com or support.oracle.com to
your local system. You can find the URLs for the patches in the following locations:
• The readme files in the .zip file that you downloaded in Step 1 (Optional): Download the manifest
templates (p. 1015)
• The patches listed in each Release Update (RU) in Release notes for Amazon Relational Database
Service (Amazon RDS) for Oracle
Upload each installation .zip file separately. Don't combine the .zip files into a single .zip file.
• Use aws s3 sync to upload a directory.
List your installation files using either the AWS Management Console or the AWS CLI.
aws s3 cp install-or-patch-file.zip \
s3://my-custom-installation-files/123456789012/cev1/
For Windows:
aws s3 cp install-or-patch-file.zip ^
s3://my-custom-installation-files/123456789012/cev1/
Verify that your S3 bucket is in the AWS Region where you plan to run the create-custom-db-
engine-version command.
aws s3 ls \
s3://my-custom-installation-files/123456789012/cev1/
The following example uploads the files in your local cev1 folder to the 123456789012/cev1 folder in
your Amazon S3 bucket.
aws s3 sync cev1 \
s3://my-custom-installation-files/123456789012/cev1/
For Windows:
aws s3 sync cev1 ^
s3://my-custom-installation-files/123456789012/cev1/
The following example uploads all files in source-bucket to the 123456789012/cev1 folder in your
Amazon S3 bucket.
aws s3 sync s3://source-bucket/ \
s3://my-custom-installation-files/123456789012/cev1/
For Windows:
aws s3 sync s3://source-bucket/ ^
s3://my-custom-installation-files/123456789012/cev1/
For example, you might want to use one AWS account to populate your media bucket and a different
AWS account to create CEVs. If you don't intend to share your media bucket, skip to the next section.
• You can access the account that created your media bucket and a different account in which you intend
to create CEVs.
• You intend to create CEVs in only one AWS Region. If you intend to use multiple Regions, create a
media bucket in each Region.
• You're using the CLI. If you're using the Amazon S3 console, adapt the following steps.
1. Log in to the AWS account that contains the S3 bucket into which you uploaded your installation
media.
2. Start with either a blank JSON policy template or an existing policy that you can adapt.
The following command retrieves an existing policy and saves it as my-policy.json. In this
example, the S3 bucket containing your installation files is named oracle-media-bucket.
• In the Resource element of your template, specify the S3 bucket into which you uploaded your
Oracle Database installation files.
• In the Principal element, specify the ARNs for all AWS accounts that you intend to use to create
CEVs. You can add the root, a user, or a role to the S3 bucket allow list. For more information, see
IAM identifiers in the AWS Identity and Access Management User Guide.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "GrantAccountsAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::account-1:root",
"arn:aws:iam::account-2:user/user-name-with-path",
"arn:aws:iam::account-3:role/role-name-with-path",
...
]
},
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTagging",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::oracle-media-bucket",
"arn:aws:s3:::oracle-media-bucket/*"
]
}
]
}
In the following example, oracle-media-bucket is the name of the S3 bucket that contains your
installation files, and my-policy.json is the name of your JSON file.
For more information, see aws s3 ls in the AWS CLI Command Reference.
7. Create a CEV by following the steps in Creating a CEV (p. 1026).
• (Required) The list of installation .zip files that you uploaded to Amazon S3. RDS Custom applies the
patches in the order in which they're listed in the manifest.
• (Optional) Installation parameters that set nondefault values for the Oracle base, Oracle home, and
the ID and name of the UNIX/Linux user and group. Be aware that you can’t modify the installation
parameters for an existing CEV or an existing DB instance. You also can’t upgrade from one CEV to
another CEV when the installation parameters have different settings.
For sample CEV manifests, see the JSON templates that you downloaded in Step 1 (Optional): Download
the manifest templates (p. 1015). You can also review the samples in CEV manifest examples (p. 1023).
Topics
• JSON fields in the CEV manifest (p. 1020)
• Creating the CEV manifest (p. 1023)
• CEV manifest examples (p. 1023)
mediaImportTemplateVersion
Version of the CEV manifest. The date is in the format YYYY-MM-DD.
opatchFileNames
Ordered list of OPatch installers used for the Oracle DB engine. Only one
value is valid. Values for opatchFileNames must start with p6880880_.
otherPatchFileNames
The patches that aren't in the list of PSU and RU patches. RDS Custom
applies these patches after applying the PSU and RU patches.
Important
If you include otherPatchFileNames, opatchFileNames
is required. Values for opatchFileNames must start with
p6880880_.
installationParameters
Nondefault settings for the Oracle base, Oracle home, and the ID and name
of the UNIX/Linux user and group. You can set the following parameters:
oracleBase
The directory under which your Oracle binaries are installed. It is the
mount point of the binary volume that stores your files. The Oracle base
directory can include multiple Oracle homes. For example, if /home/
oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1 is
one of your Oracle home directories, then /home/oracle is the Oracle
base directory. A user-specified Oracle base directory is not a symbolic
link.
If you don't specify the Oracle base, the default directory is /rdsdbbin.
oracleHome
If you don't specify the Oracle home, the default naming format is /
rdsdbbin/oracle.major-engine-version.custom.r1.engine-
edition.1.
unixUname
The name of the UNIX user that owns the Oracle software. RDS Custom
assumes this user when running local database commands. If you specify
both unixUid and unixUname, RDS Custom creates the user if it
doesn't exist, and then assigns the UID to the user if it's not the same as
the initial UID.
unixUid
The ID (UID) of the UNIX user that owns the Oracle software. If you
specify both unixUid and unixUname, RDS Custom creates the user if it
doesn't exist, and then assigns the UID to the user if it's not the same as
the initial UID.
The default UID is 61001. This is the UID of the user rdsdb.
unixGroupName
The name of the UNIX group. The UNIX user that owns the Oracle
software belongs to this group.
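Based on the default naming format described above, the default Oracle home path can be composed as follows. This is a sketch: the helper is illustrative, and the version and edition tokens are example values, not an exhaustive list:

```python
def default_oracle_home(major_engine_version: str, engine_edition: str) -> str:
    """Compose the default Oracle home path used when oracleHome isn't set."""
    return f"/rdsdbbin/oracle.{major_engine_version}.custom.r1.{engine_edition}.1"

print(default_oracle_home("19", "EE"))
```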
Each Oracle Database release has a different list of supported installation files. When you create your
CEV manifest, make sure to specify only files that are supported by RDS Custom for Oracle. Otherwise,
CEV creation fails with an error. All patches listed in Release notes for Amazon Relational Database
Service (Amazon RDS) for Oracle are supported.
1. List all installation files that you plan to apply, in the order that you want to apply them.
2. Correlate the installation files with the JSON fields described in JSON fields in the CEV
manifest (p. 1020).
3. Do either of the following:
The following examples show CEV manifest files for different Oracle Database releases. If you include a
JSON field in your manifest, make sure that it isn't empty. For example, the following CEV manifest isn't
valid because otherPatchFileNames is empty.
{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p32126828_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
]
}
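A quick local pre-check can catch problems like the empty array above before you submit the manifest. The following sketch is illustrative, not an RDS API; the field names come from the manifest templates shown earlier:

```python
import json

# CEV manifest fields that hold file lists; if present, they must be non-empty.
LIST_FIELDS = [
    "databaseInstallationFileNames",
    "opatchFileNames",
    "psuRuPatchFileNames",
    "otherPatchFileNames",
]

def manifest_problems(manifest_text: str) -> list[str]:
    """Return a list of problems found in a CEV manifest; empty means no issues found."""
    manifest = json.loads(manifest_text)
    problems = []
    for field in LIST_FIELDS:
        if field in manifest and not manifest[field]:
            problems.append(f"{field} is present but empty")
    # Per the rule above, otherPatchFileNames requires an OPatch installer (p6880880_*).
    if manifest.get("otherPatchFileNames"):
        opatch = manifest.get("opatchFileNames", [])
        if not any(name.startswith("p6880880_") for name in opatch):
            problems.append(
                "otherPatchFileNames requires an opatchFileNames entry starting with p6880880_"
            )
    return problems
```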
Topics
• Sample CEV manifest for Oracle Database 12c Release 1 (12.1) (p. 1023)
• Sample CEV manifest for Oracle Database 12c Release 2 (12.2) (p. 1024)
• Sample CEV manifest for Oracle Database 18c (p. 1025)
• Sample CEV manifest for Oracle Database 19c (p. 1026)
Example Sample CEV manifest for Oracle Database 12c Release 1 (12.1)
In the following example for the July 2021 PSU for Oracle Database 12c Release 1 (12.1), RDS Custom
applies the patches in the order specified. Thus, RDS Custom applies p32768233, then p32876425, then
p18759211, and so on. The example sets new values for the UNIX user and group, and the Oracle home
and Oracle base.
{
"mediaImportTemplateVersion":"2020-08-14",
"databaseInstallationFileNames":[
"V46095-01_1of2.zip",
"V46095-01_2of2.zip"
],
"opatchFileNames":[
"p6880880_121010_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p32768233_121020_Linux-x86-64.zip"
],
"otherPatchFileNames":[
"p32876425_121020_Linux-x86-64.zip",
"p18759211_121020_Linux-x86-64.zip",
"p19396455_121020_Linux-x86-64.zip",
"p20875898_121020_Linux-x86-64.zip",
"p22037014_121020_Linux-x86-64.zip",
"p22873635_121020_Linux-x86-64.zip",
"p23614158_121020_Linux-x86-64.zip",
"p24701840_121020_Linux-x86-64.zip",
"p25881255_121020_Linux-x86-64.zip",
"p27015449_121020_Linux-x86-64.zip",
"p28125601_121020_Linux-x86-64.zip",
"p28852325_121020_Linux-x86-64.zip",
"p29997937_121020_Linux-x86-64.zip",
"p31335037_121020_Linux-x86-64.zip",
"p32327201_121020_Linux-x86-64.zip",
"p32327208_121020_Generic.zip",
"p17969866_12102210119_Linux-x86-64.zip",
"p20394750_12102210119_Linux-x86-64.zip",
"p24835919_121020_Linux-x86-64.zip",
"p23262847_12102201020_Linux-x86-64.zip",
"p21171382_12102201020_Generic.zip",
"p21091901_12102210720_Linux-x86-64.zip",
"p33013352_12102210720_Linux-x86-64.zip",
"p25031502_12102210720_Linux-x86-64.zip",
"p23711335_12102191015_Generic.zip",
"p19504946_121020_Linux-x86-64.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.12.1.0.2",
"oracleBase": "/home/oracle"
}
}
Example Sample CEV manifest for Oracle Database 12c Release 2 (12.2)
In the following example for the October 2021 PSU for Oracle Database 12c Release 2 (12.2), RDS Custom
applies p33261817, then p33192662, then p29213893, and so on. The example sets new values for the
UNIX user and group, and the Oracle home and Oracle base.
{
"mediaImportTemplateVersion":"2020-08-14",
"databaseInstallationFileNames":[
"V839960-01.zip"
],
"opatchFileNames":[
"p6880880_122010_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p33261817_122010_Linux-x86-64.zip"
],
"otherPatchFileNames":[
"p33192662_122010_Linux-x86-64.zip",
"p29213893_122010_Generic.zip",
"p28730253_122010_Linux-x86-64.zip",
"p26352615_12201211019DBOCT2021RU_Linux-x86-64.zip",
"p23614158_122010_Linux-x86-64.zip",
"p24701840_122010_Linux-x86-64.zip",
"p25173124_122010_Linux-x86-64.zip",
"p25881255_122010_Linux-x86-64.zip",
"p27015449_122010_Linux-x86-64.zip",
"p28125601_122010_Linux-x86-64.zip",
"p28852325_122010_Linux-x86-64.zip",
"p29997937_122010_Linux-x86-64.zip",
"p31335037_122010_Linux-x86-64.zip",
"p32327201_122010_Linux-x86-64.zip",
"p32327208_122010_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.12.2.0.1",
"oracleBase": "/home/oracle"
}
}
Example Sample CEV manifest for Oracle Database 18c
In the following example for the October 2021 PSU for Oracle Database 18c, RDS Custom applies
p32126855, then p28730253, then p27539475, and so on. The example sets new values for the UNIX
user and group, and the Oracle home and Oracle base.
{
"mediaImportTemplateVersion":"2020-08-14",
"databaseInstallationFileNames":[
"V978967-01.zip"
],
"opatchFileNames":[
"p6880880_180000_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p32126855_180000_Linux-x86-64.zip"
],
"otherPatchFileNames":[
"p28730253_180000_Linux-x86-64.zip",
"p27539475_1813000DBRU_Linux-x86-64.zip",
"p29213893_180000_Generic.zip",
"p29374604_1813000DBRU_Linux-x86-64.zip",
"p29782284_180000_Generic.zip",
"p28125601_180000_Linux-x86-64.zip",
"p28852325_180000_Linux-x86-64.zip",
"p29997937_180000_Linux-x86-64.zip",
"p31335037_180000_Linux-x86-64.zip",
"p31335142_180000_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/18.0.0.0.ru-2020-10.rur-2020-10.r1",
"oracleBase": "/home/oracle/"
}
}
Example Sample CEV manifest for Oracle Database 19c
In the following example for Oracle Database 19c, RDS Custom applies p32126828, then p29213893,
then p29782284, and so on. The example sets new values for the UNIX user and group, and the Oracle
home and Oracle base.
{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p32126828_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
"p29213893_1910000DBRU_Generic.zip",
"p29782284_1910000DBRU_Generic.zip",
"p28730253_190000_Linux-x86-64.zip",
"p29374604_1910000DBRU_Linux-x86-64.zip",
"p28852325_190000_Linux-x86-64.zip",
"p29997937_190000_Linux-x86-64.zip",
"p31335037_190000_Linux-x86-64.zip",
"p31335142_190000_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1",
"oracleBase": "/home/oracle"
}
}
Creating a CEV
You can create a CEV using the AWS Management Console or the AWS CLI. Specify either the
multitenant or non-multitenant architecture. For more information, see Multitenant architecture
considerations (p. 1035).
Make sure that the Amazon S3 bucket containing your installation files is in the same AWS Region as
your CEV. Otherwise, the process to create a CEV fails.
Typically, creating a CEV takes about two hours. After the CEV is created, you can use it to create
an RDS Custom DB instance. For more information, see Creating an RDS Custom for Oracle DB
instance (p. 1035).
Console
To create a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose Create custom engine version.
4. In Engine options, do the following:
a. (Optional) For AMI ID, enter an AMI that you previously used to create a CEV. To obtain valid
AMI IDs, use either of the following techniques:
• In the console, choose Custom engine versions in the left navigation pane, and choose the
name of a CEV. The AMI ID used by the CEV appears in the Configuration tab.
• In the AWS CLI, use the describe-db-engine-versions command. Search the output for
ImageID.
If you don't enter an AMI ID, RDS Custom uses the most recent available AMI.
b. For S3 location of manifest files, enter the location of the Amazon S3 bucket that you specified
in Step 3: Upload your installation files to Amazon S3 (p. 1017). For example, enter
s3://my-custom-installation-files/806242271698/cev1/.
c. For CEV manifest, enter the JSON manifest that you created in Creating the CEV
manifest (p. 1023).
7. In the KMS key section, select Enter a key ARN to list the available AWS KMS keys. Then select your
KMS key from the list.
An AWS KMS key is required for RDS Custom. For more information, see Step 1: Create or reuse a
symmetric encryption AWS KMS key (p. 1003).
8. (Optional) Choose Add new tag to create a key-value pair for your CEV.
9. Choose Create custom engine version.
If the CEV manifest has an invalid form, the console displays Error validating the CEV manifest. Fix
the problems, and try again.
The Custom engine versions page appears. Your CEV is shown with the status Creating. The process to
create the CEV takes approximately two hours.
AWS CLI
To create a CEV by using the AWS CLI, run the create-custom-db-engine-version command.
Newline characters aren't permitted in manifest_string. Make sure to escape double quotes (") in
the JSON code by prefixing them with a backslash (\).
The following example shows the manifest_string for 19c from Step 5: Prepare the CEV
manifest (p. 1020). The example sets new values for the Oracle base, Oracle home, and the ID and
name of the UNIX/Linux user and group. If you copy this string, remove all newline characters before
pasting it into your command.
"{\"mediaImportTemplateVersion\": \"2020-08-14\",
\"databaseInstallationFileNames\": [\"V982063-01.zip\"],
\"opatchFileNames\": [\"p6880880_190000_Linux-x86-64.zip\"],
\"psuRuPatchFileNames\": [\"p32126828_190000_Linux-x86-64.zip\"],
\"otherPatchFileNames\": [\"p29213893_1910000DBRU_Generic.zip\",
\"p29782284_1910000DBRU_Generic.zip\",\"p28730253_190000_Linux-x86-64.zip\",
\"p29374604_1910000DBRU_Linux-x86-64.zip\",\"p28852325_190000_Linux-x86-64.zip\",
\"p29997937_190000_Linux-x86-64.zip\",\"p31335037_190000_Linux-x86-64.zip\",
\"p31335142_190000_Generic.zip\"],\"installationParameters\":
{\"unixGroupName\":\"dba\",\"unixGroupId\":12345,\"unixUname\":\"oracle\",
\"unixUid\":12345,\"oracleHome\":
\"/home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1\",
\"oracleBase\":\"/home/oracle/\"}}"
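Rather than escaping the manifest by hand, you can build it as a data structure and let a JSON serializer produce the single-line string. The following sketch copies a few fields from the 19c example; the final shell-quoting step is an assumption about how you pass the string on the command line:

```python
import json

manifest = {
    "mediaImportTemplateVersion": "2020-08-14",
    "databaseInstallationFileNames": ["V982063-01.zip"],
    "opatchFileNames": ["p6880880_190000_Linux-x86-64.zip"],
    "psuRuPatchFileNames": ["p32126828_190000_Linux-x86-64.zip"],
    "installationParameters": {
        "unixGroupName": "dba",
        "unixUname": "oracle",
        "oracleHome": "/home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1",
        "oracleBase": "/home/oracle/",
    },
}

# json.dumps emits no literal newlines unless asked to indent, which
# satisfies the no-newline requirement for manifest_string.
manifest_string = json.dumps(manifest)
assert "\n" not in manifest_string

# When the string is embedded in a double-quoted shell argument, each
# inner double quote still needs a backslash prefix.
shell_argument = '"' + manifest_string.replace('"', '\\"') + '"'
```

Because the string round-trips through a serializer, a missing comma or stray backslash can't creep in the way it can with manual editing.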
• --database-installation-files-s3-bucket-name s3-bucket-name, where s3-bucket-
name is the bucket name that you specified in Step 3: Upload your installation files to Amazon
S3 (p. 1017). The AWS Region in which you run create-custom-db-engine-version must be the
same Region as your Amazon S3 bucket.
• --description my-cev-description
• --database-installation-files-s3-prefix prefix, where prefix is the folder name that
you specified in Step 3: Upload your installation files to Amazon S3 (p. 1017).
• --image-id ami-id, where ami-id is an AMI ID that you want to reuse. To find valid IDs, run the
describe-db-engine-versions command, and then search the output for ImageID. By default,
RDS Custom for Oracle uses the most recent available AMI.
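The describe-db-engine-versions output can be long. If you capture it as JSON, you can pull out the AMI IDs programmatically. A sketch, assuming the DBEngineVersions/Image/ImageId shape shown in the sample output later in this section:

```python
import json

def list_cev_amis(describe_output):
    """Map each CEV's EngineVersion to the AMI ID recorded in its Image field."""
    data = json.loads(describe_output)
    return {
        version["EngineVersion"]: version["Image"]["ImageId"]
        for version in data.get("DBEngineVersions", [])
        if "Image" in version
    }

# Trimmed response in the shape of the sample output; identifiers are
# illustrative only.
sample = json.dumps({
    "DBEngineVersions": [
        {
            "EngineVersion": "19.cdb_cev1",
            "Image": {"ImageId": "ami-012a345678901bcde", "Status": "active"},
        }
    ]
})
print(list_cev_amis(sample))  # {'19.cdb_cev1': 'ami-012a345678901bcde'}
```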
The following example creates a Multitenant CEV named 19.cdb_cev1. The example reuses an existing
AMI rather than using the latest available AMI. Make sure that the name of your CEV starts with the major
engine version number.
Example
For Windows:
Example
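A CEV name has the format major-engine-version.customized_string and must begin with the major engine version number. The following sketch checks a candidate name locally before you run the command; the exact set of allowed characters is an assumption, not taken from this guide:

```python
import re

# Assumed pattern: major version digits, a period, then 1-50 word
# characters, periods, or hyphens (for example, "19.cdb_cev1"). Verify
# against the official limits before relying on it.
CEV_NAME = re.compile(r"^\d+\.[A-Za-z0-9_.-]{1,50}$")

def starts_with_major_version(cev_name, major_version):
    """Check that a CEV name is well formed and begins with the expected major version."""
    return (CEV_NAME.match(cev_name) is not None
            and cev_name.split(".", 1)[0] == str(major_version))

print(starts_with_major_version("19.cdb_cev1", 19))  # True
print(starts_with_major_version("cdb_cev1", 19))     # False
```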
The following partial sample output shows the engine, parameter groups, manifest, and other
information.
{
"DBEngineVersions": [
{
"Engine": "custom-oracle-ee-cdb",
"EngineVersion": "19.cdb_cev1",
"DBParameterGroupFamily": "custom-oracle-ee-cdb-19",
"DBEngineDescription": "Containerized Database for Oracle Custom EE",
"DBEngineVersionDescription": "test cev",
"Image": {
"ImageId": "ami-012a345678901bcde",
"Status": "active"
},
"ValidUpgradeTarget": [],
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": true,
"SupportedFeatureNames": [],
"Status": "available",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"MajorEngineVersion": "19",
"DatabaseInstallationFilesS3BucketName": "us-east-1-123456789012-custom-installation-files",
"DatabaseInstallationFilesS3Prefix": "123456789012/cev1",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:123456789012:cev:custom-oracle-ee-cdb/19.cdb_cev1/abcd12e3-4f5g-67h8-i9j0-k1234l56m789",
"KMSKeyId": "arn:aws:kms:us-east-1:732027699161:key/1ab2345c-6d78-9ef0-1gh2-3456i7j89k01",
"CreateTime": "2023-03-07T19:47:58.131000+00:00",
"TagList": [],
"SupportsBabelfish": false,
...
You can't modify a failed CEV. You can only delete it, and then try to create a CEV again after fixing the
causes of the failure. For information about troubleshooting the reasons for CEV creation failure, see
Troubleshooting custom engine version creation for RDS Custom for Oracle (p. 1079).
Modifying a CEV
You can modify a CEV by using the AWS Management Console or the AWS CLI. You can change the
description or the availability status of a CEV. A CEV has one of the following statuses:
• available – You can use this CEV to create a new RDS Custom DB instance or upgrade a DB instance.
This is the default status for a newly created CEV.
• inactive – You can't create or upgrade an RDS Custom instance with this CEV. You can't restore a DB
snapshot to create a new RDS Custom DB instance with this CEV.
You can change the CEV from any supported status to any other supported status. You might change the
status to prevent the accidental use of a CEV or to make a discontinued CEV eligible for use again. For
example, you might change the status of your CEV from available to inactive, and from inactive
back to available.
Console
To modify a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose a CEV whose description or status you want to modify.
4. For Actions, choose Modify.
5. Make any of the following changes:
If the CEV is in use, the console displays You can't modify the CEV status. Fix the problems, and try
again.
AWS CLI
To modify a CEV by using the AWS CLI, run the modify-custom-db-engine-version command. You can
find CEVs to modify by running the describe-db-engine-versions command.
• --engine custom-oracle-ee
• --engine-version cev, where cev is the name of the custom engine version that you want to
modify
• --status status, where status is the availability status that you want to assign to the CEV
The following example changes a CEV named 19.my_cev1 from its current status to inactive.
Example
For Windows:
Viewing CEV details
You can view details about a CEV by using the AWS Management Console or the AWS CLI.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose the name of the CEV that you want to view.
5. Choose Manifest to view the installation parameters specified in the --manifest option of the
create-custom-db-engine-version command. You can copy this text, replace values as
needed, and use them in a new command.
AWS CLI
To view details about a CEV by using the AWS CLI, run the describe-db-engine-versions command.
• --engine custom-oracle-ee
• --engine-version major-engine-version.customized_string
The following example describes a CEV named 19.my_cev1.
Example
For Windows:
The following partial sample output shows the engine, parameter groups, manifest, and other
information.
"DBEngineVersions": [
{
"Engine": "custom-oracle-ee",
"MajorEngineVersion": "19",
"EngineVersion": "19.my_cev1",
"DatabaseInstallationFilesS3BucketName": "us-east-1-123456789012-cev-customer-installation-files",
"DatabaseInstallationFilesS3Prefix": "123456789012/cev1",
"CustomDBEngineVersionManifest": "{\n\"mediaImportTemplateVersion\":
\"2020-08-14\",\n\"databaseInstallationFileNames\": [\n\"V982063-01.zip\"\n],\n
\"installationParameters\": {\n\"oracleBase\":\"/tmp\",\n\"oracleHome\":\"/tmp/Oracle\"\n},
\n\"opatchFileNames\": [\n\"p6880880_190000_Linux-x86-64.zip\"\n],\n\"psuRuPatchFileNames
\": [\n\"p32126828_190000_Linux-x86-64.zip\"\n],\n\"otherPatchFileNames\": [\n
\"p29213893_1910000DBRU_Generic.zip\",\n\"p29782284_1910000DBRU_Generic.zip\",\n
\"p28730253_190000_Linux-x86-64.zip\",\n\"p29374604_1910000DBRU_Linux-x86-64.zip\",
\n\"p28852325_190000_Linux-x86-64.zip\",\n\"p29997937_190000_Linux-x86-64.zip\",\n
\"p31335037_190000_Linux-x86-64.zip\",\n\"p31335142_190000_Generic.zip\"\n]\n}\n",
"DBParameterGroupFamily": "custom-oracle-ee-19",
"DBEngineDescription": "Oracle Database server EE for RDS Custom",
"DBEngineVersionArn": "arn:aws:rds:us-west-2:123456789012:cev:custom-oracle-ee/19.my_cev1/0a123b45-6c78-901d-23e4-5678f901fg23",
"DBEngineVersionDescription": "test",
"KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/ab1c2de3-f4g5-6789-h012-h3ijk4567l89",
"CreateTime": "2022-11-18T09:17:07.693000+00:00",
"ValidUpgradeTarget": [
{
"Engine": "custom-oracle-ee",
"EngineVersion": "19.cev.2021-01.09",
"Description": "test",
"AutoUpgrade": false,
"IsMajorVersionUpgrade": false
}
]
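The CustomDBEngineVersionManifest value in this output is the manifest JSON encoded inside a string, with embedded \n escapes. A short sketch of decoding a trimmed copy of it for inspection:

```python
import json

# Abbreviated copy of the escaped CustomDBEngineVersionManifest string
# from the sample output above.
raw = (
    "{\n\"mediaImportTemplateVersion\": \"2020-08-14\",\n"
    "\"databaseInstallationFileNames\": [\n\"V982063-01.zip\"\n],\n"
    "\"opatchFileNames\": [\n\"p6880880_190000_Linux-x86-64.zip\"\n]\n}\n"
)

# The inner document is itself valid JSON, so one json.loads call decodes it.
manifest = json.loads(raw)
print(manifest["opatchFileNames"])  # ['p6880880_190000_Linux-x86-64.zip']
```

This is the same technique you can use to copy the manifest, replace values as needed, and reuse it in a new create-custom-db-engine-version command.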
Deleting a CEV
You can delete a CEV using the AWS Management Console or the AWS CLI. Typically, deletion takes a few
minutes.
Console
To delete a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose the CEV that you want to delete.
4. For Actions, choose Delete.
In the Custom engine versions page, the banner shows that your CEV is being deleted.
AWS CLI
To delete a CEV by using the AWS CLI, run the delete-custom-db-engine-version command.
• --engine custom-oracle-ee
• --engine-version cev, where cev is the name of the custom engine version to be deleted
Example
For Windows:
Configuring an RDS Custom for Oracle DB instance
Topics
• Multitenant architecture considerations (p. 1035)
• Creating an RDS Custom for Oracle DB instance (p. 1035)
• RDS Custom service-linked role (p. 1040)
• Connecting to your RDS Custom DB instance using Session Manager (p. 1040)
• Connecting to your RDS Custom DB instance using SSH (p. 1041)
• Logging in to your RDS Custom for Oracle database as SYS (p. 1045)
• Installing additional software components on your RDS Custom for Oracle DB instance (p. 1046)
Multitenant architecture considerations
When you create an RDS Custom for Oracle CDB instance, consider the following:
• You can create a multitenant database only from an Oracle Database 19c CEV.
• You can create a CDB instance only if the CEV uses the custom-oracle-ee-cdb engine type.
• By default, your CDB is named RDSCDB, which is also the name of the Oracle System ID (Oracle SID).
You can choose a different name.
• Your CDB contains only one initial PDB. The PDB name defaults to ORCL. You can choose a different
name for your initial PDB, but the Oracle SID and the PDB name can't be the same.
• RDS Custom for Oracle doesn't supply APIs for PDBs. To create additional PDBs, use the Oracle SQL
command CREATE PLUGGABLE DATABASE. RDS Custom for Oracle doesn't restrict the number of
PDBs that you can create. In general, you are responsible for creating and managing PDBs, as in an on-
premises deployment.
• If you create a PDB using Oracle SQL, we recommend that you take a manual snapshot afterward in
case you need to perform point-in-time recovery (PITR).
• You can't rename existing PDBs using Amazon RDS APIs. You also can't rename the CDB using the
modify-db-instance command.
• The open mode for the CDB root is READ WRITE on the primary and MOUNTED on a mounted standby
database. RDS Custom for Oracle attempts to open all PDBs when opening the CDB. If RDS Custom for
Oracle can’t open all PDBs, it issues the event tenant database shutdown.
If you included installation parameters in your CEV manifest, then your DB instance uses the Oracle
base, Oracle home, and the ID and name of the UNIX/Linux user and group that you specified. The
oratab file, which is created by Oracle Database during installation, points to the real installation
location rather than to a symbolic link. When RDS Custom for Oracle runs commands, it runs as the
configured OS user rather than the default user rdsdb. For more information, see Step 5: Prepare the
CEV manifest (p. 1020).
Creating an RDS Custom for Oracle DB instance
Before you attempt to create or connect to an RDS Custom DB instance, complete the tasks in Setting up
your environment for Amazon RDS Custom for Oracle (p. 1002).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose Create database.
4. In Choose a database creation method, select Standard create.
5. In the Engine options section, do the following:
• Select Multitenant architecture to create a container database (CDB). At creation, your CDB
contains one PDB seed and one initial PDB.
Note
The Multitenant architecture setting is supported only for Oracle Database 19c.
• Clear Multitenant architecture to create a non-CDB. A non-CDB can't contain PDBs.
d. For Edition, choose Oracle Enterprise Edition.
e. For Custom engine version, choose an existing RDS Custom custom engine version (CEV). A
CEV has the following format: major-engine-version.customized_string. An example
identifier is 19.cdb_cev1.
If you chose Multitenant architecture in the previous step, you can only specify a CEV that uses
the custom-oracle-ee-cdb engine type. The console filters out CEVs that were created with
the custom-oracle-ee engine type.
6. In Templates, choose Production.
7. In the Settings section, do the following:
When you connect to a non-CDB, the master user is the user for the non-CDB. When you
connect to a CDB, the master user is the user for the PDB. To connect to the CDB root, log in to
the host, start a SQL client, and create an administrative user with SQL commands.
c. Clear Auto generate a password.
8. Choose a DB instance class.
For supported classes, see DB instance class support for RDS Custom for Oracle (p. 999).
9. In the Storage section, do the following:
a. For Storage type, choose an SSD type: io1, gp2, or gp3. You have the following additional
options:
• For io1 or gp3, choose a rate for Provisioned IOPS. The default is 1000 for io1 and 12000 for
gp3.
• For gp3, choose a rate for Storage throughput. The default is 500 MiBps.
b. For Allocated storage, choose a storage size. The default is 40 GiB.
10. For Connectivity, specify your Virtual private cloud (VPC), DB subnet group, and VPC security
group (firewall).
11. For RDS Custom security, do the following:
a. For IAM instance profile, choose the instance profile for your RDS Custom for Oracle DB
instance.
The IAM instance profile must begin with AWSRDSCustom, for example
AWSRDSCustomInstanceProfileForRdsCustomInstance.
b. For Encryption, choose Enter a key ARN to list the available AWS KMS keys. Then choose your
key from the list.
An AWS KMS key is required for RDS Custom. For more information, see Step 1: Create or reuse
a symmetric encryption AWS KMS key (p. 1003).
12. For Database options, do the following:
a. (Optional) For System ID (SID), enter a value for the Oracle SID, which is also the name of your
CDB. The SID is the name of the Oracle database instance that manages your database files. In
this context, the term "Oracle database instance" refers exclusively to the system global area
(SGA) and Oracle background processes. If you don't specify a SID, the value defaults to RDSCDB.
b. (Optional) For Initial database name, enter a name. The default value is ORCL. In the
multitenant architecture, the initial database name is the PDB name.
Note
The SID and PDB name must be different.
c. For Backup retention period choose a value. You can't choose 0 days.
d. For the remaining sections, specify your preferred RDS Custom DB instance settings. For
information about each setting, see Settings for DB instances (p. 308). The following settings
don't appear in the console and aren't supported:
• Processor features
• Storage autoscaling
• Availability & durability
• Password and Kerberos authentication option in Database authentication (only Password
authentication is supported)
• Database options group in Additional configuration
• Performance Insights
• Log exports
• Enable auto minor version upgrade
• Deletion protection
13. Choose Create database.
Important
When you create an RDS Custom for Oracle DB instance, you might receive the following
error: The service-linked role is in the process of being created. Try again later. If you do,
wait a few minutes and then try again to create the DB instance.
To view the master user name and password for the RDS Custom DB instance, choose View
credential details.
To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again in the console. If you don't record it, you
might have to change it. To change the master user password after the RDS Custom DB
instance is available, log in to the database and run an ALTER USER command. You can't
reset the password using the Modify option in the console.
14. Choose Databases to view the list of RDS Custom DB instances.
15. Choose the RDS Custom DB instance that you just created.
On the RDS console, the details for the new RDS Custom DB instance appear:
• The DB instance has a status of creating until the RDS Custom DB instance is created and ready
for use. When the state changes to available, you can connect to the DB instance. Depending on
the instance class and storage allocated, it can take several minutes for the new DB instance to be
available.
• Role has the value Instance (RDS Custom).
• RDS Custom automation mode has the value Full automation. This setting means that the DB
instance provides automatic monitoring and instance recovery.
AWS CLI
You create an RDS Custom DB instance by using the create-db-instance AWS CLI command.
• --db-instance-identifier
• --db-instance-class (for a list of supported instance classes, see DB instance class support for
RDS Custom for Oracle (p. 999))
• --engine engine-type (where engine-type is custom-oracle-ee-cdb for a CDB and custom-
oracle-ee for a non-CDB)
• --engine-version cev (where cev is the name of the custom engine version that you specified in
Creating a CEV (p. 1026))
• --kms-key-id my-kms-key
• --backup-retention-period days (where days is a value greater than 0)
• --no-auto-minor-version-upgrade
• --custom-iam-instance-profile AWSRDSCustomInstanceRole-region (where region is
the AWS Region where you are creating your DB instance)
The following example creates an RDS Custom DB instance named my-cdb-instance. The database is a
CDB with the nondefault name MYCDB. The nondefault PDB name is MYPDB. The backup retention period
is three days.
Example
aws rds create-db-instance \
    --db-instance-identifier my-cdb-instance \
    --engine custom-oracle-ee-cdb \
    --engine-version cev \
    --db-name MYPDB \
--db-system-id MYCDB \
--allocated-storage 250 \
--db-instance-class db.m5.xlarge \
--db-subnet-group mydbsubnetgroup \
--master-username myawsuser \
--master-user-password mypassword \
--backup-retention-period 3 \
--port 8200 \
--license-model bring-your-own-license \
--kms-key-id my-kms-key \
--no-auto-minor-version-upgrade \
--custom-iam-instance-profile AWSRDSCustomInstanceRole-us-east-1
For Windows:
Note
Specify a password other than the prompt shown here as a security best practice.
Example
The following partial output shows the engine, parameter groups, and other information.
{
"DBInstanceIdentifier": "my-cdb-instance",
"DBInstanceClass": "db.m5.xlarge",
"Engine": "custom-oracle-ee-cdb",
"DBInstanceStatus": "available",
"MasterUsername": "myawsuser",
"DBName": "MYPDB",
"DBSystemID": "MYCDB",
"Endpoint": {
"Address": "my-cdb-instance.abcdefghijkl.us-east-1.rds.amazonaws.com",
"Port": 1521,
"HostedZoneId": "A1B2CDEFGH34IJ"
},
"AllocatedStorage": 250,
"InstanceCreateTime": "2023-04-12T18:52:16.353000+00:00",
"PreferredBackupWindow": "08:46-09:16",
"BackupRetentionPeriod": 3,
"DBSecurityGroups": [],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-0a1bcd2e",
"Status": "active"
}
],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.custom-oracle-ee-cdb-19",
"ParameterApplyStatus": "in-sync"
}
],
...
RDS Custom service-linked role
When you create an RDS Custom DB instance, both the Amazon RDS and RDS Custom service-linked
roles are created (if they don't already exist) and used. For more information, see Using service-linked
roles for Amazon RDS (p. 2684).
The first time that you create an RDS Custom for Oracle DB instance, you might receive the following
error: The service-linked role is in the process of being created. Try again later. If you do, wait a few
minutes and then try again to create the DB instance.
Connecting to your RDS Custom DB instance using Session Manager
Session Manager allows you to access Amazon EC2 instances through a browser-based shell or through
the AWS CLI. For more information, see AWS Systems Manager Session Manager.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose Configuration.
4. Note the Resource ID for your DB instance. For example, the resource ID might be db-
ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then click the instance ID associated with it. For
example, the instance ID might be i-abcdefghijklm01234.
8. Choose Connect.
9. Choose Session Manager.
10. Choose Connect.
AWS CLI
You can connect to your RDS Custom DB instance using the AWS CLI. This technique requires the Session
Manager plugin for the AWS CLI. To learn how to install the plugin, see Install the Session Manager
plugin for the AWS CLI.
To find the DB resource ID of your RDS Custom DB instance, use aws rds describe-db-instances.
The following sample output shows the resource ID for your RDS Custom instance. The prefix is db-.
db-ABCDEFGHIJKLMNOPQRS0123456
To find the EC2 instance ID of your DB instance, use aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.
i-abcdefghijklm01234
Use the aws ssm start-session command, supplying the EC2 instance ID in the --target
parameter.
Connecting to your RDS Custom DB instance using SSH
Your SSH connection technique depends on whether your DB instance is private, meaning that it doesn't
accept connections from the public internet. If your instance is private, you must use SSH tunneling to connect the ssh
utility to your instance. This technique transports data with a dedicated data stream (tunnel) inside an
existing SSH session. You can configure SSH tunneling using AWS Systems Manager.
Note
Various strategies are supported for accessing private instances. To learn how to connect an ssh
client to private instances using bastion hosts, see Linux Bastion Hosts on AWS. To learn how to
configure port forwarding, see Port Forwarding Using AWS Systems Manager Session Manager.
If your DB instance is in a public subnet and the Publicly accessible setting is enabled, then no SSH
tunneling is required. You can connect with SSH just as you would to a public Amazon EC2 instance.
• Make sure that your DB instance security group permits inbound connections on port 22 for TCP.
To learn how to configure the security group for your DB instance, see Controlling access with security
groups (p. 2680).
• If you don't plan to use SSH tunneling, make sure your DB instance resides in a public subnet and is
publicly accessible.
In the console, the relevant field is Publicly accessible on the Connectivity & security tab of the
database details page. To check your settings in the CLI, run the following command:
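A check of the following form returns the attribute. Here, my-custom-instance is a placeholder for your DB instance identifier:

```shell
# Returns true if the DB instance is publicly accessible
aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query 'DBInstances[*].PubliclyAccessible'
```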
To change the accessibility settings for your DB instance, see Modifying an Amazon RDS DB
instance (p. 401).
Retrieve your SSH secret key using either the AWS Management Console or the AWS CLI. If your instance has
a public DNS, and you don't intend to use SSH tunneling, then also retrieve the DNS name. You specify
the DNS name for public connections.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose Configuration.
4. Note the Resource ID value. For example, the DB instance resource ID might be db-ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Find the name of your EC2 instance, and choose the instance ID associated with it. For example, the
EC2 instance ID might be i-abcdefghijklm01234.
8. In Details, find Key pair name. The pair name includes the DB instance resource ID. For example, the pair name might be do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.
9. If your EC2 instance is public, note the Public IPv4 DNS. For example, the public Domain Name System (DNS) address might be ec2-12-345-678-901.us-east-2.compute.amazonaws.com.
10. Open the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
11. Choose the secret that has the same name as your key pair.
12. Choose Retrieve secret value.
13. Copy the SSH private key into a text file, and then save the file with the .pem extension. For example, save the file as /tmp/do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem.
AWS CLI
To retrieve the SSH private key and save it in a .pem file, you can use the AWS CLI.
1. Find the DB resource ID of your RDS Custom DB instance using aws rds describe-db-instances.
The following sample output shows the resource ID for your RDS Custom instance. The prefix is db-.
db-ABCDEFGHIJKLMNOPQRS0123456
2. Find the EC2 instance ID of your DB instance using aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.
i-abcdefghijklm01234
3. To find the key name, specify the EC2 instance ID. The following example describes EC2 instance
i-0bdc4219e66944afa.
The following sample output shows the key name, which uses the prefix do-not-delete-rds-custom-ssh-privatekey-.
do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c
4. Save the private key in a .pem file named after the key using aws secretsmanager. The following
example saves the file in your /tmp directory.
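A command of the following form does this. The secret ID matches the key pair name from the earlier step:

```shell
# Retrieve the private key from Secrets Manager and write it to a .pem file
aws secretsmanager get-secret-value \
    --secret-id do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c \
    --query SecretString \
    --output text > /tmp/do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem
```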
1. For private connections, modify your SSH configuration file to proxy commands to AWS Systems
Manager Session Manager. For public connections, skip to Step 2.
Add the following lines to ~/.ssh/config. These lines proxy SSH commands for hosts whose
names begin with i- or mi-.
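One common form of this proxy configuration is the following. It assumes the AWS CLI and the Session Manager plugin are installed on your client machine:

```
# Route SSH for EC2 instance IDs through Session Manager
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```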
2. Change to the directory that contains your .pem file. Using chmod, set the permissions to 400.
cd /tmp
chmod 400 do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem
3. Run the ssh utility, specifying the .pem file and either the public DNS name (for public connections)
or the EC2 instance ID (for private connections). Log in as user ec2-user.
The following example connects to a public instance using the DNS name
ec2-12-345-678-901.us-east-2.compute.amazonaws.com.
ssh -i "do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem" \
ec2-user@ec2-12-345-678-901.us-east-2.compute.amazonaws.com
The following example connects to a private instance using the EC2 instance ID
i-0bdc4219e66944afa.
ssh -i "do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem" \
ec2-user@i-0bdc4219e66944afa
• Get the SYS password from Secrets Manager, and specify this password in your SQL client.
• Use OS authentication to log in to your database. In this case, you don't need a password.
Finding the SYS password for your RDS Custom for Oracle database
You can log in to your Oracle database as SYS or SYSTEM or by specifying the master user name in an
API call. The password for SYS and SYSTEM is stored in Secrets Manager. The secret uses the naming
format do-not-delete-rds-custom-resource_id-uuid. You can find the password using the AWS
Management Console.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the RDS console, complete the following steps:
1. Connect to your DB instance with AWS Systems Manager. For more information, see Connecting to
your RDS Custom DB instance using Session Manager (p. 1040).
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
4. In your SSH session, run the wget command to download the .rpm files from the links that you obtained in the previous step. The following example downloads the .rpm files for Oracle Database version 21.9:
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
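After the downloads complete, you can install the packages, for example with yum. The file names correspond to the downloads above:

```shell
# Install the Instant Client basic and SQL*Plus packages
sudo yum install -y \
    oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm \
    oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
```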
sudo su - rdsdb
$ sqlplus / as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0
Managing an RDS Custom for Oracle DB instance
Topics
• Working with container databases (CDBs) in RDS Custom for Oracle (p. 1047)
• Working with high availability features for RDS Custom for Oracle (p. 1048)
• Customizing your RDS Custom environment (p. 1048)
• Modifying your RDS Custom for Oracle DB instance (p. 1052)
• Changing the time zone of an RDS Custom for Oracle DB instance (p. 1055)
• Changing the character set of an RDS Custom for Oracle DB instance (p. 1056)
• Setting the NLS_LANG value in RDS Custom for Oracle (p. 1057)
• Support for Transparent Data Encryption (p. 1057)
• Tagging RDS Custom for Oracle resources (p. 1057)
• Deleting an RDS Custom for Oracle DB instance (p. 1058)
By default, your CDB is named RDSCDB. You can choose a different name. The CDB name is also the
name of your Oracle system identifier (SID), which uniquely identifies the memory and processes that
manage your CDB. For more information about the Oracle SID, see Oracle System Identifier (SID) in
Oracle Database Concepts.
You can't rename existing PDBs using Amazon RDS APIs. You also can't rename the CDB using the
modify-db-instance command.
PDB management
In the RDS Custom for Oracle shared responsibility model, you are responsible for managing PDBs and
creating any additional PDBs. RDS Custom doesn't restrict the number of PDBs. You can manually create,
modify, and delete PDBs by connecting to the CDB root and running a SQL statement. Create PDBs on an
Amazon EBS data volume to prevent the DB instance from going outside the support perimeter.
You can configure your high availability environment in the following ways:
To learn how to configure high availability, see the whitepaper Build high availability for Amazon RDS
Custom for Oracle using read replicas. You can perform the following tasks:
• Use a virtual private network (VPN) tunnel to encrypt data in transit for your high availability
instances. Encryption in transit isn't configured automatically by RDS Custom.
• Configure Oracle Fast-Failover Observer (FSFO) to monitor your high availability instances.
• Allow the observer to perform automatic failover when necessary conditions are met.
For some customizations, such as changing the time zone or character set, you can't use the RDS APIs. In
these cases, you need to change the environment manually by accessing your Amazon EC2 instance as
the root user or logging in to your Oracle database as SYSDBA.
To customize your instance manually, you must pause and resume RDS Custom automation. This pause
ensures that your customizations don't interfere with RDS Custom automation. In this way, you avoid
breaking the support perimeter, which places the instance in the unsupported-configuration state
until you fix the underlying issues. Pausing and resuming are the only supported automation tasks when
you modify an RDS Custom for Oracle DB instance.
1. Pause RDS Custom automation for a specified period using the console or CLI.
2. Identify your underlying Amazon EC2 instance.
3. Connect to your underlying Amazon EC2 instance using SSH keys or AWS Systems Manager.
4. Verify your current configuration settings at the database or operating system layer.
You can validate your changes by comparing the initial configuration to the changed configuration.
Depending on the type of customization, use OS tools or database queries.
5. Customize your RDS Custom for Oracle DB instance as needed.
6. Reboot your instance or database, if required.
Note
In an on-premises Oracle CDB, you can preserve a specified open mode for PDBs using a
built-in command or an AFTER STARTUP trigger. This mechanism brings PDBs to a specified state
when the CDB restarts. When opening your CDB, RDS Custom automation discards any user-
specified preserved states and attempts to open all PDBs. If RDS Custom can't open all PDBs,
the following event is issued: The following PDBs failed to open: list-of-PDBs.
7. Verify your new configuration settings by comparing them with the previous settings.
8. Resume RDS Custom automation in either of the following ways:
• Resume automation manually.
• Wait for the pause period to end. In this case, RDS Custom resumes monitoring and instance
recovery automatically.
9. Verify the RDS Custom automation framework.
If you followed the preceding steps correctly, RDS Custom starts an automated backup. The status of the instance in the console shows Available.
For best practices and step-by-step instructions, see the AWS blog posts Make configuration changes
to an Amazon RDS Custom for Oracle instance: Part 1 and Recreate an Amazon RDS Custom for Oracle
database: Part 2.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For RDS Custom automation mode, choose one of the following options:
• Paused pauses the monitoring and instance recovery for the RDS Custom DB instance. Enter the
pause duration that you want (in minutes) for Automation mode duration. The minimum value is
60 minutes (default). The maximum value is 1,440 minutes.
A message indicates that RDS Custom will apply the changes immediately.
6. If your changes are correct, choose Modify DB instance. Or choose Back to edit your changes or
Cancel to cancel your changes.
On the RDS console, the details for the modification appear. If you paused automation, the Status of
your RDS Custom DB instance indicates Automation paused.
7. (Optional) In the navigation pane, choose Databases, and then your RDS Custom DB instance.
In the Summary pane, RDS Custom automation mode indicates the automation status. If
automation is paused, the value is Paused. Automation resumes in num minutes.
AWS CLI
To pause or resume RDS Custom automation, use the modify-db-instance AWS CLI command.
Identify the DB instance using the required parameter --db-instance-identifier. Control the
automation mode with the following parameters:
• --automation-mode specifies the pause state of the DB instance. Valid values are all-paused,
which pauses automation, and full, which resumes it.
• --resume-full-automation-mode-minutes specifies the duration of the pause. The default value
is 60 minutes.
Note
Regardless of whether you specify --no-apply-immediately or --apply-immediately,
RDS Custom applies modifications asynchronously as soon as possible.
Example
For Linux, macOS, or Unix:
For Windows:
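For example, a command of the following form pauses automation for 90 minutes. Here, my-custom-instance is a placeholder for your DB instance identifier:

```shell
# Pause RDS Custom automation for 90 minutes
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 90
```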
The following example extends the pause duration for an extra 30 minutes. The 30 minutes is added to
the original time shown in ResumeFullAutomationModeTime.
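A command of the following form extends the pause. Here, my-custom-instance is a placeholder for your DB instance identifier:

```shell
# Extend the automation pause by 30 minutes
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 30
```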
Example
For Linux, macOS, or Unix:
For Windows:
In the following partial sample output, the pending AutomationMode value is full.
{
"DBInstance": {
"PubliclyAccessible": true,
"MasterUsername": "admin",
"MonitoringInterval": 0,
"LicenseModel": "bring-your-own-license",
"VpcSecurityGroups": [
{
"Status": "active",
"VpcSecurityGroupId": "0123456789abcdefg"
}
],
"InstanceCreateTime": "2020-11-07T19:50:06.193Z",
"CopyTagsToSnapshot": false,
"OptionGroupMemberships": [
{
"Status": "in-sync",
"OptionGroupName": "default:custom-oracle-ee-19"
}
],
"PendingModifiedValues": {
"AutomationMode": "full"
},
"Engine": "custom-oracle-ee",
"MultiAZ": false,
"DBSecurityGroups": [],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.custom-oracle-ee-19",
"ParameterApplyStatus": "in-sync"
}
],
...
"ReadReplicaDBInstanceIdentifiers": [],
"AllocatedStorage": 250,
"DBInstanceArn": "arn:aws:rds:us-west-2:012345678912:db:my-custom-instance",
"BackupRetentionPeriod": 3,
"DBName": "ORCL",
"PreferredMaintenanceWindow": "fri:10:56-fri:11:26",
"Endpoint": {
"HostedZoneId": "ABCDEFGHIJKLMNO",
"Port": 8200,
"Address": "my-custom-instance.abcdefghijk.us-west-2.rds.amazonaws.com"
},
"DBInstanceStatus": "automation-paused",
"IAMDatabaseAuthenticationEnabled": false,
"AutomationMode": "all-paused",
"EngineVersion": "19.my_cev1",
"DeletionProtection": false,
"AvailabilityZone": "us-west-2a",
"DomainMemberships": [],
"StorageType": "gp2",
"DbiResourceId": "db-ABCDEFGHIJKLMNOPQRSTUVW",
"ResumeFullAutomationModeTime": "2020-11-07T20:56:50.565Z",
"KmsKeyId": "arn:aws:kms:us-west-2:012345678912:key/aa111a11-111a-11a1-1a11-1111a11a1a1a",
"StorageEncrypted": false,
"AssociatedRoles": [],
"DBInstanceClass": "db.m5.xlarge",
"DbInstancePort": 0,
"DBInstanceIdentifier": "my-custom-instance",
"TagList": []
}
}
Topics
• Requirements and limitations when modifying your DB instance storage (p. 1052)
• Requirements and limitations when modifying your DB instance class (p. 1053)
• How RDS Custom creates your DB instance when you modify the instance class (p. 1053)
• Modifying the instance class or storage for your RDS Custom for Oracle DB instance (p. 1054)
• The minimum allocated storage for RDS Custom for Oracle is 40 GiB, and the maximum is 64 TiB.
• As with Amazon RDS, you can't decrease the allocated storage. This is a limitation of Amazon EBS
volumes.
For more information, see RDS Custom support perimeter (p. 985).
• Magnetic (standard) Amazon EBS storage isn't supported for RDS Custom. You can choose only the io1,
gp2, or gp3 SSD storage types.
For more information about Amazon EBS storage, see Amazon RDS DB instance storage (p. 101).
For general information about storage modification, see Working with storage for Amazon RDS DB
instances (p. 478).
How RDS Custom creates your DB instance when you modify the instance class
When you modify your instance class, RDS Custom creates your DB instance as follows:
Modifying the instance class or storage for your RDS Custom for Oracle DB
instance
You can modify the DB instance class or storage using the console, AWS CLI, or RDS API.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:
a. Change the value for DB instance class. For supported classes, see DB instance class support for
RDS Custom for Oracle (p. 999).
b. Enter a new value for Allocated storage. It must be greater than the current value, and between 40 GiB and 64 TiB.
c. Change the value for Storage type to General Purpose SSD (gp2), General Purpose SSD (gp3),
or Provisioned IOPS (io1).
d. If you use Provisioned IOPS (io1) or General Purpose SSD (gp3), you can change the
Provisioned IOPS value.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.
AWS CLI
To modify the storage for an RDS Custom for Oracle DB instance, use the modify-db-instance AWS CLI
command. Set the following parameters as needed:
• --db-instance-class – A new instance class. For supported classes, see DB instance class support
for RDS Custom for Oracle (p. 999).
• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes. It must
be greater than the current value, and from 40–65,536 GiB.
• --storage-type – The storage type: gp2, gp3, or io1.
• --iops – Provisioned IOPS for the DB instance, if using the io1 or gp3 storage types.
• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.
Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window.
The following example changes the DB instance class of my-custom-instance to db.m5.16xlarge. The
command also changes the storage size to 1 TiB, storage type to io1, and Provisioned IOPS to 3000.
Example
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --db-instance-class db.m5.16xlarge \
    --storage-type io1 \
    --iops 3000 \
    --allocated-storage 1024 \
    --apply-immediately
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --db-instance-class db.m5.16xlarge ^
    --storage-type io1 ^
    --iops 3000 ^
    --allocated-storage 1024 ^
    --apply-immediately
You can change time zones for RDS Custom for Oracle DB instances multiple times. However, we
recommend not changing them more than once every 48 hours. We also recommend changing them
only when the latest restorable time is within the last 30 minutes.
If you don't follow these recommendations, cleaning up redo logs might remove more logs than
intended. Redo log timestamps might also be converted incorrectly to UTC, which can prevent the redo
log files from being downloaded and replayed correctly. This in turn can prevent point-in-time recovery
(PITR) from performing correctly.
Changing the time zone of an RDS Custom for Oracle DB instance has the following limitations:
• PITR is supported for recovery times before RDS Custom automation is paused, and after automation
is resumed.
For more information about PITR, see Restoring an RDS Custom for Oracle instance to a point in
time (p. 1067).
• Changing the time zone of an existing read replica causes downtime. We recommend changing the
time zone of the DB instance before creating read replicas.
You can create a read replica from a DB instance with a modified time zone. For more information
about read replicas, see Working with Oracle replicas for RDS Custom for Oracle (p. 1060).
Use the following procedures to change the time zone of an RDS Custom for Oracle DB instance.
Make sure to follow these procedures. If you don't, you might experience the redo log and PITR issues described earlier in this section.
1. Pause RDS Custom automation. For more information, see Pausing and resuming your RDS Custom
DB instance (p. 1049).
2. (Optional) Change the time zone of the DB instance, for example by using the following command.
To change the time zone for a primary DB instance and its read replicas
Changing the character set of an RDS Custom for Oracle DB instance has the following requirements:
• You can change the character set only on a newly provisioned RDS Custom instance that has an empty or
starter database with no application data. For all other scenarios, change the character set using the
Database Migration Assistant for Unicode (DMU).
• You can only change to a character set supported by RDS for Oracle. For more information, see
Supported DB character sets (p. 1801).
1. Pause RDS Custom automation. For more information, see Pausing and resuming your RDS Custom
DB instance (p. 1049).
2. Log in to your database as a user with SYSDBA privileges.
3. Restart the database in restricted mode, change the character set, and then restart the database in
normal mode.
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
ALTER DATABASE CHARACTER SET INTERNAL_CONVERT AL32UTF8;
SHUTDOWN IMMEDIATE;
STARTUP;
SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET';
VALUE
--------
AL32UTF8
4. Resume RDS Custom automation. For more information, see Pausing and resuming your RDS
Custom DB instance (p. 1049).
For RDS Custom for Oracle, you can set only the language in the NLS_LANG variable: the territory and
character set use defaults. The language is used for Oracle database messages, collation, day names, and
month names. Each supported language has a unique name, for example, American, French, or German.
If language is not specified, the value defaults to American.
After you create your RDS Custom for Oracle database, you can set NLS_LANG on your client host to a
language other than English. To see a list of languages supported by Oracle Database, log in to your RDS
Custom for Oracle database and run the following query:
You can set NLS_LANG on the host command line. The following example sets the language to German
for your client application using the Z shell on Linux.
export NLS_LANG=German
Your application reads the NLS_LANG value when it starts and then communicates it to the database
when it connects.
For more information, see Choosing a Locale with the NLS_LANG Environment Variable in the Oracle
Database Globalization Support Guide.
RDS Custom for Oracle supports Oracle Transparent Data Encryption (TDE). However, you can't enable TDE
using an option in a custom option group as you can in RDS for Oracle. Instead, you turn on TDE manually.
For information about using Oracle Transparent Data Encryption, see Securing stored data using
Transparent Data Encryption in the Oracle documentation.
• Don't create or modify the AWSRDSCustom tag that's required for RDS Custom automation. If you do,
you might break the automation.
• Tags added to RDS Custom DB instances during creation are propagated to all other related RDS
Custom resources.
• Tags aren't propagated when you add them to RDS Custom resources after DB instance creation.
For general information about resource tagging, see Tagging Amazon RDS resources (p. 461).
You can delete an RDS Custom DB instance using the console or the CLI. The time required to delete the
DB instance can vary depending on the backup retention period (that is, how many backups to delete)
and how much data is deleted.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to delete. RDS Custom DB instances show the role Instance (RDS Custom).
3. For Actions, choose Delete.
4. To retain automated backups, choose Retain automated backups.
5. Enter delete me in the box.
6. Choose Delete.
AWS CLI
You delete an RDS Custom DB instance by using the delete-db-instance AWS CLI command. Identify the
DB instance using the required parameter --db-instance-identifier. The remaining parameters
are the same as for an Amazon RDS DB instance, with the following exceptions:
• --skip-final-snapshot is required.
• --no-skip-final-snapshot isn't supported.
• --final-db-snapshot-identifier isn't supported.
The following example deletes the RDS Custom DB instance named my-custom-instance, and retains
automated backups.
Example
aws rds delete-db-instance \
    --db-instance-identifier my-custom-instance \
    --skip-final-snapshot \
    --no-delete-automated-backups
For Windows:
aws rds delete-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --skip-final-snapshot ^
    --no-delete-automated-backups
Working with RDS Custom for Oracle replicas
Topics
• Overview of RDS Custom for Oracle replication (p. 1060)
• Guidelines and limitations for RDS Custom for Oracle replication (p. 1061)
• Promoting an RDS Custom for Oracle replica to a standalone DB instance (p. 1063)
Replica promotion
You can promote managed Oracle replicas in RDS Custom for Oracle using the console, the
promote-read-replica AWS CLI command, or the PromoteReadReplica API. If you delete your primary DB instance, and
all replicas are healthy, RDS Custom for Oracle promotes your managed replicas to standalone instances
automatically. If a replica has paused automation or is outside the support perimeter, you must fix the
replica before RDS Custom can promote it automatically. You can only promote external Oracle replicas
manually.
Topics
• General guidelines for RDS Custom for Oracle replication (p. 1061)
• General limitations for RDS Custom for Oracle replication (p. 1062)
• Networking requirements and limitations for RDS Custom for Oracle replication (p. 1062)
• External replica limitations for RDS Custom for Oracle (p. 1062)
• Replica promotion limitations for RDS Custom for Oracle (p. 1063)
• Replica promotion guidelines for RDS Custom for Oracle (p. 1063)
• Don't modify the RDS_DATAGUARD user. This user is reserved for RDS Custom for Oracle automation.
Modifying this user can result in undesired outcomes, such as an inability to create Oracle replicas for
your RDS Custom for Oracle DB instance.
• Don't change the replication user password. It is required to administer the Oracle Data Guard
configuration on the RDS Custom host. If you change the password, RDS Custom for Oracle might put
your Oracle replica outside the support perimeter. For more information, see RDS Custom support
perimeter (p. 985).
The password is stored in AWS Secrets Manager, tagged with the DB resource ID. Each Oracle replica
has its own secret in Secrets Manager. The format for the secret is the following.
do-not-delete-rds-custom-db-DB_resource_id-6-digit_UUID-dg
• Don't change the DB_UNIQUE_NAME for the primary DB instance. Changing the name causes any
restore operation to become stuck.
• Don't specify the clause STANDBYS=NONE in a CREATE PLUGGABLE DATABASE command in an RDS
Custom CDB. This way, if a failover occurs, your standby CDB contains all PDBs.
• You can't create RDS Custom for Oracle replicas in read-only mode. However, you can manually change
the mode of mounted replicas to read-only, and from read-only to mounted. For more information,
see the documentation for the create-db-instance-read-replica AWS CLI command.
• You can't create cross-Region RDS Custom for Oracle replicas.
• You can't change the value of the Oracle Data Guard CommunicationTimeout parameter. This
parameter is set to 15 seconds for RDS Custom for Oracle DB instances.
Networking requirements and limitations for RDS Custom for Oracle replication
Make sure that your network configuration supports RDS Custom for Oracle replicas. Consider the
following:
• Make sure to enable port 1140 for both inbound and outbound communication within your virtual
private cloud (VPC) for the primary DB instance and all of its replicas. This is required for Oracle Data
Guard communication between read replicas.
• RDS Custom for Oracle validates the network while creating an Oracle replica. If the primary DB
instance and the new replica can't connect over the network, RDS Custom for Oracle doesn't create the
replica and places it in the INCOMPATIBLE_NETWORK state.
• For external Oracle replicas, such as those you create on Amazon EC2 or on-premises, use another port
and listener for Oracle Data Guard replication. Trying to use port 1140 could cause conflicts with RDS
Custom automation.
• The /rdsdbdata/config/tnsnames.ora file contains network service names mapped to listener
protocol addresses. Note the following requirements and recommendations:
• Entries in tnsnames.ora prefixed with rds_custom_ are reserved for RDS Custom when handling
Oracle replica operations.
RDS Custom automation updates tnsnames.ora entries on only the primary DB instance. Make sure
also to synchronize the file on your replicas when you add or remove an Oracle replica.
If you don't synchronize the tnsnames.ora files and switch over or fail over manually, Oracle Data
Guard on the primary DB instance might not be able to communicate with the Oracle replicas.
• RDS Custom for Oracle doesn't detect instance role changes upon manual failover, such as FSFO, for
external Oracle replicas.
RDS Custom for Oracle does detect changes for managed replicas. The role change is noted in the
event log. You can also see the new state by using the describe-db-instances AWS CLI command.
• RDS Custom for Oracle doesn't detect high replication lag for external Oracle replicas.
RDS Custom for Oracle does detect lag for managed replicas. High replication lag produces the
Replication has stopped event. You can also see the replication status by using the describe-db-instances
AWS CLI command, although there might be a delay before the status is updated.
• RDS Custom for Oracle doesn't promote external Oracle replicas automatically if you delete your
primary DB instance.
The automatic promotion feature is available only for managed Oracle replicas. For information about
promoting Oracle replicas manually, see the white paper Enabling high availability with Data Guard on
Amazon RDS Custom for Oracle.
• You can't promote a replica while RDS Custom for Oracle is backing it up.
• You can't change the backup retention period to 0 when you promote your Oracle replica.
• You can't promote your replica when it isn't in a healthy state.
If you issue delete-db-instance on the primary DB instance, RDS Custom for Oracle validates that
each managed Oracle replica is healthy and available for promotion. A replica might be ineligible for
promotion because automation is paused or it is outside the support perimeter. In such cases, RDS
Custom for Oracle publishes an event explaining the issue so that you can repair your Oracle replica
manually.
• Don't initiate a failover while RDS Custom for Oracle is promoting your replica. Otherwise, the
promotion workflow could become stuck.
• Don't switch over your primary DB instance while RDS Custom for Oracle is promoting your Oracle
replica. Otherwise, the promotion workflow could become stuck.
• Don't shut down your primary DB instance while RDS Custom for Oracle is promoting your Oracle
replica. Otherwise, the promotion workflow could become stuck.
• Don't try to restart replication with your newly promoted DB instance as a target. After RDS Custom
for Oracle promotes your Oracle replica, it becomes a standalone DB instance and no longer has the
replica role.
For more information, see Troubleshooting replica promotion for RDS Custom for Oracle (p. 1086).
Working with RDS Custom for Oracle replicas
For more information about promoting Oracle replicas, see Promoting a read replica to be a standalone DB instance (p. 447).
Promoting an Oracle replica takes a few minutes to complete. During the process, RDS Custom for Oracle
stops replication and reboots your replica. When the reboot completes, the Oracle replica is available as
a standalone DB instance.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the Amazon RDS console, choose Databases.
The Databases pane appears. Each Oracle replica shows Replica in the Role column.
3. Choose the RDS Custom for Oracle replica that you want to promote.
4. For Actions, choose Promote.
5. On the Promote Oracle replica page, enter the backup retention period and the backup window for
the newly promoted DB instance. You can't set this value to 0.
6. When the settings are as you want them, choose Promote Oracle replica.
AWS CLI
To promote your RDS Custom for Oracle replica to a standalone DB instance, use the AWS CLI promote-read-replica command.
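The call might look like the following sketch. The replica name is a placeholder, and the retention period and backup window are illustrative values (the retention period can't be 0).

```shell
# Promote the replica named my-custom-replica (placeholder identifier).
aws rds promote-read-replica \
    --db-instance-identifier my-custom-replica \
    --backup-retention-period 2 \
    --preferred-backup-window 04:00-04:30
```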
RDS API
To promote your RDS Custom for Oracle replica to be a standalone DB instance, call the Amazon RDS API
PromoteReadReplica operation with the required parameter DBInstanceIdentifier.
Backing up and restoring an RDS
Custom for Oracle DB instance
Creating a snapshot of an RDS Custom for Oracle DB instance is identical to taking a snapshot of an
Amazon RDS DB instance. The first snapshot of an RDS Custom DB instance contains the data for the full
DB instance. Subsequent snapshots are incremental.
Restore DB snapshots using either the AWS Management Console or the AWS CLI.
Topics
• Creating an RDS Custom for Oracle snapshot (p. 1065)
• Restoring from an RDS Custom for Oracle DB snapshot (p. 1066)
• Restoring an RDS Custom for Oracle instance to a point in time (p. 1067)
• Deleting an RDS Custom for Oracle snapshot (p. 1070)
• Deleting RDS Custom for Oracle automated backups (p. 1070)
When you create an RDS Custom for Oracle snapshot, specify which RDS Custom DB instance to back up.
Give your snapshot a name so you can restore from it later.
When you create a snapshot, RDS Custom for Oracle creates an Amazon EBS snapshot for every volume
attached to the DB instance. RDS Custom for Oracle uses the EBS snapshot of the root volume to register
a new Amazon Machine Image (AMI). To make snapshots easy to associate with a specific DB instance,
they're tagged with DBSnapshotIdentifier, DbiResourceId, and VolumeType.
Creating a DB snapshot results in a brief I/O suspension. This suspension can last from a few seconds to
a few minutes, depending on the size and class of your DB instance. The snapshot creation time varies
with the size of your database. Because the snapshot includes the entire storage volume, the size of files,
such as temporary files, also affects snapshot creation time. To learn more about creating snapshots, see
Creating a DB snapshot (p. 613).
Create an RDS Custom for Oracle snapshot using the console or the AWS CLI.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of RDS Custom DB instances, choose the instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.
AWS CLI
You create a snapshot of an RDS Custom DB instance by using the create-db-snapshot AWS CLI
command.
• --db-instance-identifier – Identifies which RDS Custom DB instance you are going to back up
• --db-snapshot-identifier – Names your RDS Custom snapshot so you can restore from it later
In this example, you create a DB snapshot called my-custom-snapshot for an RDS Custom DB instance
called my-custom-instance.
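A sketch of the command, using the snapshot and instance names from this example:

```shell
# Back up my-custom-instance to a snapshot named my-custom-snapshot.
aws rds create-db-snapshot \
    --db-instance-identifier my-custom-instance \
    --db-snapshot-identifier my-custom-snapshot
```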
The restore process differs from the Amazon RDS restore process in the following ways:
• Before restoring a snapshot, RDS Custom for Oracle backs up existing configuration files. These files
are available on the restored instance in the directory /rdsdbdata/config/backup. RDS Custom
for Oracle restores the DB snapshot with default parameters and overwrites the previous database
configuration files with existing ones. Thus, the restored instance doesn't preserve custom parameters
and changes to database configuration files.
• The restored database has the same name as in the snapshot. You can't specify a different name. (For
RDS Custom for Oracle, the default is ORCL.)
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore DB instance page, for DB instance identifier, enter the name for your restored RDS
Custom DB instance.
6. Choose Restore DB instance.
AWS CLI
You restore an RDS Custom DB snapshot by using the restore-db-instance-from-db-snapshot AWS CLI
command.
If the snapshot you are restoring from is for a private DB instance, make sure to specify both the correct
--db-subnet-group-name and --no-publicly-accessible options. Otherwise, the DB instance defaults to
publicly accessible. The following options are required:
• --db-instance-identifier – Names the new DB instance to create from the snapshot
• --db-snapshot-identifier – Identifies the DB snapshot to restore from
The following code restores the snapshot named my-custom-snapshot for my-custom-instance.
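The command might look like the following sketch. The subnet group name is a placeholder; the snapshot and instance names come from this example.

```shell
# Restore my-custom-snapshot to a new private DB instance.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-custom-instance \
    --db-snapshot-identifier my-custom-snapshot \
    --db-subnet-group-name my-custom-subnet-group \
    --no-publicly-accessible
```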
The latest restorable time for an RDS Custom for Oracle DB instance depends on several factors,
but is typically within 5 minutes of the current time. To see the latest restorable time for a DB
instance, use the AWS CLI describe-db-instances command and look at the value returned in the
LatestRestorableTime field for the DB instance. To see the latest restorable time for each DB
instance in the Amazon RDS console, choose Automated backups.
You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.
For general information about PITR, see Restoring a DB instance to a specified time (p. 660).
• The restored database has the same name as in the source DB instance. You can't specify a different
name. The default is ORCL.
• AWSRDSCustomIamRolePolicy requires new permissions. For more information, see Step 2: Add an
access policy to AWSRDSCustomInstanceRoleForRdsCustomInstance (p. 1007).
• All RDS Custom for Oracle DB instances must have backup retention set to a nonzero value.
• If you change the operating system or DB instance time zone, PITR might not work. For information
about changing time zones, see Changing the time zone of an RDS Custom for Oracle DB
instance (p. 1055).
• If you set automation to ALL_PAUSED, RDS Custom pauses the upload of archived redo logs, including
logs created before the latest restorable time (LRT). We recommend that you pause automation for a
brief period.
To illustrate, assume that your LRT is 10 minutes ago. You pause automation. During the pause, RDS
Custom doesn't upload archived redo logs. If your DB instance crashes, you can only recover to a time
before the LRT that existed when you paused. When you resume automation, RDS Custom resumes
uploading logs. The LRT advances. Normal PITR rules apply.
• In RDS Custom, you can manually specify an arbitrary number of hours to retain archived redo logs
before RDS Custom deletes them after upload. Specify the number of hours as follows:
1. Create a text file named /opt/aws/rdscustomagent/config/redo_logs_custom_configuration.json.
2. Add a JSON object in the following format: {"archivedLogRetentionHours" :
"num_of_hours"}. The number must be an integer in the range 1–840.
• Assume that you plug a non-CDB into a container database (CDB) as a PDB and then attempt PITR. The
operation succeeds only if you previously backed up the PDB. After you create or modify a PDB, we
recommend that you always back it up.
• We recommend that you don't customize database initialization parameters. For example, modifying
the following parameters affects PITR:
• CONTROL_FILE_RECORD_KEEP_TIME affects the rules for uploading and deleting logs.
• LOG_ARCHIVE_DEST_n doesn't support multiple destinations.
• ARCHIVE_LAG_TARGET affects the latest restorable time.
• If you customize database initialization parameters, we strongly recommend that you only customize
the following:
• COMPATIBLE
• MAX_STRING_SIZE
• DB_FILES
• UNDO_TABLESPACE
• ENABLE_PLUGGABLE_DATABASE
• CONTROL_FILES
• AUDIT_TRAIL
• AUDIT_TRAIL_DEST
For all other initialization parameters, RDS Custom restores the default values. If you modify a
parameter that isn't in the preceding list, it might have an adverse effect on PITR and lead to
unpredictable results. For example, CONTROL_FILE_RECORD_KEEP_TIME affects the rules for
uploading and deleting logs.
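The archived redo log retention file described in the list above can be created as in this sketch. The path under /opt/aws/rdscustomagent/config is the one the documentation names; CONFIG_DIR is made overridable here only so the sketch can run anywhere, and the 168-hour value is an arbitrary example in the allowed 1–840 range.

```shell
# Sketch: create the archived redo log retention config file.
# Documented location: /opt/aws/rdscustomagent/config/redo_logs_custom_configuration.json
CONFIG_DIR="${CONFIG_DIR:-$PWD/rdscustomagent-config}"
mkdir -p "$CONFIG_DIR"
# Retain archived redo logs for 168 hours (7 days); must be an integer in 1-840.
printf '{"archivedLogRetentionHours" : "168"}\n' > "$CONFIG_DIR/redo_logs_custom_configuration.json"
cat "$CONFIG_DIR/redo_logs_custom_configuration.json"
```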
You can restore an RDS Custom DB instance to a point in time using the AWS Management Console, the
AWS CLI, or the RDS API.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose the RDS Custom DB instance that you want to restore.
4. For Actions, choose Restore to point in time.
5. For Restore time, choose Latest restorable time or Custom. If you chose Custom, enter the date and time to which you want to restore the instance.
Times are shown in your local time zone, which is indicated by an offset from Coordinated Universal
Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored RDS Custom DB instance. The
name must be unique.
7. Choose other options as needed, such as DB instance class.
8. Choose Restore to point in time.
AWS CLI
You restore a DB instance to a specified time by using the restore-db-instance-to-point-in-time AWS CLI
command to create a new RDS Custom DB instance.
Use one of the following options to specify the backup to restore from:
• --source-db-instance-identifier mysourcedbinstance
• --source-dbi-resource-id dbinstanceresourceID
• --source-db-instance-automated-backups-arn backupARN
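A sketch of the command. The source and target identifiers and the restore time are placeholders.

```shell
# Restore my-custom-instance to a point in time as a new DB instance.
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier my-custom-instance \
    --target-db-instance-identifier my-restored-instance \
    --restore-time 2023-10-14T23:45:00.000Z
```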
The Amazon EBS snapshots for the binary and root volumes remain in your account for a longer time
because they might be linked to some instances running in your account or to other RDS Custom for
Oracle snapshots. These EBS snapshots are automatically deleted after they're no longer related to any
existing RDS Custom for Oracle resources (DB instances or backups).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to delete.
4. For Actions, choose Delete snapshot.
5. Choose Delete on the confirmation page.
AWS CLI
To delete an RDS Custom snapshot, use the AWS CLI command delete-db-snapshot.
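A sketch of the command, using the snapshot name from the earlier examples:

```shell
aws rds delete-db-snapshot \
    --db-snapshot-identifier my-custom-snapshot
```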
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose Retained.
4. Choose the retained automated backup that you want to delete.
5. For Actions, choose Delete.
6. On the confirmation page, enter delete me and choose Delete.
AWS CLI
You can delete a retained automated backup by using the AWS CLI command delete-db-instance-automated-backup.
• --dbi-resource-id – The resource identifier for the source RDS Custom DB instance.
You can find the resource identifier for the source DB instance of a retained automated backup by
using the AWS CLI command describe-db-instance-automated-backups.
The following example deletes the retained automated backup with source DB instance resource
identifier custom-db-123ABCEXAMPLE.
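A sketch of the command, using the resource identifier from this example:

```shell
aws rds delete-db-instance-automated-backup \
    --dbi-resource-id custom-db-123ABCEXAMPLE
```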
Migrating to RDS Custom for Oracle
Based on these factors, you can choose physical migration, logical migration, or a combination. If you
choose physical migration, you can use the following techniques:
RMAN duplication
Active database duplication doesn’t require a backup of your source database. It duplicates the live
source database to the destination host by copying database files over the network to the auxiliary
instance. The RMAN DUPLICATE command copies the required files as image copies or backup sets.
To learn this technique, see the AWS blog post Physical migration of Oracle databases to Amazon
RDS Custom using RMAN duplication.
Oracle Data Guard
In this technique, you back up a primary on-premises database and copy the backups to an Amazon
S3 bucket. You then copy the backups to your RDS Custom for Oracle standby DB instance. After
performing the necessary configuration, you manually switch over your primary database to your
RDS Custom for Oracle standby database. To learn this technique, see the AWS blog post Physical
migration of Oracle databases to Amazon RDS Custom using Data Guard.
For general information about logically importing data into RDS for Oracle, see Importing data into
Oracle on Amazon RDS (p. 1947).
Upgrading a DB instance for RDS Custom for Oracle
Topics
• Requirements for RDS Custom for Oracle upgrades (p. 1073)
• Considerations for RDS Custom for Oracle upgrades (p. 1073)
• Viewing valid upgrade targets for RDS Custom for Oracle DB instances (p. 1074)
• Upgrading an RDS Custom DB instance (p. 1075)
• Viewing pending upgrades for RDS Custom DB instances (p. 1075)
• Troubleshooting an upgrade failure for an RDS Custom for Oracle DB instance (p. 1076)
• We strongly recommend that you upgrade your RDS Custom for Oracle DB instance using CEVs. RDS
Custom for Oracle automation synchronizes the patch metadata with the database binary on your DB
instance.
In special circumstances, RDS Custom supports applying a "one-off" patch directly to the underlying
Amazon EC2 instance using the OPatch utility. A valid use case might be a patch that you want to apply
immediately, but the RDS Custom team is upgrading the CEV feature, causing a delay. To apply a patch
manually, perform the following steps:
1. Pause RDS Custom automation.
2. Apply your patch to the database binaries on the Amazon EC2 instance.
3. Resume RDS Custom automation.
A disadvantage of the preceding technique is that you must apply the patch manually to every
instance that you want to upgrade. In contrast, when you create a new CEV, you can create or upgrade
multiple DB instances using the same CEV.
• When you upgrade your primary DB instance, RDS Custom for Oracle upgrades your read replicas
automatically. You don't have to upgrade read replicas manually.
• When you upgrade your RDS Custom for Oracle DB instance to a new CEV, RDS Custom performs out-
of-place patching that replaces the entire database volume with a new volume that uses your target
database version. Thus, we strongly recommend that you don't use the bin volume for installations or
for storing permanent data or files.
• When you upgrade a container database (CDB), RDS Custom for Oracle checks that all PDBs are open
or could be opened. If these conditions aren't met, RDS Custom stops the check and returns the
database to its original state without attempting the upgrade. If the conditions are met, RDS Custom
patches the CDB root first, and then patches all other PDBs (including PDB$SEED) in parallel.
After the patching process completes, RDS Custom attempts to open all PDBs. If any PDBs fail to open,
you receive the following event: The following PDBs failed to open: list-of-PDBs. If RDS
Custom fails to patch the CDB root or any PDBs, the instance is put into the PATCH_DB_FAILED state.
• You might want to perform a major version upgrade and a conversion of non-CDB to CDB at the same
time. In this case, we recommend that you proceed as follows:
1. Create a new RDS Custom DB instance that uses the Oracle Multitenant architecture.
2. Plug your non-CDB into your CDB root, creating it as a PDB. Make sure that the non-CDB is the same
major version as your CDB.
3. Convert your PDB by running the noncdb_to_pdb.sql Oracle script.
4. Validate your CDB instance.
5. Upgrade your CDB instance.
You can also use the describe-db-engine-versions AWS CLI command to find valid upgrades for your
DB instances, as shown in the following example. This example assumes that a DB instance was created
using the version 19.my_cev1, and that the upgrade versions 19.my_cev2 and 19.my_cev3 exist.
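A command along these lines (a sketch) produces output like the following:

```shell
aws rds describe-db-engine-versions \
    --engine custom-oracle-ee \
    --engine-version 19.my_cev1
```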
{
    "DBEngineVersions": [
        {
            "Engine": "custom-oracle-ee",
            "EngineVersion": "19.my_cev1",
            ...
            "ValidUpgradeTarget": [
                {
                    "Engine": "custom-oracle-ee",
                    "EngineVersion": "19.my_cev2",
                    "Description": "19.my_cev2 description",
                    "AutoUpgrade": false,
                    "IsMajorVersionUpgrade": false
                },
                {
                    "Engine": "custom-oracle-ee",
                    "EngineVersion": "19.my_cev3",
                    "Description": "19.my_cev3 description",
                    "AutoUpgrade": false,
                    "IsMajorVersionUpgrade": false
                }
            ]
            ...
Read replicas managed by RDS Custom are automatically upgraded after the primary DB instance is
upgraded.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For DB engine version, choose the CEV to upgrade to, such as 19.my_cev3.
5. Choose Continue to check the summary of modifications.
AWS CLI
To upgrade an RDS Custom DB instance, use the modify-db-instance AWS CLI command with the
--db-instance-identifier and --engine-version parameters.
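The command might look like the following sketch. The instance identifier and CEV name are placeholders from this section's examples.

```shell
# Upgrade my-custom-instance to the CEV 19.my_cev3 during the next
# maintenance window (drop --no-apply-immediately to upgrade right away).
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --engine-version 19.my_cev3 \
    --no-apply-immediately
```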
If you didn't apply the upgrade immediately, the pending upgrade appears in the PendingModifiedValues
field of the describe-db-instances output, as shown in the following example. However, this approach
doesn't work if you used the --apply-immediately option or if the upgrade is in progress.
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-custom-instance",
            "EngineVersion": "19.my_cev1",
            ...
            "PendingModifiedValues": {
                "EngineVersion": "19.my_cev3"
                ...
            }
        }
    ]
}
You can also view a pending upgrade by using the describe-pending-maintenance-actions AWS CLI
command, as shown in the following example.
{
    "PendingMaintenanceActions": [
        {
            "ResourceIdentifier": "arn:aws:rds:us-west-2:123456789012:instance:my-custom-instance",
            "PendingMaintenanceActionDetails": [
                {
                    "Action": "db-upgrade",
                    "Description": "Upgrade to 19.my_cev3"
                }
            ]
        }
    ]
}
You can see the upgrade-failed status by using the describe-db-instances AWS CLI command, as shown in the following
example.
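A sketch of the query; the output below shows the relevant fields:

```shell
aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance
```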
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-custom-instance",
            "EngineVersion": "19.my_cev1",
            ...
            "PendingModifiedValues": {
                "EngineVersion": "19.my_cev3"
                ...
            },
            "DBInstanceStatus": "upgrade-failed"
        }
    ]
}
After an upgrade failure, all database actions are blocked except for modifying the DB instance to
perform certain tasks, such as retrying the upgrade.
Note
If automation has been paused for the RDS Custom DB instance, you can't retry the upgrade
until you resume automation.
The same actions apply to an upgrade failure for an RDS-managed read replica as for the
primary.
For more information, see Troubleshooting upgrades for RDS Custom for Oracle (p. 1085).
Troubleshooting RDS Custom for Oracle
Topics
• Viewing RDS Custom events (p. 1078)
• Troubleshooting custom engine version creation for RDS Custom for Oracle (p. 1079)
• Fixing unsupported configurations in RDS Custom for Oracle (p. 1080)
• Troubleshooting upgrades for RDS Custom for Oracle (p. 1085)
• Troubleshooting replica promotion for RDS Custom for Oracle (p. 1086)
To view RDS Custom event notification using the AWS CLI, use the describe-events command. RDS
Custom introduces several new events. The event categories are the same as for Amazon RDS. For the list
of events, see Amazon RDS event categories and event messages (p. 874).
The following example retrieves details for the events that have occurred for the specified RDS Custom
DB instance.
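A sketch of the command; the instance identifier is a placeholder:

```shell
aws rds describe-events \
    --source-identifier my-custom-instance \
    --source-type db-instance
```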
To subscribe to RDS Custom event notification using the CLI, use the create-event-subscription
command. Include the following required parameters:
• --subscription-name
• --sns-topic-arn
The following example creates a subscription for backup and recovery events for an RDS Custom DB
instance in the current AWS account. Notifications are sent to an Amazon Simple Notification Service
(Amazon SNS) topic, specified by --sns-topic-arn.
aws rds create-event-subscription \
    --subscription-name my-instance-events \
    --source-type db-instance \
    --event-categories '["backup","recovery"]' \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:interesting-events
CEV creation can fail for the following reasons:
• The Amazon S3 bucket containing your installation files isn't in the same AWS Region as your CEV.
• When you request CEV creation in an AWS Region for the first time, RDS Custom creates an S3 bucket
for storing RDS Custom resources (such as CEV artifacts, AWS CloudTrail logs, and transaction logs).
CEV creation fails if RDS Custom can't create the S3 bucket. Either the caller doesn't have S3
permissions as described in Step 4: Grant required permissions to your IAM user or role (p. 1012), or
the number of S3 buckets has reached the limit.
• The caller doesn't have permissions to get files from your S3 bucket that contains the installation
media files. These permissions are described in Step 7: Add necessary IAM permissions (p. 1026).
• Your IAM policy has an aws:SourceIp condition. Make sure to follow the recommendations in AWS:
Denies access to AWS based on the source IP in the AWS Identity and Access Management User Guide.
Also make sure that the caller has the S3 permissions described in Step 4: Grant required permissions
to your IAM user or role (p. 1012).
• Installation media files listed in the CEV manifest aren't in your S3 bucket.
• The SHA-256 checksums of the installation files are unknown to RDS Custom.
Confirm that the SHA-256 checksums of the provided files match the SHA-256 checksum on the
Oracle website. If the checksums match, contact AWS Support and provide the failed CEV name, file
name, and checksum.
• The OPatch version is incompatible with your patch files. You might get the following message:
OPatch is lower than minimum required version. Check that the version meets
the requirements for all patches, and try again. To apply an Oracle patch, you must use
a compatible version of the OPatch utility. You can find the required version of the OPatch utility in
the readme file for the patch. Download the most recent OPatch utility from My Oracle Support, and
try creating your CEV again.
• The patches specified in the CEV manifest are in the wrong order.
You can view RDS events either on the RDS console (in the navigation pane, choose Events) or by
using the describe-events AWS CLI command. The default duration is 60 minutes. If no events are
returned, specify a longer duration.
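For example, a sketch of a query that looks back 12 hours (720 minutes):

```shell
aws rds describe-events \
    --duration 720
```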
Currently, the MediaImport service that imports files from Amazon S3 to create CEVs isn't integrated
with AWS CloudTrail. Therefore, if you turn on data logging for Amazon RDS in CloudTrail, calls to the
MediaImport service such as the CreateCustomDbEngineVersion event aren't logged.
However, you might see calls from the API gateway that accesses your Amazon S3 bucket. These calls
come from the MediaImport service for the CreateCustomDbEngineVersion event.
In the following table, you can find descriptions of the notifications and events that the support
perimeter sends and how to fix them. These notifications and the support perimeter are subject to
change. For background on the support perimeter, see RDS Custom support perimeter (p. 985). For event
descriptions, see Amazon RDS event categories and event messages (p. 874).
Database

Database health

Notification: You need to manually recover the database on EC2 instance [i-xxxxxxxxxxxxxxxxx]. The DB instance restarted.

Monitoring: The support perimeter monitors the DB instance state. It also monitors how many restarts occurred during the previous hour and day. You're notified when the instance is in a state where it still exists, but you can't interact with it.

What to do: Log in to your host and examine the database state.

ps -eo pid,state,command | grep smon

Restart your RDS Custom for Oracle DB instance if necessary to get it running again. Sometimes it's necessary to reboot the host. After the restart, the RDS Custom agent detects that your DB instance is no longer in an unresponsive state. It then notifies the support perimeter to reevaluate your DB instance state.
Oracle Data Guard role

Notification: The database role [LOGICAL STANDBY] isn't supported. Validate the Oracle Data Guard configuration for the database on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx].

Monitoring: The support perimeter monitors the current database role every 15 seconds and sends a CloudWatch notification if the database role has changed. The Oracle Data Guard DATABASE_ROLE parameter must be either PRIMARY or PHYSICAL STANDBY.

What to do: Restore your Oracle Data Guard database role to a supported value. RDS Custom only supports the PRIMARY and PHYSICAL STANDBY roles. You can use the following statement to check the role:

SELECT DATABASE_ROLE FROM V$DATABASE;

If your RDS Custom for Oracle DB instance is standalone, change it back to the PRIMARY role.
Database archive lag target

Notification: The monitored Archive Lag Target database parameter on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [300] to [0]. The RDS Custom instance is using an unsupported configuration because of the following issue(s): (1) The archive lag target database parameter on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] is out of desired range {"lowerbound":60,"upperbound":7200}.

Monitoring: The support perimeter monitors the ARCHIVE_LAG_TARGET database parameter to verify that the DB instance's latest restorable time is within reasonable bounds.

What to do: Log in to your host, connect to your RDS Custom for Oracle DB instance, and change the ARCHIVE_LAG_TARGET parameter to a value from 60–7200. For example, use the following SQL command.

ALTER SYSTEM SET ARCHIVE_LAG_TARGET=300 SCOPE=BOTH;

Your DB instance becomes available within 30 minutes.
Database log mode

Notification: The monitored log mode of the database on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [ARCHIVELOG] to [NOARCHIVELOG].

Monitoring: The DB instance log mode must be set to ARCHIVELOG.

What to do: Log in to your host and shut down your RDS Custom for Oracle DB instance. Use the following SQL statement to initiate a consistent shutdown.

SHUTDOWN IMMEDIATE;

The RDS Custom agent restarts your DB instance and sets the log mode to ARCHIVELOG.
Operating system

RDS Custom agent status

Notification: The monitored state of the RDS Custom agent on EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from RUNNING to STOPPED.

Monitoring: The RDS Custom agent must always be running. The agent publishes the IamAlive metric to Amazon CloudWatch every 30 seconds. An alarm is triggered if the metric hasn't been published for 30 seconds.

What to do: Log in to your host and make sure that the RDS Custom agent is running. You can use the following command to find the status of the agent.

service rdscustomagent status

You can use the following command to start the agent.

service rdscustomagent start
AWS Systems Manager agent (SSM agent) status

Notification: The AWS Systems Manager agent status on EC2 instance [i-xxxxxxxxxxxxxxxxx] is currently unreachable. Make sure you have correctly configured the network, agent, and IAM permissions.

Monitoring: The SSM agent must always be running. The RDS Custom agent is responsible for making sure that the Systems Manager agent is running. If the SSM agent was down and restarted, the RDS Custom agent publishes a metric to CloudWatch. The RDS Custom agent has an alarm on the metric set to trigger when there has been a restart in each of the previous three minutes.

What to do: For more information, see Troubleshooting SSM Agent.
sudo configurations

Notification: The sudo configurations on EC2 instance [i-xxxxxxxxxxxxxxxxx] have changed.

Monitoring: The support perimeter monitors that certain OS users are allowed to run certain commands on the box. It monitors sudo configurations against the supported state. When the sudo configurations aren't supported, RDS Custom tries to overwrite them back to the previous supported state. If that is successful, the following notification is sent: RDS Custom successfully overwrote your configuration.

What to do: If the overwrite is unsuccessful, you can log in to your host and investigate why recent changes to the sudo configurations aren't supported. You can use the following command.

visudo -c -f /etc/sudoers.d/individual_sudo_files

After the support perimeter determines that the sudo configurations are supported, your RDS Custom for Oracle DB instance becomes available within 30 minutes.
AWS resources

Amazon EC2 instance state

Notification: The state of the EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [RUNNING] to [STOPPING].

The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been terminated and can't be found. Delete the database instance to clean up resources.

The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been stopped. Start the instance, and restore the host configuration. For more information, see the troubleshooting documentation.

Monitoring: The support perimeter monitors EC2 instance state-change notifications. The EC2 instance must always be running.

What to do: If your EC2 instance is stopped, start it and remount the binary and data volumes. If your EC2 instance is terminated, delete your RDS Custom for Oracle DB instance.
Amazon EC2 instance attributes

Notification: The attributes of Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] have changed.

Monitoring: The support perimeter monitors the instance type of the EC2 instance where the RDS Custom DB instance is running. The EC2 instance type must stay the same as when you set it up during RDS Custom DB instance creation.

What to do: Change the EC2 instance type back to the original type using the EC2 console or CLI. To change the instance type because of scaling requirements, begin point-in-time recovery and specify the new instance type and class. This action results in a new RDS Custom DB instance with a new host and Domain Name System (DNS) name.
Amazon Elastic Block Store (Amazon EBS) volumes

Notification: The following Amazon EBS volumes are attached to Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx]: [[vol-01234abcd56789ef0, vol-0def6789abcd01234]].

The original Amazon EBS volumes attached to Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] have been detached or modified. You can't attach or modify the initial EBS volumes attached to an RDS Custom instance.

Monitoring: RDS Custom creates two types of EBS volume, besides the root volume created from the Amazon Machine Image (AMI), and associates them with the EC2 instance. The binary volume is where the database software binaries are located. The data volumes are where database files are located. The storage configurations that you set when creating the DB instance are used to configure the data volumes. The support perimeter monitors these volumes and their configuration.

What to do: If you detached any initial EBS volumes, contact AWS Support.

If you modified the storage type, Provisioned IOPS, or storage throughput of an EBS volume, revert the modification to the original value.

If you modified the storage size of an EBS volume, contact AWS Support.

(RDS Custom for Oracle only) If you attached any additional EBS volumes, do either of the following:

• Detach the additional EBS volumes from the RDS Custom DB instance.
• Contact AWS Support.
EBS-optimized state

Event notification:

The EBS-optimized attribute of Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [enabled] to [disabled].

What's monitored: Amazon EC2 instances should be EBS-optimized. If the EBS-optimized attribute is turned off (disabled), the support perimeter doesn't put the DB instance into the unsupported-configuration state.

Remediation: To turn on the EBS-optimized attribute:

1. Stop the EC2 instance.
2. Set the EBS-optimized attribute to enabled.
3. Start the EC2 instance.
4. Remount the binary and data volumes.
• Examine the upgrade output log files in the /tmp directory on your DB instance. The names of the logs
depend on your DB engine version. For example, you might see logs that contain the strings catupgrd
or catup.
• Examine the alert.log file located in the /rdsdbdata/log/trace directory.
• Run the following grep command in the root directory to track the OS process of the upgrade. This command shows where the log files are being written and determines the state of the upgrade process.

ps aux | grep upg

The output is similar to the following:
root 18884 0.0 0.0 235428 8172 ? S< 17:03 0:00 /usr/bin/sudo -u rdsdb /
rdsdbbin/scripts/oracle-control ORCL op_apply_upgrade_sh RDS-UPGRADE/2.upgrade.sh
rdsdb 18886 0.0 0.0 153968 12164 ? S< 17:03 0:00 /usr/bin/perl -T -w /
rdsdbbin/scripts/oracle-control ORCL op_apply_upgrade_sh RDS-UPGRADE/2.upgrade.sh
rdsdb 18887 0.0 0.0 113196 3032 ? S< 17:03 0:00 /bin/sh /rdsdbbin/
oracle/rdbms/admin/RDS-UPGRADE/2.upgrade.sh
rdsdb 18900 0.0 0.0 113196 1812 ? S< 17:03 0:00 /bin/sh /rdsdbbin/
oracle/rdbms/admin/RDS-UPGRADE/2.upgrade.sh
rdsdb 18901 0.1 0.0 167652 20620 ? S< 17:03 0:07 /rdsdbbin/oracle/perl/
bin/perl catctl.pl -n 4 -d /rdsdbbin/oracle/rdbms/admin -l /tmp catupgrd.sql
root 29944 0.0 0.0 112724 2316 pts/0 S+ 18:43 0:00 grep --color=auto upg
• Run the following SQL query to verify the current state of the database components. The output shows the database version and the options installed on the DB instance.
• Run the following SQL query to check for invalid objects that might interfere with the upgrade
process.
The replica promotion workflow might become stuck in the following situation:
3. Contact AWS Support and request that your DB instance be moved to the available status.
Working with RDS Custom for SQL Server
Topics
• RDS Custom for SQL Server workflow (p. 1087)
• Requirements and limitations for Amazon RDS Custom for SQL Server (p. 1089)
• Setting up your environment for Amazon RDS Custom for SQL Server (p. 1099)
• Bring Your Own Media with RDS Custom for SQL Server (p. 1113)
• Working with custom engine versions for RDS Custom for SQL Server (p. 1115)
• Creating and connecting to a DB instance for Amazon RDS Custom for SQL Server (p. 1130)
• Managing an Amazon RDS Custom for SQL Server DB instance (p. 1138)
• Managing a Multi-AZ deployment for RDS Custom for SQL Server (p. 1147)
• Backing up and restoring an Amazon RDS Custom for SQL Server DB instance (p. 1157)
• Migrating an on-premises database to Amazon RDS Custom for SQL Server (p. 1165)
• Upgrading a DB instance for Amazon RDS Custom for SQL Server (p. 1168)
• Troubleshooting DB issues for Amazon RDS Custom for SQL Server (p. 1169)
1. Create an RDS Custom for SQL Server DB instance from an engine version offered by RDS Custom.
For more information, see Creating an RDS Custom for SQL Server DB instance (p. 1130).
RDS Custom for SQL Server workflow
2. Connect to your RDS Custom for SQL Server DB instance.
For more information, see Connecting to your RDS Custom DB instance using AWS Systems
Manager (p. 1133) and Connecting to your RDS Custom DB instance using RDP (p. 1135).
3. (Optional) Access the host to customize your software.
4. Monitor notifications and messages generated by RDS Custom automation.
Database connection
Like an Amazon RDS DB instance, your RDS Custom for SQL Server DB instance resides in a VPC. Your application connects to the RDS Custom instance using a client such as SQL Server Management Studio (SSMS), just as in RDS for SQL Server.
RDS Custom for SQL Server requirements and limitations
Topics
• Region and version availability (p. 1089)
• General requirements for RDS Custom for SQL Server (p. 1089)
• DB instance class support for RDS Custom for SQL Server (p. 1089)
• Limitations for RDS Custom for SQL Server (p. 1090)
• Local time zone for RDS Custom for SQL Server DB instances (p. 1090)
• Use the instance classes shown in DB instance class support for RDS Custom for SQL Server (p. 1089).
• The only supported storage types are solid state drives (SSD) of types gp2, gp3, and io1. The maximum storage limit is 16 TiB.
• Make sure that you have a symmetric encryption AWS KMS key to create an RDS Custom DB instance.
For more information, see Make sure that you have a symmetric encryption AWS KMS key (p. 1104).
• Make sure that you create an AWS Identity and Access Management (IAM) role and instance profile. For
more information, see Creating your IAM role and instance profile manually (p. 1105).
• Make sure to supply a networking configuration that RDS Custom can use to access other
AWS services. For specific requirements, see Configure networking, instance profile, and
encryption (p. 1101).
• The combined number of RDS Custom and Amazon RDS DB instances can't exceed your quota limit.
For example, if your quota is 40 DB instances, you can have 20 RDS Custom for SQL Server DB
instances and 20 Amazon RDS DB instances.
db.m5.xlarge–db.m5.24xlarge
db.m5.large–db.m5.24xlarge
db.m5.large–db.m5.4xlarge
• You can't create Amazon RDS read replicas for RDS Custom for SQL Server DB instances. However, you can configure high availability automatically with a Multi-AZ deployment. For more information, see Managing a Multi-AZ deployment for RDS Custom for SQL Server (p. 1147).
• You can't modify the default server-level collation of an existing RDS Custom for SQL Server DB
instance. The default server collation is SQL_Latin1_General_CP1_CI_AS.
• Transparent Data Encryption (TDE) for database encryption isn't supported for RDS Custom for SQL
Server. However, you can use KMS for storage-level encryption. For more information on using KMS
with RDS Custom for SQL Server, see Make sure that you have a symmetric encryption AWS KMS
key (p. 1104).
• For an RDS Custom for SQL Server DB instance that wasn't created with a custom engine version
(CEV), changes to the Microsoft Windows operating system or C: drive aren't guaranteed to persist.
For example, you will lose these changes when you scale compute or initiate a snapshot restore
operation. If the RDS Custom for SQL Server DB instance was created with a CEV, then those changes
are persisted.
• Not all options are supported. For example, when you create an RDS Custom for SQL Server DB
instance, you can't do the following:
• Change the number of CPU cores and threads per core on the DB instance class.
• Turn on storage autoscaling.
• Configure Kerberos authentication using the AWS Management Console. However, you can configure
Windows Authentication manually and use Kerberos.
• Specify your own DB parameter group, option group, or character set.
• Turn on Performance Insights.
• Turn on automatic minor version upgrade.
• The maximum DB instance storage is 16 TiB.
Local time zone for RDS Custom for SQL Server DB instances
The time zone of an RDS Custom for SQL Server DB instance is set by default. The current default is
Coordinated Universal Time (UTC). You can set the time zone of your DB instance to a local time zone
instead, to match the time zone of your applications.
You set the time zone when you first create your DB instance. You can create your DB instance by using
the AWS Management Console, the Amazon RDS API CreateDBInstance action, or the AWS CLI create-db-
instance command.
If your DB instance is part of a Multi-AZ deployment, then when you fail over, your time zone remains the
local time zone that you set.
When you request a point-in-time restore, you specify the time to restore to. The time is shown in your
local time zone. For more information, see Restoring a DB instance to a specified time (p. 660).
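Restore times supplied through the API or CLI are expressed in UTC, so when you work in a local time zone you may need to convert. The following Python sketch is an illustration only (not part of the RDS tooling); it converts a hypothetical local restore point for a DB instance set to Argentina Standard Time, mapped here to the IANA zone America/Argentina/Buenos_Aires:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical restore point in the DB instance's local time zone.
# "Argentina Standard Time" corresponds to America/Argentina/Buenos_Aires
# in IANA naming (an assumption made for this illustration).
local_restore = datetime(2024, 3, 15, 9, 30,
                         tzinfo=ZoneInfo("America/Argentina/Buenos_Aires"))

# The RDS API and CLI expect restore times in UTC.
utc_restore = local_restore.astimezone(timezone.utc)
print(utc_restore.isoformat())  # 2024-03-15T12:30:00+00:00
```

Because Argentina Standard Time doesn't observe daylight saving time, the offset is a constant UTC–03:00, so the conversion is the same year-round.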
The following are limitations to setting the local time zone on your DB instance:
• You can configure the time zone for a DB instance during instance creation, but you can't modify the
time zone of an existing RDS Custom for SQL Server DB instance.
• If the time zone is modified for an existing RDS Custom for SQL Server DB instance, RDS Custom
changes the DB instance status to unsupported-configuration, and sends event notifications.
• You can't restore a snapshot from a DB instance in one time zone to a DB instance in a different time
zone.
• We strongly recommend that you don't restore a backup file from one time zone to a different time
zone. If you restore a backup file from one time zone to a different time zone, you must audit your
queries and applications for the effects of the time zone change. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).
Argentina Standard Time (UTC–03:00): City of Buenos Aires. This time zone doesn't observe daylight saving time.
Cape Verde Standard Time (UTC–01:00): Cabo Verde Is. This time zone doesn't observe daylight saving time.

Central America Standard Time (UTC–06:00): Central America. This time zone doesn't observe daylight saving time.

Central Pacific Standard Time (UTC+11:00): Solomon Islands, New Caledonia. This time zone doesn't observe daylight saving time.
SA Pacific Standard Time (UTC–05:00): Bogota, Lima, Quito, Rio Branco. This time zone doesn't observe daylight saving time.
South Africa Standard Time (UTC+02:00): Harare, Pretoria. This time zone doesn't observe daylight saving time.

Sri Lanka Standard Time (UTC+05:30): Sri Jayawardenepura. This time zone doesn't observe daylight saving time.
W. Central Africa Standard Time (UTC+01:00): West Central Africa. This time zone doesn't observe daylight saving time.

West Asia Standard Time (UTC+05:00): Ashgabat, Tashkent. This time zone doesn't observe daylight saving time.
West Pacific Standard Time (UTC+10:00): Guam, Port Moresby. This time zone doesn't observe daylight saving time.
Setting up your RDS Custom for SQL Server environment
Contents
• Prerequisites for setting up RDS Custom for SQL Server (p. 1099)
• Download and install the AWS CLI (p. 1100)
• Grant required permissions to your IAM principal (p. 1100)
• Configure networking, instance profile, and encryption (p. 1101)
• Configuring with AWS CloudFormation (p. 1101)
• Resources created by CloudFormation (p. 1102)
• Downloading the template file (p. 1102)
• Configuring resources using CloudFormation (p. 1102)
• Configuring manually (p. 1104)
• Make sure that you have a symmetric encryption AWS KMS key (p. 1104)
• Creating your IAM role and instance profile manually (p. 1105)
• Create the AWSRDSCustomSQLServerInstanceRole IAM role (p. 1105)
• Add an access policy to AWSRDSCustomSQLServerInstanceRole (p. 1105)
• Create your RDS Custom for SQL Server instance profile (p. 1109)
• Add AWSRDSCustomSQLServerInstanceRole to your RDS Custom for SQL
Server instance profile (p. 1109)
• Configuring your VPC manually (p. 1109)
• Configure your VPC security group (p. 1110)
• Configure endpoints for dependent AWS services (p. 1110)
• Configure the instance metadata service (p. 1112)
• Configure the specified AWS Identity and Access Management (IAM) users and roles.
These are either used to create an RDS Custom DB instance or passed as a parameter in a creation
request.
• Confirm there aren't any service control policies (SCPs) restricting account level permissions.
If the account that you're using is part of an AWS Organization, it might have service control policies
(SCPs) restricting account level permissions. Make sure that the SCPs don't restrict the permissions on
users and roles that you create using the following procedures.
For more information about SCPs, see Service control policies (SCPs) in the AWS Organizations User
Guide. Use the describe-organization AWS CLI command to check whether your account is part of an
AWS Organization.
For more information about AWS Organizations, see What is AWS Organizations in the AWS
Organizations User Guide.
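The describe-organization check returns a JSON document when the account belongs to an organization; accounts outside an organization get an AWSOrganizationsNotInUseException error instead. The following sketch parses a sample response shape (the identifier values are placeholders, not real output) to decide membership:

```python
import json

# Sample response shape from `aws organizations describe-organization`;
# the identifier values below are placeholders.
sample = '{"Organization": {"Id": "o-exampleorgid", "FeatureSet": "ALL"}}'

def in_organization(response_text: str) -> bool:
    # A successful call returns an "Organization" object. In practice the
    # command fails with AWSOrganizationsNotInUseException for standalone
    # accounts, so this parse only applies to successful responses.
    return "Organization" in json.loads(response_text)

print(in_organization(sample))  # True
```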
Note
For a step-by-step tutorial on how to set up prerequisites and launch Amazon RDS Custom for
SQL Server, see the blog post Get started with Amazon RDS Custom for SQL Server using an
AWS CloudFormation template (Network setup).
For each task, you can find descriptions following for the requirements and limitations specific to that
task. For example, when you create your RDS Custom for SQL Server DB instance, use one of the SQL
Server instances listed in DB instance class support for RDS Custom for SQL Server (p. 1089).
For general requirements that apply to RDS Custom for SQL Server, see General requirements for RDS
Custom for SQL Server (p. 1089).
For information about downloading and installing the AWS CLI, see Installing or updating the latest version of the AWS CLI. You can skip this step if either of the following is true:

• You plan to access RDS Custom only from the AWS Management Console.
• You have already downloaded the AWS CLI for Amazon RDS or a different RDS Custom DB engine.
iam:SimulatePrincipalPolicy
cloudtrail:CreateTrail
cloudtrail:StartLogging
s3:CreateBucket
s3:PutBucketPolicy
s3:PutBucketObjectLockConfiguration
s3:PutBucketVersioning
kms:CreateGrant
kms:DescribeKey
For more information about the kms:CreateGrant permission, see AWS KMS key
management (p. 2589).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ValidateIamRole",
"Effect": "Allow",
"Action": "iam:SimulatePrincipalPolicy",
"Resource": "*"
},
{
"Sid": "CreateCloudTrail",
"Effect": "Allow",
"Action": [
"cloudtrail:CreateTrail",
"cloudtrail:StartLogging"
],
"Resource": "arn:aws:cloudtrail:*:*:trail/do-not-delete-rds-custom-*"
},
{
"Sid": "CreateS3Bucket",
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:PutBucketPolicy",
"s3:PutBucketObjectLockConfiguration",
"s3:PutBucketVersioning"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-*"
},
{
"Sid": "CreateKmsGrant",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:DescribeKey"
],
"Resource": "*"
}
]
}
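As a sanity check before attaching the policy, you can verify that its statements cover every permission in the list above. The following Python sketch reproduces the policy shown and flattens its Allow actions; note that it compares action names only, not resource scoping:

```python
import json

# The access policy shown above, reproduced for checking.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "ValidateIamRole", "Effect": "Allow",
     "Action": "iam:SimulatePrincipalPolicy", "Resource": "*"},
    {"Sid": "CreateCloudTrail", "Effect": "Allow",
     "Action": ["cloudtrail:CreateTrail", "cloudtrail:StartLogging"],
     "Resource": "arn:aws:cloudtrail:*:*:trail/do-not-delete-rds-custom-*"},
    {"Sid": "CreateS3Bucket", "Effect": "Allow",
     "Action": ["s3:CreateBucket", "s3:PutBucketPolicy",
                "s3:PutBucketObjectLockConfiguration", "s3:PutBucketVersioning"],
     "Resource": "arn:aws:s3:::do-not-delete-rds-custom-*"},
    {"Sid": "CreateKmsGrant", "Effect": "Allow",
     "Action": ["kms:CreateGrant", "kms:DescribeKey"], "Resource": "*"}
  ]
}
""")

# The permissions listed earlier in this section.
required = {
    "iam:SimulatePrincipalPolicy",
    "cloudtrail:CreateTrail", "cloudtrail:StartLogging",
    "s3:CreateBucket", "s3:PutBucketPolicy",
    "s3:PutBucketObjectLockConfiguration", "s3:PutBucketVersioning",
    "kms:CreateGrant", "kms:DescribeKey",
}

# Flatten the Allow actions; Action may be a string or a list.
granted = set()
for stmt in policy["Statement"]:
    if stmt["Effect"] == "Allow":
        actions = stmt["Action"]
        granted.update([actions] if isinstance(actions, str) else actions)

print(sorted(required - granted))  # [] means every listed permission is covered
```

Because the check ignores the Resource elements, it confirms coverage of the action names only; the resource patterns still restrict where each action applies.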
Also, the IAM principal requires the iam:PassRole permission on the IAM role that is attached
to the instance profile passed in the custom-iam-instance-profile parameter in the request
to create the RDS Custom DB instance. The instance profile and its attached role are created later in
Configure networking, instance profile, and encryption (p. 1101).
Make sure that the previously listed permissions aren't restricted by service control policies (SCPs),
permission boundaries, or session policies associated with the IAM principal.
If your account is part of an AWS Organization, make sure that the permissions required by the instance
profile role aren’t restricted by service control policies (SCPs).
The following networking configurations are designed to work best with DB instances that aren't publicly
accessible. That is, you can’t connect directly to the DB instance from outside the VPC.
For a tutorial on how to launch Amazon RDS Custom for SQL Server using an AWS CloudFormation
template, see Get started with Amazon RDS Custom for SQL Server using an AWS CloudFormation
template in the AWS Database Blog.
Topics
• Resources created by CloudFormation (p. 1102)
• Downloading the template file (p. 1102)
• Configuring resources using CloudFormation (p. 1102)
Successfully creating the CloudFormation stack creates the following resources in your AWS account:
• Symmetric encryption KMS key for encryption of data managed by RDS Custom.
• Instance profile and associated IAM role for attaching to RDS Custom instances.
• VPC with the CIDR range specified as the CloudFormation parameter. The default value is
10.0.0.0/16.
• Two private subnets with the CIDR ranges specified in the parameters, in two different Availability
Zones in the AWS Region. The default values for the subnet CIDRs are 10.0.128.0/20 and
10.0.144.0/20.
• DHCP option set for the VPC with domain name resolution to an Amazon Domain Name System (DNS)
server.
• Route table to associate with two private subnets and no access to the internet.
• Network access control list (ACL) to associate with two private subnets and access restricted to HTTPS.
• VPC security group to be associated with the RDS Custom instance. Access is restricted for outbound
HTTPS to AWS service endpoints that are required by RDS Custom.
• VPC security group to be associated with VPC endpoints that are created for AWS service endpoints
that are required by RDS Custom.
• DB subnet group in which RDS Custom instances are created.
• VPC endpoints for each of the AWS service endpoints that are required by RDS Custom.
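The default CIDR values above can be checked for consistency with Python's ipaddress module. This sketch verifies that both default private subnet ranges fall inside the default VPC range and don't overlap:

```python
import ipaddress

# Default CIDR values from the CloudFormation template parameters.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [
    ipaddress.ip_network("10.0.128.0/20"),
    ipaddress.ip_network("10.0.144.0/20"),
]

# Each subnet must sit inside the VPC range, and the two must not overlap.
for net in subnets:
    assert net.subnet_of(vpc), f"{net} is outside the VPC range"
assert not subnets[0].overlaps(subnets[1]), "subnet ranges overlap"
print("subnet layout is consistent")
```

The same check applies if you override the parameters: any custom subnet CIDRs must remain inside the VPC CIDR you choose.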
Use the following procedures to create the CloudFormation stack for RDS Custom for SQL Server.
1. Open the context (right-click) menu for the link custom-sqlserver-onboard.zip and choose Save
Link As.
2. Save and extract the file to your computer.
a. For Capabilities, select the I acknowledge that AWS CloudFormation might create IAM
resources with custom names check box.
b. Choose Create stack.
10. (Optional): You can update the SQS permissions in the instance profile role.
If you want to deploy only a Single-AZ DB instance, you can edit the CloudFormation template
file to remove SQS permissions. SQS permissions are only required for a Multi-AZ deployment and
allow RDS Custom for SQL Server to call Amazon SQS to perform specific actions. Because they are
not required for a Single-AZ deployment, you may opt to remove these permissions to follow the
principle of least privilege.
If you want to configure a Multi-AZ deployment, you don't need to remove the SQS permissions.
Note
If you remove the SQS permissions and later choose to modify to a Multi-AZ deployment,
the Multi-AZ creation will fail. You would need to re-add the SQS permissions before
modifying to a Multi-AZ deployment.
To make this optional change to the CloudFormation template, open the CloudFormation console
at https://console.aws.amazon.com/cloudformation, and edit the template file by removing the
following lines:
{
"Sid": "SendMessageToSQSQueue",
"Effect": "Allow",
"Action": [
"SQS:SendMessage",
"SQS:ReceiveMessage",
"SQS:DeleteMessage",
"SQS:GetQueueUrl"
],
"Resource": [
{
"Fn::Sub": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-
not-delete-rds-custom-*"
}
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
}
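If you script the template change rather than editing it by hand, removing the statement comes down to filtering on its Sid. The following sketch uses a shortened stand-in for the role policy (not the full template) and drops the SendMessageToSQSQueue statement shown above:

```python
import json

def drop_statement(policy: dict, sid: str) -> dict:
    """Return a copy of the policy without the statement whose Sid matches."""
    out = dict(policy)
    out["Statement"] = [s for s in policy["Statement"] if s.get("Sid") != sid]
    return out

# Minimal stand-in for the instance profile role policy; only the Sids
# matter for this illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "publishCWMetrics", "Effect": "Allow",
         "Action": "cloudwatch:PutMetricData", "Resource": "*"},
        {"Sid": "SendMessageToSQSQueue", "Effect": "Allow",
         "Action": ["SQS:SendMessage"], "Resource": "*"},
    ],
}

trimmed = drop_statement(policy, "SendMessageToSQSQueue")
print([s["Sid"] for s in trimmed["Statement"]])  # ['publishCWMetrics']
```

Keeping the filter keyed on the Sid means re-adding the statement later (for a Multi-AZ deployment) is a matter of appending the original JSON block back to the Statement array.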
CloudFormation creates the resources that RDS Custom for SQL Server requires. If the stack creation
fails, read through the Events tab to see which resource creation failed and its status reason.
The Outputs tab for this CloudFormation stack in the console should have information about all
resources to be passed as parameters for creating an RDS Custom for SQL Server DB instance. Make
sure to use the VPC security group and DB subnet group created by CloudFormation for RDS Custom DB
instances. By default, RDS tries to attach the default VPC security group, which might not have the access
that you need.
Note
When you delete a CloudFormation stack, all of the resources created by the stack are deleted
except the KMS key. The KMS key goes into a pending-deletion state and is deleted after
30 days. To keep the KMS key, perform a CancelKeyDeletion operation during the 30-day grace
period.
If you used CloudFormation to create resources, you can skip Configuring manually (p. 1104).
Configuring manually
If you choose to configure resources manually, perform the following tasks.
Note
To simplify setup, you can use the AWS CloudFormation template file to create a
CloudFormation stack rather than a manual configuration. For more information, see
Configuring with AWS CloudFormation (p. 1101).
Topics
• Make sure that you have a symmetric encryption AWS KMS key (p. 1104)
• Creating your IAM role and instance profile manually (p. 1105)
• Configuring your VPC manually (p. 1109)
Make sure that you have a symmetric encryption AWS KMS key
A symmetric encryption AWS KMS key is required for RDS Custom. When you create an RDS Custom for
SQL Server DB instance, make sure to supply the KMS key identifier. For more information, see Creating
and connecting to a DB instance for Amazon RDS Custom for SQL Server (p. 1130).
• If you have an existing customer managed KMS key in your AWS account, you can use it with RDS
Custom. No further action is necessary.
• If you already created a customer managed symmetric encryption KMS key for a different RDS Custom
engine, you can reuse the same KMS key. No further action is necessary.
• If you don't have an existing customer managed symmetric encryption KMS key in your account, create
a KMS key by following the instructions in Creating keys in the AWS Key Management Service Developer
Guide.
• If you're creating a CEV or RDS Custom DB instance, and your KMS key is in a different AWS account,
make sure to use the AWS CLI. You can't use the AWS console with cross-account KMS keys.
Important
RDS Custom doesn't support AWS managed KMS keys.
Make sure that your symmetric encryption key grants access to the kms:Decrypt and
kms:GenerateDataKey operations to the AWS Identity and Access Management (IAM) role in your IAM
instance profile. If you have a new symmetric encryption key in your account, no changes are required.
Otherwise, make sure that your symmetric encryption key's policy grants access to these operations.
For more information, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
To create the IAM instance profile and IAM roles for RDS Custom for SQL Server
1. Create the IAM role named AWSRDSCustomSQLServerInstanceRole with a trust policy that lets
Amazon EC2 assume this role.
2. Add an access policy to AWSRDSCustomSQLServerInstanceRole.
3. Create an IAM instance profile for RDS Custom for SQL Server that is named
AWSRDSCustomSQLServerInstanceProfile.
4. Add AWSRDSCustomSQLServerInstanceRole to the instance profile.
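Step 1 mentions a trust policy that lets Amazon EC2 assume the role. The conventional form of such a trust policy can be built as follows; this is a sketch of the standard document shape, so confirm the exact text against the RDS Custom instructions:

```python
import json

# Conventional trust policy that lets the EC2 service assume a role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Serialized form, suitable for a create-role call's
# assume-role-policy-document argument.
print(json.dumps(trust_policy, indent=2))
```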
Make sure that the permissions in the access policy aren't restricted by SCPs or permission boundaries
associated with the instance profile role.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ssmAgent1",
"Effect": "Allow",
"Action": [
"ssm:GetDeployablePatchSnapshotForInstance",
"ssm:ListAssociations",
"ssm:PutInventory",
"ssm:PutConfigurePackageResult",
"ssm:UpdateInstanceInformation",
"ssm:GetManifest"
],
"Resource": "*"
},
{
"Sid": "ssmAgent2",
"Effect": "Allow",
"Action": [
"ssm:ListInstanceAssociations",
"ssm:PutComplianceItems",
"ssm:UpdateAssociationStatus",
"ssm:DescribeAssociation",
"ssm:UpdateInstanceAssociationStatus"
],
"Resource": "arn:aws:ec2:'$REGION':'$ACCOUNT_ID':instance/*",
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "ssmAgent3",
"Effect": "Allow",
"Action": [
"ssm:UpdateAssociationStatus",
"ssm:DescribeAssociation",
"ssm:GetDocument",
"ssm:DescribeDocument"
],
"Resource": "arn:aws:ssm:*:*:document/*"
},
{
"Sid": "ssmAgent4",
"Effect": "Allow",
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*"
},
{
"Sid": "ssmAgent5",
"Effect": "Allow",
"Action": [
"ec2messages:AcknowledgeMessage",
"ec2messages:DeleteMessage",
"ec2messages:FailMessage",
"ec2messages:GetEndpoint",
"ec2messages:GetMessages",
"ec2messages:SendReply"
],
"Resource": "*"
},
{
"Sid": "ssmAgent6",
"Effect": "Allow",
"Action": [
"ssm:GetParameters",
"ssm:GetParameter"
],
"Resource": "arn:aws:ssm:*:*:parameter/*"
},
{
"Sid": "ssmAgent7",
"Effect": "Allow",
"Action": [
"ssm:UpdateInstanceAssociationStatus",
"ssm:DescribeAssociation"
],
"Resource": "arn:aws:ssm:*:*:association/*"
},
{
"Sid": "eccSnapshot1",
"Effect": "Allow",
"Action": "ec2:CreateSnapshot",
"Resource": [
"arn:aws:ec2:'$REGION':'$ACCOUNT_ID':volume/*"
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "eccSnapshot2",
"Effect": "Allow",
"Action": "ec2:CreateSnapshot",
"Resource": [
"arn:aws:ec2:'$REGION'::snapshot/*"
],
"Condition": {
"StringLike": {
"aws:RequestTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "eccCreateTag",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*",
"Condition": {
"StringLike": {
"aws:RequestTag/AWSRDSCustom": "custom-sqlserver",
"ec2:CreateAction": [
"CreateSnapshot"
]
}
}
},
{
"Sid": "s3BucketAccess",
"Effect": "Allow",
"Action": [
"s3:putObject",
"s3:getObject",
"s3:getObjectVersion",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::do-not-delete-rds-custom-*/*"
]
},
{
"Sid": "customerKMSEncryption",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey*"
],
"Resource": [
"arn:aws:kms:'$REGION':'$ACCOUNT_ID':key/'$CUSTOMER_KMS_KEY_ID'"
]
},
{
"Sid": "readSecretsFromCP",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": [
"arn:aws:secretsmanager:'$REGION':'$ACCOUNT_ID':secret:do-not-delete-
rds-custom-*"
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "publishCWMetrics",
"Effect": "Allow",
"Action": "cloudwatch:PutMetricData",
"Resource": "*",
"Condition": {
"StringEquals": {
"cloudwatch:namespace": "rdscustom/rds-custom-sqlserver-agent"
}
}
},
{
"Sid": "putEventsToEventBus",
"Effect": "Allow",
"Action": "events:PutEvents",
"Resource": "arn:aws:events:'$REGION':'$ACCOUNT_ID':event-bus/default"
},
{
"Sid": "cwlOperations1",
"Effect": "Allow",
"Action": [
"logs:PutRetentionPolicy",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": "arn:aws:logs:'$REGION':'$ACCOUNT_ID':log-group:rds-custom-
instance-*"
},
{
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
},
"Action": [
"SQS:SendMessage",
"SQS:ReceiveMessage",
"SQS:DeleteMessage",
"SQS:GetQueueUrl"
],
"Resource": [
"arn:aws:sqs:'$REGION':'$ACCOUNT_ID':do-not-delete-rds-custom-*"
],
"Effect": "Allow",
"Sid": "SendMessageToSQSQueue"
}
]
}'
Add AWSRDSCustomSQLServerInstanceRole to your RDS Custom for SQL Server instance profile
Add the AWSRDSCustomSQLServerInstanceRole role to the
AWSRDSCustomSQLServerInstanceProfile instance profile.
RDS Custom sends communication from your DB instance to other AWS services. To make sure that RDS
Custom can communicate, it validates network connectivity to the following AWS services:
• Amazon CloudWatch
• Amazon CloudWatch Logs
• Amazon CloudWatch Events
• Amazon EC2
• Amazon EventBridge
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager
If RDS Custom can't communicate with the necessary services, it publishes the following event:
Database instance in incompatible-network. SSM Agent connection not available. Amazon RDS
can't connect to the dependent AWS services.
To avoid incompatible-network errors, make sure that VPC components involved in communication
between your RDS Custom DB instance and AWS services satisfy the following requirements:
• The DB instance can make outbound connections on port 443 to other AWS services.
• The VPC allows incoming responses to requests originating from your RDS Custom DB instance.
• RDS Custom can correctly resolve the domain names of endpoints for each AWS service.
RDS Custom relies on AWS Systems Manager connectivity for its automation. For information about how
to configure VPC endpoints, see Creating VPC endpoints for Systems Manager. For the list of endpoints
in each Region, see AWS Systems Manager endpoints and quotas in the Amazon Web Services General
Reference.
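Interface endpoints for these services generally follow the service.region.amazonaws.com naming pattern. The following sketch derives the host names that must resolve in a given Region; treat it as a starting point and confirm against the endpoint reference linked above, because some Regions and partitions differ:

```python
# Services that RDS Custom's Systems Manager-based automation must reach.
SSM_SERVICES = ["ssm", "ssmmessages", "ec2messages"]

def endpoint_hostnames(region: str) -> list:
    # Standard AWS endpoint naming; some Regions and partitions differ,
    # so confirm against the Systems Manager endpoint reference.
    return [f"{svc}.{region}.amazonaws.com" for svc in SSM_SERVICES]

print(endpoint_hostnames("us-east-1"))
# ['ssm.us-east-1.amazonaws.com', 'ssmmessages.us-east-1.amazonaws.com',
#  'ec2messages.us-east-1.amazonaws.com']
```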
If you already configured a VPC for a different RDS Custom DB engine, you can reuse that VPC and skip
this process.
Topics
• Configure your VPC security group (p. 1110)
• Configure endpoints for dependent AWS services (p. 1110)
• Configure the instance metadata service (p. 1112)
A security group acts as a virtual firewall for a VPC instance, controlling both inbound and outbound
traffic. An RDS Custom DB instance has a default security group that protects the instance. Make sure
that your security group permits traffic between RDS Custom and other AWS services.
1. Sign in to the AWS Management Console and open the Amazon VPC console at https://console.aws.amazon.com/vpc.
2. Allow RDS Custom to use the default security group, or create your own security group.
For detailed instructions, see Provide access to your DB instance in your VPC by creating a security
group (p. 177).
3. Make sure that your security group permits outbound connections on port 443. RDS Custom needs
this port to communicate with dependent AWS services.
4. If you have a private VPC and use VPC endpoints, make sure that the security group associated with
the DB instance allows outbound connections on port 443 to VPC endpoints. Also make sure that
the security group associated with the VPC endpoint allows inbound connections on port 443 from
the DB instance.
If incoming connections aren't allowed, the RDS Custom instance can't connect to the AWS Systems
Manager and Amazon EC2 endpoints. For more information, see Create a Virtual Private Cloud
endpoint in the AWS Systems Manager User Guide.
For more information about security groups, see Security groups for your VPC in the Amazon VPC
Developer Guide.
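If your security group is missing the outbound rule from step 3, one way to add it is with the AWS CLI. This is a sketch; the security group ID is a placeholder for your own:

```shell
# Allow outbound HTTPS (port 443) from the DB instance's security group.
# sg-0123456789abcdef0 is a hypothetical group ID.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
```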
Make sure that your VPC allows outbound traffic to the following AWS services with which the DB instance communicates:
• Amazon CloudWatch
• Amazon EC2
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager
We recommend that you add endpoints for every service to your VPC using the following instructions.
However, you can use any solution that lets your VPC communicate with AWS service endpoints. For
example, you can use Network Address Translation (NAT) or AWS Direct Connect.
To configure endpoints for AWS services with which RDS Custom works
The VPC endpoint can span multiple Availability Zones. AWS creates an elastic network interface
for the VPC endpoint in each subnet that you choose. Each network interface has a Domain Name
System (DNS) host name and a private IP address.
8. For Security group, choose or create a security group.
You can use security groups to control access to your endpoint, much as you use a firewall. For more
information about security groups, see Security groups for your VPC in the Amazon VPC User Guide.
9. Optionally, you can attach a policy to the VPC endpoint. Endpoint policies can control access to the
AWS service to which you are connecting. The default policy allows all requests to pass through the
endpoint. If you're using a custom policy, make sure that requests from the DB instance are allowed
in the policy.
10. Choose Create endpoint.
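As a sketch of the console steps above, the equivalent AWS CLI call creates an interface endpoint. All of the IDs and the Region below are placeholders:

```shell
# Create an interface VPC endpoint for Systems Manager (ssm).
# Repeat for the other service names, such as ssmmessages and ec2messages.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ssm \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
```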
The following list shows the endpoint formats for each service and explains where to find the list of endpoints that your VPC needs for outbound communications.

AWS Systems Manager
Use the following endpoint formats:
• ssm.region.amazonaws.com
• ssmmessages.region.amazonaws.com
For the list of endpoints in each Region, see AWS Systems Manager endpoints and quotas in the Amazon Web Services General Reference.

AWS Secrets Manager
Use the endpoint format secretsmanager.region.amazonaws.com. For the list of endpoints in each Region, see AWS Secrets Manager endpoints and quotas in the Amazon Web Services General Reference.

Amazon CloudWatch
Use the following endpoint formats:
• For CloudWatch metrics, use monitoring.region.amazonaws.com
• For CloudWatch Events, use events.region.amazonaws.com
• For CloudWatch Logs, use logs.region.amazonaws.com
For the list of endpoints in every Region, see Amazon CloudWatch endpoints and quotas, Amazon CloudWatch Events endpoints and quotas, and Amazon CloudWatch Logs endpoints and quotas in the Amazon Web Services General Reference.

Amazon EC2
Use the following endpoint formats:
• ec2.region.amazonaws.com
• ec2messages.region.amazonaws.com
For the list of endpoints in each Region, see Amazon Elastic Compute Cloud endpoints and quotas in the Amazon Web Services General Reference.

Amazon S3
Use the endpoint format s3.region.amazonaws.com. For the list of endpoints in each Region, see Amazon Simple Storage Service endpoints and quotas in the Amazon Web Services General Reference.
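The endpoint formats above expand to one DNS name per service for a given Region. A small shell sketch, where the Region is an example:

```shell
# Print the regional endpoint name for each service that RDS Custom uses.
# "us-east-1" is an example Region; substitute your own.
region="us-east-1"
for svc in ssm ssmmessages secretsmanager monitoring events logs ec2 ec2messages s3; do
  echo "${svc}.${region}.amazonaws.com"
done
```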
Make sure that your RDS Custom for SQL Server DB instance can do the following:
• Access the instance metadata service using Instance Metadata Service Version 2 (IMDSv2).
• Allow outbound communications through port 80 (HTTP) to the IMDS link IP address.
• Request instance metadata from http://169.254.169.254, the IMDSv2 link.
For more information, see Use IMDSv2 in the Amazon EC2 User Guide for Linux Instances.
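The IMDSv2 request pattern from the list above looks like the following when run on the instance itself. It won't work from outside the instance, so this is only a sketch of the token-then-request flow:

```shell
# Get a session token, then use it to request instance metadata (IMDSv2).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/latest/meta-data/instance-id"
```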
Bring Your Own Media with RDS Custom for SQL Server
1. Provide and install your own Microsoft SQL Server binaries with supported cumulative updates (CU)
on an AWS EC2 Windows AMI.
2. Save the AMI as a golden image, which is a template that you can use to create a custom engine
version (CEV).
3. Create a CEV from your golden image.
4. Create new RDS Custom for SQL Server DB instances by using your CEV.
When using BYOM, make sure that you meet the following additional requirements:
• Use only SQL Server 2019 Enterprise and Standard edition. These are the only supported editions.
• Grant the SQL Server sysadmin (SA) server role privilege to NT AUTHORITY\SYSTEM.
• Keep the Windows Server OS configured with UTC time.
Amazon EC2 Windows instances are set to the UTC time zone by default. For more information about
viewing and changing the time for a Windows instance, see Set the time for a Windows instance.
• Open TCP port 1433 and UDP port 1434 to allow SSM connections.
• Only the default SQL Server instance (MSSQLSERVER) is supported. Named SQL Server instances
aren't supported. RDS Custom for SQL Server detects and monitors only the default SQL Server
instance.
• Only a single installation of SQL Server is supported on each AMI. Multiple installations of different
SQL Server versions aren't supported.
• SQL Server Web edition isn't supported with BYOM.
• Evaluation versions of SQL Server editions aren't supported with BYOM. When you install SQL Server,
don't select the checkbox for using an evaluation version.
• Feature availability and support varies across specific versions of each database engine, and
across AWS Regions. For more information, see Region availability for RDS Custom for SQL Server
CEVs (p. 1118) and Version support for RDS Custom for SQL Server CEVs (p. 1119).
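As a quick check of the UTC requirement in the list above, you can inspect and set the Windows time zone from a command prompt on the instance. The remediation step is a sketch; only run it if the time zone is wrong:

```shell
# Display the current Windows time zone; it should report UTC.
tzutil /g
# Set the time zone to UTC if needed (hypothetical remediation step).
tzutil /s "UTC"
```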
Working with CEVs for RDS Custom for SQL Server
1. Choose an AWS EC2 Windows AMI to use as a base image for a CEV. You have the option to use pre-
installed Microsoft SQL Server, or bring your own media to install SQL Server yourself.
2. Install other software on the operating system (OS) and customize the configuration of the OS and
SQL Server to meet your enterprise needs.
3. Save the AMI as a golden image.
4. Create a custom engine version (CEV) from your golden image.
5. Create new RDS Custom for SQL Server DB instances by using your CEV.
A CEV allows you to maintain your preferred baseline configuration of the OS and database. Using
a CEV ensures that the host configuration, such as any third-party agent installation or other OS
customizations, are persisted on RDS Custom for SQL Server DB instances. With a CEV, you can quickly
deploy fleets of RDS Custom for SQL Server DB instances with the same configuration.
Topics
• Preparing to create a CEV for RDS Custom for SQL Server (p. 1115)
• Creating a CEV for RDS Custom for SQL Server (p. 1120)
• Modifying a CEV for RDS Custom for SQL Server (p. 1124)
• Viewing CEV details for Amazon RDS Custom for SQL Server (p. 1126)
• Deleting a CEV for RDS Custom for SQL Server (p. 1128)
Contents
• Preparing a CEV using pre-installed SQL Server (LI) (p. 1115)
• Preparing a CEV using Bring Your Own Media (BYOM) (p. 1117)
• Region availability for RDS Custom for SQL Server CEVs (p. 1118)
• Version support for RDS Custom for SQL Server CEVs (p. 1119)
• Requirements for RDS Custom for SQL Server CEVs (p. 1119)
• Limitations for RDS Custom for SQL Server CEVs (p. 1119)
Choose the AMI with the most recent release number. This ensures that you are using a supported version of Windows Server and SQL Server with the latest Cumulative Update (CU).
1. Choose the latest available AWS EC2 Windows Amazon Machine Image (AMI) with License Included
(LI) Microsoft Windows Server and SQL Server.
h. Choose the AMI with the SQL Server edition that you want to use.
2. Create or launch an EC2 instance from your chosen AMI.
3. Log in to the EC2 instance and install additional software or customize the OS and database
configuration to meet your requirements.
4. Run Sysprep on the EC2 instance. For more information about prepping an AMI using Sysprep, see Create a
standardized Amazon Machine Image (AMI) using Sysprep.
5. Save the AMI that contains your installed SQL Server version, other software, and customizations.
This will be your golden image.
6. Create a new CEV by providing the AMI ID of the image that you created. For detailed steps on
creating a CEV, see Creating a CEV for RDS Custom for SQL Server (p. 1120).
7. Create a new RDS Custom for SQL Server DB instance using the CEV. For detailed steps, see Create
an RDS Custom for SQL Server DB instance from a CEV (p. 1122).
1. Choose the latest available AWS EC2 Windows Amazon Machine Image (AMI) with Microsoft
Windows Server.
a. View the monthly AMI updates table within the Windows AMI version history.
b. Note the latest available Release number. For example, the release number for Windows Server
2019 might be 2023.05.10. Although the Changes column may show SQL Server CUs
installed, each release also includes an AMI for Windows Server 2019 without SQL
Server pre-installed. You can use this AMI for BYOM.
g. Choose the AMI with the supported Windows Server version that you want to use.
2. Create or launch an EC2 instance from your chosen AMI.
3. Log in to the EC2 instance and copy your SQL Server installation media to it.
4. Install SQL Server. Make sure that you do the following:
a. Review Requirements for BYOM for RDS Custom for SQL Server (p. 1113).
b. Set the instance root directory to the default C:\Program Files\Microsoft SQL Server\.
Don't change this directory.
c. Set the SQL Server Database Engine Account Name to either NT Service\MSSQLSERVER or NT
AUTHORITY\NETWORK SERVICE.
d. Set the SQL Server Startup mode to Manual.
e. Choose SQL Server Authentication mode as Mixed.
f. Leave the current settings for the default Data directories and TempDB locations.
5. Grant the SQL Server sysadmin (SA) server role privilege to NT AUTHORITY\SYSTEM:
USE [master]
GO
EXEC master..sp_addsrvrolemember @loginame = N'NT AUTHORITY\SYSTEM' , @rolename =
N'sysadmin'
GO
6. Install additional software or customize the OS and database configuration to meet your
requirements.
7. Run Sysprep on the EC2 instance. For more information, see Create a standardized Amazon Machine
Image (AMI) using Sysprep.
8. Save the AMI that contains your installed SQL Server version, other software, and customizations.
This will be your golden image.
9. Create a new CEV by providing the AMI ID of the image that you created. For detailed steps, see
Creating a CEV for RDS Custom for SQL Server (p. 1120).
10. Create a new RDS Custom for SQL Server DB instance using the CEV. For detailed steps, see Create
an RDS Custom for SQL Server DB instance from a CEV (p. 1122).
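The sysadmin grant from step 5 can be spot-checked with sqlcmd. The -E flag uses Windows authentication; this is a sketch of an optional verification, not a required step:

```shell
# Verify that NT AUTHORITY\SYSTEM holds the sysadmin server role.
# IS_SRVROLEMEMBER returns 1 when the login is a member of the role.
sqlcmd -S localhost -E -Q "SELECT IS_SRVROLEMEMBER('sysadmin', 'NT AUTHORITY\SYSTEM') AS is_sysadmin"
```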
• US East (Ohio)
• US East (N. Virginia)
• US West (Oregon)
• Asia Pacific (Mumbai)
• Asia Pacific (Seoul)
• Asia Pacific (Singapore)
• For CEVs using pre-installed media, AWS EC2 Windows AMIs with License Included (LI) Microsoft
Windows Server 2019 and SQL Server 2019
• For CEVs using bring your own media (BYOM), AWS EC2 Windows AMIs with Microsoft Windows Server
2019
CEV creation for RDS Custom for SQL Server is supported for the following operating system (OS) and
database editions:
• For CEVs using pre-installed media, SQL Server 2019 with CU17, CU18, or CU20, for Enterprise,
Standard, and Web editions
• For CEVs using bring your own media (BYOM), SQL Server 2019 with CU17, CU18, or CU20, for
Enterprise and Standard editions
• For CEVs using pre-installed media or bring your own media (BYOM), Windows Server 2019 is the only
supported OS
• The AMI used to create a CEV must be based on an OS and database configuration supported by RDS
Custom for SQL Server. For more information on supported configurations, see Requirements and
limitations for Amazon RDS Custom for SQL Server (p. 1089).
• The CEV must have a unique name. You can't create a CEV with the same name as an existing CEV.
• You must name the CEV using a naming pattern of SQL Server major version + minor version +
customized string. The major version + minor version must match the SQL Server version provided with
the AMI. For example, for an AMI with SQL Server 2019 CU17, you can name the CEV 15.00.4249.2.my_cevtest.
• You must prepare an AMI using Sysprep. For more information about prepping an AMI using Sysprep,
see Create a standardized Amazon Machine Image (AMI) using Sysprep.
• You are responsible for maintaining the life cycle of the AMI. An RDS Custom for SQL Server DB
instance created from a CEV doesn't store a copy of the AMI. It maintains a pointer to the AMI that you
used to create the CEV. The AMI must exist for an RDS Custom for SQL Server DB instance to remain
operable.
• You can't delete a CEV if there are resources, such as DB instances or DB snapshots, associated with it.
• To create an RDS Custom for SQL Server DB instance, a CEV must have a status of pending-validation, available, failed, or validating. You can't create an RDS Custom for SQL Server DB instance using a CEV if the CEV status is incompatible-image-configuration.
• To modify an RDS Custom for SQL Server DB instance to use a new CEV, the CEV must have a status of available.
• You can't create an AMI or CEV from an existing RDS Custom for SQL Server DB instance.
• You can't modify an existing CEV to use a different AMI. However, you can modify an RDS Custom for
SQL Server DB instance to use a different CEV. For more information, see Modifying an RDS Custom for
SQL Server DB instance (p. 1141).
• Cross-Region copy of CEVs isn't supported.
• Cross-account copy of CEVs isn't supported.
• SQL Server Transparent Data Encryption (TDE) isn't supported.
• You can't restore or recover a CEV after you delete it. However, you can create a new CEV from the
same AMI.
• An RDS Custom for SQL Server DB instance stores your SQL Server database files on the D:\ drive. The
AMI associated with a CEV should store the Microsoft SQL Server system database files on the C:\ drive.
• An RDS Custom for SQL Server DB instance retains your configuration changes made to SQL Server.
Any configuration changes to the OS on a running RDS Custom for SQL Server DB instance created
from a CEV aren't retained. If you need to make a permanent configuration change to the OS and have
it retained as your new baseline configuration, create a new CEV and modify the DB instance to use the
new CEV.
Important
Modifying an RDS Custom for SQL Server DB instance to use a new CEV is an offline
operation. You can perform the modification immediately or schedule it to occur during a
weekly maintenance window.
• When you modify a CEV, Amazon RDS doesn't push those modifications to any associated RDS Custom
for SQL Server DB instances. You must modify each RDS Custom for SQL Server DB instance to use
the new or updated CEV. For more information, see Modifying an RDS Custom for SQL Server DB
instance (p. 1141).
Important
If an AMI used by a CEV is deleted, any modifications that might require host replacement, for
example, scaling compute, will fail. The RDS Custom for SQL Server DB instance will then be
placed outside of the RDS support perimeter. We recommend that you avoid deleting any AMI
that's associated with a CEV.
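The CEV naming rule above (a major + minor version prefix followed by a customized string) can be sketched as a shell check. The prefix here matches this section's SQL Server 2019 CU17 example, and the check itself is a hypothetical pre-flight step:

```shell
# Hypothetical pre-flight check: a CEV name must begin with the
# SQL Server major.minor build version of the AMI.
cev_name="15.00.4249.2.my_cevtest"
if [[ "$cev_name" =~ ^15\.00\..+ ]]; then
  result="prefix ok"
else
  result="prefix mismatch"
fi
echo "$result"   # prints "prefix ok"
```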
Make sure that the Amazon Machine Image (AMI) is in the same AWS account and Region as your CEV.
Otherwise, the process to create a CEV fails.
For more information, see Creating and connecting to a DB instance for Amazon RDS Custom for SQL
Server (p. 1130).
Important
The steps to create a CEV are the same for AMIs created with pre-installed SQL Server and those
created using bring your own media (BYOM).
Console
To create a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the table is empty.
3. Choose Create custom engine version.
4. For Engine type, choose Microsoft SQL Server.
5. For Edition, choose SQL Server Enterprise, Standard, or Web Edition.
6. For Major version, choose the major engine version that's installed on your AMI.
7. In Version details, enter a valid name in Custom engine version name.
The Custom engine versions page appears. Your CEV is shown with the status pending-validation.
AWS CLI
To create a CEV by using the AWS CLI, run the create-custom-db-engine-version command.
• --engine
• --engine-version
• --image-id
• --kms-key-id
• --description
• --region
• --tags
The following example creates a CEV named 15.00.4249.2.my_cevtest. Make sure that the name of
your CEV begins with the major engine version number.
Example
For Linux, macOS, or Unix:
aws rds create-custom-db-engine-version \
    --engine custom-sqlserver-ee \
    --engine-version 15.00.4249.2.my_cevtest \
    --image-id ami-0r93cx31t5r596482 \
    --kms-key-id my-kms-key \
    --description "Custom SQL Server EE 15.00.4249.2 cev test"
The following partial output shows the engine, parameter groups, and other information.
"DBEngineVersions": [
{
"Engine": "custom-sqlserver-ee",
"MajorEngineVersion": "15.00",
"EngineVersion": "15.00.4249.2.my_cevtest",
"DBEngineDescription": "Microsoft SQL Server Enterprise Edition for RDS Custom for SQL
Server",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:<my-account-id>:cev:custom-sqlserver-
ee/15.00.4249.2.my_cevtest/a1234a1-123c-12rd-bre1-1234567890",
"DBEngineVersionDescription": "Custom SQL Server EE 15.00.4249.2 cev test",
"KMSKeyId": "arn:aws:kms:us-east-1:<your-account-id>:key/<my-kms-key-id>",
"Image": {
"ImageId": "ami-0r93cx31t5r596482",
"Status": "pending-validation"
},
"CreateTime": "2022-11-20T19:30:01.831000+00:00",
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": false,
"Status": "pending-validation",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"TagList": []
}
]
If the process to create a CEV fails, RDS Custom for SQL Server issues RDS-EVENT-0198 with the
message Creation failed for custom engine version major-engine-version.cev_name.
The message includes details about the failure, for example, the event prints missing files. To find
troubleshooting ideas for CEV creation issues, see Troubleshooting CEV errors for RDS Custom for SQL
Server (p. 1170).
Lifecycle of a CEV
The CEV lifecycle includes the following statuses.
pending-validation
A CEV was created and is pending validation of the associated AMI. A CEV remains in pending-validation until an RDS Custom for SQL Server DB instance is created from it.
Action: If there are no existing tasks, create a new RDS Custom for SQL Server DB instance from the CEV. When creating the RDS Custom for SQL Server DB instance, the system attempts to validate the associated AMI.

validating
A creation task for an RDS Custom for SQL Server DB instance based on a new CEV is in progress. When creating the RDS Custom for SQL Server DB instance, the system attempts to validate the associated AMI of the CEV.
Action: Wait for the creation task of the existing RDS Custom for SQL Server DB instance to complete. You can use the RDS events console to review detailed event messages for troubleshooting.

available
The CEV was successfully validated. A CEV enters the available status once an RDS Custom for SQL Server DB instance has been successfully created from it.
Action: The CEV doesn't require any additional validation. You can use it to create additional RDS Custom for SQL Server DB instances or modify existing ones.

inactive
The CEV has been modified to an inactive state.
Action: You can't create or upgrade an RDS Custom DB instance with this CEV. Also, you can't restore a DB snapshot to create a new RDS Custom DB instance with this CEV. For information about how to change the state to ACTIVE, see Modifying a CEV for RDS Custom for SQL Server (p. 1124).

failed
The create DB instance step failed for this CEV before it could validate the AMI. Alternatively, the underlying AMI used by the CEV isn't in an available state.
Action: Troubleshoot the root cause for why the system couldn't create the DB instance. View the detailed error message and try to create a new DB instance again. Ensure that the underlying AMI used by the CEV is in an available state.

incompatible-image-configuration
There was an error validating the AMI.
Action: View the technical details of the error. You can't attempt to validate the AMI with this CEV again.
• available – You can use this CEV to create a new RDS Custom DB instance or upgrade a DB instance.
This is the default status for a newly created CEV.
• inactive – You can't create or upgrade an RDS Custom DB instance with this CEV. You can't restore a
DB snapshot to create a new RDS Custom DB instance with this CEV.
You can change the CEV status from available to inactive or from inactive to available. You
might change the status to INACTIVE to prevent the accidental use of a CEV or to make a discontinued
CEV eligible for use again.
Console
To modify a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose a CEV whose description or status you want to modify.
4. For Actions, choose Modify.
5. Make any of the following changes:
If the CEV is in use, the console displays You can't modify the CEV status. Fix the problems, then try
again.
AWS CLI
To modify a CEV by using the AWS CLI, run the modify-custom-db-engine-version command. You can
find CEVs to modify by running the describe-db-engine-versions command.
• --engine
• --engine-version cev, where cev is the name of the custom engine version that you want to
modify
• --status status, where status is the availability status that you want to assign to the CEV
The following example changes a CEV named 15.00.4249.2.my_cevtest from its current status to
inactive.
Example
For Windows:
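A sketch of the Windows-format command, assembled from the options listed above with this section's example values:

```shell
aws rds modify-custom-db-engine-version ^
    --engine custom-sqlserver-ee ^
    --engine-version 15.00.4249.2.my_cevtest ^
    --status inactive
```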
Modifying an RDS Custom for SQL Server DB instance to use a new CEV
You can modify an existing RDS Custom for SQL Server DB instance to use a different CEV. The changes
that you can make include:
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
AWS CLI
To modify a DB instance to use a different CEV by using the AWS CLI, run the modify-db-instance
command.
• --db-instance-identifier
• --engine-version cev, where cev is the name of the custom engine version that you want the DB
instance to change to.
The following example modifies a DB instance named my-cev-db-instance to use a CEV named
15.00.4249.2.my_cevtest_new and applies the change immediately.
Example
For Windows:
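A sketch of the Windows-format command, assembled from the options listed above with this section's example values:

```shell
aws rds modify-db-instance ^
    --db-instance-identifier my-cev-db-instance ^
    --engine-version 15.00.4249.2.my_cevtest_new ^
    --apply-immediately
```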
Viewing CEV details for Amazon RDS Custom for SQL Server
You can view details about your CEV by using the AWS Management Console or the AWS CLI.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose the name of the CEV that you want to view.
4. Choose Configuration to view the details.
AWS CLI
To view details about a CEV by using the AWS CLI, run the describe-db-engine-versions command.
• --include-all, to view all CEVs with any lifecycle state. Without the --include-all option, only
the CEVs in an available lifecycle state will be returned.
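For example, a sketch of the describe call, using the engine and CEV name from this section's example:

```shell
aws rds describe-db-engine-versions \
    --engine custom-sqlserver-ee \
    --engine-version 15.00.4249.2.my_cevtest \
    --include-all
```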
"EngineVersion": "15.00.4249.2.my_cevtest",
"DBParameterGroupFamily": "custom-sqlserver-ee-15.0",
"DBEngineDescription": "Microsoft SQL Server Enterprise Edition for custom
RDS",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:{my-account-id}:cev:custom-
sqlserver-ee/15.00.4249.2.my_cevtest/a1234a1-123c-12rd-bre1-1234567890",
"DBEngineVersionDescription": "Custom SQL Server EE 15.00.4249.2 cev test",
"Image": {
"ImageId": "ami-0r93cx31t5r596482",
"Status": "pending-validation"
},
"DBEngineMediaType": "AWS Provided",
"CreateTime": "2022-11-20T19:30:01.831000+00:00",
"ValidUpgradeTarget": [],
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": false,
"SupportedFeatureNames": [],
"Status": "pending-validation",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"TagList": [],
"SupportsBabelfish": false
}
]
}
You can use filters to view CEVs with a certain lifecycle status. For example, to view CEVs that have a
lifecycle status of either pending-validation, available, or failed:
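One way to express that filter, sketched with the status filter that describe-db-engine-versions accepts:

```shell
aws rds describe-db-engine-versions \
    --engine custom-sqlserver-ee \
    --include-all \
    --filters Name=status,Values=pending-validation,available,failed
```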
Before deleting a CEV, make sure it isn't being used by any of the following:
Console
To delete a CEV
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose the CEV that you want to delete.
4. For Actions, choose Delete.
In the Custom engine versions page, the banner shows that your CEV is being deleted.
AWS CLI
To delete a CEV by using the AWS CLI, run the delete-custom-db-engine-version command.
• --engine custom-sqlserver-ee
• --engine-version cev, where cev is the name of the custom engine version to be deleted
Example
For Windows:
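A sketch of the Windows-format delete command, assembled from the options listed above with this section's example CEV name:

```shell
aws rds delete-custom-db-engine-version ^
    --engine custom-sqlserver-ee ^
    --engine-version 15.00.4249.2.my_cevtest
```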
Creating and connecting to an RDS
Custom for SQL Server DB instance
Topics
• Creating an RDS Custom for SQL Server DB instance (p. 1130)
• RDS Custom service-linked role (p. 1133)
• Connecting to your RDS Custom DB instance using AWS Systems Manager (p. 1133)
• Connecting to your RDS Custom DB instance using RDP (p. 1135)
For more information, see Creating an Amazon RDS DB instance (p. 300).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose Create database.
4. Choose Standard create for the database creation method.
5. For Engine options, choose Microsoft SQL Server for the engine type.
6. For Database management type, choose Amazon RDS Custom.
7. In the Edition section, choose the DB engine edition that you want to use. For RDS Custom for SQL
Server, the choices are Enterprise, Standard, and Web.
8. (Optional) If you intend to create the DB instance from a CEV, select the Use custom engine version
(CEV) check box, and then choose your CEV from the drop-down list.
9. For Database version, keep the SQL Server 2019 default value.
10. For Templates, choose Production.
11. In the Settings section, enter a unique name for the DB instance identifier.
12. To enter your master password, do the following:
c. Change the Master username value and enter the same password in Master password and
Confirm password.
By default, the new RDS Custom DB instance uses an automatically generated password for the
master user.
13. In the DB instance size section, choose a value for DB instance class.
For supported classes, see DB instance class support for RDS Custom for SQL Server (p. 1089).
14. Choose Storage settings.
15. For RDS Custom security, do the following:
a. For IAM instance profile, choose the instance profile for your RDS Custom for SQL Server DB
instance.
The IAM instance profile must begin with AWSRDSCustom, for example
AWSRDSCustomInstanceProfileForRdsCustomInstance.
b. For Encryption, choose Enter a key ARN to list the available AWS KMS keys. Then choose your
key from the list.
An AWS KMS key is required for RDS Custom. For more information, see Make sure that you
have a symmetric encryption AWS KMS key (p. 1104).
16. For the remaining sections, specify your preferred RDS Custom DB instance settings. For information
about each setting, see Settings for DB instances (p. 308). The following settings don't appear in the
console and aren't supported:
• Processor features
• Storage autoscaling
• Availability & durability
• Password and Kerberos authentication option in Database authentication (only Password
authentication is supported)
• Database options group in Additional configuration
• Performance Insights
• Log exports
• Enable auto minor version upgrade
• Deletion protection
To view the master user name and password for the RDS Custom DB instance, choose View
credential details.
To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again. If you don't record it, you might have
to change it. To change the master user password after the RDS Custom DB instance is
available, modify the DB instance. For more information about modifying a DB instance, see
Managing an Amazon RDS Custom for SQL Server DB instance (p. 1138).
18. Choose Databases to view the list of RDS Custom DB instances.
19. Choose the RDS Custom DB instance that you just created.
On the RDS console, the details for the new RDS Custom DB instance appear:
• The DB instance has a status of creating until the RDS Custom DB instance is created and ready
for use. When the state changes to available, you can connect to the DB instance. Depending on
the instance class and storage allocated, it can take several minutes for the new DB instance to be
available.
• Role has the value Instance (RDS Custom).
• RDS Custom automation mode has the value Full automation. This setting means that the DB
instance provides automatic monitoring and instance recovery.
AWS CLI
You create an RDS Custom DB instance by using the create-db-instance AWS CLI command.
• --db-instance-identifier
• --db-instance-class (for a list of supported instance classes, see DB instance class support for
RDS Custom for SQL Server (p. 1089))
• --engine (custom-sqlserver-ee, custom-sqlserver-se, or custom-sqlserver-web)
• --kms-key-id
• --custom-iam-instance-profile
The following example creates an RDS Custom for SQL Server DB instance named my-custom-
instance. The backup retention period is 3 days.
Note
To create a DB instance from a custom engine version (CEV), supply an existing CEV
name to the --engine-version parameter. For example, --engine-version
15.00.4249.2.my_cevtest
Example
For Linux, macOS, or Unix:
aws rds create-db-instance \
    --db-instance-identifier my-custom-instance \
    --db-instance-class db.m5.xlarge \
    --engine custom-sqlserver-ee \
    --allocated-storage 20 \
    --db-subnet-group-name mydbsubnetgroup \
    --master-username myuser \
    --master-user-password mypassword \
    --backup-retention-period 3 \
    --no-multi-az \
    --port 8200 \
    --kms-key-id mykmskey \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance
For Windows:
aws rds create-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --db-instance-class db.m5.xlarge ^
    --engine custom-sqlserver-ee ^
    --allocated-storage 20 ^
    --db-subnet-group-name mydbsubnetgroup ^
    --master-username myuser ^
    --master-user-password mypassword ^
    --backup-retention-period 3 ^
    --no-multi-az ^
    --port 8200 ^
    --kms-key-id mykmskey ^
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance
Note
Specify a password other than the one shown here as a security best practice.
The following partial output shows the engine, parameter groups, and other information.
{
    "DBInstances": [
        {
            "PendingModifiedValues": {},
            "Engine": "custom-sqlserver-ee",
            "MultiAZ": false,
            "DBSecurityGroups": [],
            "DBParameterGroups": [
                {
                    "DBParameterGroupName": "default.custom-sqlserver-ee-15",
                    "ParameterApplyStatus": "in-sync"
                }
            ],
            "AutomationMode": "full",
            "DBInstanceIdentifier": "my-custom-instance",
            "TagList": []
        }
    ]
}
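You can confirm key fields programmatically by parsing the JSON that create-db-instance returns. A short sketch against an abbreviated copy of the sample output:

```python
import json

# Abbreviated create-db-instance output, shaped like the sample above.
output = json.loads("""
{
    "DBInstances": [
        {
            "Engine": "custom-sqlserver-ee",
            "MultiAZ": false,
            "AutomationMode": "full",
            "DBInstanceIdentifier": "my-custom-instance"
        }
    ]
}
""")

db = output["DBInstances"][0]
# Confirm the instance uses an RDS Custom for SQL Server engine and that
# full automation (monitoring and instance recovery) is enabled.
is_custom = db["Engine"].startswith("custom-sqlserver")
print(db["DBInstanceIdentifier"], db["AutomationMode"], is_custom)
```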
When you create an RDS Custom DB instance, both the Amazon RDS and RDS Custom service-linked
roles are created (if they don't already exist) and used. For more information, see Using service-linked
roles for Amazon RDS (p. 2684).
The first time that you create an RDS Custom for SQL Server DB instance, you might receive the
following error: The service-linked role is in the process of being created. Try again later. If you do, wait a
few minutes and then try again to create the DB instance.
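If you script instance creation, that transient error can be handled with a simple wait-and-retry loop. The following sketch assumes a hypothetical create function and a stand-in exception; it is illustrative only, not an AWS SDK API.

```python
import time

class ServiceLinkedRoleNotReady(Exception):
    """Stand-in for the 'service-linked role is in the process of being created' error."""

def create_with_retry(create_fn, attempts=3, wait_seconds=180):
    # Call the (hypothetical) create function, waiting a few minutes
    # between attempts until the service-linked role exists.
    for attempt in range(attempts):
        try:
            return create_fn()
        except ServiceLinkedRoleNotReady:
            if attempt == attempts - 1:
                raise
            time.sleep(wait_seconds)

# Simulate a creation call that fails once, then succeeds.
calls = []
def fake_create():
    calls.append(1)
    if len(calls) < 2:
        raise ServiceLinkedRoleNotReady()
    return "creating"

result = create_with_retry(fake_create, wait_seconds=0)
print(result, len(calls))  # creating 2
```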
Connecting to your RDS Custom DB instance using AWS Systems Manager
After you create your RDS Custom DB instance, you can connect to it by using AWS Systems Manager Session Manager. Session Manager is a Systems Manager capability that lets you manage EC2 instances through a browser-based shell or through the AWS CLI. For more information, see AWS Systems Manager Session Manager.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose Configuration.
4. Note the Resource ID value for your DB instance. For example, the resource ID might be db-
ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then choose the instance ID associated with it. For
example, the instance ID might be i-abcdefghijklm01234.
8. Choose Connect.
9. Choose Session Manager.
10. Choose Connect.
AWS CLI
You can connect to your RDS Custom DB instance using the AWS CLI. This technique requires the Session
Manager plugin for the AWS CLI. To learn how to install the plugin, see Install the Session Manager
plugin for the AWS CLI.
The following sample output from the aws rds describe-db-instances command shows the resource ID for your RDS Custom DB instance. The prefix is db-.
db-ABCDEFGHIJKLMNOPQRS0123456
To find the EC2 instance ID of your DB instance, use aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.
i-abcdefghijklm01234
Use the aws ssm start-session command, supplying the EC2 instance ID in the --target
parameter.
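The lookup chain above (DB instance, resource ID, then EC2 instance ID) can be scripted. The following Python sketch runs against sample JSON shaped like the CLI responses; the IDs, and the assumption that the underlying EC2 instance carries the resource ID as its Name tag, are illustrative.

```python
import json

# Hypothetical sample responses, shaped like the CLI output described above.
rds_response = json.loads("""
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-custom-instance",
            "DbiResourceId": "db-ABCDEFGHIJKLMNOPQRS0123456"
        }
    ]
}
""")

ec2_response = json.loads("""
{
    "Reservations": [
        {
            "Instances": [
                {
                    "InstanceId": "i-abcdefghijklm01234",
                    "Tags": [
                        {"Key": "Name", "Value": "db-ABCDEFGHIJKLMNOPQRS0123456"}
                    ]
                }
            ]
        }
    ]
}
""")

def resource_id(rds_out, db_id):
    # Return the DbiResourceId (prefix db-) for the named DB instance.
    for db in rds_out["DBInstances"]:
        if db["DBInstanceIdentifier"] == db_id:
            return db["DbiResourceId"]
    return None

def ec2_instance_id(ec2_out, res_id):
    # Assumption: the underlying EC2 instance's Name tag is the resource ID.
    for reservation in ec2_out["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get("Name") == res_id:
                return inst["InstanceId"]
    return None

rid = resource_id(rds_response, "my-custom-instance")
target = ec2_instance_id(ec2_response, rid)
print(rid, target)
```

The resulting target ID is what you would pass to aws ssm start-session --target.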
Connecting to your RDS Custom DB instance using RDP
To connect to the DB instance, you need the key pair associated with the instance. RDS
Custom creates the key pair for you. The pair name uses the prefix do-not-delete-rds-
custom-DBInstanceIdentifier. AWS Secrets Manager stores your private key as a secret.
Make sure that the VPC security group associated with your DB instance permits inbound connections on
port 3389 for Transmission Control Protocol (TCP). To learn how to configure your VPC security group,
see Configure your VPC security group (p. 1110).
To permit inbound connections on port 3389 for TCP, set a firewall rule on the host. The following example shows how to do this with the Set-NetFirewallRule PowerShell cmdlet (the rule name shown is illustrative; use the rule that exists on your host):
Set-NetFirewallRule -DisplayName "Remote Desktop - User Mode (TCP-In)" -Direction Inbound -LocalAddress Any -Profile Any
We recommend that you use a specific -Profile value: Public, Private, or Domain. Using Any refers to all three values. You can also specify a combination of values separated by a comma. For more information about setting firewall rules, see Set-NetFirewallRule in the Microsoft documentation.
1. Connect to Session Manager as shown in Connecting to your RDS Custom DB instance using AWS
Systems Manager (p. 1133).
2. Run the following command.
2. Use the command ID returned in the output to get the status of the previous command. To use the
following query to return the command ID, make sure that you have the jq plug-in installed.
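If jq isn't available, the same extraction can be done with a short Python sketch. The response below is shaped like aws ssm send-command output; the command ID is illustrative.

```python
import json

# Sample shaped like 'aws ssm send-command' output; the ID is illustrative.
response = json.loads(
    '{"Command": {"CommandId": "11111111-2222-3333-4444-555555555555",'
    ' "Status": "Pending"}}'
)

# Equivalent of the jq filter .Command.CommandId
command_id = response["Command"]["CommandId"]
print(command_id)
```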
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose the Configuration tab.
4. Note the DB instance ID for your DB instance, for example, my-custom-instance.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then choose the instance ID associated with it.
AWS CLI
1. Get the list of your RDS Custom DB instances by calling the aws rds describe-db-instances
command.
2. Choose the DB instance identifier from the sample output, for example do-not-delete-rds-
custom-my-custom-instance.
3. Find the EC2 instance ID of your DB instance by calling the aws ec2 describe-instances
command. The following example uses the EC2 instance name to describe the DB instance.
i-abcdefghijklm01234
4. Find the key name by specifying the EC2 instance ID, as shown in the following example.
The following sample output shows the key name, which uses the prefix do-not-delete-rds-
custom-DBInstanceIdentifier.
do-not-delete-rds-custom-my-custom-instance-0d726c
Managing an RDS Custom for SQL Server DB instance
Topics
• Pausing and resuming RDS Custom automation (p. 1138)
• Modifying an RDS Custom for SQL Server DB instance (p. 1141)
• Modifying the storage for an RDS Custom for SQL Server DB instance (p. 1142)
• Tagging RDS Custom for SQL Server resources (p. 1144)
• Deleting an RDS Custom for SQL Server DB instance (p. 1144)
• Starting and stopping an RDS Custom for SQL Server DB instance (p. 1146)
To customize your RDS Custom for SQL Server DB instance, use the following process:
1. Pause RDS Custom automation for a specified period. The pause ensures that your customizations
don't interfere with RDS Custom automation.
2. Customize the RDS Custom for SQL Server DB instance as needed.
3. Do either of the following:
• Resume automation manually.
• Wait for the pause period to end. In this case, RDS Custom resumes monitoring and instance
recovery automatically.
Important
Pausing and resuming automation are the only supported automation tasks when modifying an
RDS Custom for SQL Server DB instance.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For RDS Custom automation mode, choose one of the following options:
• Paused pauses the monitoring and instance recovery for the RDS Custom DB instance. Enter the
pause duration that you want (in minutes) for Automation mode duration. The minimum value is
60 minutes (default). The maximum value is 1,440 minutes.
• Full automation resumes automation.
5. Choose Continue to check the summary of modifications.
A message indicates that RDS Custom will apply the changes immediately.
6. If your changes are correct, choose Modify DB instance. Or choose Back to edit your changes or
Cancel to cancel your changes.
On the RDS console, the details for the modification appear. If you paused automation, the Status of
your RDS Custom DB instance indicates Automation paused.
7. (Optional) In the navigation pane, choose Databases, and then your RDS Custom DB instance.
In the Summary pane, RDS Custom automation mode indicates the automation status. If
automation is paused, the value is Paused. Automation resumes in num minutes.
AWS CLI
To pause or resume RDS Custom automation, use the modify-db-instance AWS CLI command.
Identify the DB instance using the required parameter --db-instance-identifier. Control the
automation mode with the following parameters:
• --automation-mode specifies the pause state of the DB instance. Valid values are all-paused,
which pauses automation, and full, which resumes it.
• --resume-full-automation-mode-minutes specifies the duration of the pause. The default value
is 60 minutes.
Note
Regardless of whether you specify --no-apply-immediately or --apply-immediately,
RDS Custom applies modifications asynchronously as soon as possible.
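The settings above can be captured in a small validation sketch (not the service's own validation) that enforces the console's 60-1,440 minute pause window:

```python
# A sketch, not the service's validation, of the automation-mode settings
# described above.
def pause_settings(minutes=60):
    # The console accepts a pause duration of 60 (default) to 1,440 minutes.
    if not 60 <= minutes <= 1440:
        raise ValueError("pause duration must be between 60 and 1440 minutes")
    return {
        "AutomationMode": "all-paused",
        "ResumeFullAutomationModeMinutes": minutes,
    }

print(pause_settings(120))
```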
Example
The following example pauses automation for 90 minutes.
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 90
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode all-paused ^
    --resume-full-automation-mode-minutes 90
The following example extends the pause duration for an extra 30 minutes. The 30 minutes is added to
the original time shown in ResumeFullAutomationModeTime.
Example
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 30
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode all-paused ^
    --resume-full-automation-mode-minutes 30
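The extension arithmetic can be sketched with simple time math (illustrative only; RDS computes the actual value). The starting value is the resume time reported in ResumeFullAutomationModeTime.

```python
from datetime import datetime, timedelta

# Illustrative only; RDS computes the actual value. Start from the resume
# time reported in ResumeFullAutomationModeTime.
resume_at = datetime.fromisoformat("2020-11-07T20:56:50.565+00:00")

# Requesting all-paused again with --resume-full-automation-mode-minutes 30
# adds 30 minutes to that time.
extended = resume_at + timedelta(minutes=30)
print(extended)
```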
The following example resumes full automation.
Example
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode full
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode full
In the following partial sample output, the pending AutomationMode value is full.
{
    "DBInstance": {
        "PubliclyAccessible": true,
        "MasterUsername": "admin",
        "MonitoringInterval": 0,
        "LicenseModel": "bring-your-own-license",
        "VpcSecurityGroups": [
            {
                "Status": "active",
                "VpcSecurityGroupId": "0123456789abcdefg"
            }
        ],
        "InstanceCreateTime": "2020-11-07T19:50:06.193Z",
        "CopyTagsToSnapshot": false,
        "OptionGroupMemberships": [
            {
                "Status": "in-sync",
                "OptionGroupName": "default:custom-oracle-ee-19"
            }
        ],
        "PendingModifiedValues": {
            "AutomationMode": "full"
        },
        "Engine": "custom-oracle-ee",
        "MultiAZ": false,
        "DBSecurityGroups": [],
        "DBParameterGroups": [
            {
                "DBParameterGroupName": "default.custom-oracle-ee-19",
                "ParameterApplyStatus": "in-sync"
            }
        ],
        ...
        "ReadReplicaDBInstanceIdentifiers": [],
        "AllocatedStorage": 250,
        "DBInstanceArn": "arn:aws:rds:us-west-2:012345678912:db:my-custom-instance",
        "BackupRetentionPeriod": 3,
        "DBName": "ORCL",
        "PreferredMaintenanceWindow": "fri:10:56-fri:11:26",
        "Endpoint": {
            "HostedZoneId": "ABCDEFGHIJKLMNO",
            "Port": 8200,
            "Address": "my-custom-instance.abcdefghijk.us-west-2.rds.amazonaws.com"
        },
        "DBInstanceStatus": "automation-paused",
        "IAMDatabaseAuthenticationEnabled": false,
        "AutomationMode": "all-paused",
        "EngineVersion": "19.my_cev1",
        "DeletionProtection": false,
        "AvailabilityZone": "us-west-2a",
        "DomainMemberships": [],
        "StorageType": "gp2",
        "DbiResourceId": "db-ABCDEFGHIJKLMNOPQRSTUVW",
        "ResumeFullAutomationModeTime": "2020-11-07T20:56:50.565Z",
        "KmsKeyId": "arn:aws:kms:us-west-2:012345678912:key/aa111a11-111a-11a1-1a11-1111a11a1a1a",
        "StorageEncrypted": false,
        "AssociatedRoles": [],
        "DBInstanceClass": "db.m5.xlarge",
        "DbInstancePort": 0,
        "DBInstanceIdentifier": "my-custom-instance",
        "TagList": []
    }
}
The following limitations apply to modifying an RDS Custom for SQL Server DB instance:
For more information, see RDS Custom support perimeter (p. 985).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:
AWS CLI
To modify an RDS Custom for SQL Server DB instance, use the modify-db-instance AWS CLI command.
Set the following parameters as needed:
• --db-instance-class – For supported classes, see DB instance class support for RDS Custom for
SQL Server (p. 1089)
• --engine-version – The version number of the database engine to which you're upgrading.
• --backup-retention-period – How long to retain automated backups, from 0–35 days.
• --preferred-backup-window – The daily time range during which automated backups are created.
• --preferred-maintenance-window – The weekly time range (in UTC) during which system
maintenance can occur.
• --apply-immediately – Use --apply-immediately to apply the changes immediately. Or use --no-apply-immediately (the default) to apply the changes during the next maintenance window.
The following limitations apply to modifying the storage for an RDS Custom for SQL Server DB instance:
• The minimum allocated storage size for RDS Custom for SQL Server is 20 GiB, and the maximum
supported storage size is 16 TiB.
• As with Amazon RDS, you can't decrease the allocated storage. This is a limitation of Amazon Elastic
Block Store (Amazon EBS) volumes. For more information, see Working with storage for Amazon RDS
DB instances (p. 478)
• Storage autoscaling isn't supported for RDS Custom for SQL Server DB instances.
• Any storage volumes that you manually attach to your RDS Custom DB instance aren't considered for storage scaling. Only the RDS-provided default data volume (that is, the D drive) is considered for storage scaling.
For more information, see RDS Custom support perimeter (p. 985).
• Scaling storage usually doesn't cause any outage or performance degradation of the DB instance. After
you modify the storage size for a DB instance, the status of the DB instance is storage-optimization.
• Storage optimization can take several hours. You can't make further storage modifications for either
six (6) hours or until storage optimization has completed on the instance, whichever is longer. For more
information, see Working with storage for Amazon RDS DB instances (p. 478)
For more information about storage, see Amazon RDS DB instance storage (p. 101).
For general information about storage modification, see Working with storage for Amazon RDS DB
instances (p. 478).
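The storage limits above can be expressed as a small validation sketch (not RDS code) that you might run before requesting a modification:

```python
# A sketch, not RDS code: apply the storage limits described above before
# requesting a modification.
MIN_GIB = 20
MAX_GIB = 16 * 1024  # 16 TiB

def validate_storage_change(current_gib, new_gib):
    # Allocated storage can grow but never shrink (an EBS limitation).
    if new_gib < current_gib:
        raise ValueError("allocated storage can't be decreased")
    if not MIN_GIB <= new_gib <= MAX_GIB:
        raise ValueError(f"allocated storage must be {MIN_GIB}-{MAX_GIB} GiB")
    return new_gib

print(validate_storage_change(20, 200))  # 200
```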
Console
To modify the storage for an RDS Custom for SQL Server DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:
a. Enter a new value for Allocated storage. It must be greater than the current value, and from 20
GiB–16 TiB.
b. Change the value for Storage type. You can use available storage types like General Purpose or
Provisioned IOPS. Provisioned IOPS is supported for gp3 and io1 storage types.
c. If you are specifying volume types that support provisioned IOPS, you can define the
Provisioned IOPS value.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.
AWS CLI
To modify the storage for an RDS Custom for SQL Server DB instance, use the modify-db-instance AWS
CLI command. Set the following parameters as needed:
• --allocated-storage – The new storage size, in gibibytes (GiB). It must be greater than the current value.
• --storage-type – The storage type, for example gp3 or io1.
• --iops – Provisioned IOPS for the DB instance. You can specify this only for storage types that
support provisioned IOPS, like io1.
• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.
Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window.
The following example changes the storage size of my-custom-instance to 200 GiB, storage type to io1,
and Provisioned IOPS to 3000.
Example
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --allocated-storage 200 \
    --storage-type io1 \
    --iops 3000 \
    --apply-immediately
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --allocated-storage 200 ^
    --storage-type io1 ^
    --iops 3000 ^
    --apply-immediately
Tagging RDS Custom for SQL Server resources
• Don't create or modify the AWSRDSCustom tag that's required for RDS Custom automation. If you do,
you might break the automation.
• Tags added to RDS Custom DB instances during creation are propagated to all other related RDS
Custom resources.
• Tags aren't propagated when you add them to RDS Custom resources after DB instance creation.
For general information about resource tagging, see Tagging Amazon RDS resources (p. 461).
Deleting an RDS Custom for SQL Server DB instance
You can delete an RDS Custom for SQL Server DB instance using the console or the CLI. The time
required to delete the DB instance can vary depending on the backup retention period (that is, how many
backups to delete), how much data is deleted, and whether a final snapshot is taken.
Note
You can't create a final DB snapshot of your DB instance if it has a status of creating, failed,
incompatible-create, incompatible-restore, or incompatible-network. For more
information, see Viewing Amazon RDS DB instance status (p. 684).
Important
When you choose to take a final snapshot, we recommend that you avoid writing data to your DB instance while the deletion is in progress. After the deletion starts, data changes aren't guaranteed to be captured by the final snapshot.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom for SQL Server DB
instance that you want to delete. RDS Custom for SQL Server DB instances show the role Instance
(RDS Custom for SQL Server).
3. For Actions, choose Delete.
4. To take a final snapshot, choose Create final snapshot, and provide a name for the Final snapshot
name.
5. To retain automated backups, choose Retain automated backups.
6. Enter delete me in the box.
7. Choose Delete.
AWS CLI
You delete an RDS Custom for SQL Server DB instance by using the delete-db-instance AWS CLI
command. Identify the DB instance using the required parameter --db-instance-identifier. The
remaining parameters are the same as for an Amazon RDS DB instance.
The following example deletes the RDS Custom for SQL Server DB instance named my-custom-
instance, takes a final snapshot, and retains automated backups.
Example
For Linux, macOS, or Unix:
aws rds delete-db-instance \
    --db-instance-identifier my-custom-instance \
    --no-skip-final-snapshot \
    --no-delete-automated-backups \
    --final-db-snapshot-identifier my-custom-instance-final-snapshot
For Windows:
aws rds delete-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --no-skip-final-snapshot ^
    --no-delete-automated-backups ^
    --final-db-snapshot-identifier my-custom-instance-final-snapshot
To skip the final snapshot, specify the --skip-final-snapshot option instead of the --no-skip-
final-snapshot and --final-db-snapshot-identifier options in the command.
Starting and stopping an RDS Custom for SQL Server DB instance
The following considerations also apply to starting and stopping your RDS Custom for SQL Server DB instance:
• Modifying an EC2 instance attribute of an RDS Custom for SQL Server DB instance while the DB
instance is STOPPED isn't supported.
• You can stop and start an RDS Custom for SQL Server DB instance only if it's configured for a
single Availability Zone. You can't stop an RDS Custom for SQL Server DB instance in a Multi-AZ
configuration.
• A SYSTEM snapshot will be created when you stop an RDS Custom for SQL Server DB instance. The
snapshot will be automatically deleted when you start the RDS Custom for SQL Server DB instance
again.
• If you delete your EC2 instance while your RDS Custom for SQL Server DB instance is stopped, the C:
drive will be replaced when you start the RDS Custom for SQL Server DB instance again.
• The C:\ drive, hostname, and your custom configurations are persisted when you stop an RDS Custom
for SQL Server DB instance, as long as you don't modify the instance type.
• The following actions will result in RDS Custom placing the DB instance outside the support perimeter,
and you're still charged for DB instance hours:
• Starting the underlying EC2 instance while the RDS Custom DB instance is stopped. To resolve, call the start-db-instance Amazon RDS API operation, or stop the EC2 instance so that the RDS Custom DB instance returns to STOPPED.
• Stopping the underlying EC2 instance while the RDS Custom for SQL Server DB instance is ACTIVE.
For more details about stopping and starting DB instances, see Stopping an Amazon RDS DB instance
temporarily (p. 381), and Starting an Amazon RDS DB instance that was previously stopped (p. 384).
Managing a Multi-AZ deployment for RDS Custom for SQL Server
Running a DB instance with high availability can enhance availability during planned system
maintenance. In the event of planned database maintenance or unplanned service disruption, Amazon
RDS automatically fails over to the up-to-date secondary DB instance. This functionality lets database
operations resume quickly without manual intervention. The primary and standby instances use the
same endpoint, whose physical network address transitions to the secondary replica as part of the
failover process. You don't have to reconfigure your application when a failover occurs.
You can create an RDS Custom for SQL Server Multi-AZ deployment by specifying Multi-AZ when
creating an RDS Custom DB instance. You can use the console to convert existing RDS Custom for SQL
Server DB instances to Multi-AZ deployments by modifying the DB instance and specifying the Multi-AZ
option. You can also specify a Multi-AZ DB instance deployment with the AWS CLI or Amazon RDS API.
The RDS console shows the Availability Zone of the standby replica (the secondary AZ). You can also use
the describe-db-instances CLI command or the DescribeDBInstances API operation to find the
secondary AZ.
RDS Custom for SQL Server DB instances with Multi-AZ deployment can have increased write and
commit latency compared to a Single-AZ deployment. This increase can happen because of the
synchronous data replication between DB instances. Latency might also change if your deployment fails over to the standby replica, although AWS Availability Zones are engineered with low-latency network connectivity between them.
Note
For production workloads, we recommend that you use a DB instance class with Provisioned
IOPS (input/output operations per second) for fast, consistent performance. For more
information about DB instance classes, see Requirements and limitations for Amazon RDS
Custom for SQL Server (p. 1089).
Topics
• Region and version availability (p. 1148)
• Limitations for a Multi-AZ deployment with RDS Custom for SQL Server (p. 1148)
• Prerequisites for a Multi-AZ deployment with RDS Custom for SQL Server (p. 1149)
• Creating an RDS Custom for SQL Server Multi-AZ deployment (p. 1149)
• Modifying an RDS Custom for SQL Server Single-AZ deployment to a Multi-AZ deployment (p. 1149)
• Modifying an RDS Custom for SQL Server Multi-AZ deployment to a Single-AZ deployment (p. 1153)
• Failover process for an RDS Custom for SQL Server Multi-AZ deployment (p. 1154)
• Time to live (TTL) settings with applications using an RDS Custom for SQL Server Multi-AZ
deployment (p. 1156)
Multi-AZ deployments for RDS Custom for SQL Server are supported for the following SQL Server
versions:
Multi-AZ deployments for RDS Custom for SQL Server are available in all Regions where RDS Custom for
SQL Server is available. For more information on Region availability of Multi-AZ deployments for RDS
Custom for SQL Server, see RDS Custom for SQL Server (p. 153).
Prerequisites for a Multi-AZ deployment with RDS Custom for SQL Server
• Update the RDS security group inbound and outbound rules to allow port 1120.
• Add a rule in your private network Access Control List (ACL) that allows TCP ports 0-65535 for the DB
instance VPC.
• Create new Amazon SQS VPC endpoints that allow the RDS Custom for SQL Server DB instance to
communicate with SQS.
• Update the SQS permissions in the instance profile role.
Important
We recommend that you avoid modifying your RDS Custom for SQL Server DB instance from a
Single-AZ to a Multi-AZ deployment on a production DB instance during periods of peak activity.
AWS uses a snapshot to create the standby instance to avoid downtime when you convert from Single-AZ to Multi-AZ, but performance might be impacted during and after the conversion. This impact can be significant for workloads that are sensitive to write latency. While this capability lets large volumes be restored quickly from snapshots, it can increase the latency of I/O operations because of the synchronous replication. This latency can impact your database performance.
Topics
• Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment using
CloudFormation (p. 1150)
• Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment manually (p. 1151)
• Modify using the RDS console, AWS CLI, or RDS API (p. 1152)
To configure the RDS Custom for SQL Server Multi-AZ deployment prerequisites using CloudFormation
a. Download the latest AWS CloudFormation template file. Open the context (right-click) menu for
the link custom-sqlserver-onboard.zip and choose Save Link As.
b. Save and extract the custom-sqlserver-onboard.json file to your computer.
c. For Template source, choose Upload a template file.
d. For Choose file, navigate to and then choose custom-sqlserver-onboard.json.
5. Choose Next.
a. For Capabilities, select the I acknowledge that AWS CloudFormation might create IAM
resources with custom names check box.
b. Choose Submit.
10. Verify the update is successful. The status of a successful operation shows UPDATE_COMPLETE.
If the update fails, any new configuration specified in the update process is rolled back. The existing resource is still usable. For example, if you add network ACL rules numbered 18 and 19, but there are existing rules with the same numbers, the update returns the following error: Resource handler returned message: "The network acl entry identified by 18 already exists." In this scenario, you can modify the existing ACL rules to use a number lower than 18, and then retry the update.
If you choose to configure the prerequisites manually, perform the following tasks.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/{IAM-Instance-role}"
            },
            "Action": [
                "SQS:SendMessage",
                "SQS:ReceiveMessage",
                "SQS:DeleteMessage",
                "SQS:GetQueueUrl"
            ],
            "Resource": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-not-delete-rds-custom-*",
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
                }
            }
        }
    ]
}
10. Update the Instance profile with permission to access Amazon SQS. Replace the AWS partition,
Region, and accountId with your own values.
{
    "Sid": "SendMessageToSQSQueue",
    "Effect": "Allow",
    "Action": [
        "SQS:SendMessage",
        "SQS:ReceiveMessage",
        "SQS:DeleteMessage",
        "SQS:GetQueueUrl"
    ],
    "Resource": [
        {
            "Fn::Sub": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-not-delete-rds-custom-*"
        }
    ],
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
        }
    }
}
11. Update the Amazon RDS security group inbound and outbound rules to allow port 1120.
Console
To modify an existing RDS Custom for SQL Server Single-AZ deployment to a Multi-AZ deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
AWS CLI
To convert to a Multi-AZ DB instance deployment by using the AWS CLI, call the modify-db-instance
command and set the --multi-az option. Specify the DB instance identifier and the values for
other options that you want to modify. For information about each option, see Settings for DB
instances (p. 402).
Example
The following code modifies mycustomdbinstance by including the --multi-az option. The changes
are applied during the next maintenance window by using --no-apply-immediately. Use --
apply-immediately to apply the changes immediately. For more information, see Using the Apply
Immediately setting (p. 402).
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier mycustomdbinstance \
    --multi-az \
    --no-apply-immediately
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier mycustomdbinstance ^
    --multi-az ^
    --no-apply-immediately
RDS API
To convert to a Multi-AZ DB instance deployment with the RDS API, call the ModifyDBInstance operation
and set the MultiAZ parameter to true.
Console
To modify an RDS Custom for SQL Server DB instance from a Multi-AZ to a Single-AZ deployment
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
AWS CLI
To modify a Multi-AZ deployment to a Single-AZ deployment by using the AWS CLI, call the modify-db-
instance command and include the --no-multi-az option. Specify the DB instance identifier and the
values for other options that you want to modify. For information about each option, see Settings for DB
instances (p. 402).
Example
The following code modifies mycustomdbinstance by including the --no-multi-az option. The
changes are applied during the next maintenance window by using --no-apply-immediately. Use
--apply-immediately to apply the changes immediately. For more information, see Using the Apply
Immediately setting (p. 402).
For Linux, macOS, or Unix:
aws rds modify-db-instance \
    --db-instance-identifier mycustomdbinstance \
    --no-multi-az \
    --no-apply-immediately
For Windows:
aws rds modify-db-instance ^
    --db-instance-identifier mycustomdbinstance ^
    --no-multi-az ^
    --no-apply-immediately
RDS API
To modify a Multi-AZ deployment to a Single-AZ deployment by using the RDS API, call the
ModifyDBInstance operation and set the MultiAZ parameter to false.
Note
You can force a failover manually when you reboot a DB instance with failover. For more information on rebooting a DB instance, see Rebooting a DB instance (p. 436).
Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. The primary DB instance switches over automatically to
the standby replica if any of the conditions described in the following table occurs. You can view these
failover reasons in the RDS event log.
Failover reason: The operating system for the RDS Custom for SQL Server Multi-AZ DB instance is being patched in an offline operation.
Description: A failover was triggered during the maintenance window for an OS patch or a security update. For more information, see Maintaining a DB instance (p. 418).

Failover reason: The primary host of the RDS Custom for SQL Server Multi-AZ DB instance is unhealthy.
Description: The Multi-AZ DB instance deployment detected an impaired primary DB instance and failed over.

Failover reason: The primary host of the RDS Custom for SQL Server Multi-AZ DB instance is unreachable due to loss of network connectivity.
Description: RDS monitoring detected a network reachability failure to the primary DB instance and triggered a failover.

Failover reason: The RDS Custom for SQL Server Multi-AZ DB instance was modified by the customer.
Description: A DB instance modification triggered a failover. For more information, see Modifying an RDS Custom for SQL Server DB instance (p. 1141).

Failover reason: The storage volume of the primary host of the RDS Custom for SQL Server Multi-AZ DB instance experienced a failure.
Description: The Multi-AZ DB instance deployment detected a storage issue on the primary DB instance and failed over.

Failover reason: The user requested a failover of the RDS Custom for SQL Server Multi-AZ DB instance.
Description: The RDS Custom for SQL Server Multi-AZ DB instance was rebooted with failover. For more information, see Rebooting a DB instance (p. 436).

Failover reason: The RDS Custom for SQL Server Multi-AZ primary DB instance is busy or unresponsive.
Description: The primary DB instance is unresponsive. We recommend that you try the following steps:
• Examine the event logs and CloudWatch logs for excessive CPU, memory, or swap space usage. For more information, see Working with Amazon RDS event notification (p. 855).
• Create a rule that triggers on an Amazon RDS event. For more information, see Creating a rule that triggers on an Amazon RDS event (p. 870).
To determine if your Multi-AZ DB instance has failed over, you can do the following:
• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).
• View your DB events by using the RDS console or API operations.
• View the current state of your RDS Custom for SQL Server Multi-AZ DB instance deployment by using
the RDS console, CLI, or API operations.
Backing up and restoring an RDS Custom for SQL Server DB instance
The procedure is identical to taking a snapshot of an Amazon RDS DB instance. The first snapshot
of an RDS Custom DB instance contains the data for the full DB instance. Subsequent snapshots are
incremental.
Restore DB snapshots using either the AWS Management Console or the AWS CLI.
Topics
• Creating an RDS Custom for SQL Server snapshot (p. 1157)
• Restoring from an RDS Custom for SQL Server DB snapshot (p. 1158)
• Restoring an RDS Custom for SQL Server instance to a point in time (p. 1159)
• Deleting an RDS Custom for SQL Server snapshot (p. 1162)
• Deleting RDS Custom for SQL Server automated backups (p. 1163)
When you create a snapshot, RDS Custom for SQL Server creates an Amazon EBS snapshot for every
volume attached to the DB instance. RDS Custom for SQL Server uses the EBS snapshot of the root
volume to register a new Amazon Machine Image (AMI). To make snapshots easy to associate with a
specific DB instance, they're tagged with DBSnapshotIdentifier, DbiResourceId, and VolumeType.
Creating a DB snapshot results in a brief I/O suspension. This suspension can last from a few seconds to
a few minutes, depending on the size and class of your DB instance. The snapshot creation time varies
with the size of your database. Because the snapshot includes the entire storage volume, the size of files,
such as temporary files, also affects snapshot creation time. To learn more about creating snapshots, see
Creating a DB snapshot (p. 613).
Create an RDS Custom for SQL Server snapshot using the console or the AWS CLI.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of RDS Custom DB instances, choose the instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.
AWS CLI
You create a snapshot of an RDS Custom DB instance by using the create-db-snapshot AWS CLI
command.
• --db-instance-identifier – Identifies which RDS Custom DB instance you are going to back up
• --db-snapshot-identifier – Names your RDS Custom snapshot so you can restore from it later
In this example, you create a DB snapshot called my-custom-snapshot for an RDS Custom DB instance
called my-custom-instance.
Example
For Windows:
The restore process differs from restoring a standard Amazon RDS DB instance in the following ways:
• Before restoring a snapshot, RDS Custom for SQL Server backs up existing configuration files. These
files are available on the restored instance in the directory /rdsdbdata/config/backup. RDS
Custom for SQL Server restores the DB snapshot with default parameters and overwrites the previous
database configuration files with existing ones. Thus, the restored instance doesn't preserve custom
parameters and changes to database configuration files.
• The restored database has the same name as in the snapshot. You can't specify a different name.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore DB instance page, for DB instance identifier, enter the name for your restored RDS
Custom DB instance.
AWS CLI
You restore an RDS Custom DB snapshot by using the restore-db-instance-from-db-snapshot AWS CLI
command.
If the snapshot that you're restoring from is for a private DB instance, make sure to specify both the correct --db-subnet-group-name and --no-publicly-accessible. Otherwise, the DB instance defaults to publicly accessible. The following options are required:
• --db-instance-identifier – Names the new DB instance to create from the snapshot
• --db-snapshot-identifier – Identifies the DB snapshot to restore from
The following code restores the snapshot named my-custom-snapshot for my-custom-instance.
Example
For Linux, macOS, or Unix:
For Windows:
The latest restorable time for an RDS Custom for SQL Server DB instance depends on several factors,
but is typically within 5 minutes of the current time. To see the latest restorable time for a DB
instance, use the AWS CLI describe-db-instances command and look at the value returned in the
LatestRestorableTime field for the DB instance. To see the latest restorable time for each DB
instance in the Amazon RDS console, choose Automated backups.
You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.
For general information about PITR, see Restoring a DB instance to a specified time (p. 660).
Topics
• PITR considerations for RDS Custom for SQL Server (p. 1160)
PITR considerations for RDS Custom for SQL Server
• PITR only restores the databases in the DB instance. It doesn't restore the operating system or files on
the C: drive.
• For an RDS Custom for SQL Server DB instance, a database is backed up automatically and is eligible
for PITR only under the following conditions:
• The database is online.
• Its recovery model is set to FULL.
• It's writable.
• It has its physical files on the D: drive.
• It's not listed in the rds_pitr_blocked_databases table. For more information, see Making
databases ineligible for PITR (p. 1160).
• RDS Custom for SQL Server allows up to 5,000 databases per DB instance. However, the maximum
number of databases restored by a PITR operation for an RDS Custom for SQL Server DB instance is
100. The 100 databases are determined by the order of their database ID.
Other databases that aren't part of PITR can be restored from DB snapshots, including the automated
backups used for PITR.
• Adding a new database, renaming a database, or restoring a database that is eligible for PITR initiates
a snapshot of the DB instance.
• Restored databases have the same name as in the source DB instance. You can't specify a different
name.
• AWSRDSCustomSQLServerIamRolePolicy requires new permissions. For more information, see Add
an access policy to AWSRDSCustomSQLServerInstanceRole (p. 1105).
• Time zone changes aren't supported for RDS Custom for SQL Server. If you change the operating
system or DB instance time zone, PITR (and other automation) doesn't work.
You can specify that certain RDS Custom for SQL Server databases aren't part of automated backups and
PITR. To do this, put their database_id values into a rds_pitr_blocked_databases table. Use the
following SQL script to create the table.
For the list of eligible and ineligible databases, see the RI.End file in the RDSCustomForSQLServer/
Instances/DB_instance_resource_ID/TransactionLogMetadata directory in the Amazon S3
bucket do-not-delete-rds-custom-$ACCOUNT_ID-$REGION-unique_identifier. For more
information about the RI.End file, see Transaction logs in Amazon S3 (p. 1161).
Transaction logs in Amazon S3
The backup retention period determines whether transaction logs for RDS Custom for SQL Server
DB instances are automatically extracted and uploaded to Amazon S3. A nonzero value means that
automatic backups are created, and that the RDS Custom agent uploads the transaction logs to S3 every
5 minutes.
Transaction log files on S3 are encrypted at rest using the AWS KMS key that you provided when you
created your DB instance. For more information, see Protecting data using server-side encryption in the
Amazon Simple Storage Service User Guide.
The transaction logs for each database are uploaded to an S3 bucket named do-not-delete-
rds-custom-$ACCOUNT_ID-$REGION-unique_identifier. The RDSCustomForSQLServer/
Instances/DB_instance_resource_ID directory in the S3 bucket contains two subdirectories:
• TransactionLogs – Contains the transaction logs for each database and their respective metadata.
The transaction log file name follows the pattern yyyyMMddHHmm.database_id.timestamp, for
example:
202110202230.11.1634769287
The same file name with the suffix _metadata contains information about the transaction log such as
log sequence numbers, database name, and RdsChunkCount. RdsChunkCount determines how many
physical files represent a single transaction log file. You might see files with suffixes _0001, _0002,
and so on, which are the physical chunks of a transaction log file. If you want to use a chunked
transaction log file, make sure to merge the chunks after downloading them. For example, suppose
that RdsChunkCount is 3. The order for merging the files is the following:
202110202230.11.1634769287, 202110202230.11.1634769287_0001,
202110202230.11.1634769287_0002.
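As a sketch, the chunks from this example could be merged with cat on Linux or macOS. The stand-in files created at the top exist only to make the sketch self-contained; in practice you would run the final command on the files downloaded from S3:

```shell
# Stand-ins for the downloaded chunk files (illustration only).
printf 'part0' > 202110202230.11.1634769287
printf 'part1' > 202110202230.11.1634769287_0001
printf 'part2' > 202110202230.11.1634769287_0002

# Merge the chunks in the documented order into one transaction log file.
cat 202110202230.11.1634769287 \
    202110202230.11.1634769287_0001 \
    202110202230.11.1634769287_0002 > merged.11.1634769287
```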
• TransactionLogMetadata – Contains metadata information about each iteration of transaction log
extraction.
The RI.End file contains information for all databases that had their transaction logs extracted, and
all databases that exist but didn't have their transaction logs extracted. The RI.End file name follows
the pattern yyyyMMddHHmm.RI.End.timestamp, for example:
202110202230.RI.End.1634769281
You can restore an RDS Custom for SQL Server DB instance to a point in time using the AWS
Management Console, the AWS CLI, or the RDS API.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
If you chose Custom, enter the date and time to which you want to restore the instance.
Times are shown in your local time zone, which is indicated by an offset from Coordinated Universal
Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored RDS Custom DB instance. The
name must be unique.
7. Choose other options as needed, such as DB instance class.
8. Choose Restore to point in time.
AWS CLI
You restore a DB instance to a specified time by using the restore-db-instance-to-point-in-time AWS CLI
command to create a new RDS Custom DB instance.
Use one of the following options to specify the backup to restore from:
• --source-db-instance-identifier mysourcedbinstance
• --source-dbi-resource-id dbinstanceresourceID
• --source-db-instance-automated-backups-arn backupARN
Example
For Windows:
Deleting an RDS Custom for SQL Server snapshot
The Amazon EBS snapshots for the binary and root volumes remain in your account for a longer time
because they might be linked to some instances running in your account or to other RDS Custom for SQL
Server snapshots. These EBS snapshots are automatically deleted after they're no longer related to any
existing RDS Custom for SQL Server resources (DB instances or backups).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to delete.
4. For Actions, choose Delete snapshot.
5. Choose Delete on the confirmation page.
AWS CLI
To delete an RDS Custom snapshot, use the AWS CLI command delete-db-snapshot.
Example
For Windows:
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose Retained.
4. Choose the retained automated backup that you want to delete.
5. For Actions, choose Delete.
AWS CLI
You can delete a retained automated backup by using the AWS CLI command delete-db-instance-
automated-backup.
• --dbi-resource-id – The resource identifier for the source RDS Custom DB instance.
You can find the resource identifier for the source DB instance of a retained automated backup by
using the AWS CLI command describe-db-instance-automated-backups.
The following example deletes the retained automated backup with source DB instance resource
identifier custom-db-123ABCEXAMPLE.
Example
For Windows:
Migrating an on-premises database to RDS Custom for SQL Server
This process explains the migration of a database from on-premises to RDS Custom for SQL Server, using
native full backup and restore. To reduce the cutover time during the migration process, you might also
consider using differential or log backups.
For general information about native backup and restore for RDS for SQL Server, see Importing and
exporting SQL Server databases using native backup and restore (p. 1419).
Topics
• Prerequisites (p. 1165)
• Backing up the on-premises database (p. 1165)
• Uploading the backup file to Amazon S3 (p. 1166)
• Downloading the backup file from Amazon S3 (p. 1166)
• Restoring the backup file to the RDS Custom for SQL Server DB instance (p. 1166)
Prerequisites
Perform the following tasks before migrating the database:
1. Configure Remote Desktop Connection (RDP) for your RDS Custom for SQL Server DB instance. For
more information, see Connecting to your RDS Custom DB instance using RDP (p. 1135).
2. Configure access to Amazon S3 so you can upload and download the database backup file. For more
information, see Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464).
The following example shows a backup of a database called mydatabase, with the COMPRESSION
option specified to reduce the backup file size.
1. Using SQL Server Management Studio (SSMS), connect to the on-premises SQL Server instance.
2. Run the following T-SQL command.
Uploading the backup file to Amazon S3
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. For Buckets, choose the name of the bucket to which you want to upload your backup file.
3. Choose Upload.
4. In the Upload window, add your backup file, either by dragging it into the window or by choosing Add files.
Amazon S3 uploads your backup file as an S3 object. When the upload completes, you can see a
success message on the Upload: status page.
1. Using RDP, connect to your RDS Custom for SQL Server DB instance.
2. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
3. In the Buckets list, choose the name of the bucket that contains your backup file.
4. Choose the backup file mydb-full-compressed.bak.
5. For Actions, choose Download as.
6. Open the context (right-click) menu for the link provided, then choose Save As.
7. Save mydb-full-compressed.bak to the D:\rdsdbdata\BACKUP directory.
Restoring the backup file to the RDS Custom for SQL Server DB instance
You use SQL Server native restore to restore the backup file to your RDS Custom for SQL Server DB
instance.
In this example, the MOVE option is specified because the data and log file directories are different from
the on-premises DB instance.
1. Using SSMS, connect to your RDS Custom for SQL Server DB instance.
2. Run the following T-SQL command.
Upgrading a DB instance for RDS Custom for SQL Server
The same limitations for upgrading an RDS Custom for SQL Server DB instance apply as for modifying an
RDS Custom for SQL Server DB instance in general. For more information, see Modifying an RDS Custom
for SQL Server DB instance (p. 1141).
For general information about upgrading DB instances, see Upgrading a DB instance engine
version (p. 429).
Troubleshooting Amazon RDS Custom for SQL Server
Topics
• Viewing RDS Custom events (p. 1169)
• Troubleshooting CEV errors for RDS Custom for SQL Server (p. 1170)
• Fixing unsupported configurations in RDS Custom for SQL Server (p. 1172)
To view RDS Custom event notifications using the AWS CLI, use the describe-events command. RDS
Custom introduces several new events. The event categories are the same as for Amazon RDS. For the list
of events, see Amazon RDS event categories and event messages (p. 874).
The following example retrieves details for the events that have occurred for the specified RDS Custom
DB instance.
To subscribe to RDS Custom event notification using the CLI, use the create-event-subscription
command. Include the following required parameters:
• --subscription-name
• --sns-topic-arn
The following example creates a subscription for backup and recovery events for an RDS Custom DB
instance in the current AWS account. Notifications are sent to an Amazon Simple Notification Service
(Amazon SNS) topic, specified by --sns-topic-arn.
Troubleshooting CEV errors for RDS Custom for SQL Server
Message: EC2 Image permissions for image (AMI_ID) weren't found for customer (Customer_ID). Verify customer (Customer_ID) has valid permissions on the EC2 Image.
Resolution: Verify that the account and profile used for creation have the required permissions to create an EC2 instance and describe images for the selected AMI.

Message: Image (AMI_ID) doesn't exist in your account (ACCOUNT_ID). Verify (ACCOUNT_ID) is the owner of the EC2 image.
Resolution: Make sure that the AMI exists in the same customer account.

Message: SQL Server Web Edition isn't supported for creating a Custom Engine Version using Bring Your Own Media. Specify a valid image, and try again.
Resolution: Use an AMI that contains a supported edition of SQL Server. For more information, see Version support for RDS Custom for SQL Server CEVs (p. 1119).

Message: The custom engine version can't be the same as the OEV engine version. Specify a valid CEV, and try again.
Resolution: Classic RDS Custom for SQL Server engine versions aren't supported (for example, version 15.00.4073.23.v1). Use a supported version number.
Message: The custom engine version isn't valid for an upgrade. Specify a valid CEV with an engine version greater than or equal to (X), and try again.
Resolution: The target CEV isn't valid. Check the requirements for a valid upgrade path.

Message: The expected owner of image (AMI_ID) is customer account ID (ACCOUNT_ID), but owner (ACCOUNT_ID) was found.
Resolution: Create the EC2 instance from an AMI that you have permission for. Run Sysprep on the EC2 instance to create and save a base image.

Message: The expected root device type is (X) for image %s, but root device type (Y) was found.
Resolution: Create the AMI with the EBS root device type.
Fixing unsupported configurations in RDS Custom for SQL Server
In the following table, you can find descriptions of the notifications and events that the support
perimeter sends and how to fix them. These notifications and the support perimeter are subject to
change. For background on the support perimeter, see RDS Custom support perimeter (p. 985). For event
descriptions, see Amazon RDS event categories and event messages (p. 874).
Database

Database health
Notification: You need to manually recover the database on EC2 instance [i-xxxxxxxxxxxxxxxxx]. The DB instance restarted.
Monitoring: The support perimeter monitors the DB instance state. It also monitors how many restarts occurred during the previous hour and day. You're notified when the instance is in such a state.
Resolution: Log in to the host and examine the state of your RDS Custom for SQL Server database:

ps -eo pid,state,command | grep smon

If necessary, restart your RDS Custom for SQL Server DB instance to get it running again. Sometimes you might need to reboot the host.
Database file locations
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for database file location.
Monitoring: All SQL Server database files are stored on the D: drive by default, in the D:\rdsdbdata\DATA directory. If you create or alter the database file location to be anywhere other than the D: drive, then RDS Custom places the DB instance outside the support perimeter. We strongly recommend that you don't save any database files on the C: drive. You can lose data on the C: drive during certain operations, such as hardware failure. Storage on the C: drive doesn't offer the same durability as on the D: drive, which is an EBS volume.
Resolution: Store all RDS Custom for SQL Server database files on the D: drive.
Shared memory connections
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for shared memory protocol.
Monitoring: The RDS Custom agent on the EC2 host connects to SQL Server using the shared memory protocol. If this protocol is turned off (Enabled is set to No), then RDS Custom can't perform its management actions and places the DB instance outside the support perimeter.
Resolution: To bring the RDS Custom for SQL Server DB instance back within the support perimeter, turn on the shared memory protocol on the Protocol page of the Shared Memory Properties window by setting Enabled to Yes. After you enable the protocol, restart SQL Server.
Operating system
RDS Custom agent status
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for RDS Custom agent.
Monitoring: The RDS Custom agent must always be running. The support perimeter monitors the RDS Custom agent process state on the host every 1 minute.
Resolution: Log in to the host and make sure that the RDS Custom agent is running. You can use the following commands to find the agent's status.

$name = "RDSCustomAgent"
$service = Get-Service $name
Write-Host $service.Status
SSM agent status
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for SSM agent.
Monitoring: The SSM agent must always be running. The RDS Custom agent is responsible for making sure that the Systems Manager agent is running.
Resolution: For more information, see Troubleshooting SSM Agent.
AWS resources
Amazon EC2 instance state
Notifications:
• The state of the EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [RUNNING] to [STOPPING].
• The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been terminated and can't be found. Delete the database instance to clean up resources.
• The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been stopped. Start the instance, and restore the host configuration. For more information, see the troubleshooting documentation.
Monitoring: The support perimeter monitors EC2 instance state-change notifications. The EC2 instance must always be running. The AMI associated with a CEV should always be active and available.
Resolution: If the EC2 instance is stopped, start it and remount the binary and data volumes. If the EC2 instance is terminated, RDS Custom performs an automated recovery to provision a new EC2 instance.
Amazon EC2 instance attributes
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for EC2 instance metadata.
Monitoring: The support perimeter monitors the instance type of the EC2 instance where the RDS Custom DB instance is running. The EC2 instance type must stay the same as when you set it up during RDS Custom DB instance creation.
Resolution: Change the EC2 instance type back to the original type using the EC2 console or CLI. To change the instance type because of scaling requirements, do a PITR and specify the new instance type and class. However, doing this results in a new RDS Custom DB instance with a new host and Domain Name System (DNS) name.
Amazon Elastic Block Store (Amazon EBS) volumes
Notification: The RDS Custom instance is going out of perimeter because an unsupported configuration was used for EBS volume metadata.
Monitoring: RDS Custom creates two types of EBS volume, besides the root volume created from the Amazon Machine Image (AMI), and associates them with the EC2 instance. The binary volume is where the database software binaries are located. The data volumes are where database files are located. The storage configurations that you set when creating the DB instance are used to configure the data volumes.
Resolution: If you detached any initial EBS volumes, contact AWS Support. If you modified the storage type, Provisioned IOPS, or storage throughput of an EBS volume, revert the modification to the original value. If you modified the storage size of an EBS volume, contact AWS Support.
You use the same AWS Management Console, AWS CLI, and RDS API to provision and manage on-
premises RDS on Outposts DB instances as you do for RDS DB instances running in the AWS Cloud. RDS
on Outposts automates tasks, such as database provisioning, operating system and database patching,
backup, and long-term archival in Amazon S3.
RDS on Outposts supports automated backups of DB instances. Network connectivity between your
Outpost and your AWS Region is required to back up and restore DB instances. All DB snapshots and
transaction logs from an Outpost are stored in your AWS Region. From your AWS Region, you can restore
a DB instance from a DB snapshot to a different Outpost. For more information, see Working with
backups (p. 591).
RDS on Outposts supports automated maintenance and upgrades of DB instances. For more information,
see Maintaining a DB instance (p. 418).
RDS on Outposts uses encryption at rest for DB instances and DB snapshots using your AWS KMS key. For
more information about encryption at rest, see Encrypting Amazon RDS resources (p. 2586).
By default, EC2 instances in Outposts subnets can use the Amazon Route 53 DNS Service to resolve
domain names to IP addresses. You might encounter longer DNS resolution times with Route 53,
depending on the path latency between your Outpost and the AWS Region. In such cases, you can use
the DNS servers installed locally in your on-premises environment. For more information, see DNS in the
AWS Outposts User Guide.
When network connectivity to the AWS Region isn't available, your DB instance continues to run locally.
You can continue to access DB instances using DNS name resolution by configuring a local DNS server
as a secondary server. However, you can't create new DB instances or take new actions on existing DB
instances. Automatic backups don't occur when there is no connectivity. If there is a DB instance failure,
the DB instance isn't automatically replaced until connectivity is restored. We recommend restoring
network connectivity as soon as possible.
Topics
• Prerequisites for Amazon RDS on AWS Outposts (p. 1178)
• Amazon RDS on AWS Outposts support for Amazon RDS features (p. 1179)
• Supported DB instance classes for Amazon RDS on AWS Outposts (p. 1182)
• Customer-owned IP addresses for Amazon RDS on AWS Outposts (p. 1184)
• Working with Multi-AZ deployments for Amazon RDS on AWS Outposts (p. 1186)
• Creating DB instances for Amazon RDS on AWS Outposts (p. 1189)
• Creating read replicas for Amazon RDS on AWS Outposts (p. 1196)
• Considerations for restoring DB instances on Amazon RDS on AWS Outposts (p. 1198)
Prerequisites
• Install AWS Outposts in your on-premises data center. For more information about AWS Outposts, see
AWS Outposts.
• Make sure that you have at least one subnet available for RDS on Outposts. You can use the same
subnet for other workloads.
• Make sure that you have a reliable network connection between your Outpost and an AWS Region.
Support for Amazon RDS features
Read replicas
Supported: Yes. Read replicas are supported for MySQL and PostgreSQL DB instances. For more information, see Creating read replicas for Amazon RDS on AWS Outposts (p. 1196).
Kerberos authentication
Supported: No. For more information, see Kerberos authentication (p. 2567).
Restoring from a DB snapshot
Supported: Yes. You can store automated backups and manual snapshots for the restored DB instance in the parent AWS Region or locally on your Outpost. For more information, see Considerations for restoring DB instances on Amazon RDS on AWS Outposts (p. 1198) and Restoring from a DB snapshot (p. 615).
Amazon CloudWatch monitoring
Supported: Yes. You can view the same set of metrics that are available for your databases in the AWS Region. For more information, see Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706).
Supported DB instance classes
Depending on how you've configured your Outpost, you might not have all of these classes available. For
example, if you haven't purchased the db.r5 classes for your Outpost, you can't use them with RDS on
Outposts.
Only general purpose SSD storage is supported for RDS on Outposts DB instances. For more information
about DB instance classes, see DB instance classes (p. 11).
Amazon RDS manages maintenance and recovery for your DB instances and requires active capacity on
the Outpost to do so. We recommend that you configure N+1 EC2 instances for each DB instance class
in your production environments. RDS on Outposts can use the extra capacity of these EC2 instances for
maintenance and repair operations. For example, if your production environments run 3 db.m5.large
and 5 db.r5.xlarge DB instances, then we recommend that you have at least 4 m5.large EC2
instances and 6 r5.xlarge EC2 instances. For more information, see Resilience in AWS Outposts in the
AWS Outposts User Guide.
Customer-owned IP addresses
Each RDS on Outposts DB instance has a private IP address for traffic inside its virtual private cloud
(VPC). This private IP address isn't publicly accessible. You can use the Public option to set whether the
DB instance also has a public IP address in addition to the private IP address. Using the public IP address
for connections routes them through the internet and can result in high latencies in some cases.
Instead of using these private and public IP addresses, RDS on Outposts supports using CoIPs for DB
instances through their subnets. When you use a CoIP for an RDS on Outposts DB instance, you connect
to the DB instance with the DB instance endpoint. RDS on Outposts then automatically uses the CoIP for
all connections from both inside and outside of the VPC.
CoIPs can provide the following benefits for RDS on Outposts DB instances:
Using CoIPs
You can turn CoIPs on or off for an RDS on Outposts DB instance using the AWS Management Console,
the AWS CLI, or the RDS API:
• With the AWS Management Console, choose the Customer-owned IP address (CoIP) setting in Access
type to use CoIPs. Choose one of the other settings to turn them off.
You can turn CoIPs on or off when you perform any of the following actions:
• Create a DB instance
For more information, see Creating DB instances for Amazon RDS on AWS Outposts (p. 1189).
• Modify a DB instance
For more information, see Modifying an Amazon RDS DB instance (p. 401).
• Create a read replica
For more information, see Creating read replicas for Amazon RDS on AWS Outposts (p. 1196).
• Restore a DB instance from a snapshot
For more information, see Restoring a DB instance to a specified time (p. 660).
Note
In some cases, you might turn on CoIPs for a DB instance but Amazon RDS isn't able to allocate
a CoIP for the DB instance. In such cases, the DB instance status is changed to incompatible-network.
For more information about the DB instance status, see Viewing Amazon RDS DB
instance status (p. 684).
Limitations
The following limitations apply to CoIP support for RDS on Outposts DB instances:
• When using a CoIP for a DB instance, make sure that public accessibility is turned off for that DB
instance.
• Make sure that the inbound rules for your VPC security groups include the CoIP address range (CIDR
block). For more information about setting up security groups, see Provide access to your DB instance
in your VPC by creating a security group (p. 177).
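The inbound rule described above might be added with the AWS CLI as in the following sketch. The security group ID, database port, and CoIP CIDR block are hypothetical values; substitute your own.

```shell
# Allow inbound MySQL traffic (port 3306) from a customer-owned IP range.
# sg-0123456789abcdef0 and 10.1.0.0/24 are placeholder values.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 10.1.0.0/24
```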
• You can't assign a CoIP from a CoIP pool to a DB instance. When you use a CoIP for a DB instance,
Amazon RDS automatically assigns a CoIP from a CoIP pool to the DB instance.
• You must use the AWS account that owns the Outpost resources (owner) or share the following
resources with other AWS accounts (consumers) in the same organization:
• The Outpost
• The local gateway (LGW) route table for the DB instance's VPC
• The CoIP pool or pools for the LGW route table
For more information, see Working with shared AWS Outposts resources in the AWS Outposts User
Guide.
Multi-AZ deployments
Multi-AZ deployments on AWS Outposts operate like Multi-AZ deployments in AWS Regions, but with
the following differences:
Multi-AZ on AWS Outposts is available for all supported versions of MySQL and PostgreSQL on RDS on
Outposts. Local backups aren't supported for Multi-AZ deployments. For more information, see Creating
DB instances for Amazon RDS on AWS Outposts (p. 1189).
RDS on Outposts also requires connectivity between the Outpost that is hosting the primary DB instance
and the Outpost that is hosting the standby DB instance for synchronous replication. Any impact to this
connection can prevent RDS on Outposts from performing a failover.
You might see elevated latencies for a standard DB instance deployment as a result of the synchronous
data replication. The bandwidth and latency of the connection between the Outpost hosting the
primary DB instance and the Outpost hosting the standby DB instance directly affect latencies. For more
information, see Prerequisites (p. 1187).
Improving availability
We recommend the following actions to improve availability:
• Allocate enough additional capacity for your mission-critical applications to allow recovery and failover
if there is an underlying host issue. This applies to all Outposts that contain subnets in your DB subnet
group. For more information, see Resilience in AWS Outposts.
• Provide redundant network connectivity for your Outposts.
• Use more than two Outposts. Having more than two Outposts allows Amazon RDS to recover a DB
instance. RDS does this recovery by moving the DB instance to another Outpost if the current Outpost
experiences a failure.
• Provide dual power sources and redundant network connectivity for your Outpost.
• The round trip time (RTT) latency between the Outpost hosting your primary DB instance and the
Outpost hosting your standby DB instance directly affects write latency. Keep the RTT latency between
the AWS Outposts in the low single-digit milliseconds. We recommend not more than 5 milliseconds,
but your requirements might vary.
You can find the net impact to network latency in the Amazon CloudWatch metrics for
WriteLatency. For more information, see Amazon CloudWatch metrics for Amazon RDS (p. 806).
• The availability of the connection between the Outposts affects the overall availability of your DB
instances. Have redundant network connectivity between the Outposts.
Prerequisites
Multi-AZ deployments on RDS on Outposts have the following prerequisites:
• Have at least two Outposts, connected over local connections and attached to different Availability
Zones in an AWS Region.
• Make sure that your DB subnet groups contain the following:
• At least two subnets in at least two Availability Zones in a given AWS Region.
• Subnets only in Outposts.
• At least two subnets in at least two Outposts within the same virtual private cloud (VPC).
• Associate your DB instance's VPC with all of your local gateway route tables. This association is
necessary because replication runs over your local network using your Outposts' local gateways.
For example, suppose that your VPC contains subnet-A in Outpost-A and subnet-B in Outpost-B.
Outpost-A uses LocalGateway-A (LGW-A), and Outpost-B uses LocalGateway-B (LGW-B). LGW-A has
RouteTable-A, and LGW-B has RouteTable-B. You want to use both RouteTable-A and RouteTable-B for
replication traffic. To do this, associate your VPC with both RouteTable-A and RouteTable-B.
For more information about how to create an association, see the Amazon EC2
create-local-gateway-route-table-vpc-association AWS CLI command.
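Continuing the example above, the two associations might be created as follows. The local gateway route table IDs and the VPC ID are hypothetical values.

```shell
# Associate the VPC with RouteTable-A and RouteTable-B (placeholder IDs).
aws ec2 create-local-gateway-route-table-vpc-association \
    --local-gateway-route-table-id lgw-rtb-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

aws ec2 create-local-gateway-route-table-vpc-association \
    --local-gateway-route-table-id lgw-rtb-0abcdef0123456789 \
    --vpc-id vpc-0123456789abcdef0
```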
• Make sure that your Outposts use customer-owned IP (CoIP) routing. Each local gateway route table
must have at least one CoIP pool. Amazon RDS allocates an additional IP address for each of the
primary and standby DB instances for data synchronization.
• Make sure that the AWS account that owns the RDS DB instances owns the local gateway route tables
and CoIP pools. Or make sure it's part of a Resource Access Manager share with access to the local
gateway route tables and CoIP pools.
• Make sure that the IP addresses in your CoIP pools can be routed from one Outpost local gateway to
the others.
• Make sure that the VPC's CIDR blocks (for example, 10.0.0.0/16) and your CoIP pool CIDR blocks don't
contain IP addresses from Class E (240.0.0.0/4). RDS uses these IP addresses internally.
• Make sure that you correctly set up outbound and related inbound traffic.
RDS on Outposts establishes a virtual private network (VPN) connection between the primary and
standby DB instances. For this to work correctly, your local network must allow outbound and related
inbound traffic for Internet Security Association and Key Management Protocol (ISAKMP), which uses
User Datagram Protocol (UDP) port 500, and for IP Security (IPsec) Network Address Translation
Traversal (NAT-T), which uses UDP port 4500.
For more information on CoIPs, see Customer-owned IP addresses for Amazon RDS on AWS
Outposts (p. 1184) in this guide, and Customer-owned IP addresses in the AWS Outposts User Guide.
Working with API operations for Amazon EC2 permissions
These API operations grant to, or remove from, internal RDS accounts the permission to allocate elastic
IP addresses from the CoIP pool specified by the permission. You can view these IP addresses using the
DescribeCoipPoolUsage API operation. For more information on CoIPs, see Customer-owned IP
addresses for Amazon RDS on AWS Outposts (p. 1184) and Customer-owned IP addresses in the AWS
Outposts User Guide.
RDS can also call the following EC2 permission API operations for local gateway route tables on your
behalf for Multi-AZ deployments:
These API operations grant to, or remove from, internal RDS accounts the permission to associate
internal RDS VPCs with your local gateway route tables. You can view these route table–VPC associations
using the DescribeLocalGatewayRouteTableVpcAssociations API operation.
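For example, you might view the associations for a single VPC with the AWS CLI as sketched below; the VPC ID is a placeholder value.

```shell
# List local gateway route table-VPC associations, filtered to one VPC.
# vpc-0123456789abcdef0 is a placeholder value.
aws ec2 describe-local-gateway-route-table-vpc-associations \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0
```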
Creating DB instances for RDS on Outposts
A virtual private cloud (VPC) based on the Amazon VPC service can span all of the Availability Zones
in an AWS Region. You can extend any VPC in the AWS Region to your Outpost by adding an Outpost
subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost
when you create the subnet.
Before you create an RDS on Outposts DB instance, you can create a DB subnet group that includes one
subnet that is associated with your Outpost. When you create an RDS on Outposts DB instance, specify
this DB subnet group. You can also choose to create a new DB subnet group when you create your DB
instance.
For information about configuring AWS Outposts, see the AWS Outposts User Guide.
Console
Creating a DB subnet group
Create a DB subnet group with one subnet that is associated with your Outpost.
You can also create a new DB subnet group for the Outpost when you create your DB instance. If you
want to do so, then skip this procedure.
Note
To create a DB subnet group for the AWS Cloud, specify at least two subnets.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where you want to
create the DB subnet group.
3. Choose Subnet groups, and then choose Create DB Subnet Group.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region to which the Outpost
where you want to create the DB instance is attached.
3. In the navigation pane, choose Databases.
4. Choose Create database.
The AWS Management Console detects available Outposts that you have configured and presents
the On-premises option in the Database location section.
Note
If you haven't configured any Outposts, either the Database location section doesn't appear
or the RDS on Outposts option isn't available in the Choose an on-premises creation
method section.
5. For Database location, choose On-premises.
6. For On-premises creation method, choose RDS on Outposts.
7. Specify your settings for Outposts Connectivity. These settings are for the Outpost that uses the
VPC that has the DB subnet group for your DB instance. Your VPC must be based on the Amazon
VPC service.
a. For Virtual Private Cloud (VPC), choose the VPC that contains the DB subnet group for your DB
instance.
b. For VPC security group, choose the Amazon VPC security group for your DB instance.
c. For DB subnet group, choose the DB subnet group for your DB instance.
You can choose an existing DB subnet group that's associated with the Outpost—for example, if
you performed the procedure in Creating a DB subnet group (p. 1189).
You can also create a new DB subnet group for the Outpost.
8. For Multi-AZ deployment, choose Create a standby instance (recommended for production usage)
to create a standby DB instance in another Outpost.
Note
This option isn't available for Microsoft SQL Server.
If you choose to create a Multi-AZ deployment, you can't store backups on your Outpost.
9. Under Backup, do the following:
a. For Backup target, choose one of the following:
• AWS Cloud to store automated backups and manual snapshots in the parent AWS Region.
• Outposts (on-premises) to create local backups.
Note
To store backups on your Outpost, your Outpost must have Amazon S3 capability.
For more information, see Amazon S3 on Outposts.
Local backups aren't supported for Multi-AZ deployments or read replicas.
b. Choose Enable automated backups to create point-in-time snapshots of your DB instance.
If you turn on automated backups, then you can choose values for Backup retention period and
Backup window, or leave the default values.
10. Specify other DB instance settings as needed.
For information about each setting when creating a DB instance, see Settings for DB
instances (p. 308).
The Databases page appears. A banner tells you that your DB instance is being created, and displays
the View credential details button.
1. To view the master user name and password for the DB instance, choose View credential details on
the Databases page.
You can connect to the DB instance as the master user by using these credentials.
Important
You can't view the master user password again. If you don't record it, you might have to
change it. To change the master user password after the DB instance is available, modify
the DB instance. For more information about modifying a DB instance, see Modifying an
Amazon RDS DB instance (p. 401).
2. Choose the name of the new DB instance on the Databases page.
On the RDS console, the details for the new DB instance appear. The DB instance has a status of
Creating until the DB instance is created and ready for use. When the state changes to Available,
you can connect to the DB instance. Depending on the DB instance class and storage allocated, it can
take several minutes for the new DB instance to be available.
After the DB instance is available, you can manage it the same way that you manage RDS DB
instances in the AWS Cloud.
AWS CLI
Before you create a new DB instance in an Outpost with the AWS CLI, first create a DB subnet group for
use by RDS on Outposts.
• Use the create-db-subnet-group command. For --subnet-ids, specify the ID of the subnet in the
Outpost for use by RDS on Outposts.
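A minimal sketch of the command follows. The subnet group name, description, and subnet ID are hypothetical values; substitute your Outpost subnet.

```shell
# Create a DB subnet group containing the single Outpost subnet.
# The name, description, and subnet ID are placeholder values.
aws rds create-db-subnet-group \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --db-subnet-group-description "DB subnet group for RDS on Outposts" \
    --subnet-ids subnet-0123456789abcdef0
```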
• Use the create-db-instance command. Specify an Availability Zone for the Outpost, an Amazon VPC
security group associated with the Outpost, and the DB subnet group you created for the Outpost.
You can include the following options:
• --db-instance-identifier
• --db-instance-class
• --engine – The database engine. Use one of the following values:
• MySQL – Specify mysql.
• PostgreSQL – Specify postgres.
• Microsoft SQL Server – Specify sqlserver-ee, sqlserver-se, or sqlserver-web.
• --availability-zone
• --vpc-security-group-ids
• --db-subnet-group-name
• --allocated-storage
• --max-allocated-storage
• --master-username
• --master-user-password
• --multi-az | --no-multi-az – (Optional) Whether to create a standby DB instance in a
different Availability Zone. The default is --no-multi-az.
If you use the --multi-az option, you can't use outposts for --backup-target. In addition,
the DB instance can't have read replicas if you use outposts for --backup-target.
• --storage-encrypted
• --kms-key-id
Example
The following example creates a MySQL DB instance named myoutpostdbinstance with backups
stored on your Outpost.
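The command might look like the following sketch. Apart from the instance name and the engine value, every setting below (Availability Zone, security group, subnet group, storage sizes, credentials, and KMS key) is a placeholder value.

```shell
# Create a MySQL DB instance on an Outpost with local backups
# (--backup-target outposts). All IDs and names are placeholder values.
aws rds create-db-instance \
    --db-instance-identifier myoutpostdbinstance \
    --db-instance-class db.m5.large \
    --engine mysql \
    --availability-zone us-east-1a \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --allocated-storage 100 \
    --max-allocated-storage 1000 \
    --master-username masterawsuser \
    --master-user-password masteruserpassword \
    --backup-target outposts \
    --storage-encrypted \
    --kms-key-id mykmskey
```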
For information about each setting when creating a DB instance, see Settings for DB instances (p. 308).
RDS API
To create a new DB instance in an Outpost with the RDS API, first create a DB subnet group for use by
RDS on Outposts by calling the CreateDBSubnetGroup operation. For SubnetIds, specify the ID of
the subnet in the Outpost for use by RDS on Outposts.
Next, call the CreateDBInstance operation with the following parameters. Specify an Availability Zone for
the Outpost, an Amazon VPC security group associated with the Outpost, and the DB subnet group you
created for the Outpost.
• AllocatedStorage
• AvailabilityZone
• BackupRetentionPeriod
• BackupTarget
If you are creating a Multi-AZ DB instance deployment, you can't use outposts for BackupTarget. In
addition, the DB instance can't have read replicas if you use outposts for BackupTarget.
• DBInstanceClass
• DBInstanceIdentifier
• VpcSecurityGroupIds
• DBSubnetGroupName
• Engine
• EngineVersion
• MasterUsername
• MasterUserPassword
• MaxAllocatedStorage (optional)
• MultiAZ (optional)
• StorageEncrypted
• KmsKeyID
For information about each setting when creating a DB instance, see Settings for DB instances (p. 308).
Creating read replicas for RDS on Outposts
When you create a read replica from an RDS on Outposts DB instance, the read replica uses a
customer-owned IP address (CoIP). For more information, see Customer-owned IP addresses for Amazon RDS on
AWS Outposts (p. 1184).
• You can't create read replicas for RDS for SQL Server on RDS on Outposts DB instances.
• Cross-Region read replicas aren't supported on RDS on Outposts.
• Cascading read replicas aren't supported on RDS on Outposts.
• The source RDS on Outposts DB instance can't have local backups. The backup target for the source DB
instance must be your AWS Region.
• Read replicas require customer-owned IP (CoIP) pools. For more information, see Customer-owned IP
addresses for Amazon RDS on AWS Outposts (p. 1184).
You can create a read replica from an RDS on Outposts DB instance using the AWS Management
Console, AWS CLI, or RDS API. For more information on read replicas, see Working with DB instance read
replicas (p. 438).
Console
To create a read replica from a source DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.
6. Specify your settings for Outposts Connectivity. These settings are for the Outpost that uses the
virtual private cloud (VPC) that has the DB subnet group for your DB instance. Your VPC must be
based on the Amazon VPC service.
7. Choose your DB instance class. We recommend that you use the same or larger DB instance class
and storage type as the source DB instance for the read replica.
8. For Multi-AZ deployment, choose Create a standby instance (recommended for production usage)
to create a standby DB instance in a different Availability Zone.
Creating your read replica as a Multi-AZ DB instance is independent of whether the source database
is a Multi-AZ DB instance.
9. (Optional) Under Connectivity, set values for Subnet Group and Availability Zone.
If you specify values for both Subnet Group and Availability Zone, the read replica is created on an
Outpost that is associated with the Availability Zone in the DB subnet group.
If you specify a value for Subnet Group and No preference for Availability Zone, the read replica is
created on a random Outpost in the DB subnet group.
10. For AWS KMS key, choose the AWS KMS key identifier of the KMS key.
After the read replica is created, you can see it on the Databases page in the RDS console. It shows
Replica in the Role column.
AWS CLI
To create a read replica from a source MySQL or PostgreSQL DB instance, use the AWS CLI command
create-db-instance-read-replica.
You can control where the read replica is created by specifying the --db-subnet-group-name and
--availability-zone options:
• If you specify both the --db-subnet-group-name and --availability-zone options, the read
replica is created on an Outpost that is associated with the Availability Zone in the DB subnet group.
• If you specify the --db-subnet-group-name option and don't specify the --availability-zone
option, the read replica is created on a random Outpost in the DB subnet group.
• If you don't specify either option, the read replica is created on the same Outpost as the source RDS on
Outposts DB instance.
The following example creates a replica and specifies the location of the read replica by including the
--db-subnet-group-name and --availability-zone options.
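Such a command might look like the following sketch. The replica and source identifiers, subnet group name, and Availability Zone are hypothetical values.

```shell
# Create a read replica on the Outpost associated with the given
# Availability Zone in the DB subnet group. All names are placeholders.
aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-instance-identifier myoutpostdbinstance \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --availability-zone us-east-1a
```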
RDS API
To create a read replica from a source MySQL or PostgreSQL DB instance, call the Amazon RDS API
CreateDBInstanceReadReplica operation with the following required parameters:
• DBInstanceIdentifier
• SourceDBInstanceIdentifier
You can control where the read replica is created by specifying the DBSubnetGroupName and
AvailabilityZone parameters:
• If you specify both the DBSubnetGroupName and AvailabilityZone parameters, the read replica is
created on an Outpost that is associated with the Availability Zone in the DB subnet group.
• If you specify the DBSubnetGroupName parameter and don't specify the AvailabilityZone
parameter, the read replica is created on a random Outpost in the DB subnet group.
• If you don't specify either parameter, the read replica is created on the same Outpost as the source
RDS on Outposts DB instance.
Considerations for restoring DB instances
• When restoring from a manual DB snapshot, you can store backups either in the parent AWS Region or
locally on your Outpost.
• When restoring from an automated backup (point-in-time recovery), you have fewer choices:
• If restoring from the parent AWS Region, you can store backups either in the AWS Region or on your
Outpost.
• If restoring from your Outpost, you can store backups only on your Outpost.
Using Amazon RDS Proxy
Using RDS Proxy, you can handle unpredictable surges in database traffic. Otherwise, these surges might
cause issues due to oversubscribing connections or creating new connections at a fast rate. RDS Proxy
establishes a database connection pool and reuses connections in this pool. This approach avoids the
memory and CPU overhead of opening a new database connection each time. To protect the database
against oversubscription, you can control the number of database connections that are created.
RDS Proxy queues or throttles application connections that can't be served immediately from the pool
of connections. Although latencies might increase, your application can continue to scale without
abruptly failing or overwhelming the database. If connection requests exceed the limits you specify, RDS
Proxy rejects application connections (that is, it sheds load). At the same time, it maintains predictable
performance for the load that can be served with the available capacity.
You can reduce the overhead to process credentials and establish a secure connection for each new
connection. RDS Proxy can handle some of that work on behalf of the database.
RDS Proxy is fully compatible with the engine versions that it supports. You can enable RDS Proxy for
most applications with no code changes.
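For example, a MySQL client connects through the proxy endpoint exactly as it would connect to the DB instance endpoint; a minimal sketch, in which the endpoint host name and user name are hypothetical values:

```shell
# Connect through the proxy endpoint instead of the DB instance endpoint.
# The host name and user name below are placeholder values.
mysql -h myproxy.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com \
      -P 3306 -u admin -p
```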
Topics
• Region and version availability (p. 1199)
• Quotas and limitations for RDS Proxy (p. 1199)
• Planning where to use RDS Proxy (p. 1202)
• RDS Proxy concepts and terminology (p. 1203)
• Getting started with RDS Proxy (p. 1207)
• Managing an RDS Proxy (p. 1220)
• Working with Amazon RDS Proxy endpoints (p. 1232)
• Monitoring RDS Proxy metrics with Amazon CloudWatch (p. 1239)
• Working with RDS Proxy events (p. 1244)
• RDS Proxy command-line examples (p. 1245)
• Troubleshooting for RDS Proxy (p. 1247)
• Using RDS Proxy with AWS CloudFormation (p. 1253)
• You can have up to 20 proxies for each AWS account ID. If your application requires more proxies, you
can request additional proxies by opening a ticket with the AWS Support organization.
• Each proxy can have up to 200 associated Secrets Manager secrets. Thus, each proxy can connect
with up to 200 different user accounts at any given time.
• You can create, view, modify, and delete up to 20 endpoints for each proxy. These endpoints are in
addition to the default endpoint that's automatically created for each proxy.
• For RDS DB instances in replication configurations, you can associate a proxy only with the writer DB
instance, not a read replica.
• Your RDS Proxy must be in the same virtual private cloud (VPC) as the database. The proxy can't be
publicly accessible, although the database can be. For example, if you're prototyping on a local host,
you can't connect to your RDS Proxy unless you set up dedicated networking. This is the case because
your local host is outside of the proxy's VPC.
• You can't use RDS Proxy with a VPC that has its tenancy set to dedicated.
• If you use RDS Proxy with an RDS DB instance that has IAM authentication enabled, check user
authentication. Users who connect through a proxy must authenticate through sign-in credentials.
For details about Secrets Manager and IAM support in RDS Proxy, see Setting up database credentials
in AWS Secrets Manager (p. 1209) and Setting up AWS Identity and Access Management (IAM)
policies (p. 1210).
• You can't use RDS Proxy with custom DNS when using SSL hostname validation.
• Each proxy can be associated with a single target DB instance. However, you can associate multiple
proxies with the same DB instance.
• Any statement with a text size greater than 16 KB causes the proxy to pin the session to the current
connection.
For additional limitations for each DB engine, see the following sections:
RDS for MariaDB limitations
• Currently, all proxies listen on port 3306 for MariaDB. The proxies still connect to your database using
the port that you specified in the database settings.
• You can't use RDS Proxy with self-managed MariaDB databases in Amazon EC2 instances.
• You can't use RDS Proxy with an RDS for MariaDB DB instance that has the read_only parameter in
its DB parameter group set to 1.
• RDS Proxy doesn't support compressed mode. For example, it doesn't support the compression used
by the --compress or -C options of the mysql command.
• Some SQL statements and functions can change the connection state without causing pinning. For the
most current pinning behavior, see Avoiding pinning (p. 1228).
• RDS Proxy doesn't support the MariaDB auth_ed25519 plugin.
• RDS Proxy doesn't support Transport Layer Security (TLS) version 1.3 for MariaDB databases.
• Database connections processing a GET DIAGNOSTICS command might return inaccurate information
when RDS Proxy reuses the same database connection to run another query. This can happen when
RDS Proxy multiplexes database connections.
Important
For proxies associated with MariaDB databases, don't set the configuration parameter
sql_auto_is_null to true or a nonzero value in the initialization query. Doing so might
cause incorrect application behavior.
RDS for SQL Server limitations
• The number of Secrets Manager secrets that you need to create for a proxy depends on the collation
that your DB instance uses. For example, suppose that your DB instance uses case-sensitive collation.
If your application accepts both "Admin" and "admin," then your proxy needs two separate secrets. For
more information about collation in SQL Server, see the Microsoft SQL Server documentation.
• RDS Proxy doesn't support connections that use Active Directory.
• You can't use IAM authentication with clients that don't support token properties. For more
information, see Considerations for connecting to a proxy with Microsoft SQL Server (p. 1219).
• The results of @@IDENTITY, @@ROWCOUNT, and SCOPE_IDENTITY aren't always accurate. As a
workaround, retrieve their values in the same session statement to ensure that they return the correct
information.
• If the connection uses multiple active result sets (MARS), RDS Proxy doesn't run the initialization
queries. For information about MARS, see the Microsoft SQL Server documentation.
Important
For proxies associated with MySQL databases, don't set the configuration parameter
sql_auto_is_null to true or a nonzero value in the initialization query. Doing so might
cause incorrect application behavior.
Important
For existing proxies with PostgreSQL databases, if you modify the database authentication to
use SCRAM only, the proxy becomes unavailable for up to 60 seconds. To avoid the issue, do one
of the following:
• Ensure that the database allows both SCRAM and MD5 authentication.
• To use only SCRAM authentication, create a new proxy, migrate your application traffic to the
new proxy, then delete the proxy previously associated with the database.
Planning where to use RDS Proxy
• Any DB instance that encounters "too many connections" errors is a good candidate for associating
with a proxy. This is often characterized by a high value of the ConnectionAttempts CloudWatch
metric. The proxy enables applications to open many client connections, while the proxy manages a
smaller number of long-lived connections to the DB instance.
• For DB instances that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid
out-of-memory conditions. It can also help reduce the CPU overhead for establishing connections.
These conditions can occur when dealing with large numbers of connections.
• You can monitor certain Amazon CloudWatch metrics to determine whether a DB instance is
approaching certain types of limit. These limits are for the number of connections and the memory
associated with connection management. You can also monitor certain CloudWatch metrics to
determine whether a DB instance is handling many short-lived connections. Opening and closing such
connections can impose performance overhead on your database. For information about the metrics to
monitor, see Monitoring RDS Proxy metrics with Amazon CloudWatch (p. 1239).
• AWS Lambda functions can also be good candidates for using a proxy. These functions make frequent
short database connections that benefit from connection pooling offered by RDS Proxy. You can take
advantage of any IAM authentication you already have for Lambda functions, instead of managing
database credentials in your Lambda application code.
• Applications that typically open and close large numbers of database connections and don't have
built-in connection pooling mechanisms are good candidates for using a proxy.
• Applications that keep a large number of connections open for long periods are typically good
candidates for using a proxy. Applications in industries such as software as a service (SaaS) or
ecommerce often minimize the latency for database requests by leaving connections open. With RDS
Proxy, an application can keep more connections open than it can when connecting directly to the DB
instance.
• You might not have adopted IAM authentication and Secrets Manager due to the complexity of setting
up such authentication for all DB instances. If so, you can leave the existing authentication methods
in place and delegate the authentication to a proxy. The proxy can enforce the authentication policies
for client connections for particular applications. You can take advantage of any IAM authentication
you already have for Lambda functions, instead of managing database credentials in your Lambda
application code.
• RDS Proxy can help make applications more resilient and transparent to database failures. RDS Proxy
bypasses Domain Name System (DNS) caches to reduce failover times by up to 66% for Amazon RDS
Multi-AZ databases. RDS Proxy also automatically routes traffic to a new database instance while
preserving application connections. This makes failovers more transparent for applications.
RDS Proxy concepts and terminology
RDS Proxy handles the network traffic between the client application and the database. It does so in an
active way first by understanding the database protocol. It then adjusts its behavior based on the SQL
operations from your application and the result sets from the database.
RDS Proxy reduces the memory and CPU overhead for connection management on your database.
The database needs less memory and CPU resources when applications open many simultaneous
connections. With RDS Proxy, your applications also need less logic to close and reopen connections
that stay idle for a long time, and less logic to reestablish connections in case of a database problem.
The infrastructure for RDS Proxy is highly available and deployed over multiple Availability Zones (AZs).
The computation, memory, and storage for RDS Proxy are independent of your RDS DB instances and
Aurora DB clusters. This separation helps lower overhead on your database servers, so that they can
devote their resources to serving database workloads. The RDS Proxy compute resources are serverless,
automatically scaling based on your database workload.
Topics
• Overview of RDS Proxy concepts (p. 1203)
• Connection pooling (p. 1204)
• RDS Proxy security (p. 1204)
• Failover (p. 1206)
• Transactions (p. 1206)
Overview of RDS Proxy concepts

Each proxy handles connections to a single RDS DB instance or Aurora DB cluster. The proxy
automatically determines the current writer instance for RDS Multi-AZ DB instances and Aurora
provisioned clusters.
The connections that a proxy keeps open and available for your database application to use make up the
connection pool.
By default, RDS Proxy can reuse a connection after each transaction in your session. This transaction-
level reuse is called multiplexing. When RDS Proxy temporarily removes a connection from the
connection pool to reuse it, that operation is called borrowing the connection. When it's safe to do so,
RDS Proxy returns that connection to the connection pool.
In some cases, RDS Proxy can't be sure that it's safe to reuse a database connection outside of the current
session. In these cases, it keeps the session on the same connection until the session ends. This fallback
behavior is called pinning.
A proxy has a default endpoint. You connect to this endpoint when you work with an RDS DB instance or
Aurora DB cluster. You do so instead of connecting to the read/write endpoint that connects directly to
the instance or cluster. The special-purpose endpoints for an Aurora cluster remain available for you to
use. For Aurora DB clusters, you can also create additional read/write and read-only endpoints. For more
information, see Overview of proxy endpoints (p. 1233).
For example, you can still connect to the cluster endpoint for read/write connections without connection
pooling. You can still connect to the reader endpoint for load-balanced read-only connections. You can
still connect to the instance endpoints for diagnosis and troubleshooting of specific DB instances within
an Aurora cluster. If you use other AWS services such as AWS Lambda to connect to RDS databases,
change their connection settings to use the proxy endpoint. For example, you specify the proxy endpoint
to allow Lambda functions to access your database while taking advantage of RDS Proxy functionality.
Each proxy contains a target group. This target group embodies the RDS DB instance or Aurora DB cluster
that the proxy can connect to. For an Aurora cluster, by default the target group is associated with all
the DB instances in that cluster. That way, the proxy can connect to whichever Aurora DB instance is
promoted to be the writer instance in the cluster. The RDS DB instance associated with a proxy, or the
Aurora DB cluster and its instances, are called the targets of that proxy. For convenience, when you create
a proxy through the console, RDS Proxy also creates the corresponding target group and registers the
associated targets automatically.
An engine family is a related set of database engines that use the same DB protocol. You choose the
engine family for each proxy that you create.
Connection pooling
Each proxy performs connection pooling for the writer instance of its associated RDS or Aurora database.
Connection pooling is an optimization that reduces the overhead associated with opening and closing
connections and with keeping many connections open simultaneously. This overhead includes memory
needed to handle each new connection. It also involves CPU overhead to close each connection and open
a new one. Examples include Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking,
authentication, negotiating capabilities, and so on. Connection pooling simplifies your application logic.
You don't need to write application code to minimize the number of simultaneous open connections.
Each proxy also performs connection multiplexing, also known as connection reuse. With multiplexing,
RDS Proxy performs all the operations for a transaction using one underlying database connection.
RDS Proxy can then use a different connection for the next transaction. You can open many simultaneous
connections to the proxy, and the proxy keeps a smaller number of connections open to the DB instance
or cluster. Doing so further minimizes the memory overhead for connections on the database server. This
technique also reduces the chance of "too many connections" errors.
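The borrowing and reuse behavior described above can be sketched with a toy pool. This is an illustrative model only, not RDS Proxy's implementation; the class and names are invented. A small number of backend connections serves many client sessions, with each connection borrowed for one transaction at a time:

```python
class ToyMultiplexingPool:
    """Toy model of transaction-level connection reuse (multiplexing).

    A connection is 'borrowed' from the pool for the duration of one
    transaction and returned when the transaction ends, so a few backend
    connections can serve many client sessions.
    """

    def __init__(self, backend_connections):
        self.idle = list(backend_connections)   # connections available to borrow
        self.busy = {}                          # client session -> borrowed connection

    def begin_transaction(self, session):
        if session in self.busy:                # session already inside a transaction
            return self.busy[session]
        if not self.idle:
            raise RuntimeError("no idle backend connection; request would queue")
        conn = self.idle.pop()                  # borrow a connection from the pool
        self.busy[session] = conn
        return conn

    def end_transaction(self, session):
        conn = self.busy.pop(session)           # transaction ended: return connection
        self.idle.append(conn)


pool = ToyMultiplexingPool(["conn-1", "conn-2"])
c_first = pool.begin_transaction("client-A")
pool.end_transaction("client-A")
# After client-A's transaction ends, the same backend connection
# is free to serve a different client session.
c_second = pool.begin_transaction("client-B")
```

In this model, pinning would correspond to never returning a session's connection to `idle` until the session ends.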
RDS Proxy security

RDS Proxy can act as an additional layer of security between client applications and the underlying
database. For example, you can connect to the proxy using TLS 1.2, even if the underlying DB instance
supports an older version of TLS. You can connect to the proxy using an IAM role, even if the proxy
connects to the database using native user name and password authentication. By using
this technique, you can enforce strong authentication requirements for database applications without a
costly migration effort for the DB instances themselves.
You store the database credentials used by RDS Proxy in AWS Secrets Manager. Each database user
for the RDS DB instance or Aurora DB cluster accessed by a proxy must have a corresponding secret
in Secrets Manager. You can also set up IAM authentication for users of RDS Proxy. By doing so,
you can enforce IAM authentication for database access even if the databases use native password
authentication. We recommend using these security features instead of embedding database credentials
in your application code.
To enforce TLS for all connections between the proxy and your database, enable the Require
Transport Layer Security setting when you create or modify a proxy.
RDS Proxy can also ensure that your session uses TLS/SSL between your client and the RDS Proxy
endpoint. To have RDS Proxy do so, specify the requirement on the client side. SSL session variables are
not set for SSL connections to a database using RDS Proxy.
• For RDS for MySQL and Aurora MySQL, specify the requirement on the client side with the --ssl-
mode parameter when you run the mysql command.
• For Amazon RDS PostgreSQL and Aurora PostgreSQL, specify sslmode=require as part of the
conninfo string when you run the psql command.
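For the PostgreSQL case, the conninfo string with sslmode=require can be assembled programmatically. The helper below is illustrative (the function name, default port, and endpoint value are placeholders, not part of any AWS API):

```python
def proxy_conninfo(host, dbname, user, port=5432):
    """Build a libpq conninfo string that requires TLS to the proxy endpoint.

    sslmode=require makes the client refuse unencrypted connections.
    """
    return f"host={host} port={port} dbname={dbname} user={user} sslmode=require"


# Hypothetical proxy endpoint used only for illustration.
conninfo = proxy_conninfo(
    "my-proxy.proxy-demo.us-east-2.rds.amazonaws.com", "postgres", "app-user"
)
```

The resulting string can be passed directly to `psql` or to a libpq-based driver.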
RDS Proxy supports TLS protocol versions 1.0, 1.1, and 1.2. You can connect to the proxy using a higher
version of TLS than you use in the underlying database.
By default, client programs establish an encrypted connection with RDS Proxy, with further control
available through the --ssl-mode option. From the client side, RDS Proxy supports all SSL modes.
PREFERRED
SSL is the default for a session.
DISABLED
No SSL is allowed.
REQUIRED
Enforce SSL.
VERIFY_CA
Enforce SSL and verify the certificate authority (CA).
VERIFY_IDENTITY
Enforce SSL and verify the CA and the hostname in the certificate.
When using a client with --ssl-mode VERIFY_CA or VERIFY_IDENTITY, specify the --ssl-ca option
pointing to a CA in .pem format. For the .pem file to use, download all root CA PEMs from Amazon Trust
Services and place them into a single .pem file.
RDS Proxy uses wildcard certificates, which apply to both a domain and its subdomains. If you use the
mysql client to connect with SSL mode VERIFY_IDENTITY, currently you must use the MySQL 8.0-
compatible mysql command.
Failover
Failover is a high-availability feature that replaces a database instance with another one when the
original instance becomes unavailable. A failover might happen because of a problem with a database
instance. It might also be part of normal maintenance procedures, such as during a database upgrade.
Failover applies to RDS DB instances in a Multi-AZ configuration, and to Aurora DB clusters with one or
more reader instances in addition to the writer instance.
Connecting through a proxy makes your application more resilient to database failovers. When the
original DB instance becomes unavailable, RDS Proxy connects to the standby database without dropping
idle application connections. Doing so helps to speed up and simplify the failover process. The result is
faster failover that's less disruptive to your application than a typical reboot or database problem.
Without RDS Proxy, a failover involves a brief outage. During the outage, you can't perform write
operations on that database. Any existing database connections are disrupted, and your application must
reopen them. The database becomes available for new connections and write operations when a read-
only DB instance is promoted in place of one that's unavailable.
During DB failovers, RDS Proxy continues to accept connections at the same IP address and automatically
directs connections to the new primary DB instance. Clients connecting through RDS Proxy aren't
susceptible to DNS propagation delays or local DNS caching that can otherwise direct connections to an
unavailable instance after a failover.
For applications that maintain their own connection pool, going through RDS Proxy means that most
connections stay alive during failovers or other disruptions. Only connections that are in the middle of a
transaction or SQL statement are canceled. RDS Proxy immediately accepts new connections. When the
database writer is unavailable, RDS Proxy queues up incoming requests.
For applications that don't maintain their own connection pools, RDS Proxy offers faster connection
rates and more open connections. It offloads the expensive overhead of frequent reconnects from the
database. It does so by reusing database connections maintained in the RDS Proxy connection pool. This
approach is particularly important for TLS connections, where setup costs are significant.
Transactions
All the statements within a single transaction always use the same underlying database connection.
The connection becomes available for use by a different session when the transaction ends. Using the
transaction as the unit of granularity has the following consequences:
• Connection reuse can happen after each individual statement when the RDS for MySQL or Aurora
MySQL autocommit setting is turned on.
• Conversely, when the autocommit setting is turned off, the first statement you issue in a session
begins a new transaction. For example, suppose that you enter a sequence of SELECT, INSERT,
UPDATE, and other data manipulation language (DML) statements. In this case, connection reuse
doesn't happen until you issue a COMMIT, ROLLBACK, or otherwise end the transaction.
• Entering a data definition language (DDL) statement causes the transaction to end after that
statement completes.
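The rules in the list above can be illustrated by grouping a statement stream into reuse units. This is a simplified sketch (the helper is invented, keyword matching is naive, and only MySQL-style autocommit semantics as described above are modeled):

```python
def reuse_units(statements, autocommit):
    """Group SQL statements into units after which connection reuse can occur.

    Simplified model of the rules above:
    - autocommit on: every statement is its own transaction (reuse after each).
    - autocommit off: statements accumulate until COMMIT/ROLLBACK ends the unit.
    - a DDL statement ends the transaction after it completes.
    """
    DDL_KEYWORDS = ("CREATE", "ALTER", "DROP")
    units, current = [], []
    for stmt in statements:
        keyword = stmt.split()[0].upper()
        current.append(stmt)
        ends_transaction = (
            autocommit
            or keyword in ("COMMIT", "ROLLBACK")
            or keyword in DDL_KEYWORDS
        )
        if ends_transaction:
            units.append(current)
            current = []
    if current:
        units.append(current)   # open transaction: no reuse until it ends
    return units


# With autocommit off, reuse happens only when COMMIT ends the transaction:
units = reuse_units(["SELECT 1", "INSERT INTO t VALUES (1)", "COMMIT"], autocommit=False)
# With autocommit on, each statement is a separate reuse unit:
units_autocommit = reuse_units(["SELECT 1", "SELECT 2"], autocommit=True)
```

Real transaction detection, as the text notes, happens at the protocol level rather than by scanning SQL keywords.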
RDS Proxy detects when a transaction ends through the network protocol used by the database client
application. Transaction detection doesn't rely on keywords such as COMMIT or ROLLBACK appearing in
the text of the SQL statement.
In some cases, RDS Proxy might detect a database request that makes it impractical to move your session
to a different connection. In these cases, it turns off multiplexing for that connection for the remainder
of your session. The same rule applies if RDS Proxy can't be certain that multiplexing is practical for
the session. This operation is called pinning. For ways to detect and minimize pinning, see Avoiding
pinning (p. 1228).
Getting started with RDS Proxy

Topics
• Setting up network prerequisites (p. 1207)
• Setting up database credentials in AWS Secrets Manager (p. 1209)
• Setting up AWS Identity and Access Management (IAM) policies (p. 1210)
• Creating an RDS Proxy (p. 1212)
• Viewing an RDS Proxy (p. 1217)
• Connecting to a database through RDS Proxy (p. 1218)
Setting up network prerequisites

Topics
• Getting information about your subnets (p. 1207)
• Planning for IP address capacity (p. 1208)
The following Linux example shows AWS CLI commands to determine the subnet IDs corresponding to
a specific Aurora DB cluster or RDS DB instance. For an Aurora cluster, first you find the ID for one of
the associated DB instances. You can extract the subnet IDs used by that DB instance. To do so, examine
the nested fields within the DBSubnetGroup and Subnets attributes in the describe output for the DB
instance. You specify some or all of those subnet IDs when setting up a proxy for that database server.
$ # Optional first step, only needed if you're starting from an Aurora cluster. Find the ID
of any DB instance in the cluster.
$ aws rds describe-db-clusters --db-cluster-identifier my_cluster_id --query '*[].
[DBClusterMembers]|[0]|[0][*].DBInstanceIdentifier' --output text
my_instance_id
instance_id_2
instance_id_3
...
$ # From the DB instance, trace through the DBSubnetGroup and Subnets to find the subnet
IDs.
$ aws rds describe-db-instances --db-instance-identifier my_instance_id --query '*[].
[DBSubnetGroup]|[0]|[0]|[Subnets]|[0]|[*].SubnetIdentifier' --output text
subnet_id_1
subnet_id_2
subnet_id_3
...
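If you prefer to post-process the JSON output instead of using a --query expression, the nesting under DBSubnetGroup and Subnets looks like the following sketch. The payload below is abbreviated and hypothetical (not real describe-db-instances output), showing only the fields you trace through:

```python
import json

# Abbreviated, hypothetical shape of `aws rds describe-db-instances` output,
# with only the fields needed to trace DBSubnetGroup -> Subnets.
describe_output = json.loads("""
{
  "DBInstances": [
    {
      "DBInstanceIdentifier": "my_instance_id",
      "DBSubnetGroup": {
        "VpcId": "vpc-0abc",
        "Subnets": [
          {"SubnetIdentifier": "subnet_id_1"},
          {"SubnetIdentifier": "subnet_id_2"},
          {"SubnetIdentifier": "subnet_id_3"}
        ]
      }
    }
  ]
}
""")


def subnet_ids(payload):
    """Collect every SubnetIdentifier from each instance's DB subnet group."""
    ids = []
    for instance in payload["DBInstances"]:
        for subnet in instance["DBSubnetGroup"]["Subnets"]:
            ids.append(subnet["SubnetIdentifier"])
    return ids
```

The extracted IDs are the values you pass when specifying subnets for the proxy.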
Or you can first find the VPC ID for the DB instance. Then you can examine the VPC to find its subnets.
The following Linux example shows how.
Following are the recommended minimum numbers of IP addresses to leave free in your subnets for your
proxy, based on DB instance class size.

DB instance class size      Minimum free IP addresses
db.*.xlarge or smaller      10
db.*.2xlarge                15
db.*.4xlarge                25
db.*.8xlarge                45
db.*.12xlarge               60
db.*.16xlarge               75
db.*.24xlarge               110
These numbers of recommended IP addresses are estimates for a proxy with only the default endpoint.
A proxy with additional endpoints or read replicas might need more free IP addresses. For each
additional endpoint, we recommend that you reserve three more IP addresses. For each read replica, we
recommend that you reserve additional IP addresses as specified in the table based on that read replica's
size.
Note
RDS Proxy never uses more than 215 IP addresses in a VPC.
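The table and the endpoint/replica guidance above can be combined into a small estimator. The function below is an illustrative sketch, not an AWS tool; it assumes the table values, the three-IPs-per-additional-endpoint rule, and the 215-address ceiling quoted above:

```python
# Recommended free IP addresses per DB instance class size, from the table above.
RECOMMENDED_FREE_IPS = {
    "xlarge": 10,     # db.*.xlarge or smaller
    "2xlarge": 15,
    "4xlarge": 25,
    "8xlarge": 45,
    "12xlarge": 60,
    "16xlarge": 75,
    "24xlarge": 110,
}


def estimated_free_ips(instance_size, additional_endpoints=0, replica_sizes=()):
    """Estimate free IP addresses to reserve in your subnets for one proxy.

    instance_size: size suffix such as "2xlarge" ("xlarge" covers smaller classes).
    additional_endpoints: endpoints beyond the default (3 extra IPs each).
    replica_sizes: size suffix of each read replica, counted from the same table.
    """
    total = RECOMMENDED_FREE_IPS[instance_size]
    total += 3 * additional_endpoints
    total += sum(RECOMMENDED_FREE_IPS[size] for size in replica_sizes)
    # RDS Proxy never uses more than 215 IP addresses in a VPC,
    # so cap the estimate there.
    return min(total, 215)


# Example: a 4xlarge writer with one extra endpoint and one 4xlarge read replica.
needed = estimated_free_ips("4xlarge", additional_endpoints=1, replica_sizes=["4xlarge"])
```

These are planning estimates only; actual usage depends on workload.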
Setting up database credentials in AWS Secrets Manager

In Secrets Manager, you create these secrets with values for the username and password fields. Doing
so allows the proxy to connect to the corresponding database users on RDS DB instances or Aurora DB
clusters that you associate with the proxy. To do this, you can use the setting Credentials for other
database, Credentials for RDS database, or Other type of secrets. Fill in the appropriate values for
the User name and Password fields, and placeholder values for any other required fields. The proxy
ignores other fields such as Host and Port if they're present in the secret. Those details are automatically
supplied by the proxy.
You can also choose Other type of secrets. In this case, you create the secret with keys named username
and password.
Because the secrets used by your proxy aren't tied to a specific database server, you can reuse a secret
across multiple proxies. To do so, use the same credentials across multiple database servers. For example,
you might use the same credentials across a group of development and test servers.
To connect through the proxy as a specific user, make sure that the password associated with a secret
matches the database password for that user. If there's a mismatch, you can update the associated secret
in Secrets Manager. In this case, you can still connect to other accounts where the secret credentials and
the database passwords do match.
Note
For RDS for SQL Server, the number of Secrets Manager secrets that you need to create for a
proxy depends on the collation that your DB instance uses. For example, suppose that your DB
instance uses case-sensitive collation. If your application accepts both "Admin" and "admin,"
then your proxy needs two separate secrets. For more information about collation in SQL Server,
see the Microsoft SQL Server documentation.
When you create a proxy through the AWS CLI or RDS API, you specify the Amazon Resource Names
(ARNs) of the corresponding secrets. You do so for all the DB user accounts that the proxy can access. In
the AWS Management Console, you choose the secrets by their descriptive names.
For instructions about creating secrets in Secrets Manager, see the Creating a secret page in the Secrets
Manager documentation. Use one of the following techniques:
For example, the following commands create Secrets Manager secrets for two database users, one
named admin and the other named app-user.
To see the secrets owned by your AWS account, use a command such as the following.
When you create a proxy using the CLI, you pass the Amazon Resource Names (ARNs) of one or more
secrets to the --auth parameter. The following Linux example shows how to prepare a report with only
the name and ARN of each secret owned by your AWS account. This example uses the --output table
parameter that is available in AWS CLI version 2. If you are using AWS CLI version 1, use --output text
instead.
To verify that you stored the correct credentials and in the right format in a secret, use a command such
as the following. Substitute the short name or the ARN of the secret for your_secret_name.
The output should include a line displaying a JSON-encoded value like the following.
"SecretString": "{\"username\":\"your_username\",\"password\":\"your_password\"}",
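Because the proxy reads only the username and password keys, you can sanity-check a secret's SecretString locally before associating it with a proxy. The helper below is illustrative; the JSON shape matches the SecretString line shown above:

```python
import json


def check_proxy_secret(secret_string):
    """Verify that a SecretString contains the keys RDS Proxy requires.

    The proxy reads only 'username' and 'password'; other keys such as
    host and port are ignored if present.
    Returns (ok, missing_keys).
    """
    data = json.loads(secret_string)
    missing = [key for key in ("username", "password") if key not in data]
    return (len(missing) == 0, missing)


ok, missing = check_proxy_secret('{"username": "admin", "password": "example-pw"}')
```

Run this against the `SecretString` value returned by `get-secret-value` before troubleshooting proxy authentication failures.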
Setting up AWS Identity and Access Management (IAM) policies
To create an IAM policy that accesses your Secrets Manager secrets for use with your proxy
1. Sign in to the IAM console. Follow the Create role process, as described in Creating IAM roles,
choosing Creating a role to delegate permissions to an AWS service.
Choose AWS service for the Trusted entity type. Under Use case, select RDS from the Use cases for
other AWS services dropdown. Then select RDS - Add Role to Database.
2. For the new role, perform the Add inline policy step. Use the same general procedures as in Editing
IAM policies. Paste the following JSON into the JSON text box. Substitute your own account ID.
Substitute your AWS Region for us-east-2. Substitute the Amazon Resource Names (ARNs) of the
secrets that you created. For the kms:Decrypt action, substitute the ARN of the default AWS KMS
key or your own KMS key, depending on which one you used to encrypt the Secrets Manager
secrets. For more information, see Specifying KMS keys in IAM policy statements.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "secretsmanager:GetSecretValue",
"Resource": [
"arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_1",
"arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_2"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "arn:aws:kms:us-east-2:account_id:key/key_id",
"Condition": {
"StringEquals": {
"kms:ViaService": "secretsmanager.us-east-2.amazonaws.com"
}
}
}
]
}
3. Edit the trust policy for this IAM role. Paste the following JSON into the JSON text box.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
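If you generate these policy documents programmatically, you can template them as dictionaries. The sketch below mirrors the two JSON documents above; the account ID, Region, secret names, and key ID are placeholders to substitute with your own values:

```python
import json


def proxy_permissions_policy(region, account_id, secret_names, kms_key_id):
    """Build the permissions policy shown above: allow the proxy's role to read
    the listed Secrets Manager secrets and decrypt them with the given KMS key."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": [
                    f"arn:aws:secretsmanager:{region}:{account_id}:secret:{name}"
                    for name in secret_names
                ],
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": "kms:Decrypt",
                "Resource": f"arn:aws:kms:{region}:{account_id}:key/{kms_key_id}",
                "Condition": {
                    "StringEquals": {
                        "kms:ViaService": f"secretsmanager.{region}.amazonaws.com"
                    }
                },
            },
        ],
    }


# Trust policy allowing RDS to assume the role, matching the JSON above.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {"Service": "rds.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

policy = proxy_permissions_policy(
    "us-east-2", "123456789012", ["secret_name_1", "secret_name_2"], "key_id"
)
```

Serialize with `json.dumps(policy)` when passing the document to the CLI or an infrastructure-as-code tool.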
The following commands perform the same operation through the AWS CLI.
PREFIX=my_identifier
Creating an RDS Proxy
To create a proxy
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Choose Create proxy.
4. Choose all the settings for your proxy.
• Engine family. This setting determines which database network protocol the proxy recognizes
when it interprets network traffic to and from the database. For RDS for MariaDB or RDS for
MySQL, choose MariaDB and MySQL. For RDS for PostgreSQL, choose PostgreSQL. For RDS for
SQL Server, choose SQL Server.
• Proxy identifier. Specify a name of your choosing, unique within your AWS account ID and current
AWS Region.
• Idle client connection timeout. Choose a time period that a client connection can be idle before
the proxy can close it. The default is 1,800 seconds (30 minutes). A client connection is considered
idle when the application doesn't submit a new request within the specified time after the
previous request completed. The underlying database connection stays open and is returned to
the connection pool. Thus, it's available to be reused for new client connections.
Consider lowering the idle client connection timeout if you want the proxy to proactively remove
stale connections. If your workload is spiking, consider raising the idle client connection timeout to
save the cost of establishing connections.
• Database. Choose one RDS DB instance or Aurora DB cluster to access through this proxy. The list
only includes DB instances and clusters with compatible database engines, engine versions, and
other settings. If the list is empty, create a new DB instance or cluster that's compatible with RDS
Proxy. To do so, follow the procedure in Creating an Amazon RDS DB instance (p. 300). Then try
creating the proxy again.
• Connection pool maximum connections. Specify a value from 1 through 100. This
setting represents the percentage of the max_connections value that RDS Proxy can use
for its connections. If you only intend to use one proxy with this DB instance or cluster,
you can set this value to 100. For details about how RDS Proxy uses this setting, see
MaxConnectionsPercent (p. 1226).
• Session pinning filters. (Optional) This option allows you to force RDS Proxy to not pin for certain
types of detected session states. This circumvents the default safety measures for multiplexing
database connections across client connections. Currently, the setting isn't supported for
PostgreSQL and the only choice is EXCLUDE_VARIABLE_SETS.
Enabling this setting can cause variables of one connection to impact other connections. This can
cause errors or correctness issues if your queries depend on session variable values set outside of
the current transaction. Consider using this option after verifying it is safe for your applications to
share database connections across client connections.
• IAM role. Choose an IAM role that has permission to access the Secrets Manager secrets that you
chose earlier. You can also choose for the AWS Management Console to create a new IAM role for
you and use that.
• Secrets Manager secrets. Choose at least one Secrets Manager secret that contains database user
credentials for the RDS DB instance or Aurora DB cluster to access with this proxy.
• Client authentication type. Choose the type of authentication the proxy uses for connections
from clients. Your choice applies to all Secrets Manager secrets that you associate with this proxy.
If you need to specify a different client authentication type for each secret, create your proxy by
using the AWS CLI or the API instead.
• IAM authentication. Choose whether to require, allow, or disallow IAM authentication for
connections to your proxy. The allow option is only valid for proxies for RDS for SQL Server. Your
choice applies to all Secrets Manager secrets that you associate with this proxy. If you need to
specify a different IAM authentication for each secret, create your proxy by using the AWS CLI or
the API instead.
• Require Transport Layer Security. Choose this setting if you want the proxy to enforce TLS/SSL
for all client connections. For an encrypted or unencrypted connection to a proxy, the proxy uses
the same encryption setting when it makes a connection to the underlying database.
• Subnets. This field is prepopulated with all the subnets associated with your VPC. You can remove
any subnets that you don't need for this proxy. You must leave at least two subnets.
• VPC security group. Choose an existing VPC security group. You can also choose for the AWS
Management Console to create a new security group for you and use that. You must configure
the Inbound rules to allow your applications to access the proxy. You must also configure the
Outbound rules to allow traffic from your DB targets.
Note
This security group must allow connections from the proxy to the database. The same
security group is used for ingress from your applications to the proxy, and for egress from
the proxy to the database. For example, suppose that you use the same security group for
your database and your proxy. In this case, make sure that you specify that resources in
that security group can communicate with other resources in the same security group.
When using a shared VPC, you can't use the default security group for the VPC, or one
that belongs to another account. Choose a security group that belongs to your account. If
one doesn't exist, create one. For more information about this limitation, see Work with
shared VPCs.
• Enable enhanced logging. You can enable this setting to troubleshoot proxy compatibility or
performance issues.
When this setting is enabled, RDS Proxy includes detailed information about SQL statements in
its logs. This information helps you to debug issues involving SQL behavior or the performance
and scalability of the proxy connections. The debug information includes the text of SQL
statements that you submit through the proxy. Thus, only enable this setting when needed
for debugging. Also, only enable it when you have security measures in place to safeguard any
sensitive information that appears in the logs.
To minimize overhead associated with your proxy, RDS Proxy automatically turns this setting off
24 hours after you enable it. Enable it temporarily to troubleshoot a specific issue.
5. Choose Create Proxy.
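The Connection pool maximum connections setting described above is a percentage of the database's max_connections value. A quick way to reason about the resulting ceiling (illustrative arithmetic, not an AWS API):

```python
def proxy_connection_ceiling(db_max_connections, max_connections_percent):
    """Upper bound on database connections the proxy opens to its target,
    given the MaxConnectionsPercent setting (a value from 1 through 100)."""
    if not 1 <= max_connections_percent <= 100:
        raise ValueError("MaxConnectionsPercent must be from 1 through 100")
    return db_max_connections * max_connections_percent // 100


# With max_connections = 1000 and the setting at 95, the proxy uses at most
# 950 database connections, leaving headroom for direct admin sessions.
ceiling = proxy_connection_ceiling(1000, 95)
```

If multiple proxies share one DB instance, the sum of their percentages is what matters for staying under max_connections.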
AWS CLI
To create a proxy by using the AWS CLI, call the create-db-proxy command with the following required
parameters:
• --db-proxy-name
• --engine-family
• --role-arn
• --auth
• --vpc-subnet-ids
Example
For Windows:
[--require-tls | --no-require-tls] ^
[--idle-client-timeout value] ^
[--debug-logging | --no-debug-logging] ^
[--tags comma_separated_list]
The following is an example of the JSON value for the --auth option. This example
applies a different client authentication type to each secret.
[
{
"Description": "proxy description 1",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:123456789123:secret/1234abcd-12ab-34cd-56ef-1234567890ab",
"IAMAuth": "DISABLED",
"ClientPasswordAuthType": "POSTGRES_SCRAM_SHA_256"
},
{
"Description": "proxy description 2",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:111122223333:secret/1234abcd-12ab-34cd-56ef-1234567890cd",
"IAMAuth": "DISABLED",
"ClientPasswordAuthType": "POSTGRES_MD5"
},
{
"Description": "proxy description 3",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:111122221111:secret/1234abcd-12ab-34cd-56ef-1234567890ef",
"IAMAuth": "REQUIRED"
}
]
Tip
If you don't already know the subnet IDs to use for the --vpc-subnet-ids parameter, see
Setting up network prerequisites (p. 1207) for examples of how to find them.
Note
The security group must allow access to the database the proxy connects to. The same security
group is used for ingress from your applications to the proxy, and for egress from the proxy to
the database. For example, suppose that you use the same security group for your database
and your proxy. In this case, make sure that you specify that resources in that security group can
communicate with other resources in the same security group.
When using a shared VPC, you can't use the default security group for the VPC, or one that
belongs to another account. Choose a security group that belongs to your account. If one
doesn't exist, create one. For more information about this limitation, see Work with shared VPCs.
To create the required information and associations for the proxy, you also use the register-db-proxy-
targets command. Specify the target group name default. RDS Proxy automatically creates a target
group with this name when you create each proxy.
RDS API
To create an RDS proxy, call the Amazon RDS API operation CreateDBProxy. You pass a parameter with
the AuthConfig data structure.
RDS Proxy automatically creates a target group named default when you create each proxy. You
associate an RDS DB instance or Aurora DB cluster with the target group by calling the function
RegisterDBProxyTargets.
Viewing an RDS Proxy

Any database application that uses the proxy needs the proxy endpoint to use in its connection string.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
created the RDS Proxy.
3. In the navigation pane, choose Proxies.
4. Choose the name of an RDS proxy to display its details.
5. On the details page, the Target groups section shows how the proxy is associated with a specific
RDS DB instance or Aurora DB cluster. You can follow the link to the default target group page to
see more details about the association between the proxy and the database. This page is where
you see settings that you specified when creating the proxy. These include maximum connection
percentage, connection borrow timeout, engine family, and session pinning filters.
CLI
To view your proxy using the CLI, use the describe-db-proxies command. By default, it displays all proxies
owned by your AWS account. To see details for a single proxy, specify its name with the --db-proxy-
name parameter.
To view the other information associated with the proxy, use the following commands.
Use the following sequence of commands to see more detail about the things that are associated with
the proxy:
3. To see the details of the RDS DB instance or Aurora DB cluster associated with the returned target
group, run describe-db-proxy-targets.
RDS API
To view your proxies using the RDS API, use the DescribeDBProxies operation. It returns values of the
DBProxy data type.
To see details of the connection settings for the proxy, use the proxy identifiers from this return value
with the DescribeDBProxyTargetGroups operation. It returns values of the DBProxyTargetGroup data
type.
To see the RDS instance or Aurora DB cluster associated with the proxy, use the DescribeDBProxyTargets
operation. It returns values of the DBProxyTarget data type.
Connecting to a database through RDS Proxy

Topics
• Connecting to a proxy using native authentication (p. 1218)
• Connecting to a proxy using IAM authentication (p. 1219)
• Considerations for connecting to a proxy with Microsoft SQL Server (p. 1219)
• Considerations for connecting to a proxy with PostgreSQL (p. 1220)
Connecting to a proxy using native authentication

1. Find the proxy endpoint. In the AWS Management Console, you can find the endpoint on the details
page for the corresponding proxy. With the AWS CLI, you can use the describe-db-proxies command.
The following example shows how.
2. Specify that endpoint as the host parameter in the connection string for your client application. For
example, specify the proxy endpoint as the value for the mysql -h option or psql -h option.
3. Supply the same database user name and password as you usually do.
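Steps 2 and 3 can be sketched as follows. The endpoint, user names, and ports are placeholder assumptions; the client commands are assembled in variables and printed rather than run.

```shell
# Hedged sketch: the proxy endpoint becomes the -h host value for your client.
# The endpoint and user names below are placeholders.
proxy_endpoint='the-proxy.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com'
mysql_cmd="mysql -h $proxy_endpoint -P 3306 -u admin -p"
psql_cmd="psql -h $proxy_endpoint -p 5432 -U postgres -d postgres"
echo "$mysql_cmd"
echo "$psql_cmd"
```

You then enter the same database password as you would when connecting directly to the database.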
Connecting to a proxy using IAM authentication
To connect to RDS Proxy using IAM authentication, use the same general connection procedure as for IAM authentication with an RDS DB instance or Aurora cluster. For general information about using IAM with RDS and Aurora, see Security in Amazon RDS (p. 2565).
The major differences in IAM usage for RDS Proxy include the following:
• You don't configure each individual database user with an authorization plugin. The database users
still have regular user names and passwords within the database. You set up Secrets Manager secrets
containing these user names and passwords, and authorize RDS Proxy to retrieve the credentials from
Secrets Manager.
The IAM authentication applies to the connection between your client program and the proxy. The
proxy then authenticates to the database using the user name and password credentials retrieved from
Secrets Manager.
• Instead of the instance, cluster, or reader endpoint, you specify the proxy endpoint. For details about
the proxy endpoint, see Connecting to your DB instance using IAM authentication (p. 2650).
• In the direct database IAM authentication case, you selectively choose database users and configure them to be identified with a special authentication plugin. You can then connect as those users using IAM authentication.
In the proxy use case, you provide the proxy with Secrets Manager secrets that contain a user's user name and password (native authentication). You then connect to the proxy using IAM authentication. Here, you do this by generating an authentication token with the proxy endpoint, not the database endpoint. You also use a user name that matches one of the user names for the secrets that you provided.
• Make sure that you use Transport Layer Security (TLS)/Secure Sockets Layer (SSL) when connecting to
a proxy using IAM authentication.
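The token-based flow above can be sketched for a MySQL engine as follows. The endpoint, user name, and Region are placeholder assumptions, and the commands are assembled in variables and printed rather than run.

```shell
# Hedged sketch: IAM authentication through the proxy.
# The token is generated against the PROXY endpoint, not the database endpoint.
proxy_endpoint='the-proxy.proxy-abcdefghijkl.us-east-2.rds.amazonaws.com'
token_cmd="aws rds generate-db-auth-token --hostname $proxy_endpoint --port 3306 --username db_user --region us-east-2"
# The returned token is then supplied as the password, over an SSL/TLS connection.
# 'TOKEN' stands in for the generated token value.
connect_cmd="mysql -h $proxy_endpoint -P 3306 -u db_user --ssl-mode=REQUIRED --enable-cleartext-plugin -p'TOKEN'"
echo "$token_cmd"
echo "$connect_cmd"
```

The --enable-cleartext-plugin option lets the MySQL client send the token as the password; TLS keeps it protected in transit.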
You can grant a specific user access to the proxy by modifying the IAM policy. An example follows.
"Resource": "arn:aws:rds-db:us-east-2:1234567890:dbuser:prx-ABCDEFGHIJKL01234/db_user"
Under some conditions, a proxy can't share a database connection and instead pins the connection from
your client application to the proxy to a dedicated database connection. For more information about
these conditions, see Avoiding pinning (p. 1228).
Considerations for connecting to a proxy with PostgreSQL
When connecting through an RDS proxy, the startup message can include the following currently recognized parameters:
• user
• database
• replication
The startup message can also include the following additional runtime parameters:
• application_name
• client_encoding
• DateStyle
• TimeZone
• extra_float_digits
For more information about PostgreSQL messaging, see the Frontend/Backend protocol in the
PostgreSQL documentation.
For PostgreSQL, if you use JDBC we recommend the following to avoid pinning:
• Set the JDBC connection parameter assumeMinServerVersion to at least 9.0 to avoid pinning.
Doing this prevents the JDBC driver from performing an extra round trip during connection startup
when it runs SET extra_float_digits = 3.
• Set the JDBC connection parameter ApplicationName to your application name to avoid pinning. Doing this prevents the JDBC driver from performing an extra round trip during connection startup when it runs SET application_name = "PostgreSQL JDBC Driver". Note that the JDBC parameter is ApplicationName but the PostgreSQL StartupMessage parameter is application_name.
• Set the JDBC connection parameter preferQueryMode to extendedForPrepared to avoid pinning. The extendedForPrepared setting ensures that the extended mode is used only for prepared statements. The default for the preferQueryMode parameter is extended, which uses the extended mode for all queries. The extended mode uses a series of Prepare, Bind, Execute, and Sync requests and corresponding responses. This type of series causes connection pinning in an RDS proxy.
For more information, see Avoiding pinning (p. 1228). For more information about connecting using
JDBC, see Connecting to the database in the PostgreSQL documentation.
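The three recommendations above can be combined in a single connection URL, sketched here. The host, database, and application name are placeholder assumptions.

```shell
# Hedged sketch: a PostgreSQL JDBC URL that sets all three pinning-avoidance parameters.
# The proxy endpoint, database name, and application name are placeholders.
jdbc_url='jdbc:postgresql://the-proxy.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com:5432/postgres?assumeMinServerVersion=9.0&ApplicationName=my-app&preferQueryMode=extendedForPrepared'
echo "$jdbc_url"
```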
Managing an RDS Proxy
Topics
• Modifying an RDS Proxy (p. 1221)
• Adding a new database user (p. 1225)
• Changing the password for a database user (p. 1226)
• Configuring connection settings (p. 1226)
• Avoiding pinning (p. 1228)
• Deleting an RDS Proxy (p. 1232)
Modifying an RDS Proxy
AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list of proxies, choose the proxy whose settings you want to modify or go to its details page.
4. For Actions, choose Modify.
5. Enter or choose the properties to modify. You can modify the following:
If you didn't find the settings listed that you want to change, use the following procedure to update the
target group for the proxy. The target group associated with a proxy controls the settings related to the
physical database connections. Each proxy has one associated target group named default, which is
created automatically along with the proxy.
You can only modify the target group from the proxy details page, not from the list on the Proxies page.
You can't change certain properties, such as the target group identifier and the database engine.
5. Choose Modify target group.
AWS CLI
To modify a proxy using the AWS CLI, use the commands modify-db-proxy, modify-db-proxy-target-
group, deregister-db-proxy-targets, and register-db-proxy-targets.
With the modify-db-proxy command, you can change properties such as the following:
To modify connection-related settings or rename the target group, use the modify-db-proxy-
target-group command. Currently, all proxies have a single target group named default. When
working with this target group, you specify the name of the proxy and default for the name of the
target group.
The following example shows how to first check the MaxIdleConnectionsPercent setting for a proxy
and then change it, using the target group.
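The two commands behind that check-and-change sequence can be sketched as follows. The proxy name the-proxy matches the example output; the commands are stored in variables and printed rather than run, so no AWS credentials are needed.

```shell
# Hedged sketch: first describe the target group, then modify MaxIdleConnectionsPercent.
# All proxies currently have a single target group named "default".
check_cmd='aws rds describe-db-proxy-target-groups --db-proxy-name the-proxy'
change_cmd='aws rds modify-db-proxy-target-group --db-proxy-name the-proxy --target-group-name default --connection-pool-config MaxIdleConnectionsPercent=75'
echo "$check_cmd"
echo "$change_cmd"
```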
{
"TargetGroups": [
{
"Status": "available",
"UpdatedDate": "2019-11-30T16:49:30.342Z",
"ConnectionPoolConfig": {
"MaxIdleConnectionsPercent": 50,
"ConnectionBorrowTimeout": 120,
"MaxConnectionsPercent": 100,
"SessionPinningFilters": []
},
"TargetGroupName": "default",
"CreatedDate": "2019-11-30T16:49:27.940Z",
"DBProxyName": "the-proxy",
"IsDefault": true
}
]
}
{
"DBProxyTargetGroup": {
"Status": "available",
"UpdatedDate": "2019-12-02T04:09:50.420Z",
"ConnectionPoolConfig": {
"MaxIdleConnectionsPercent": 75,
"ConnectionBorrowTimeout": 120,
"MaxConnectionsPercent": 100,
"SessionPinningFilters": []
},
"TargetGroupName": "default",
"CreatedDate": "2019-11-30T16:49:27.940Z",
"DBProxyName": "the-proxy",
"IsDefault": true
}
}
The following example starts with a proxy that is associated with an Aurora MySQL cluster named
cluster-56-2020-02-25-1399. The example shows how to change the proxy so that it can connect
to a different cluster named provisioned-cluster.
When you work with an RDS DB instance, you specify the --db-instance-identifiers option. When you work with an Aurora DB cluster, you specify the --db-cluster-identifiers option instead.
The following example modifies an Aurora MySQL proxy. An Aurora PostgreSQL proxy has port 5432.
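The reassociation sequence can be sketched as follows, using the cluster names from the example output that follows. The proxy name is a placeholder, and the commands are stored in variables and printed rather than run.

```shell
# Hedged sketch: inspect the current targets, deregister the old Aurora cluster,
# then register the new one. "the-proxy" is a placeholder proxy name.
list_cmd='aws rds describe-db-proxy-targets --db-proxy-name the-proxy'
dereg_cmd='aws rds deregister-db-proxy-targets --db-proxy-name the-proxy --db-cluster-identifiers cluster-56-2020-02-25-1399'
reg_cmd='aws rds register-db-proxy-targets --db-proxy-name the-proxy --db-cluster-identifiers provisioned-cluster'
echo "$list_cmd"
echo "$dereg_cmd"
echo "$reg_cmd"
```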
{
"Targets": [
{
"Endpoint": "instance-9814.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-9814"
},
{
"Endpoint": "instance-8898.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-8898"
},
{
"Endpoint": "instance-1018.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-1018"
},
{
"Type": "TRACKED_CLUSTER",
"Port": 0,
"RdsResourceId": "cluster-56-2020-02-25-1399"
},
{
"Endpoint": "instance-4330.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-4330"
}
]
}
{
"Targets": []
}
{
"DBProxyTargets": [
{
"Type": "TRACKED_CLUSTER",
"Port": 0,
"RdsResourceId": "provisioned-cluster"
},
{
"Endpoint": "gkldje.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "gkldje"
},
{
"Endpoint": "provisioned-1.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "provisioned-1"
}
]
}
RDS API
To modify a proxy using the RDS API, you use the ModifyDBProxy, ModifyDBProxyTargetGroup, DeregisterDBProxyTargets, and RegisterDBProxyTargets operations.
With ModifyDBProxyTargetGroup, you can modify connection-related settings or rename the target
group. Currently, all proxies have a single target group named default. When working with this target
group, you specify the name of the proxy and default for the name of the target group.
Adding a new database user
1. Create a new Secrets Manager secret, using the procedure described in Setting up database credentials in AWS Secrets Manager (p. 1209).
2. Update the IAM role to give RDS Proxy access to the new Secrets Manager secret. To do so, update the
resources section of the IAM role policy.
3. Modify the RDS Proxy to add the new Secrets Manager secret under Secrets Manager secrets.
4. If the new user takes the place of an existing one, update the credentials stored in the proxy's Secrets
Manager secret for the existing user.
For your PostgreSQL databases, run the following command. It grants the rdsproxyadmin user the CONNECT privilege so that the user can monitor connections on the target database.
You can also allow other target database users to perform health checks by changing rdsproxyadmin
to the database user in the command above.
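The grant can be sketched as a psql one-liner. The database endpoint and the database name postgres are assumptions; the command is stored in a variable and printed rather than run.

```shell
# Hedged sketch: grant the health-check privilege on the target database.
# The endpoint and database name are placeholders; rdsproxyadmin is the
# health-check user discussed above.
grant_cmd='psql -h your-db-endpoint.us-east-1.rds.amazonaws.com -U postgres -c "GRANT CONNECT ON DATABASE postgres TO rdsproxyadmin;"'
echo "$grant_cmd"
```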
Configuring connection settings
IdleClientTimeout
You can specify how long a client connection can be idle before the proxy closes it. The default is 1,800 seconds (30 minutes).
A client connection is considered idle when the application doesn't submit a new request within the
specified time after the previous request completed. The underlying database connection stays open
and is returned to the connection pool. Thus, it's available to be reused for new client connections. If
you want the proxy to proactively remove stale connections, consider lowering the idle client connection
timeout. If your workload establishes frequent connections with the proxy, consider raising the idle client
connection timeout to save the cost of establishing connections.
This setting is represented by the Idle client connection timeout field in the RDS console and the
IdleClientTimeout setting in the AWS CLI and the API. To learn how to change the value of the Idle
client connection timeout field in the RDS console, see AWS Management Console (p. 1221). To learn
how to change the value of the IdleClientTimeout setting, see the CLI command modify-db-proxy or
the API operation ModifyDBProxy.
MaxConnectionsPercent
You can limit the number of connections that an RDS Proxy can establish with the target database.
You specify the limit as a percentage of the maximum connections available for your database. This
setting is represented by the Connection pool maximum connections field in the RDS console and the
MaxConnectionsPercent setting in the AWS CLI and the API.
For example, for a registered database target with max_connections set to 1000, and
MaxConnectionsPercent set to 95, RDS Proxy sets 950 connections as the upper limit for concurrent
connections to that database target.
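The arithmetic from that example can be written out directly:

```shell
# The proxy's connection ceiling is max_connections scaled by MaxConnectionsPercent.
max_connections=1000
max_connections_percent=95
upper_limit=$(( max_connections * max_connections_percent / 100 ))
echo "$upper_limit"   # 950
```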
• Allow sufficient connection headroom for changes in workload pattern. We recommend setting the parameter at least 30 percent above your maximum recent monitored usage. Because RDS Proxy redistributes database connection quotas across multiple nodes, internal capacity changes might require at least 30 percent headroom for additional connections to avoid increased borrow latencies.
• RDS Proxy reserves a certain number of connections for active monitoring to support fast failover, traffic routing, and internal operations. The MaxDatabaseConnectionsAllowed metric doesn't include these reserved connections. It represents the number of connections available to serve the workload, and can be lower than the value derived from the MaxConnectionsPercent setting.
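The 30 percent headroom guideline can be sketched as a calculation. The peak and max_connections values below are made-up examples.

```shell
# Hedged sketch: derive a MaxConnectionsPercent value from a recent monitored peak.
max_connections=1000
peak_connections=600                                  # highest recent usage observed
peak_percent=$(( peak_connections * 100 / max_connections ))
recommended_percent=$(( peak_percent * 130 / 100 ))   # peak plus 30% headroom
echo "$recommended_percent"   # 78
```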
To learn how to change the value of the Connection pool maximum connections field in the
RDS console, see AWS Management Console (p. 1221). To learn how to change the value of the
MaxConnectionsPercent setting, see the CLI command modify-db-proxy-target-group or the API
operation ModifyDBProxyTargetGroup.
For information on database connection limits, see Maximum number of database connections.
MaxIdleConnectionsPercent
You can control the number of idle database connections that RDS Proxy can keep in the connection pool. RDS Proxy considers a database connection in its pool to be idle when there's been no activity on the connection for five minutes.
You specify the limit as a percentage of the maximum connections available for your database.
The default value is 50 percent of MaxConnectionsPercent, and the upper limit is the value of
MaxConnectionsPercent. With a high value, the proxy leaves a high percentage of idle database
connections open. With a low value, the proxy closes a high percentage of idle database connections. If
your workloads are unpredictable, consider setting a high value for MaxIdleConnectionsPercent.
Doing so means that RDS Proxy can accommodate surges in activity without opening a lot of new
database connections.
For information on database connection limits, see Maximum number of database connections.
ConnectionBorrowTimeout
You can choose how long RDS Proxy waits for a database connection in the connection pool to become
available for use before returning a timeout error. The default is 120 seconds. This setting applies when
the number of connections is at the maximum, and so no connections are available in the connection
pool. It also applies if no appropriate database instance is available to handle the request because, for
example, a failover operation is in process. Using this setting, you can set the best wait period for your
application without having to change the query timeout in your application code.
This setting is represented by the Connection borrow timeout field in the RDS console or the
ConnectionBorrowTimeout setting of DBProxyTargetGroup in the AWS CLI or API. To learn how
to change the value of the Connection borrow timeout field in the RDS console, see AWS Management
Console (p. 1221). To learn how to change the value of the ConnectionBorrowTimeout setting, see
the CLI command modify-db-proxy-target-group or the API operation ModifyDBProxyTargetGroup.
Avoiding pinning
Multiplexing is more efficient when database requests don't rely on state information from previous
requests. In that case, RDS Proxy can reuse a connection at the conclusion of each transaction. Examples
of such state information include most variables and configuration parameters that you can change
through SET or SELECT statements. SQL transactions on a client connection can multiplex between
underlying database connections by default.
Your connections to the proxy can enter a state known as pinning. When a connection is pinned, each
later transaction uses the same underlying database connection until the session ends. Other client
connections also can't reuse that database connection until the session ends. The session ends when the
client connection is dropped.
RDS Proxy automatically pins a client connection to a specific DB connection when it detects a session
state change that isn't appropriate for other sessions. Pinning reduces the effectiveness of connection
reuse. If all or almost all of your connections experience pinning, consider modifying your application
code or workload to reduce the conditions that cause the pinning.
For example, suppose that your application changes a session variable or configuration parameter. In this
case, later statements can rely on the new variable or parameter to be in effect. Thus, when RDS Proxy
processes requests to change session variables or configuration settings, it pins that session to the DB
connection. That way, the session state remains in effect for all later transactions in the same session.
For some database engines, this rule doesn't apply to all parameters that you can set. RDS Proxy tracks
certain statements and variables. Thus RDS Proxy doesn't pin the session when you modify them. In
this case, RDS Proxy only reuses the connection for other sessions that have the same values for those
settings. For details about what RDS Proxy tracks for a database engine, see the following:
• What RDS Proxy tracks for RDS for SQL Server databases (p. 1228)
• What RDS Proxy tracks for RDS for MariaDB and RDS for MySQL databases (p. 1229)
What RDS Proxy tracks for RDS for SQL Server databases
Following are the SQL Server statements that RDS Proxy tracks:
• USE
• SET ANSI_NULLS
• SET ANSI_PADDING
• SET ANSI_WARNINGS
• SET ARITHABORT
• SET CONCAT_NULL_YIELDS_NULL
• SET CURSOR_CLOSE_ON_COMMIT
• SET DATEFIRST
• SET DATEFORMAT
• SET LANGUAGE
• SET LOCK_TIMEOUT
• SET NUMERIC_ROUNDABORT
• SET QUOTED_IDENTIFIER
• SET TEXTSIZE
• SET TRANSACTION ISOLATION LEVEL
What RDS Proxy tracks for RDS for MariaDB and RDS for MySQL
databases
Following are the MySQL and MariaDB statements that RDS Proxy tracks:
• DROP DATABASE
• DROP SCHEMA
• USE
Following are the MySQL and MariaDB variables that RDS Proxy tracks:
• AUTOCOMMIT
• AUTO_INCREMENT_INCREMENT
• CHARACTER SET (or CHAR SET)
• CHARACTER_SET_CLIENT
• CHARACTER_SET_DATABASE
• CHARACTER_SET_FILESYSTEM
• CHARACTER_SET_CONNECTION
• CHARACTER_SET_RESULTS
• CHARACTER_SET_SERVER
• COLLATION_CONNECTION
• COLLATION_DATABASE
• COLLATION_SERVER
• INTERACTIVE_TIMEOUT
• NAMES
• NET_WRITE_TIMEOUT
• QUERY_CACHE_TYPE
• SESSION_TRACK_SCHEMA
• SQL_MODE
• TIME_ZONE
• TRANSACTION_ISOLATION (or TX_ISOLATION)
• TRANSACTION_READ_ONLY (or TX_READ_ONLY)
• WAIT_TIMEOUT
Minimizing pinning
Performance tuning for RDS Proxy involves trying to maximize transaction-level connection reuse
(multiplexing) by minimizing pinning.
In some cases, RDS Proxy can't be sure that it's safe to reuse a database connection outside of the current
session. In these cases, it keeps the session on the same connection until the session ends. This fallback
behavior is called pinning.
For example, you can define an initialization query for a proxy that sets certain configuration
parameters. Then, RDS Proxy applies those settings whenever it sets up a new connection for that
proxy. You can remove the corresponding SET statements from your application code, so that they
don't interfere with transaction-level multiplexing.
For metrics about how often pinning occurs for a proxy, see Monitoring RDS Proxy metrics with
Amazon CloudWatch (p. 1239).
• Any statement with a text size greater than 16 KB causes the proxy to pin the session.
• Prepared statements cause the proxy to pin the session. This rule applies whether the prepared
statement uses SQL text or the binary protocol.
Conditions that cause pinning for RDS for Microsoft SQL Server
For RDS for SQL Server, the following interactions also cause pinning:
• Using multiple active result sets (MARS). For information about MARS, see the SQL Server
documentation.
• Using distributed transaction coordinator (DTC) communication.
• Creating temporary tables, transactions, cursors, or prepared statements.
• Using the following SET statements:
• SET ANSI_DEFAULTS
• SET ANSI_NULL_DFLT
• SET ARITHIGNORE
• SET DEADLOCK_PRIORITY
• SET FIPS_FLAGGER
• SET FMTONLY
• SET FORCEPLAN
• SET IDENTITY_INSERT
• SET NOCOUNT
• SET NOEXEC
• SET OFFSETS
• SET PARSEONLY
• SET QUERY_GOVERNOR_COST_LIMIT
• SET REMOTE_PROC_TRANSACTIONS
• SET ROWCOUNT
• SET SHOWPLAN_ALL, SHOWPLAN_TEXT, and SHOWPLAN_XML
• SET STATISTICS
• SET XACT_ABORT
Conditions that cause pinning for RDS for MariaDB and RDS for
MySQL
For MySQL and MariaDB, the following interactions also cause pinning:
• Explicit table lock statements LOCK TABLE, LOCK TABLES, or FLUSH TABLES WITH READ LOCK
cause the proxy to pin the session.
• Creating named locks by using GET_LOCK causes the proxy to pin the session.
• Setting a user variable or a system variable (with some exceptions) causes the proxy to pin the session.
If this situation reduces your connection reuse too much, you can choose for SET operations not to
cause pinning. For information about how to do so by setting the session pinning filters property, see
Creating an RDS Proxy (p. 1212) and Modifying an RDS Proxy (p. 1221).
• RDS Proxy does not pin connections when you use SET LOCAL.
• Creating a temporary table causes the proxy to pin the session. That way, the contents of the
temporary table are preserved throughout the session regardless of transaction boundaries.
• Calling the functions ROW_COUNT, FOUND_ROWS, and LAST_INSERT_ID sometimes causes pinning.
Calling stored procedures and stored functions doesn't cause pinning. RDS Proxy doesn't detect any
session state changes resulting from such calls. Therefore, make sure that your application doesn't
change session state inside stored routines and rely on that session state to persist across transactions.
For example, if a stored procedure creates a temporary table that is intended to persist across
transactions, that application currently isn't compatible with RDS Proxy.
If you have expert knowledge about your application behavior, you can skip the pinning behavior for
certain application statements. To do so, choose the Session pinning filters option when creating the
proxy. Currently, you can opt out of session pinning for setting session variables and configuration
settings.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Choose the proxy to delete from the list.
4. Choose Delete Proxy.
AWS CLI
To delete a DB proxy, use the AWS CLI command delete-db-proxy. To remove related associations, also
use the deregister-db-proxy-targets command.
RDS API
To delete a DB proxy, call the Amazon RDS API function DeleteDBProxy. To delete related items and
associations, you also call the functions DeleteDBProxyTargetGroup and DeregisterDBProxyTargets.
Working with Amazon RDS Proxy endpoints
• You can use multiple endpoints with a proxy to monitor and troubleshoot connections from different
applications independently.
• You can use reader endpoints with Aurora DB clusters to improve read scalability and high availability
for your query-intensive applications.
• You can use a cross-VPC endpoint to allow access to databases in one VPC from resources such as
Amazon EC2 instances in a different VPC.
Topics
• Overview of proxy endpoints (p. 1233)
• Reader endpoints (p. 1233)
• Accessing Aurora and RDS databases across VPCs (p. 1233)
• Creating a proxy endpoint (p. 1234)
Overview of proxy endpoints
By default, the endpoint that you connect to when you use RDS Proxy with an Aurora cluster has read/write capability. As a result, this endpoint sends all requests to the writer instance of the cluster. All of
those connections count against the max_connections value for the writer instance. If your proxy is
associated with an Aurora DB cluster, you can create additional read/write or read-only endpoints for
that proxy.
You can use a read-only endpoint with your proxy for read-only queries. You do this the same way that
you use the reader endpoint for an Aurora provisioned cluster. Doing so helps you to take advantage
of the read scalability of an Aurora cluster with one or more reader DB instances. You can run more
simultaneous queries and make more simultaneous connections by using a read-only endpoint and
adding more reader DB instances to your Aurora cluster as needed.
For a proxy endpoint that you create, you can also associate the endpoint with a different virtual private
cloud (VPC) than the proxy itself uses. By doing so, you can connect to the proxy from a different VPC,
for example a VPC used by a different application within your organization.
For information about limits associated with proxy endpoints, see Limitations for proxy
endpoints (p. 1239).
In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name you specified for a user-defined endpoint. Or it can be the special name default for
read/write requests using the default endpoint of a proxy.
Each proxy endpoint has its own set of CloudWatch metrics. You can monitor the metrics for all
endpoints of a proxy. You can also monitor metrics for a specific endpoint, or for all the read/write or
read-only endpoints of a proxy. For more information, see Monitoring RDS Proxy metrics with Amazon
CloudWatch (p. 1239).
A proxy endpoint uses the same authentication mechanism as its associated proxy. RDS Proxy
automatically sets up permissions and authorizations for the user-defined endpoint, consistent with the
properties of the associated proxy.
Reader endpoints
With RDS Proxy, you can create and use reader endpoints. However, these endpoints only work for
proxies associated with Aurora DB clusters. You might see references to reader endpoints in the AWS
Management Console. If you use the RDS CLI or API, you might see the TargetRole attribute with a
value of READ_ONLY. You can take advantage of these features by changing the target of a proxy from
an RDS DB instance to an Aurora DB cluster. To learn about reader endpoints, see Managing connections
with Amazon RDS Proxy in the Aurora User Guide.
Accessing Aurora and RDS databases across VPCs
For example, suppose that an application running on an Amazon EC2 instance connects to an RDS DB instance or an Aurora DB cluster. In this case, the application server and database must both be within the same VPC.
With RDS Proxy, you can set up access to an Aurora cluster or RDS instance in one VPC from resources
such as EC2 instances in another VPC. For example, your organization might have multiple applications
that access the same database resources. Each application might be in its own VPC.
To enable cross-VPC access, you create a new endpoint for the proxy. If you aren't familiar with creating
proxy endpoints, see Working with Amazon RDS Proxy endpoints (p. 1232) for details. The proxy itself
resides in the same VPC as the Aurora DB cluster or RDS instance. However, the cross-VPC endpoint
resides in the other VPC, along with the other resources such as the EC2 instances. The cross-VPC
endpoint is associated with subnets and security groups from the same VPC as the EC2 and other
resources. These associations let you connect to the endpoint from the applications that otherwise can't
access the database due to the VPC restrictions.
The following steps explain how to create and access a cross-VPC endpoint through RDS Proxy:
1. Create two VPCs, or choose two VPCs that you already use for Aurora and RDS work. Each VPC should
have its own associated network resources such as an Internet gateway, route tables, subnets, and
security groups. If you only have one VPC, you can consult Getting started with Amazon RDS (p. 180)
for the steps to set up another VPC to use RDS successfully. You can also examine your existing VPC in
the Amazon EC2 console to see what kinds of resources to connect together.
2. Create a DB proxy associated with the Aurora DB cluster or RDS instance that you want to connect to.
Follow the procedure in Creating an RDS Proxy (p. 1212).
3. On the Details page for your proxy in the RDS console, under the Proxy endpoints section, choose
Create endpoint. Follow the procedure in Creating a proxy endpoint (p. 1234).
4. Choose whether to make the cross-VPC endpoint read/write or read-only.
5. Instead of accepting the default of the same VPC as the Aurora DB cluster or RDS instance, choose a
different VPC. This VPC must be in the same AWS Region as the VPC where the proxy resides.
6. Now instead of accepting the defaults for subnets and security groups from the same VPC as the
Aurora DB cluster or RDS instance, make new selections. Make these based on the subnets and
security groups from the VPC that you chose.
7. You don't need to change any of the settings for the Secrets Manager secrets. The same credentials
work for all endpoints for your proxy, regardless of which VPC each endpoint is in.
8. Wait for the new endpoint to reach the Available state.
9. Make a note of the full endpoint name. This is the value ending in
Region_name.rds.amazonaws.com that you supply as part of the connection string for your
database application.
10. Access the new endpoint from a resource in the same VPC as the endpoint. A simple way to test this
process is to create a new EC2 instance in this VPC. Then you can log into the EC2 instance and run the
mysql or psql commands to connect by using the endpoint value in your connection string.
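Step 10 can be sketched as follows. The endpoint value is a placeholder for the name you noted in step 9, and the client commands are assembled in variables and printed rather than run.

```shell
# Hedged sketch: connecting from an EC2 instance in the same VPC as the cross-VPC endpoint.
# The endpoint and user names are placeholders.
cross_vpc_endpoint='my-endpoint.endpoint.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com'
mysql_cmd="mysql -h $cross_vpc_endpoint -u admin -p"
psql_cmd="psql -h $cross_vpc_endpoint -U postgres -d postgres"
echo "$mysql_cmd"
echo "$psql_cmd"
```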
Creating a proxy endpoint
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Click the name of the proxy that you want to create a new endpoint for.
Connections that use a read/write endpoint can perform any kind of operation: data definition
language (DDL) statements, data manipulation language (DML) statements, and queries. These
endpoints always connect to the primary instance of the Aurora cluster. You can use read/write
endpoints for general database operations when you only use a single endpoint in your application.
You can also use read/write endpoints for administrative operations, online transaction processing
(OLTP) applications, and extract-transform-load (ETL) jobs.
Connections that use a read-only endpoint can only perform queries. When there are multiple
reader instances in the Aurora cluster, RDS Proxy can use a different reader instance for each
connection to the endpoint. That way, a query-intensive application can take advantage of Aurora's
clustering capability. You can add more query capacity to the cluster by adding more reader DB
instances. These read-only connections don't impose any overhead on the primary instance of the
cluster. That way, your reporting and analysis queries don't slow down the write operations of your
OLTP applications.
7. For Virtual Private Cloud (VPC), choose the default to access the endpoint from the same EC2
instances or other resources where you normally access the proxy or its associated database. To set
up cross-VPC access for this proxy, choose a VPC other than the default. For more information about
cross-VPC access, see Accessing Aurora and RDS databases across VPCs (p. 1233).
8. For Subnets, RDS Proxy fills in the same subnets as the associated proxy by default. To restrict
access to the endpoint so only a portion of the VPC's address range can connect to it, remove one or
more subnets.
9. For VPC security group, you can choose an existing security group or create a new one. RDS Proxy
fills in the same security group or groups as the associated proxy by default. If the inbound and
outbound rules for the proxy are appropriate for this endpoint, you can leave the default choice.
If you choose to create a new security group, specify a name for the security group on this page.
Then edit the security group settings from the EC2 console afterward.
10. Choose Create proxy endpoint.
AWS CLI
To create a proxy endpoint, use the AWS CLI create-db-proxy-endpoint command with the following
required parameters:
• --db-proxy-name value
• --db-proxy-endpoint-name value
• --vpc-subnet-ids list_of_ids. Separate the subnet IDs with spaces. You don't specify the ID of
the VPC itself.
• --vpc-security-group-ids value. Separate the security group IDs with spaces. If you omit this
parameter, RDS Proxy uses the default security group for the VPC. RDS Proxy determines the VPC
based on the subnet IDs that you specify for the --vpc-subnet-ids parameter.
Example
For Windows:
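As a sketch, the Linux form of the command might look like the following (the proxy name, endpoint name, and subnet IDs are placeholder values; running it requires a live AWS account):

```shell
aws rds create-db-proxy-endpoint \
    --db-proxy-name my-proxy \
    --db-proxy-endpoint-name my-endpoint \
    --vpc-subnet-ids subnet-1234567890abcdef0 subnet-0987654321fedcba0
```

On Windows, replace the trailing backslashes with carets (^).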
RDS API
To create a proxy endpoint, use the RDS API CreateDBProxyEndpoint action.
Viewing proxy endpoints
Console
To view the details for a proxy endpoint
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list, choose the proxy whose endpoint you want to view. Click the proxy name to view its
details page.
4. In the Proxy endpoints section, choose the endpoint that you want to view. Click its name to view
the details page.
5. Examine the properties whose values you're interested in, such as the endpoint's associated VPC,
its subnets and security groups, and whether it's a read/write or read-only endpoint.
AWS CLI
To view one or more DB proxy endpoints, use the AWS CLI describe-db-proxy-endpoints command. You
can optionally filter the results with the following parameters:
• --db-proxy-endpoint-name
• --db-proxy-name
Example
For Linux, macOS, or Unix:
For Windows:
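A sketch of the Linux form of the command (the proxy name is a placeholder; running it requires a live AWS account):

```shell
aws rds describe-db-proxy-endpoints \
    --db-proxy-name my-proxy
```

Omit --db-proxy-name to list the endpoints for all of your proxies.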
RDS API
To describe one or more proxy endpoints, use the RDS API DescribeDBProxyEndpoints operation.
Modifying a proxy endpoint
Console
To modify a proxy endpoint
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list, choose the proxy whose endpoint you want to modify. Click the proxy name to view its
details page.
4. In the Proxy endpoints section, choose the endpoint that you want to modify. You can select it in
the list, or click its name to view the details page.
5. On the proxy details page, under the Proxy endpoints section, choose Edit. Or on the proxy
endpoint details page, for Actions, choose Edit.
6. Change the values of the parameters that you want to modify.
7. Choose Save changes.
AWS CLI
To modify a DB proxy endpoint, use the AWS CLI modify-db-proxy-endpoint command with the
following required parameters:
• --db-proxy-endpoint-name
Specify changes to the endpoint properties by using one or more of the following parameters:
• --new-db-proxy-endpoint-name
• --vpc-security-group-ids. Separate the security group IDs with spaces.
Example
For Linux, macOS, or Unix:
For Windows:
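A sketch of the Linux form of the command (the endpoint name and security group IDs are placeholders; running it requires a live AWS account):

```shell
aws rds modify-db-proxy-endpoint \
    --db-proxy-endpoint-name my-endpoint \
    --vpc-security-group-ids sg-1234567890abcdef0 sg-0987654321fedcba0
```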
RDS API
To modify a proxy endpoint, use the RDS API ModifyDBProxyEndpoint operation.
Deleting a proxy endpoint
Console
To delete a proxy endpoint using the AWS Management Console
AWS CLI
To delete a proxy endpoint, run the delete-db-proxy-endpoint command with the following required
parameters:
• --db-proxy-endpoint-name
For Windows:
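A sketch of the Linux form of the command (the endpoint name is a placeholder; running it requires a live AWS account):

```shell
aws rds delete-db-proxy-endpoint \
    --db-proxy-endpoint-name my-endpoint
```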
RDS API
To delete a proxy endpoint with the RDS API, run the DeleteDBProxyEndpoint operation. Specify the
name of the proxy endpoint for the DBProxyEndpointName parameter.
Limitations for proxy endpoints
The maximum number of user-defined endpoints for a proxy is 20. Thus, a proxy can have up to 21
endpoints: the default endpoint, plus 20 that you create.
When you associate additional endpoints with a proxy, RDS Proxy automatically determines which DB
instances in your cluster to use for each endpoint. You can't choose specific instances the way that you
can with Aurora custom endpoints.
In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name you specified for a user-defined endpoint, or the special name default for read/write
requests using the default endpoint of a proxy.
Each proxy endpoint has its own CloudWatch metrics. You can monitor the usage of each proxy endpoint
independently. For more information about proxy endpoints, see Working with Amazon RDS Proxy
endpoints (p. 1232).
You can aggregate the values for each metric using one of the following dimension sets. For example,
by using the ProxyName dimension set, you can analyze all the traffic for a particular proxy. By using
the other dimension sets, you can split the metrics in different ways. You can split the metrics based on
the different endpoints or target databases of each proxy, or the read/write and read-only traffic to each
database.
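For example, assuming the proxy metrics are published under the AWS/RDS CloudWatch namespace with a ProxyName dimension (as the dimension sets above suggest), a query for one proxy's traffic might look like the following (the proxy name and time range are illustrative; running it requires a live AWS account):

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name DatabaseConnectionsSetupSucceeded \
    --dimensions Name=ProxyName,Value=my-proxy \
    --statistics Sum \
    --period 60 \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T01:00:00Z
```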
Monitoring RDS Proxy with CloudWatch
RDS Proxy publishes the following CloudWatch metrics. For each metric, the valid reporting period
and the applicable dimension sets are listed.
ClientConnectionsSetupFailedAuth
    The number of client connection attempts that failed due to misconfigured authentication or
    TLS. The most useful statistic for this metric is Sum.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239)
ClientConnectionsSetupSucceeded
    The number of client connections successfully established with any authentication mechanism,
    with or without TLS. The most useful statistic for this metric is Sum.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239)
DatabaseConnectionsBorrowLatency
    The time in microseconds that it takes for the proxy being monitored to get a database
    connection. The most useful statistic for this metric is Average.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239)
DatabaseConnectionsSetupFailed
    The number of database connection requests that failed. The most useful statistic for this
    metric is Sum.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240)
DatabaseConnectionsSetupSucceeded
    The number of database connections successfully established with or without TLS. The most
    useful statistic for this metric is Sum.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240)
MaxDatabaseConnectionsAllowed
    The maximum number of database connections allowed. This metric is reported every minute. The
    most useful statistic for this metric is Sum.
    Valid period: 1 minute
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240)
QueryDatabaseResponseLatency
    The time in microseconds that the database took to respond to the query. The most useful
    statistic for this metric is Average.
    Valid period: 1 minute and above
    Dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239), Dimension set 3
    (p. 1240), Dimension set 4 (p. 1240)
You can find logs of RDS Proxy activity under CloudWatch in the AWS Management Console. Each proxy
has an entry in the Log groups page.
Important
These logs are intended for human consumption for troubleshooting purposes, not for
programmatic access. The format and content of the logs are subject to change.
In particular, older logs don't contain any prefixes indicating the endpoint for each request. In
newer logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name that you specified for a user-defined endpoint, or the special name default
for requests using the default endpoint of a proxy.
Working with RDS Proxy events
For more information about working with events, see the following:
• For instructions on how to view events by using the AWS Management Console, AWS CLI, or RDS API,
see Viewing Amazon RDS events (p. 852).
• To learn how to configure Amazon RDS to send events to EventBridge, see Creating a rule that triggers
on an Amazon RDS event (p. 870).
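As a sketch, an EventBridge rule that matches RDS Proxy events could be created as follows (the rule name is a placeholder, and the pattern keys follow the source and detail-type fields of RDS events; running it requires a live AWS account):

```shell
aws events put-rule \
    --name my-rds-proxy-rule \
    --event-pattern '{"source":["aws.rds"],"detail-type":["RDS DB Proxy Event"]}'
```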
The following is an example of an RDS Proxy event in JSON format. The event shows that RDS modified
the endpoint named my-endpoint of the RDS Proxy named my-rds-proxy. The event ID is RDS-
EVENT-0207.
{
  "version": "0",
  "id": "68f6e973-1a0c-d37b-f2f2-94a7f62ffd4e",
  "detail-type": "RDS DB Proxy Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-09-27T22:36:43Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:db-proxy:my-rds-proxy"
  ],
  "detail": {
    "EventCategories": [
      "configuration change"
    ],
    "SourceType": "DB_PROXY",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:db-proxy:my-rds-proxy",
    "Date": "2018-09-27T22:36:43.292Z",
    "Message": "RDS modified endpoint my-endpoint of DB Proxy my-rds-proxy.",
    "SourceIdentifier": "my-endpoint",
    "EventID": "RDS-EVENT-0207"
  }
}
RDS Proxy examples
This MySQL example demonstrates how open connections continue working during a failover, such
as when you reboot a database or it becomes unavailable because of a problem. This example
uses a proxy named the-proxy and an Aurora DB cluster with DB instances instance-8898 and
instance-9814. When you run the failover-db-cluster command from the Linux command line,
the writer instance that the proxy is connected to changes to a different DB instance. You can see that
the DB instance associated with the proxy changes while the connection remains open.
mysql>
[1]+  Stopped    mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p
mysql>
[1]+  Stopped    mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p
+---------------+---------------+
| Variable_name | Value |
+---------------+---------------+
| hostname | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)
This example demonstrates how you can adjust the max_connections setting for an Aurora MySQL
DB cluster. To do so, you create your own DB cluster parameter group based on the default parameter
settings for clusters that are compatible with MySQL 5.7. You specify a value for the max_connections
setting, overriding the formula that sets the default value. You associate the DB cluster parameter group
with your DB cluster.
export REGION=us-east-1
export CLUSTER_PARAM_GROUP=rds-proxy-mysql-57-max-connections-demo
export CLUSTER_NAME=rds-proxy-mysql-57
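Using the environment variables set above, the steps can be sketched as follows (the parameter group family and the max_connections value of 4000 are illustrative choices; running these commands requires a live AWS account):

```shell
# Create a cluster parameter group based on the Aurora MySQL 5.7 defaults.
aws rds create-db-cluster-parameter-group --region $REGION \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
    --db-parameter-group-family aurora-mysql5.7 \
    --description "Aurora MySQL 5.7 group with increased max_connections"

# Override the default max_connections formula with a fixed value.
aws rds modify-db-cluster-parameter-group --region $REGION \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
    --parameters "ParameterName=max_connections,ParameterValue=4000,ApplyMethod=immediate"

# Associate the parameter group with the DB cluster.
aws rds modify-db-cluster --region $REGION \
    --db-cluster-identifier $CLUSTER_NAME \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP
```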
Troubleshooting RDS Proxy
In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This
name can be the name that you specified for a user-defined endpoint. Or it can be the special name
default for read/write requests using the default endpoint of a proxy. For more information about
proxy endpoints, see Working with Amazon RDS Proxy endpoints (p. 1232).
Topics
• Verifying connectivity for a proxy (p. 1248)
• Common issues and solutions (p. 1249)
Verifying connectivity for a proxy
Examine the proxy itself using the describe-db-proxies command. Also examine the associated target
group using the describe-db-proxy-target-groups command. Check that the details of the targets match
the RDS DB instance or Aurora DB cluster that you intend to associate with the proxy. Use commands
such as the following.
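A sketch of those commands (the proxy name is a placeholder; running them requires a live AWS account):

```shell
aws rds describe-db-proxies --db-proxy-name my-proxy
aws rds describe-db-proxy-target-groups --db-proxy-name my-proxy
```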
To confirm that the proxy can connect to the underlying database, examine the targets specified in the
target groups using the describe-db-proxy-targets command. Use a command such as the following.
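A sketch of that command (the proxy name is a placeholder; running it requires a live AWS account):

```shell
aws rds describe-db-proxy-targets --db-proxy-name my-proxy
```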
The output of the describe-db-proxy-targets command includes a TargetHealth field. You can
examine the fields State, Reason, and Description inside TargetHealth to check if the proxy can
communicate with the underlying DB instance.
• A State value of AVAILABLE indicates that the proxy can connect to the DB instance.
• A State value of UNAVAILABLE indicates a temporary or permanent connection problem. In
this case, examine the Reason and Description fields. For example, if Reason has a value of
PENDING_PROXY_CAPACITY, try connecting again after the proxy finishes its scaling operation. If
Reason has a value of UNREACHABLE, CONNECTION_FAILED, or AUTH_FAILURE, use the explanation
from the Description field to help you diagnose the issue.
• The State field might have a value of REGISTERING for a brief time before changing to AVAILABLE
or UNAVAILABLE.
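To pull out just the health fields described above, you can add a --query filter, for example (the proxy name is a placeholder; running it requires a live AWS account):

```shell
aws rds describe-db-proxy-targets --db-proxy-name my-proxy \
    --query 'Targets[*].TargetHealth.[State,Reason,Description]'
```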
If the following Netcat command (nc) is successful, you can access the proxy endpoint from the EC2
instance or other system where you're logged in. This command reports failure if you're not in the same
VPC as the proxy and the associated database. You might be able to log directly in to the database
without being in the same VPC. However, you can't log into the proxy unless you're in the same VPC.
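A sketch of such a check, using the example proxy endpoint from earlier in this section (substitute your own endpoint, and the port for your engine: 3306 for MySQL or MariaDB, 5432 for PostgreSQL):

```shell
nc -zv the-proxy.proxy-demo.us-east-1.rds.amazonaws.com 3306
```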
You can use the following commands to make sure that your EC2 instance has the required properties. In
particular, the VPC for the EC2 instance must be the same as the VPC for the RDS DB instance or Aurora
DB cluster that the proxy connects to.
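For example, the following query shows the VPC, subnet, and security groups for an EC2 instance (the instance ID is a placeholder; running it requires a live AWS account):

```shell
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
    --query 'Reservations[*].Instances[*].{VpcId:VpcId,SubnetId:SubnetId,SecurityGroups:SecurityGroups}'
```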
Common issues and solutions
Make sure that the SecretString field displayed by get-secret-value is encoded as a JSON
string that includes username and password fields. The following example shows the format of the
SecretString field.
{
  "ARN": "some_arn",
  "Name": "some_name",
  "VersionId": "some_version_id",
  "SecretString": '{"username":"some_username","password":"some_password"}',
  "VersionStages": [ "some_stage" ],
  "CreatedDate": some_timestamp
}
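To spot-check that a secret value has the expected shape, you can pipe it through a short script. A minimal sketch (the secret value here is illustrative; in practice, feed it the SecretString returned by get-secret-value):

```shell
SECRET_STRING='{"username":"some_username","password":"some_password"}'
echo "$SECRET_STRING" | python3 -c '
import json, sys
try:
    fields = json.load(sys.stdin)
except ValueError:
    sys.exit("SecretString is not valid JSON")
missing = [k for k in ("username", "password") if k not in fields]
print("ok" if not missing else "missing fields: " + ", ".join(missing))
'
```

If the script prints anything other than ok, update the secret so that it stores a JSON object with username and password fields.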
If client authentication to the proxy fails, also confirm the following:
• There are credentials registered for the user to access the proxy.
• The IAM role to access the proxy secret from Secrets Manager is valid.
• The DB proxy is using a supported authentication method.
You might encounter the following RDS events while creating or connecting to a DB proxy.
You might encounter the following issues while creating a new proxy or connecting to a proxy.
403: The security token included in the request is invalid
    Select an existing IAM role instead of choosing to create a new one.
You might encounter the following issues while connecting to a MySQL proxy.
ERROR 1040 (HY000): Connections rate limit exceeded (limit_value)
    The rate of connection requests from the client to the proxy has exceeded the limit.
ERROR 1040 (HY000): IAM authentication rate limit exceeded
    The number of simultaneous requests with IAM authentication from the client to the proxy has
    exceeded the limit.
ERROR 1040 (HY000): Number simultaneous connections exceeded (limit_value)
    The number of simultaneous connection requests from the client to the proxy exceeded the limit.
ERROR 1231 (42000): Variable ''character_set_client'' can't be set to the value of value
    The value set for the character_set_client parameter is not valid. For example, the value ucs2
    is not valid because it can crash the MySQL server.
ERROR 3159 (HY000): This RDS Proxy requires TLS connections.
    You enabled the setting Require Transport Layer Security in the proxy, but your connection
    included the parameter ssl-mode=DISABLED in the MySQL client. Do either of the following:
    • Disable the setting Require Transport Layer Security for the proxy.
    • Connect to the database using the minimum setting of ssl-mode=REQUIRED in the MySQL client.
ERROR 2026 (HY000): SSL connection error:
    The TLS handshake to the proxy failed. Some possible reasons include the following:
    • SSL is required but the server doesn't support it.
ERROR 9501 (HY000): Timed-out waiting to acquire database connection
    The proxy timed out waiting to acquire a database connection. Some possible reasons include
    the following:
    • The proxy is unable to establish a database connection because the maximum number of
      connections has been reached.
    • The proxy is unable to establish a database connection because the database is unavailable.
You might encounter the following issues while connecting to a PostgreSQL proxy.
IAM authentication is allowed only with SSL connections.
    Cause: The user tried to connect to the database using IAM authentication with the setting
    sslmode=disable in the PostgreSQL client.
    Solution: The user needs to connect to the database using the minimum setting of
    sslmode=require in the PostgreSQL client. For more information, see the PostgreSQL SSL support
    documentation.
This RDS Proxy requires TLS connections.
    Cause: The user enabled the option Require Transport Layer Security but tried to connect with
    sslmode=disable in the PostgreSQL client.
    Solution: To fix this error, do one of the following:
    • Disable the proxy's Require Transport Layer Security option.
    • Connect to the database using the minimum setting of sslmode=allow in the PostgreSQL client.
IAM authentication failed for user user_name. Check the IAM token for this user and try again.
    Cause: This error might be due to the following reasons:
    • The client supplied the incorrect IAM user name.
    • The client supplied an incorrect IAM authorization token for the user.
    • The client is using an IAM policy that does not have the necessary permissions.
    • The client supplied an expired IAM authorization token for the user.
    Solution: To fix this error, do the following:
    1. Confirm that the provided IAM user exists.
    2. Confirm that the IAM authorization token belongs to the provided IAM user.
    3. Confirm that the IAM policy has adequate permissions for RDS.
    4. Check the validity of the IAM authorization token used.
This RDS proxy has no credentials for the role role_name. Check the credentials for this role and
try again.
    Cause: There is no Secrets Manager secret for this role.
    Solution: Add a Secrets Manager secret for this role. For more information, see Setting up AWS
    Identity and Access Management (IAM) policies (p. 1210).
RDS supports only IAM, MD5, or SCRAM authentication.
    Cause: The database client being used to connect to the proxy is using an authentication
    mechanism not currently supported by the proxy.
    Solution: If you're not using IAM authentication, use MD5 or SCRAM password authentication.
A user name is missing from the connection startup packet. Provide a user name for this connection.
    Cause: The database client being used to connect to the proxy isn't sending a user name when
    trying to establish a connection.
    Solution: Make sure to define a user name when setting up a connection to the proxy using the
    PostgreSQL client of your choice.
Feature not supported: RDS Proxy supports only version 3.0 of the PostgreSQL messaging protocol.
    Cause: The PostgreSQL client used to connect to the proxy uses a protocol older than 3.0.
    Solution: Use a newer PostgreSQL client that supports the 3.0 messaging protocol. If you're
    using the PostgreSQL psql CLI, use a version greater than or equal to 7.4.
Feature not supported: RDS Proxy currently doesn't support streaming replication mode.
    Cause: The PostgreSQL client used to connect to the proxy is trying to use the streaming
    replication mode, which isn't currently supported by RDS Proxy.
    Solution: Turn off the streaming replication mode in the PostgreSQL client being used to
    connect.
Feature not supported: RDS Proxy currently doesn't support the option option_name.
    Cause: Through the startup message, the PostgreSQL client used to connect to the proxy is
    requesting an option that isn't currently supported by RDS Proxy.
    Solution: Turn off the unsupported option in the PostgreSQL client being used to connect.
The IAM authentication failed because of too many competing requests.
    Cause: The number of simultaneous requests with IAM authentication from the client to the
    proxy has exceeded the limit.
    Solution: Reduce the rate at which connections using IAM authentication are established from a
    PostgreSQL client.
The maximum number of client connections to the proxy exceeded number_value.
    Cause: The number of simultaneous connection requests from the client to the proxy exceeded
    the limit.
    Solution: Reduce the number of active connections from PostgreSQL clients to this RDS proxy.
Rate of connection to proxy exceeded number_value.
    Cause: The rate of connection requests from the client to the proxy has exceeded the limit.
    Solution: Reduce the rate at which connections from a PostgreSQL client are established.
The password that was provided for the role role_name is wrong.
    Cause: The password for this role doesn't match the Secrets Manager secret.
    Solution: Check the secret for this role in Secrets Manager to see if the password is the same
    as what's being used in your PostgreSQL client.
IAM is allowed only with SSL connections.
    Cause: A client tried to connect using IAM authentication, but SSL wasn't enabled.
    Solution: Enable SSL in the PostgreSQL client.
Timed-out waiting to acquire database connection.
    Cause: The proxy timed out waiting to acquire a database connection. Some possible reasons
    include the following:
    • The proxy can't establish a database connection because the maximum number of connections
      has been reached.
    • The proxy can't establish a database connection because the database is unavailable.
    Solution: Possible solutions are the following:
    • Check the status of the target RDS DB instance or Aurora DB cluster to see if it's
      unavailable.
    • Check if there are long-running transactions or queries being executed. They can use
      database connections from the connection pool for a long time.
Using RDS Proxy with AWS CloudFormation
The following listing shows a sample AWS CloudFormation template for RDS Proxy.
Resources:
  DBProxy:
    Type: AWS::RDS::DBProxy
    Properties:
      DBProxyName: CanaryProxy
      EngineFamily: MYSQL
      RoleArn:
        Fn::ImportValue: SecretReaderRoleArn
      Auth:
        - {AuthScheme: SECRETS, SecretArn: !ImportValue ProxySecret, IAMAuth: DISABLED}
      VpcSubnetIds:
        Fn::Split: [",", "Fn::ImportValue": SubnetIds]
  ProxyTargetGroup:
    Type: AWS::RDS::DBProxyTargetGroup
    Properties:
      DBProxyName: CanaryProxy
      TargetGroupName: default
      DBInstanceIdentifiers:
        - Fn::ImportValue: DBInstanceName
    DependsOn: DBProxy
For more information about the resources in this sample, see DBProxy and DBProxyTargetGroup.
For more information about the Amazon RDS and Aurora resources that you can create using AWS
CloudFormation, see RDS resource type reference.
MariaDB on Amazon RDS
Amazon RDS supports DB instances that run the following versions of MariaDB:
• MariaDB 10.11
• MariaDB 10.6
• MariaDB 10.5
• MariaDB 10.4
• MariaDB 10.3 (RDS end of standard support scheduled for October 23, 2023)
For more information about minor version support, see MariaDB on Amazon RDS versions (p. 1265).
To create a MariaDB DB instance, use the Amazon RDS management tools or interfaces. You can then
use the Amazon RDS tools to perform management actions for the DB instance.
To store and access the data in your DB instance, use standard MariaDB utilities and applications.
MariaDB is available in all of the AWS Regions. For more information about AWS Regions, see Regions,
Availability Zones, and Local Zones (p. 110).
You can use Amazon RDS for MariaDB databases to build HIPAA-compliant applications. You can store
healthcare-related information, including protected health information (PHI), under a Business Associate
Agreement (BAA) with AWS. For more information, see HIPAA compliance. AWS Services in Scope have
been fully assessed by a third-party auditor and result in a certification, attestation of compliance, or
Authority to Operate (ATO). For more information, see AWS services in scope by compliance program.
Before creating a DB instance, complete the steps in Setting up for Amazon RDS (p. 174). When you
create a DB instance, the RDS master user gets DBA privileges, with some limitations. Use this account
for administrative tasks such as creating additional database accounts.
RDS for MariaDB supports the following:
• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups
You can use DB instances running MariaDB inside a virtual private cloud (VPC) based on Amazon VPC.
You can also add features to your MariaDB DB instance by enabling various options. Amazon RDS
supports Multi-AZ deployments for MariaDB as a high-availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced
privileges. You can access your database using standard SQL clients such as the mysql client.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).
Topics
• MariaDB feature support on Amazon RDS (p. 1256)
• MariaDB on Amazon RDS versions (p. 1265)
• Connecting to a DB instance running the MariaDB database engine (p. 1269)
• Securing MariaDB DB instance connections (p. 1274)
• Improving query performance for RDS for MariaDB with Amazon RDS Optimized Reads (p. 1281)
• Improving write performance with Amazon RDS Optimized Writes for MariaDB (p. 1284)
• Upgrading the MariaDB DB engine (p. 1289)
• Importing data into a MariaDB DB instance (p. 1296)
• Working with MariaDB replication in Amazon RDS (p. 1318)
• Options for MariaDB database engine (p. 1334)
• Parameters for MariaDB (p. 1338)
• Migrating data from a MySQL DB snapshot to a MariaDB DB instance (p. 1341)
• MariaDB on Amazon RDS SQL reference (p. 1344)
• Local time zone for MariaDB DB instances (p. 1349)
• Known issues and limitations for RDS for MariaDB (p. 1352)
MariaDB feature support on Amazon RDS
You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as MariaDB 2023.
Note
The following lists are not exhaustive.
Topics
• MariaDB feature support on Amazon RDS for MariaDB major versions (p. 1256)
• Supported storage engines for MariaDB on Amazon RDS (p. 1261)
• Cache warming for MariaDB on Amazon RDS (p. 1262)
• MariaDB features not supported by Amazon RDS (p. 1263)
Topics
• MariaDB 10.11 support on Amazon RDS (p. 1257)
• MariaDB 10.6 support on Amazon RDS (p. 1258)
• MariaDB 10.5 support on Amazon RDS (p. 1259)
• MariaDB 10.4 support on Amazon RDS (p. 1260)
For information about supported minor versions of Amazon RDS for MariaDB, see MariaDB on Amazon
RDS versions (p. 1265).
MariaDB 10.11 support on Amazon RDS
• Password Reuse Check plugin – You can use the MariaDB Password Reuse Check plugin to prevent
users from reusing passwords and to set the retention period of passwords. For more information, see
Password Reuse Check Plugin.
• GRANT TO PUBLIC authorization – You can grant privileges to all users who have access to your
server. For more information, see GRANT TO PUBLIC.
• Separation of SUPER and READ ONLY ADMIN privileges – You can remove READ ONLY ADMIN
privileges from all users, even users that previously had SUPER privileges.
• Security – You can now set option --ssl as the default for your MariaDB client. MariaDB no longer
silently disables SSL if the configuration is incorrect.
• SQL commands and functions – You can now use the SHOW ANALYZE FORMAT=JSON command and
the functions ROW_NUMBER, SFORMAT, and RANDOM_BYTES. SFORMAT allows string formatting and
is enabled by default. You can convert partition to table and table to partition in a single command.
There are also several improvements around JSON_*() functions. DES_ENCRYPT and DES_DECRYPT
functions were deprecated for version 10.10 and higher. For more information, see SFORMAT.
• InnoDB enhancements – These enhancements include the following items:
• Performance improvements in the redo log to reduce write amplification and to improve
concurrency.
• The ability for you to change the undo tablespace without reinitializing the data directory.
This enhancement reduces control plane overhead. It requires restarting but it doesn't require
reinitialization after changing undo tablespace.
• Support for CHECK TABLE … EXTENDED and for descending indexes internally.
• Improvements to bulk insert.
• Binlog changes – These changes include the following items:
• Logging ALTER in two phases to decrease replication latency. The binlog_alter_two_phase
parameter is disabled by default, but can be enabled through parameter groups.
• Logging explicit_defaults_for_timestamp.
• No longer logging INCIDENT_EVENT if the transaction can be safely rolled back.
• Replication improvements – MariaDB version 10.11 DB instances use GTID replication by default if the
master supports it. Also, Seconds_Behind_Master is more precise.
• Clients – You can use new command-line options for mysqlbinlog and mariadb-dump. You can use
mariadb-dump to dump and restore historical data.
• System versioning – You can modify history. MariaDB automatically creates new partitions.
• Atomic DDL – CREATE OR REPLACE is now atomic. Either the statement succeeds or it's completely
reversed.
• Redo log write – Redo log writes asynchronously.
• Stored functions – Stored functions now support the same IN, OUT, and INOUT parameters as in
stored procedures.
• Deprecated or removed parameters – The following parameters have been deprecated or removed for
MariaDB version 10.11 DB instances:
• innodb_change_buffering
• innodb_disallow_writes
• innodb_log_write_ahead_size
• innodb_prefix_index_cluster_optimization
• keep_files_on_create
• old
• Dynamic parameters – The following parameters are now dynamic for MariaDB version 10.11 DB
instances:
• innodb_log_file_size
• innodb_write_io_threads
• innodb_read_io_threads
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.11 DB instances:
• The default value of the explicit_defaults_for_timestamp parameter changed from OFF to ON.
• The default value of the optimizer_prune_level parameter changed from 1 to 2.
• New valid values for parameters – The following parameters have new valid values for MariaDB
version 10.11 DB instances:
• The valid values for the old parameter were merged into those for the old_mode parameter.
• The valid values for the histogram_type parameter now include JSON_HB.
• The valid value range for the innodb_log_buffer_size parameter is now 262144 to 4294967295
(256KB to 4096MB).
• The valid value range for the innodb_log_file_size parameter is now 4 MB to 512 GB.
• The valid values for the optimizer_prune_level parameter now include 2.
• New parameters – The following parameters are new for MariaDB version 10.11 DB instances:
• The binlog_alter_two_phase parameter can improve replication performance.
• The log_slow_min_examined_row_limit parameter can improve performance.
• The log_slow_query parameter and the log_slow_query_file parameter are aliases for
slow_query_log and slow_query_log_file, respectively.
• optimizer_extra_pruning_depth
• system_versioning_insert_history
For a list of all features and documentation, see the following information on the MariaDB website.
• MariaDB 10.7 – Changes and improvements in MariaDB 10.7; Release notes - MariaDB 10.7 series
• MariaDB 10.8 – Changes and improvements in MariaDB 10.8; Release notes - MariaDB 10.8 series
• MariaDB 10.9 – Changes and improvements in MariaDB 10.9; Release notes - MariaDB 10.9 series
• MariaDB 10.10 – Changes and improvements in MariaDB 10.10; Release notes - MariaDB 10.10 series
• MariaDB 10.11 – Changes and improvements in MariaDB 10.11; Release notes - MariaDB 10.11 series
For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).
Amazon Relational Database Service User Guide
MariaDB major versions
• MyRocks storage engine – You can use the MyRocks storage engine with RDS for MariaDB to
optimize storage consumption of your write-intensive, high-performance web applications. For more
information, see Supported storage engines for MariaDB on Amazon RDS (p. 1261) and MyRocks.
• AWS Identity and Access Management (IAM) DB authentication – You can use IAM DB authentication
for better security and central management of connections to your MariaDB DB instances. For more
information, see IAM database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).
• Upgrade options – You can now upgrade to RDS for MariaDB version 10.6 from any prior major release
(10.3, 10.4, 10.5). You can also restore a snapshot of an existing MySQL 5.6 or 5.7 DB instance to a
MariaDB 10.6 instance. For more information, see Upgrading the MariaDB DB engine (p. 1289).
• Delayed replication – You can now set a configurable time period for which a read replica lags behind
the source database. In a standard MariaDB replication configuration, there is minimal replication
delay between the source and the replica. With delayed replication, you can set an intentional delay
as a strategy for disaster recovery. For more information, see Configuring delayed replication with
MariaDB (p. 1324).
• Oracle PL/SQL compatibility – By using RDS for MariaDB version 10.6, you can more easily migrate
your legacy Oracle applications to Amazon RDS. For more information, see SQL_MODE=ORACLE.
• Atomic DDL – Your data definition language (DDL) statements can be relatively crash-safe with
RDS for MariaDB version 10.6. CREATE TABLE, ALTER TABLE, RENAME TABLE, DROP TABLE,
DROP DATABASE, and related DDL statements are now atomic. Either the statement succeeds, or it's
completely reversed. For more information, see Atomic DDL.
• Other enhancements – These enhancements include a JSON_TABLE function for transforming JSON
data to relational format within SQL, and faster empty-table data load with InnoDB. They also include
a new sys_schema for analysis and troubleshooting, an optimizer enhancement for ignoring unused
indexes, and performance improvements. For more information, see JSON_TABLE.
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.6 DB instances:
• The default value for the following parameters has changed from utf8 to utf8mb3:
• character_set_client
• character_set_connection
• character_set_results
• character_set_system
Although the default values have changed for these parameters, there is no functional change. For
more information, see Supported Character Sets and Collations in the MariaDB documentation.
• The default value of the collation_connection parameter has changed from utf8_general_ci
to utf8mb3_general_ci. Although the default value has changed for this parameter, there is no
functional change.
• The default value of the old_mode parameter has changed from unset to UTF8_IS_UTF8MB3.
Although the default value has changed for this parameter, there is no functional change.
For a list of all MariaDB 10.6 features and their documentation, see Changes and improvements in
MariaDB 10.6 and Release notes - MariaDB 10.6 series on the MariaDB website.
For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).
• InnoDB enhancements – MariaDB version 10.5 includes InnoDB enhancements. For more information,
see InnoDB: Performance Improvements etc. in the MariaDB documentation.
• Performance schema updates – MariaDB version 10.5 includes performance schema updates. For
more information, see Performance Schema Updates to Match MySQL 5.7 Instrumentation and Tables
in the MariaDB documentation.
• One file in the InnoDB redo log – In versions of MariaDB before version 10.5, the value of the
innodb_log_files_in_group parameter was set to 2. In MariaDB version 10.5, the value of this
parameter is set to 1.
If you are upgrading from a prior version to MariaDB version 10.5, and you don't modify the
parameters, the innodb_log_file_size parameter value is unchanged. However, it applies to
one log file instead of two. The result is that your upgraded MariaDB version 10.5 DB instance uses
half of the redo log size that it was using before the upgrade. This change can have a noticeable
performance impact. To address this issue, you can double the value of the innodb_log_file_size
parameter. For information about modifying parameters, see Modifying parameters in a DB parameter
group (p. 352).
• SHOW SLAVE STATUS command not supported – In versions of MariaDB before version 10.5, the
SHOW SLAVE STATUS command required the REPLICATION SLAVE privilege. In MariaDB version
10.5, the equivalent SHOW REPLICA STATUS command requires the REPLICATION REPLICA ADMIN
privilege. This new privilege isn't granted to the RDS master user.
Instead of using the SHOW REPLICA STATUS command, run the new mysql.rds_replica_status
stored procedure to return similar information. For more information, see
mysql.rds_replica_status (p. 1344).
• SHOW RELAYLOG EVENTS command not supported – In versions of MariaDB before version 10.5, the
SHOW RELAYLOG EVENTS command required the REPLICATION SLAVE privilege. In MariaDB version
10.5, this command requires the REPLICATION REPLICA ADMIN privilege. This new privilege isn't
granted to the RDS master user.
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.5 DB instances:
• The default value of the max_connections parameter has changed to
LEAST({DBInstanceClassMemory/25165760},12000). For information about the LEAST
parameter function, see DB parameter functions (p. 371).
• The default value of the innodb_adaptive_hash_index parameter has changed to OFF (0).
• The default value of the innodb_checksum_algorithm parameter has changed to full_crc32.
• The default value of the innodb_log_file_size parameter has changed to 2 GB.
For a list of all MariaDB 10.5 features and their documentation, see Changes and improvements in
MariaDB 10.5 and Release notes - MariaDB 10.5 series on the MariaDB website.
For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).
• User account security enhancements – Password expiration and account locking improvements
• Optimizer enhancements – Optimizer trace feature
• InnoDB enhancements – Instant DROP COLUMN support and instant VARCHAR extension for
ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT
• New parameters – Including tcp_nodelay, tls_version, and gtid_cleanup_batch_size
For a list of all MariaDB 10.4 features and their documentation, see Changes and improvements in
MariaDB 10.4 and Release notes - MariaDB 10.4 series on the MariaDB website.
For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).
For a list of all MariaDB 10.3 features and their documentation, see Changes & improvements in MariaDB
10.3 and Release notes - MariaDB 10.3 series on the MariaDB website.
For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).
Topics
• The InnoDB storage engine (p. 1261)
• The MyRocks storage engine (p. 1261)
The default parameter group for MariaDB version 10.6 includes MyRocks parameters. For more
information, see Parameters for MariaDB (p. 1338) and Working with parameter groups (p. 347).
To create a table that uses the MyRocks storage engine, specify ENGINE=RocksDB in the CREATE TABLE
statement. The following example creates a table that uses the MyRocks storage engine.
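The example statement is missing from this extract; it is along these lines (the table and column names are illustrative):

```sql
-- Illustrative table; the key point is specifying ENGINE=RocksDB.
CREATE TABLE test_myrocks (
    id INT NOT NULL PRIMARY KEY,
    payload VARCHAR(50)
) ENGINE=RocksDB;
```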
We strongly recommend that you don't run transactions that span both InnoDB and MyRocks tables.
MariaDB doesn't guarantee ACID (atomicity, consistency, isolation, durability) for transactions across
storage engines. Although it is possible to have both InnoDB and MyRocks tables in a DB instance, we
don't recommend this approach except during a migration from one storage engine to the other. When
both InnoDB and MyRocks tables exist in a DB instance, each storage engine has its own buffer pool,
which might cause performance to degrade.
MyRocks doesn't support SERIALIZABLE isolation or gap locks, so you generally can't use MyRocks
with statement-based replication. For more information, see MyRocks and Replication.
The following parameters are among the MyRocks parameters in the default parameter group:
• rocksdb_block_cache_size
• rocksdb_bulk_load
• rocksdb_bulk_load_size
• rocksdb_deadlock_detect
• rocksdb_deadlock_detect_depth
• rocksdb_max_latest_deadlocks
The MyRocks storage engine and the InnoDB storage engine can compete for memory based on the
settings for the rocksdb_block_cache_size and innodb_buffer_pool_size parameters. In some
cases, you might intend to use only the MyRocks storage engine on a particular DB instance. If so, we
recommend setting the innodb_buffer_pool_size parameter to a minimal value and setting the
rocksdb_block_cache_size parameter as high as possible.
You can access MyRocks log files by using the DescribeDBLogFiles and
DownloadDBLogFilePortion operations.
For more information about MyRocks, see MyRocks on the MariaDB website.
Cache warming
Cache warming is enabled by default on MariaDB 10.3 and higher DB instances. If it isn't enabled
for your DB instance, set the innodb_buffer_pool_dump_at_shutdown and
innodb_buffer_pool_load_at_startup parameters to 1 in the parameter group for your DB
instance. Changing these parameter values in a parameter group affects all MariaDB DB instances
that use that parameter group. To enable cache warming for specific MariaDB DB instances, you
might need to create a new parameter group for those DB instances. For information on parameter
groups, see Working with parameter groups (p. 347).
Cache warming primarily provides a performance benefit for DB instances that use standard storage. If
you use PIOPS storage, you don't commonly see a significant performance benefit.
Important
If your MariaDB DB instance doesn't shut down normally, such as during a failover, then the
buffer pool state isn't saved to disk. In this case, MariaDB loads whatever buffer pool file is
available when the DB instance is restarted. No harm is done, but the restored buffer pool might
not reflect the most recent state of the buffer pool before the restart. To ensure that you have
a recent state of the buffer pool available to warm the cache on startup, we recommend that
you periodically dump the buffer pool "on demand." You can dump or load the buffer pool on
demand.
You can create an event to dump the buffer pool automatically and at a regular interval. For
example, the following statement creates an event named periodic_buffer_pool_dump
that dumps the buffer pool every hour.
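The event statement is missing from this extract; it is along these lines:

```sql
-- Dump the InnoDB buffer pool to disk once an hour.
CREATE EVENT periodic_buffer_pool_dump
    ON SCHEDULE EVERY 1 HOUR
    DO CALL mysql.rds_innodb_buffer_pool_dump_now();
```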
• To dump the current state of the buffer pool to disk, call the
mysql.rds_innodb_buffer_pool_dump_now (p. 1784) stored procedure.
• To load the saved state of the buffer pool from disk, call the
mysql.rds_innodb_buffer_pool_load_now (p. 1784) stored procedure.
• To cancel a load operation in progress, call the mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
stored procedure.
The following MariaDB features aren't supported on Amazon RDS:
• S3 storage engine
• Authentication plugin – GSSAPI
• Authentication plugin – Unix Socket
• AWS Key Management encryption plugin
• Delayed replication for MariaDB versions lower than 10.6
• Native MariaDB encryption at rest for InnoDB and Aria
You can enable encryption at rest for a MariaDB DB instance by following the instructions in
Encrypting Amazon RDS resources (p. 2586).
• HandlerSocket
• JSON table type for MariaDB versions lower than 10.6
• MariaDB ColumnStore
• MariaDB Galera Cluster
• Multisource replication
• MyRocks storage engine for MariaDB versions lower than 10.6
• Password validation plugin, simple_password_check, and cracklib_password_check
• Spider storage engine
• Sphinx storage engine
• TokuDB storage engine
• Storage engine-specific object attributes, as described in Engine-defined new Table/Field/Index
attributes in the MariaDB documentation
• Table and tablespace encryption
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB instances, and
it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS
supports access to databases on a DB instance using any standard SQL client application. Amazon RDS
doesn't allow direct host access to a DB instance by using Telnet, Secure Shell (SSH), or Windows Remote
Desktop Connection.
MariaDB versions
Topics
• Supported MariaDB minor versions on Amazon RDS (p. 1265)
• Supported MariaDB major versions on Amazon RDS (p. 1267)
• MariaDB 10.3 RDS end of standard support (p. 1267)
• MariaDB 10.2 RDS end of standard support (p. 1268)
• Deprecated versions for Amazon RDS for MariaDB (p. 1268)
MariaDB engine version | Community release date | RDS release date | RDS end of standard support date
10.11
10.6
10.5
10.4
10.3
You can specify any currently supported MariaDB version when creating a new DB instance. You can
specify the major version (such as MariaDB 10.5), and any supported minor version for the specified
major version. If no version is specified, Amazon RDS defaults to a supported version, typically the most
recent version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent
release of the major version you have specified. To see a list of supported versions, as well as defaults for
newly created DB instances, use the describe-db-engine-versions AWS CLI command.
For example, to list the supported engine versions for RDS for MariaDB, run the following CLI command:
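The command is missing from this extract; it has this general shape (the --query projection shown is one possible choice, not the only one):

```shell
aws rds describe-db-engine-versions --engine mariadb \
    --query "DBEngineVersions[].EngineVersion"
```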
The default MariaDB version might vary by AWS Region. To create a DB instance with a specific minor
version, specify the minor version during DB instance creation. You can determine the default minor
version for an AWS Region using the following AWS CLI command:
Replace major-engine-version with the major engine version, and replace region with the AWS
Region. For example, the following AWS CLI command returns the default MariaDB minor engine version
for the 10.5 major version and the US West (Oregon) AWS Region (us-west-2):
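The command is missing from this extract; it has this general shape (the --query and --output options shown are assumptions):

```shell
aws rds describe-db-engine-versions --default-only --engine mariadb \
    --engine-version 10.5 --region us-west-2 \
    --query "DBEngineVersions[].EngineVersion" --output text
```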
MariaDB major version | Community release date | RDS release date | Community end of life date | RDS end of standard support date
MariaDB 10.11 | 16 February 2023 | 21 August 2023 | 16 February 2028 | February 2028
MariaDB 10.6 | 6 July 2021 | 3 February 2022 | 6 July 2026 | July 2026
MariaDB 10.5 | 24 June 2020 | 21 January 2021 | 24 June 2025 | June 2025
MariaDB 10.4 | 18 June 2019 | 6 April 2020 | 18 June 2024 | June 2024
MariaDB 10.3 | 25 May 2018 | 23 October 2018 | 25 May 2023 | 23 October 2023
MariaDB 10.2 | 23 May 2017 | 5 January 2018 | 23 May 2022 | 15 October 2022
As of August 23, 2023, you can no longer create new MariaDB 10.3 DB instances.
MariaDB 10.2 RDS end of standard support
As of July 15, 2022, you can no longer create new MariaDB 10.2 DB instances.
For more information about Amazon RDS for MariaDB 10.2 RDS end of standard support, see
Announcement: Amazon Relational Database Service (Amazon RDS) for MariaDB 10.2 End-of-Life date is
October 15, 2022.
For information about the Amazon RDS deprecation policy for MariaDB, see Amazon RDS FAQs.
Connecting to a DB instance running MariaDB
You can connect to an Amazon RDS for MariaDB DB instance by using tools like the MySQL command-
line client. For more information on using the MySQL command-line client, see mysql command-line
client in the MariaDB documentation. One GUI-based application that you can use to connect is HeidiSQL.
For more information, see the Download HeidiSQL page. For information about installing MySQL
(including the MySQL command-line client), see Installing and upgrading MySQL.
Most Linux distributions include the MariaDB client instead of the Oracle MySQL client. To install the
MySQL command-line client on Amazon Linux 2023, run the following command:
To install the MySQL command-line client on Amazon Linux 2, run the following command:
To install the MySQL command-line client on most DEB-based Linux distributions, run the following
command.
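The install commands referenced above are missing from this extract; they are along these lines (the package names are assumptions and can vary by distribution release):

```shell
# Amazon Linux 2023
sudo dnf install mariadb105

# Amazon Linux 2
sudo yum install mariadb

# DEB-based distributions (Debian, Ubuntu)
sudo apt-get install mariadb-client
```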
To check the version of your MySQL command-line client, run the following command.
mysql --version
To read the MySQL documentation for your current client version, run the following command.
man mysql
To connect to a DB instance from outside of a virtual private cloud (VPC) based on Amazon VPC, the DB
instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB
instance's security group, and other requirements must be met. For more information, see Can't connect
to Amazon RDS DB instance (p. 2727).
You can use SSL encryption on connections to a MariaDB DB instance. For information, see Using SSL/
TLS with a MariaDB DB instance (p. 1275).
Topics
• Finding the connection information for a MariaDB DB instance (p. 1270)
• Connecting from the MySQL command-line client (unencrypted) (p. 1272)
• Troubleshooting connections to your MariaDB DB instance (p. 1273)
Finding the connection information
To connect to a DB instance, use any client for the MariaDB DB engine. For example, you might use the
MySQL command-line client or MySQL Workbench.
To find the connection information for a DB instance, you can use the AWS Management Console, the
AWS Command Line Interface (AWS CLI) describe-db-instances command, or the Amazon RDS API
DescribeDBInstances operation to list its details.
Console
To find the connection information for a DB instance in the AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the MariaDB DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.
5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.
AWS CLI
To find the connection information for a MariaDB DB instance by using the AWS CLI, call the describe-db-
instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.
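The command is missing from this extract; a sketch that selects the four fields shown in the example output:

```shell
aws rds describe-db-instances \
    --query "DBInstances[*].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"
```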
The output is similar to the following:
[
[
"mydb1",
"mydb1.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"mydb2",
"mydb2.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
]
]
RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.
To connect to a DB instance using the MySQL command-line client, enter the following command at a
command prompt on a client computer. Doing this connects you to a database on a MariaDB DB instance.
Substitute the DNS name (endpoint) for your DB instance for <endpoint> and the master user name
that you used for <mymasteruser>. Provide the master password that you used when prompted for a
password.
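The connection command is missing from this extract; it has this shape (angle-bracket values are placeholders):

```shell
mysql -h <endpoint> -P 3306 -u <mymasteruser> -p
```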
After you enter the password for the user, you see output similar to the following.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
If you can't connect to your DB instance, the following are common causes:
• The DB instance was created using a security group that doesn't authorize connections from the device
or Amazon EC2 instance where the MariaDB application or utility is running. The DB instance must
have a VPC security group that authorizes the connections. For more information, see Amazon VPC
VPCs and Amazon RDS (p. 2688).
You can add or edit an inbound rule in the security group. For Source, choose My IP. This allows access
to the DB instance from the IP address detected in your browser.
• The DB instance was created using the default port of 3306, and your company has firewall rules
blocking connections to that port from devices in your company network. To fix this failure, recreate
the instance with a different port.
For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).
Securing MariaDB connections
Topics
• MariaDB security on Amazon RDS (p. 1274)
• Encrypting client connections to MariaDB DB instances with SSL/TLS (p. 1275)
• Updating applications to connect to MariaDB instances using new SSL/TLS certificates (p. 1277)
• AWS Identity and Access Management controls who can perform Amazon RDS management actions
on DB instances. When you connect to AWS using IAM credentials, your IAM account must have IAM
policies that grant the permissions required to perform Amazon RDS management operations. For
more information, see Identity and access management for Amazon RDS (p. 2606).
• When you create a DB instance, you use a VPC security group to control which devices and Amazon
EC2 instances can open connections to the endpoint and port of the DB instance. These connections
can be made using Secure Socket Layer (SSL) and Transport Layer Security (TLS). In addition, firewall
rules at your company can control whether devices running at your company can open connections to
the DB instance.
• Once a connection has been opened to a MariaDB DB instance, authentication of the login and
permissions are applied the same way as in a stand-alone instance of MariaDB. Commands such as
CREATE USER, RENAME USER, GRANT, REVOKE, and SET PASSWORD work just as they do in stand-
alone databases, as does directly modifying database schema tables.
When you create an Amazon RDS DB instance, the master user has the following default privileges:
• alter
• alter routine
• create
• create routine
• create temporary tables
• create user
• create view
• delete
• drop
• event
• execute
• grant option
• index
• insert
• lock tables
• process
• references
• reload
This privilege is limited on MariaDB DB instances. It doesn't grant access to the FLUSH LOGS or FLUSH
TABLES WITH READ LOCK operations.
• replication client
• replication slave
• select
• show databases
• show view
• trigger
• update
For more information about these privileges, see User account management in the MariaDB
documentation.
Note
Although you can delete the master user on a DB instance, we don't recommend doing so. To
recreate the master user, use the ModifyDBInstance API or the modify-db-instance AWS
CLI and specify a new master user password with the appropriate parameter. If the master user
does not exist in the instance, the master user is created with the specified password.
To provide management services for each DB instance, the rdsadmin user is created when the DB
instance is created. Attempting to drop, rename, change the password for, or change privileges for the
rdsadmin account results in an error.
To allow management of the DB instance, the standard kill and kill_query commands have
been restricted. The Amazon RDS commands mysql.rds_kill, mysql.rds_kill_query, and
mysql.rds_kill_query_id are provided for use in MariaDB and also MySQL so that you can end user
sessions or queries on DB instances.
Topics
• Using SSL/TLS with a MariaDB DB instance (p. 1275)
• Requiring SSL/TLS for all connections to a MariaDB DB instance (p. 1276)
• Connecting from the MySQL command-line client with SSL/TLS (encrypted) (p. 1276)
An SSL/TLS certificate created by Amazon RDS is the trusted root entity and should work in most cases
but might fail if your application does not accept certificate chains. If your application does not accept
certificate chains, you might need to use an intermediate certificate to connect to your AWS Region. For
example, you must use an intermediate certificate to connect to the AWS GovCloud (US) Regions using
SSL/TLS.
For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL/TLS with MySQL, see Updating applications to
connect to MariaDB instances using new SSL/TLS certificates (p. 1277).
Amazon RDS for MariaDB supports Transport Layer Security (TLS) versions 1.0, 1.1, 1.2, and 1.3 for all
MariaDB versions.
You can require SSL/TLS connections for specific user accounts. For example, you can use one of the
following statements, depending on your MariaDB version, to require SSL/TLS connections on the user
account encrypted_user.
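The statements are missing from this extract; they are along these lines (the version split is an approximation):

```sql
-- MariaDB 10.2 and higher
ALTER USER 'encrypted_user'@'%' REQUIRE SSL;

-- Earlier versions
GRANT USAGE ON *.* TO 'encrypted_user'@'%' REQUIRE SSL;
```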
For more information on SSL/TLS connections with MariaDB, see Securing Connections for Client and
Server in the MariaDB documentation.
You can set the require_secure_transport parameter value by updating the DB parameter group
for your DB instance. You don't need to reboot your DB instance for the change to take effect. When
this parameter is enabled, connection attempts that don't use SSL/TLS fail with an error similar to the
following:
ERROR 1045 (28000): Access denied for user 'USER'@'localhost' (using password: YES | NO)
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).
For more information about the require_secure_transport parameter, see the MariaDB
documentation.
To find out which version you have, run the mysql command with the --version option. In the
following example, the output shows that the client program is from MariaDB.
$ mysql --version
mysql Ver 15.1 Distrib 10.5.15-MariaDB, for osx10.15 (x86_64) using readline 5.1
Most Linux distributions, such as Amazon Linux, CentOS, SUSE, and Debian have replaced MySQL with
MariaDB, and the mysql version in them is from MariaDB.
For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).
Use a MySQL command-line client to connect to a DB instance with SSL/TLS encryption. For the -h
parameter, substitute the DNS name (endpoint) for your DB instance. For the --ssl-ca parameter,
substitute the SSL/TLS certificate file name. For the -P parameter, substitute the port for your DB
instance. For the -u parameter, substitute the user name of a valid database user, such as the master
user. Enter the master user password when prompted.
The following example shows how to launch the client using the --ssl-ca parameter using the
MariaDB client:
To require that the SSL/TLS connection verifies the DB instance endpoint against the endpoint in the
SSL/TLS certificate, enter the following command:
The following example shows how to launch the client using the --ssl-ca parameter using the
MySQL 5.7 client or later:
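The commands referenced in this step are missing from this extract; they follow this pattern (the certificate file name global-bundle.pem is an assumption based on the standard RDS certificate bundle, and angle-bracket values are placeholders):

```shell
# MariaDB client: supply the CA bundle
mysql -h <endpoint> --ssl-ca=global-bundle.pem -P 3306 -u <mymasteruser> -p

# MariaDB client: additionally verify the server certificate against the endpoint
mysql -h <endpoint> --ssl-ca=global-bundle.pem --ssl-verify-server-cert -P 3306 -u <mymasteruser> -p

# MySQL 5.7 client or later: use --ssl-mode
mysql -h <endpoint> --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY -P 3306 -u <mymasteruser> -p
```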
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
Using new SSL/TLS certificates
This topic can help you to determine whether your applications require certificate verification to connect
to your DB instances.
Note
Some applications are configured to connect to MariaDB only if they can successfully verify the
certificate on the server. For such applications, you must update your client application trust
stores to include the new CA certificates.
You can specify the following SSL modes: disabled, preferred, and required. When
you use the preferred SSL mode and the CA certificate doesn't exist or isn't up to date, the
connection falls back to not using SSL and still connects successfully.
We recommend avoiding preferred mode. In preferred mode, if the connection encounters
an invalid certificate, it stops using encryption and proceeds unencrypted.
After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.
For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with MariaDB DB instances, see Using SSL/TLS
with a MariaDB DB instance (p. 1275).
Topics
• Determining whether a client requires certificate verification in order to connect (p. 1278)
• Updating your application trust store (p. 1279)
• Example Java code for establishing SSL connections (p. 1280)
JDBC
The following example with MySQL Connector/J 8.0 shows one way to check an application's JDBC
connection properties to determine whether successful connections require a valid certificate. For more
information on all of the JDBC connection options for MySQL, see Configuration properties in the
MySQL documentation.
When using the MySQL Connector/J 8.0, an SSL connection requires verification against the server CA
certificate if your connection properties have sslMode set to VERIFY_CA or VERIFY_IDENTITY, as in
the following example.
Note
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector
v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your
applications to use SSL/TLS when connecting to your databases, these client drivers default to
using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and
fail to connect if the database server certificate is expired.
Specify a password other than the prompt shown here as a security best practice.
MySQL
The following examples with the MySQL Client show two ways to check a script's MySQL connection to
determine whether successful connections require a valid certificate. For more information on all of the
connection options with the MySQL Client, see Client-side configuration for encrypted connections in
the MySQL documentation.
When using the MySQL 5.7 or MySQL 8.0 Client, an SSL connection requires verification against the
server CA certificate if for the --ssl-mode option you specify VERIFY_CA or VERIFY_IDENTITY, as in
the following example.
When using the MySQL 5.6 Client, an SSL connection requires verification against the server CA
certificate if you specify the --ssl-verify-server-cert option, as in the following example.
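The examples are missing from this extract; they follow this pattern (the certificate file name is an assumption, and angle-bracket values are placeholders):

```shell
# MySQL 5.7 or 8.0 client: verification against the server CA certificate is required
mysql -h <endpoint> --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_CA -P 3306 -u <user> -p

# MySQL 5.6 client: verification is required
mysql -h <endpoint> --ssl-ca=global-bundle.pem --ssl-verify-server-cert -P 3306 -u <user> -p
```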
For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).
For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.
If you are using the MariaDB Connector/J JDBC driver in an application, set the following properties in
the application.

System.setProperty("javax.net.ssl.trustStore", certs);
System.setProperty("javax.net.ssl.trustStorePassword", "password");

Alternatively, you can set these properties when starting the application on the command line.

java -Djavax.net.ssl.trustStore=/path_to_truststore/MyTruststore.jks -Djavax.net.ssl.trustStorePassword=my_truststore_password com.companyName.MyApplication
Note
Specify passwords other than the prompts shown here as a security best practice.
// Set the trust store location and password before opening connections.
System.setProperty("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
System.setProperty("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);
Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).
Improving query performance with RDS Optimized Reads
Topics
• Overview of RDS Optimized Reads (p. 1281)
• Use cases for RDS Optimized Reads (p. 1281)
• Best practices for RDS Optimized Reads (p. 1282)
• Using RDS Optimized Reads (p. 1282)
• Monitoring DB instances that use RDS Optimized Reads (p. 1283)
• Limitations for RDS Optimized Reads (p. 1283)
Overview of RDS Optimized Reads
RDS Optimized Reads is turned on by default when a DB instance uses a DB instance class with an
instance store, such as db.m5d or db.m6gd. With RDS Optimized Reads, some temporary objects are
stored on the instance store. These temporary objects include internal temporary files, internal on-disk
temp tables, memory map files, and binary log (binlog) cache files. For more information about the
instance store, see Amazon EC2 instance store in the Amazon Elastic Compute Cloud User Guide for Linux
Instances.
The workloads that generate temporary objects in MariaDB for query processing can take advantage
of the instance store for faster query processing. This type of workload includes queries involving
sorts, hash aggregations, high-load joins, Common Table Expressions (CTEs), and queries on unindexed
columns. These instance store volumes provide higher IOPS and performance, regardless of the storage
configurations used for persistent Amazon EBS storage. Because RDS Optimized Reads offloads
operations on temporary objects to the instance store, the input/output operations per second (IOPS)
or throughput of the persistent storage (Amazon EBS) can now be used for operations on persistent
objects. These operations include regular data file reads and writes and background engine operations,
such as flushing and insert buffer merges.
Note
Both manual and automated RDS snapshots contain only the engine files for persistent objects.
The temporary objects created in the instance store aren't included in RDS snapshots.
Use cases for RDS Optimized Reads
The following are use cases for RDS Optimized Reads:
• Applications that run analytical queries with complex common table expressions (CTEs), derived tables,
and grouping operations
• Read replicas that serve heavy read traffic with unoptimized queries
• Applications that run on-demand or dynamic reporting queries that involve complex operations, such
as queries with GROUP BY and ORDER BY clauses
• Workloads that use internal temporary tables for query processing
You can monitor the engine status variable created_tmp_disk_tables to determine the number of
disk-based temporary tables created on your DB instance.
• Applications that create large temporary tables, either directly or in procedures, to store intermediate
results
• Database queries that perform grouping or ordering on non-indexed columns
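You can check the Created_tmp_disk_tables status variable mentioned above from any client session, as in the following sketch. The endpoint and user name are placeholders for your own values.

```shell
# Count of internal on-disk temporary tables created since server startup.
mysql -h mydbinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';"
```

A steadily growing count under load indicates a workload that can benefit from the instance store.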
Best practices for RDS Optimized Reads
The following are best practices for RDS Optimized Reads:
• Add retry logic for read-only queries in case they fail because the instance store is full during the
execution.
• Monitor the storage space available on the instance store with the CloudWatch metric
FreeLocalStorage. If the instance store is reaching its limit because of workload on the DB instance,
modify the DB instance to use a larger DB instance class.
• When your DB instance has sufficient memory but is still reaching the storage limit on the instance
store, increase the binlog_cache_size value to maintain the session-specific binlog entries in
memory. This configuration prevents writing the binlog entries to temporary binlog cache files on disk.
The binlog_cache_size parameter is session-specific. You can change the value for each new
session. The setting for this parameter can increase the memory utilization on the DB instance during
peak workload. Therefore, consider increasing the parameter value based on the workload pattern of
your application and available memory on the DB instance.
• Use the default value of MIXED for the binlog_format. Depending on the size of the transactions,
setting binlog_format to ROW can result in large binlog cache files on the instance store.
• Avoid performing bulk changes in a single transaction. These types of transactions can generate large
binlog cache files on the instance store and can cause issues when the instance store is full. Consider
splitting writes into multiple small transactions to minimize storage use for binlog cache files.
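As a sketch of the binlog_cache_size recommendation above, you can raise the value for a single session rather than globally. The endpoint, user name, and cache size shown are illustrative placeholders.

```shell
# Raise the binlog cache for this session so binlog entries stay in memory,
# then confirm the new value. binlog_cache_size is dynamic at session scope.
mysql -h mydbinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p -e "
    SET SESSION binlog_cache_size = 4194304;
    SHOW SESSION VARIABLES LIKE 'binlog_cache_size';"
```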
Using RDS Optimized Reads
To use RDS Optimized Reads, do one of the following:
• Create an RDS for MariaDB DB instance using one of these DB instance classes. For more information,
see Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for MariaDB DB instance to use one of these DB instance classes. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
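For example, a create call of roughly the following shape provisions a DB instance class with a local NVMe instance store. This is a sketch: the identifier, engine version, storage size, and credentials are placeholders, and you should supply your own password securely rather than on the command line.

```shell
# db.m6gd classes include a local NVMe instance store,
# so RDS Optimized Reads is turned on automatically.
aws rds create-db-instance \
    --db-instance-identifier my-optimized-reads-instance \
    --engine mariadb \
    --engine-version 10.6.14 \
    --db-instance-class db.m6gd.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password my_password
```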
RDS Optimized Reads is available in all AWS Regions where one or more of the DB instance classes with
local NVMe SSD storage are supported. For information about DB instance classes, see the section called
“DB instance classes” (p. 11).
DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
If you don't want to use RDS Optimized Reads, modify your DB instance so that it doesn't use a DB
instance class that supports the feature.
Monitoring DB instances that use RDS Optimized Reads
You can monitor DB instances that use RDS Optimized Reads with the following Amazon CloudWatch metrics:
• FreeLocalStorage
• ReadIOPSLocalStorage
• ReadLatencyLocalStorage
• ReadThroughputLocalStorage
• WriteIOPSLocalStorage
• WriteLatencyLocalStorage
• WriteThroughputLocalStorage
These metrics provide data about available instance store storage, IOPS, and throughput. For
more information about these metrics, see Amazon CloudWatch instance-level metrics for Amazon
RDS (p. 806).
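For example, you can retrieve FreeLocalStorage with the CloudWatch get-metric-statistics command, as in the following sketch. The instance identifier and time window are placeholders.

```shell
# Minimum free instance store space over a one-hour window, in 5-minute periods.
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name FreeLocalStorage \
    --dimensions Name=DBInstanceIdentifier,Value=my-optimized-reads-instance \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T01:00:00Z \
    --period 300 \
    --statistics Minimum
```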
Limitations for RDS Optimized Reads
The following limitations apply to RDS Optimized Reads:
• RDS Optimized Reads is supported for the following RDS for MariaDB versions:
• 10.11.4 and higher 10.11 versions
• 10.6.7 and higher 10.6 versions
• 10.5.16 and higher 10.5 versions
• 10.4.25 and higher 10.4 versions
For information about RDS for MariaDB versions, see MariaDB on Amazon RDS versions (p. 1265).
• You can't change the location of temporary objects to persistent storage (Amazon EBS) on the DB
instance classes that support RDS Optimized Reads.
• When binary logging is enabled on a DB instance, the maximum transaction size is limited by the
size of the instance store. In MariaDB, any session that requires more storage than the value of
binlog_cache_size writes transaction changes to temporary binlog cache files, which are created
on the instance store.
• Transactions can fail when the instance store is full.
Improving write performance with RDS Optimized Writes for MariaDB
Topics
• Overview of RDS Optimized Writes (p. 1284)
• Using RDS Optimized Writes (p. 1285)
• Limitations for RDS Optimized Writes (p. 1288)
Overview of RDS Optimized Writes
Relational databases, like MariaDB, provide the ACID properties of atomicity, consistency, isolation, and
durability for reliable database transactions. To help provide these properties, MariaDB uses a data
storage area called the doublewrite buffer that prevents partial page write errors. These errors occur
when there is a hardware failure while the database is updating a page, such as in the case of a power
outage. A MariaDB database can detect partial page writes and recover with a copy of the page in the
doublewrite buffer. While this technique provides protection, it also results in extra write operations. For
more information about the MariaDB doublewrite buffer, see InnoDB Doublewrite Buffer in the MariaDB
documentation.
With RDS Optimized Writes turned on, RDS for MariaDB databases write only once when flushing data
to durable storage without using the doublewrite buffer. RDS Optimized Writes is useful if you run write-
heavy workloads on your RDS for MariaDB databases. Examples of databases with write-heavy workloads
include ones that support digital payments, financial trading, and gaming applications.
These databases run on DB instance classes that use the AWS Nitro System. Because of the hardware
configuration in these systems, the database can write 16-KiB pages directly to data files reliably and
durably in one step. The AWS Nitro System makes RDS Optimized Writes possible.
You can set the new database parameter rds.optimized_writes to control the RDS Optimized
Writes feature for RDS for MariaDB databases. The parameter has the following allowed values:
• AUTO – Turn on RDS Optimized Writes if the database supports it. Turn off RDS Optimized Writes if the
database doesn't support it. This setting is the default.
• OFF – Turn off RDS Optimized Writes even if the database supports it.
If you migrate an RDS for MariaDB database that is configured to use RDS Optimized Writes to a DB
instance class that doesn't support the feature, RDS automatically turns off RDS Optimized Writes for the
database.
When RDS Optimized Writes is turned off, the database uses the MariaDB doublewrite buffer.
To determine whether an RDS for MariaDB database is using RDS Optimized Writes, view the current
value of the innodb_doublewrite parameter for the database. If the database is using RDS Optimized
Writes, this parameter is set to FALSE (0).
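You can view that parameter from any client session, as in the following sketch. The endpoint and user name are placeholders for your own values.

```shell
# OFF (0) means the database writes pages once, without the doublewrite
# buffer — that is, RDS Optimized Writes is in use.
mysql -h mydbinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';"
```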
Using RDS Optimized Writes
RDS Optimized Writes is available when the following conditions apply:
• You specify a DB engine version and DB instance class that support RDS Optimized Writes.
• RDS Optimized Writes is supported for the following RDS for MariaDB versions:
• 10.11.4 and higher 10.11 versions
• 10.6.10 and higher 10.6 versions
For information about RDS for MariaDB versions, see MariaDB on Amazon RDS versions (p. 1265).
• RDS Optimized Writes is supported for RDS for MariaDB databases that use the following DB
instance classes:
• db.m7g
• db.m6g
• db.m6gd
• db.m6i
• db.m5d
• db.r7g
• db.r6g
• db.r6gd
• db.r6i
• db.r5
• db.r5b
• db.r5d
• db.x2idn
• db.x2iedn
For information about DB instance classes, see the section called “DB instance classes” (p. 11).
DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
• In the parameter group associated with the database, the rds.optimized_writes parameter is set
to AUTO. In default parameter groups, this parameter is always set to AUTO.
If you want to use a DB engine version and DB instance class that support RDS Optimized Writes, but you
don't want to use this feature, then specify a custom parameter group when you create the database. In
this parameter group, set the rds.optimized_writes parameter to OFF. If you want the database to
use RDS Optimized Writes later, you can set the parameter to AUTO to turn it on. For information about
creating custom parameter groups and setting parameters, see Working with parameter groups (p. 347).
For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300).
Console
When you use the RDS console to create an RDS for MariaDB database, you can filter for the DB engine
versions and DB instance classes that support RDS Optimized Writes. After you turn on the filters, you
can choose from the available DB engine versions and DB instance classes.
To choose a DB engine version that supports RDS Optimized Writes, filter for the RDS for MariaDB DB
engine versions that support it in Engine version, and then choose a version.
In the Instance configuration section, filter for the DB instance classes that support RDS Optimized
Writes, and then choose a DB instance class.
After you make these selections, you can choose other settings that meet your requirements and finish
creating the RDS for MariaDB database with the console.
AWS CLI
To create a DB instance by using the AWS CLI, use the create-db-instance command. Make sure
the --engine-version and --db-instance-class values support RDS Optimized Writes. In addition,
make sure the parameter group associated with the DB instance has the rds.optimized_writes
parameter set to AUTO. This example associates the default parameter group with the DB instance.
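A create call of roughly the following shape meets those requirements. This is a sketch: the identifier, engine version, storage size, and credentials are placeholders, and because no parameter group is specified, the default parameter group (with rds.optimized_writes set to AUTO) is associated with the instance.

```shell
# db.r6g with MariaDB 10.6.10 or higher supports RDS Optimized Writes;
# omitting --db-parameter-group-name associates the default parameter group.
aws rds create-db-instance \
    --db-instance-identifier my-optimized-writes-instance \
    --engine mariadb \
    --engine-version 10.6.10 \
    --db-instance-class db.r6g.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password my_password
```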
RDS API
You can create a DB instance using the CreateDBInstance operation. When you use this operation, make
sure the EngineVersion and DBInstanceClass values support RDS Optimized Writes. In addition,
make sure the parameter group associated with the DB instance has the rds.optimized_writes
parameter set to AUTO.
Limitations for RDS Optimized Writes
The following limitations apply to RDS Optimized Writes:
• You can only modify a database to turn on RDS Optimized Writes if the database was created with a
DB engine version and DB instance class that support the feature. In this case, if RDS Optimized Writes
is turned off for the database, you can turn it on by setting the rds.optimized_writes parameter
to AUTO. For more information, see Using RDS Optimized Writes (p. 1285).
• You can only modify a database to turn on RDS Optimized Writes if the database was created after
the feature was released. The underlying file system format and organization that RDS Optimized
Writes needs is incompatible with the file system format of databases created before the feature was
released. By extension, you can't use any snapshots of previously created instances with this feature
because the snapshots use the older, incompatible file system.
Important
To convert from the old format to the new format, you need to perform a full database
migration. If you want to use this feature on DB instances that were created before the feature
was released, create a new empty DB instance and manually migrate your older DB instance to
the newer DB instance. You can migrate your older DB instance using the native mysqldump
tool, replication, or AWS Database Migration Service. For more information, see mariadb-
dump/mysqldump in the MariaDB documentation, Working with MariaDB replication in
Amazon RDS (p. 1318), and the AWS Database Migration Service User Guide. For help with
migrating using AWS tools, contact support.
• When you are restoring an RDS for MariaDB database from a snapshot, you can only turn on RDS
Optimized Writes for the database if all of the following conditions apply:
• The snapshot was created from a database that supports RDS Optimized Writes.
• The snapshot was created from a database that was created after RDS Optimized Writes was
released.
• The snapshot is restored to a database that supports RDS Optimized Writes.
• The restored database is associated with a parameter group that has the rds.optimized_writes
parameter set to AUTO.
Upgrading the MariaDB DB engine
Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you follow the instructions in Major version upgrades for
MariaDB (p. 1290).
In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance. Or
you can enable the Auto minor version upgrade option when creating or modifying a DB instance.
Doing so means that your DB instance is automatically upgraded after Amazon RDS tests and approves
the new version. For information about performing an upgrade, see Upgrading a DB instance engine
version (p. 429).
If your MariaDB DB instance is using read replicas, you must upgrade all of the read replicas before
upgrading the source instance. If your DB instance is in a Multi-AZ deployment, both the writer and
standby replicas are upgraded. Your DB instance might not be available until the upgrade is complete.
For more information about MariaDB supported versions and version management, see MariaDB on
Amazon RDS versions (p. 1265).
Database engine upgrades require downtime. The duration of the downtime varies based on the size of
your DB instance.
Tip
You can minimize the downtime required for DB instance upgrade by using a blue/green
deployment. For more information, see Using Amazon RDS Blue/Green Deployments for
database updates (p. 566).
Topics
• Overview of upgrading (p. 1289)
• Major version upgrades for MariaDB (p. 1290)
• Upgrading a MariaDB DB instance (p. 1291)
• Automatic minor version upgrades for MariaDB (p. 1291)
• Using a read replica to reduce downtime when upgrading a MariaDB database (p. 1293)
Overview of upgrading
When you use the AWS Management Console to upgrade a DB instance, it shows the valid upgrade
targets for the DB instance. You can also use the AWS CLI describe-db-engine-versions command to
identify the valid upgrade targets for a DB instance.
For example, you can identify the valid upgrade targets for a MariaDB version 10.5.17 DB instance by
running describe-db-engine-versions.
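A call of roughly the following shape lists those targets. This is a sketch; the exact --query expression is an assumption about the response shape of the DescribeDBEngineVersions API.

```shell
# List engine versions that a MariaDB 10.5.17 instance can upgrade to in place.
aws rds describe-db-engine-versions \
    --engine mariadb \
    --engine-version 10.5.17 \
    --query "DBEngineVersions[].ValidUpgradeTarget[].EngineVersion" \
    --output text
```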
Amazon RDS takes two or more DB snapshots during the upgrade process. Amazon RDS takes up to
two snapshots of the DB instance before making any upgrade changes. If the upgrade doesn't work for
your databases, you can restore one of these snapshots to create a DB instance running the old version.
Amazon RDS takes another snapshot of the DB instance when the upgrade completes. Amazon RDS
takes these snapshots regardless of whether AWS Backup manages the backups for the DB instance.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).
After the upgrade is complete, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore the first DB snapshot taken to create a new DB instance.
You control when to upgrade your DB instance to a new version supported by Amazon RDS. This level of
control helps you maintain compatibility with specific database versions and test new versions with your
application before deploying in production. When you are ready, you can perform version upgrades at
the times that best fit your schedule.
If your DB instance is using read replicas, you must upgrade all of the read replicas before upgrading
the source instance.
If your DB instance is in a Multi-AZ deployment, both the primary and standby DB instances are
upgraded. The primary and standby DB instances are upgraded at the same time and you will experience
an outage until the upgrade is complete. The time for the outage varies based on your database engine,
engine version, and the size of your DB instance.
Major version upgrades for MariaDB
Amazon RDS supports in-place upgrades between major versions of the MariaDB database engine.
To perform a major version upgrade to a MariaDB version lower than 10.6, upgrade to each major
version in order. For example, to upgrade from version 10.3 to version 10.5, upgrade in the following
order: 10.3 to 10.4 and then 10.4 to 10.5.
If you are using a custom parameter group, and you perform a major version upgrade, you must specify
either a default parameter group for the new DB engine version or create your own custom parameter
group for the new DB engine version. Associating the new parameter group with the DB instance requires
a customer-initiated database reboot after the upgrade completes. The instance's parameter group
status will show pending-reboot if the instance needs to be rebooted to apply the parameter group
changes. An instance's parameter group status can be viewed in the AWS Management Console or by
using a "describe" call such as describe-db-instances.
Automatic minor version upgrades for MariaDB
In the AWS Management Console, the Auto minor version upgrade setting is under Additional
configuration.
For more information about these settings, see Settings for DB instances (p. 402).
For some RDS for MariaDB major versions in some AWS Regions, one minor version is designated
by RDS as the automatic upgrade version. After a minor version has been tested and approved by
Amazon RDS, the minor version upgrade occurs automatically during your maintenance window. RDS
doesn't automatically set newer released minor versions as the automatic upgrade version. Before RDS
designates a newer automatic upgrade version, it considers several criteria.
You can use the following AWS CLI command to determine the current automatic minor upgrade target
version for a specified MariaDB minor version in a specific AWS Region.
For example, the following output shows the automatic minor upgrade target for MariaDB minor
version 10.5.16 in the US East (Ohio) AWS Region (us-east-2).
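Output of this form can be produced with a describe-db-engine-versions call of roughly the following shape. This is a sketch; the --query expression is an assumption about the response shape of the DescribeDBEngineVersions API.

```shell
# AutoUpgrade is True only for the designated automatic upgrade target.
aws rds describe-db-engine-versions \
    --engine mariadb \
    --engine-version 10.5.16 \
    --region us-east-2 \
    --query "DBEngineVersions[].ValidUpgradeTarget[].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output table
```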
----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| True | 10.5.17 |
| False | 10.5.18 |
| False | 10.5.19 |
| False | 10.6.5 |
| False | 10.6.7 |
| False | 10.6.8 |
| False | 10.6.10 |
| False | 10.6.11 |
| False | 10.6.12 |
+--------------+-----------------+
In this example, the AutoUpgrade value is True for MariaDB version 10.5.17, so the automatic minor
upgrade target is MariaDB version 10.5.17, as shown in the first row of the output.
A MariaDB DB instance is automatically upgraded during your maintenance window when certain
criteria are met. For more information, see Automatically upgrading the minor engine version (p. 431).
Using a read replica to reduce downtime when upgrading a MariaDB database
If you can't use a blue/green deployment and your MariaDB DB instance is currently in use with a
production application, you can use the following procedure to upgrade the database version for your DB
instance. This procedure can reduce the amount of downtime for your application.
By using a read replica, you can perform most of the maintenance steps ahead of time and minimize the
necessary changes during the actual outage. With this technique, you can test and prepare the new DB
instance without making any changes to your existing DB instance.
The following procedure shows an example of upgrading from MariaDB version 10.5 to MariaDB version
10.6. You can use the same general steps for upgrades to other major versions.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create a read replica of your MariaDB 10.5 DB instance. This process creates an upgradable copy of
your database. Other read replicas of the DB instance might also exist.
a. In the console, choose Databases, and then choose the DB instance that you want to upgrade.
3. Modify the read replica to use a Multi-AZ deployment with backups enabled.
By default, a read replica is created as a Single-AZ deployment with backups disabled. Because the
read replica ultimately becomes the production DB instance, it is a best practice to configure a Multi-
AZ deployment and enable backups now.
a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For Multi-AZ deployment, choose Create a standby instance.
d. For Backup Retention Period, choose a nonzero value, such as 3 days, and then choose
Continue.
e. For Scheduling of modifications, choose Apply immediately.
f. Choose Modify DB instance.
4. When the read replica Status shows Available, upgrade the read replica to MariaDB 10.6.
a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For DB engine version, choose the MariaDB 10.6 version to upgrade to, and then choose
Continue.
d. For Scheduling of modifications, choose Apply immediately.
e. Choose Modify DB instance to start the upgrade.
5. When the upgrade is complete and Status shows Available, verify that the upgraded read replica is
up-to-date with the source MariaDB 10.5 DB instance. To verify, connect to the read replica and run
the SHOW REPLICA STATUS command. If the Seconds_Behind_Master field is 0, then replication
is up-to-date.
Note
Previous versions of MariaDB used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MariaDB version before 10.6, then use SHOW SLAVE STATUS.
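That check can be run from the command line, as in the following sketch. The replica endpoint and user name are placeholders.

```shell
# Replication is caught up when Seconds_Behind_Master is 0.
mysql -h my-replica.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW REPLICA STATUS\G" | grep Seconds_Behind_Master
```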
6. (Optional) Create a read replica of your read replica.
If you want the DB instance to have a read replica after it is promoted to a standalone DB instance,
you can create the read replica now.
a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MariaDB 10.5 DB instance.
d. Choose Create read replica.
7. (Optional) Configure a custom DB parameter group for the read replica.
If you want the DB instance to use a custom parameter group after it is promoted to a standalone
DB instance, you can create the DB parameter group now and associate it with the read replica.
a. Create a custom DB parameter group for MariaDB 10.6. For instructions, see Creating a DB
parameter group (p. 350).
b. Modify the parameters that you want to change in the DB parameter group you just created. For
instructions, see Modifying parameters in a DB parameter group (p. 352).
c. In the console, choose Databases, and then choose the read replica.
d. Choose Modify.
e. For DB parameter group, choose the MariaDB 10.6 DB parameter group you just created, and
then choose Continue.
f. For Scheduling of modifications, choose Apply immediately.
g. Choose Modify DB instance to start the upgrade.
8. Make your MariaDB 10.6 read replica a standalone DB instance.
Important
When you promote your MariaDB 10.6 read replica to a standalone DB instance, it is no
longer a replica of your MariaDB 10.5 DB instance. We recommend that you promote
your MariaDB 10.6 read replica during a maintenance window when your source MariaDB
10.5 DB instance is in read-only mode and all write operations are suspended. When the
promotion is completed, you can direct your write operations to the upgraded MariaDB 10.6
DB instance to ensure that no write operations are lost.
In addition, we recommend that, before promoting your MariaDB 10.6 read replica, you
perform all necessary data definition language (DDL) operations on your MariaDB 10.6
read replica. An example is creating indexes. This approach avoids negative effects on the
performance of the MariaDB 10.6 read replica after it has been promoted. To promote a
read replica, use the following procedure.
a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Promote.
c. Choose Yes to enable automated backups for the read replica instance. For more information,
see Working with backups (p. 591).
d. Choose Continue.
e. Choose Promote Read Replica.
9. You now have an upgraded version of your MariaDB database. At this point, you can direct your
applications to the new MariaDB 10.6 DB instance.
Importing data into a MariaDB DB instance
Find techniques to import data into an RDS for MariaDB DB instance in the following table.
Source: Existing MariaDB DB instance
Amount of data: Any
One time or ongoing: One time or ongoing
Application downtime: Minimal
Technique: Create a read replica for ongoing replication. Promote the read replica for one-time
creation of a new DB instance.
More information: Working with DB instance read replicas (p. 438)

Source: Existing MariaDB or MySQL database
Amount of data: Small
One time or ongoing: One time
Application downtime: Some
Technique: Copy the data directly to your MySQL DB instance using a command-line utility.
More information: Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB
instance (p. 1297)

Source: Data not stored in an existing database
Amount of data: Medium
One time or ongoing: One time
Application downtime: Some
Technique: Create flat files and import them using the mysqlimport utility.
More information: Importing data from any source to a MariaDB or MySQL DB instance (p. 1313)

Source: Any existing database
Amount of data: Any
One time or ongoing: One time or ongoing
Application downtime: Minimal
Technique: Use AWS Database Migration Service to migrate the database with minimal downtime
and, for many database engines, continue ongoing replication.
More information: What is AWS Database Migration Service and Using a MySQL-compatible database
as a target for AWS DMS in the AWS Database Migration Service User Guide
Note
The mysql system database contains authentication and authorization information required
to log into your DB instance and access your data. Dropping, altering, renaming, or truncating
tables, data, or other contents of the mysql database in your DB instance can result in errors and
might render the DB instance and your data inaccessible. If this occurs, you can restore the DB
instance from a snapshot using the AWS CLI restore-db-instance-from-db-snapshot command or
recover it using the restore-db-instance-to-point-in-time command.
Importing data from an external database
Note
If you are using a MySQL DB instance and your scenario supports it, it's easier to move data
in and out of Amazon RDS by using backup files and Amazon S3. For more information, see
Restoring a backup into a MySQL DB instance (p. 1680).
A typical mysqldump command to move data from an external database to an Amazon RDS DB instance
looks similar to the following.
mysqldump -u local_user \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-plocal_password | mysql -u RDS_user \
--port=port_number \
--host=host_name \
-pRDS_password
Important
Make sure not to leave a space between the -p option and the entered password.
Specify credentials other than the prompts shown here as a security best practice.
Make sure that you're aware of the following recommendations and considerations:
• Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
• If you need to migrate users and privileges, consider using a tool that generates the data control
language (DCL) for recreating them, such as the pt-show-grants utility.
• To perform the import, make sure the user doing so has access to the DB instance. For more
information, see Controlling access with security groups (p. 2680).
• -u local_user – Use to specify a user name. In the first usage of this parameter, you specify the
name of a user account on the local MariaDB or MySQL database identified by the --databases
parameter.
• --databases database_name – Use to specify the name of the database on the local MariaDB or
MySQL instance that you want to import into Amazon RDS.
• --single-transaction – Use to ensure that all of the data loaded from the local database is
consistent with a single point in time. If there are other processes changing the data while mysqldump
is reading it, using this parameter helps maintain data integrity.
• --compress – Use to reduce network bandwidth consumption by compressing the data from the local
database before sending it to Amazon RDS.
• --order-by-primary – Use to reduce load time by sorting each table's data by its primary key.
• -plocal_password – Use to specify a password. In the first usage of this parameter, you specify the
password for the user account identified by the first -u parameter.
• -u RDS_user – Use to specify a user name. In the second usage of this parameter, you specify the
name of a user account on the default database for the MariaDB or MySQL DB instance identified by
the --host parameter.
• --port port_number – Use to specify the port for your MariaDB or MySQL DB instance. By default,
this is 3306 unless you changed the value when creating the instance.
• --host host_name – Use to specify the Domain Name System (DNS) name from the Amazon RDS DB
instance endpoint, for example, myinstance.123456789012.us-east-1.rds.amazonaws.com.
You can find the endpoint value in the instance details in the Amazon RDS Management Console.
• -pRDS_password – Use to specify a password. In the second usage of this parameter, you specify the
password for the user account identified by the second -u parameter.
Make sure to create any stored procedures, triggers, functions, or events manually in your Amazon RDS
database. If you have any of these objects in the database that you are copying, then exclude them when
you run mysqldump. To do so, include the following parameters with your mysqldump command:
--routines=0 --triggers=0 --events=0.
The following example copies the world sample database on the local host to a MySQL DB instance.
For Windows, run the following command in a command prompt that has been opened by right-clicking
Command Prompt on the Windows programs menu and choosing Run as administrator:
mysqldump -u localuser ^
--databases world ^
--single-transaction ^
--compress ^
--order-by-primary ^
--routines=0 ^
--triggers=0 ^
--events=0 ^
-plocalpassword | mysql -u rdsuser ^
--port=3306 ^
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com ^
-prdspassword
Note
Specify credentials other than the prompts shown here as a security best practice.
Importing data to a DB instance with reduced downtime
In this procedure, you transfer a copy of your database data to an Amazon EC2 instance and import the
data into a new Amazon RDS database. You then use replication to bring the Amazon RDS database
up-to-date with your live external instance, before redirecting your application to the Amazon RDS
database. Configure MariaDB replication based on global transaction identifiers (GTIDs) if the external
instance is MariaDB 10.0.24 or higher and the target instance is RDS for MariaDB. Otherwise, configure
replication based on binary log coordinates. We recommend GTID-based replication if your external
database supports it because GTID-based replication is a more reliable method. For more information,
see Global transaction ID in the MariaDB documentation.
Note
If you want to import data into a MySQL DB instance and your scenario supports it, we
recommend moving data in and out of Amazon RDS by using backup files and Amazon S3. For
more information, see Restoring a backup into a MySQL DB instance (p. 1680).
Note
We don't recommend that you use this procedure with source MySQL databases from MySQL
versions earlier than version 5.5 because of potential replication issues. For more information,
see Replication compatibility between MySQL versions in the MySQL documentation.
You can use the mysqldump utility to create a database backup in either SQL or delimited-text format.
We recommend that you do a test run with each format in a non-production environment to see which
method minimizes the amount of time that mysqldump runs.
We also recommend that you weigh mysqldump performance against the benefit offered by using the
delimited-text format for loading. A backup using delimited-text format creates a tab-separated text
file for each table being dumped. To reduce the amount of time required to import your database, you
can load these files in parallel using the LOAD DATA LOCAL INFILE command. For more information
about choosing a mysqldump format and then loading the data, see Using mysqldump for backups in
the MySQL documentation.
Before you start the backup operation, make sure to set the replication options on the MariaDB or
MySQL database that you are copying to Amazon RDS. The replication options include turning on
binary logging and setting a unique server ID. Setting these options causes your server to start logging
database transactions and prepares it to be a source replication instance later in this process.
Note
Use the --single-transaction option with mysqldump because it dumps a consistent
state of the database. To ensure a valid dump file, don't run data definition language (DDL)
statements while mysqldump is running. You can schedule a maintenance window for these
operations.
Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
To migrate users and privileges, consider using a tool that generates the data control language
(DCL) for recreating them, such as the pt-show-grants utility.
sudo vi /etc/my.cnf
Add the log_bin and server_id options to the [mysqld] section. The log_bin option provides
a file name identifier for binary log files. The server_id option provides a unique identifier for the
server in source-replica relationships.
The following example shows the updated [mysqld] section of a my.cnf file.
[mysqld]
log-bin=mysql-bin
server-id=1
Specify --master-data=2 to create a backup file that can be used to start replication between
servers. For more information, see the mysqldump documentation.
To improve performance and ensure data integrity, use the --order-by-primary and
--single-transaction options of mysqldump.
To avoid including the MySQL system database in the backup, do not use the --all-databases
option with mysqldump. For more information, see Creating a data snapshot using mysqldump in the
MySQL documentation.
Use chmod if necessary to make sure that the directory where the backup file is being created is
writable.
Important
On Windows, run the command window as an administrator.
• To produce SQL output, use the following command.
sudo mysqldump \
--databases database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-r backup.sql \
-u local_user \
-p password
Note
Specify credentials other than the prompts shown here as a security best practice.
For Windows:
mysqldump ^
--databases database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-r backup.sql ^
-u local_user ^
-p password
Note
Specify credentials other than the prompts shown here as a security best practice.
• To produce delimited-text output, use the following command.
sudo mysqldump \
--tab=target_directory \
--fields-terminated-by ',' \
--fields-enclosed-by '"' \
--lines-terminated-by 0x0d0a \
database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-p password
For Windows:
mysqldump ^
--tab=target_directory ^
--fields-terminated-by "," ^
--fields-enclosed-by """ ^
--lines-terminated-by 0x0d0a ^
database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-p password
Note
Specify credentials other than the prompts shown here as a security best practice.
Make sure to create any stored procedures, triggers, functions, or events manually in
your Amazon RDS database. If you have any of these objects in the database that you
are copying, exclude them when you run mysqldump. To do so, include the following
arguments with your mysqldump command: --routines=0 --triggers=0 --events=0.
When using the delimited-text format, a CHANGE MASTER TO comment is returned when you
run mysqldump. This comment contains the master log file name and position. If the external
instance is not MariaDB version 10.0.24 or higher, note the values for MASTER_LOG_FILE
and MASTER_LOG_POS. You need these values when setting up replication.
If you are using SQL format, you can get the master log file name and position in the CHANGE
MASTER TO comment in the backup file. If the external instance is MariaDB version 10.0.24 or
higher, you can get the GTID in the next step.
2. If the external instance you are using is MariaDB version 10.0.24 or higher, you use GTID-based
replication. Run SHOW MASTER STATUS on the external MariaDB instance to get the binary log file
name and position, then convert them to a GTID by running BINLOG_GTID_POS on the external
MariaDB instance.
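For example, assuming SHOW MASTER STATUS reports binary log file mysql-bin.000003 at position 1234 (placeholder values), the conversion is a sketch like the following, using MariaDB's built-in BINLOG_GTID_POS function:

```sql
-- On the external MariaDB instance:
SHOW MASTER STATUS;
-- Suppose the output reports File: mysql-bin.000003, Position: 1234.
-- Convert those binary log coordinates to a GTID:
SELECT BINLOG_GTID_POS('mysql-bin.000003', 1234);
```

Note the GTID that the query returns; you use it later when starting replication.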
gzip backup.sql
Important
Be sure to copy sensitive data using a secure network transfer protocol.
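For example, you can copy the compressed backup to the Amazon EC2 instance over SSH with scp. The key file name and host name shown here are placeholders for your own values:

```shell
scp -i my-key.pem backup.sql.gz ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com:/home/ec2-user/
```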
5. Connect to your Amazon EC2 instance and install the latest updates and the MySQL client tools using
the following commands.
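On an Amazon Linux AMI, the commands typically look like the following sketch. The package name is an assumption; it varies by distribution and version (on some Amazon Linux versions the client is provided by the mariadb package):

```shell
sudo yum update -y
sudo yum install -y mysql
```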
For more information, see Connect to your instance in the Amazon Elastic Compute Cloud User Guide
for Linux.
Important
This example installs the MySQL client on an Amazon Machine Image (AMI) for an Amazon
Linux distribution. It doesn't work for installing the MySQL client on a different distribution,
such as Ubuntu or Red Hat Enterprise Linux. For information about installing MySQL,
see Installing and Upgrading MySQL in the MySQL documentation.
6. While connected to your Amazon EC2 instance, decompress your database backup file. The following
are examples.
• To decompress SQL output, use the following command.
gzip backup.sql.gz -d
To create a MariaDB or MySQL DB instance, follow the instructions in Creating an Amazon RDS DB
instance (p. 300) and use the following guidelines:
• Specify a DB engine version that is compatible with your source DB instance, as follows:
• If your source instance is MySQL 5.5.x, the Amazon RDS DB instance must be MySQL.
• If your source instance is MySQL 5.6.x or 5.7.x, the Amazon RDS DB instance must be MySQL or
MariaDB.
• If your source instance is MySQL 8.0.x, the Amazon RDS DB instance must be MySQL 8.0.x.
• If your source instance is MariaDB 5.5 or higher, the Amazon RDS DB instance must be MariaDB.
• Specify the same virtual private cloud (VPC) and VPC security group as for your Amazon EC2
instance. This approach ensures that your Amazon EC2 instance and your Amazon RDS instance
are visible to each other over the network. Make sure your DB instance is publicly accessible. To
set up replication with your source database as described later, your DB instance must be publicly
accessible.
• Don't configure multiple Availability Zones, backup retention, or read replicas until after you have
imported the database backup. When that import is completed, you can configure Multi-AZ and
backup retention for the production instance.
3. Review the default configuration options for the Amazon RDS database. If the default parameter
group for the database doesn't have the configuration options that you want, find a different one
that does or create a new parameter group. For more information on creating a parameter group,
see Working with parameter groups (p. 347).
4. Connect to the new Amazon RDS database as the master user. Create the users required to support
the administrators, applications, and services that need to access the instance. The hostname for the
Amazon RDS database is the Endpoint value for this instance without including the port number.
An example is mysampledb.123456789012.us-west-2.rds.amazonaws.com. You can find the
endpoint value in the database details in the Amazon RDS Management Console.
5. Connect to your Amazon EC2 instance. For more information, see Connect to your instance in the
Amazon Elastic Compute Cloud User Guide for Linux.
6. Connect to your Amazon RDS database as a remote host from your Amazon EC2 instance using the
mysql command. The following is an example.
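A sketch of such a connection, using placeholder endpoint and user values:

```shell
# Interactive connection to the Amazon RDS database:
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u RDS_user -p

# For a SQL-format backup file, the dump can be piped in directly:
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u RDS_user -p < backup.sql
```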
• For delimited-text format, first create the database, if it isn't the default database you created
when setting up the Amazon RDS database.
mysql> LOAD DATA LOCAL INFILE 'table1.txt' INTO TABLE table1 FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '0x0d0a';
mysql> LOAD DATA LOCAL INFILE 'table2.txt' INTO TABLE table2 FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '0x0d0a';
And so on for each remaining table.
To improve performance, you can perform these operations in parallel from multiple connections
so that all of your tables are created and then loaded at the same time.
Note
If you used any data-formatting options with mysqldump when you initially dumped
the table, make sure to use the same options with mysqlimport or LOAD DATA LOCAL
INFILE to ensure proper interpretation of the data file contents.
8. Run a simple SELECT query against one or two of the tables in the imported database to verify that
the import was successful.
If you no longer need the Amazon EC2 instance used in this procedure, terminate the EC2 instance
to reduce your AWS resource usage. To terminate an EC2 instance, see Terminating an instance in the
Amazon EC2 User Guide.
The permissions required to start replication on an Amazon RDS database are restricted and not
available to your Amazon RDS master user. Because of this, make sure to use either the Amazon RDS
mysql.rds_set_external_master (p. 1769) command or the mysql.rds_set_external_master_gtid (p. 1345)
command to configure replication, and the mysql.rds_start_replication (p. 1780) command to start
replication between your live database and your Amazon RDS database.
To start replication
Earlier, you turned on binary logging and set a unique server ID for your source database. Now you can
set up your Amazon RDS database as a replica with your live database as the source replication instance.
1. In the Amazon RDS Management Console, add the IP address of the server that hosts the source
database to the VPC security group for the Amazon RDS database. For more information on modifying
a VPC security group, see Security groups for your VPC in the Amazon Virtual Private Cloud User Guide.
You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS database, so that it can communicate with your source instance. To find the IP
address of the Amazon RDS database, use the host command.
host rds_db_endpoint
The hostname is the DNS name from the Amazon RDS database endpoint, for example
myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the endpoint value
in the instance details in the Amazon RDS Management Console.
2. Using the client of your choice, connect to the source instance and create a user to be used for
replication. This account is used solely for replication and must be restricted to your domain to
improve security. The following is an example.
MySQL 8.0
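A typical form of such a statement, with placeholder user, domain, and password values, is the following sketch:

```sql
CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
```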
Note
Specify credentials other than the prompts shown here as a security best practice.
3. For the source instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.
MySQL 8.0
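A sketch of such a statement, again with placeholder user and domain values:

```sql
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
```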
Note
Specify credentials other than the prompts shown here as a security best practice.
4. If you used SQL format to create your backup file and the external instance is not MariaDB 10.0.24 or
higher, look at the contents of that file.
cat backup.sql
The file includes a CHANGE MASTER TO comment that contains the master log file name and
position. This comment is included in the backup file when you use the --master-data option with
mysqldump. Note the values for MASTER_LOG_FILE and MASTER_LOG_POS.
--
-- Position to start replication or point-in-time recovery from
--
If you used delimited text format to create your backup file and the external instance isn't MariaDB
10.0.24 or higher, you should already have binary log coordinates from step 1 of the procedure at "To
create a backup copy of your existing database" in this topic.
If the external instance is MariaDB 10.0.24 or higher, you should already have the GTID from which to
start replication from step 2 of the procedure at "To create a backup copy of your existing database" in
this topic.
5. Make the Amazon RDS database the replica. If the external instance isn't MariaDB 10.0.24 or higher,
connect to the Amazon RDS database as the master user and identify the source database as the
source replication instance by using the mysql.rds_set_external_master (p. 1769) command. Use the
master log file name and master log position that you determined in the previous step if you have a
SQL format backup file. Or use the name and position that you determined when creating the backup
files if you used delimited-text format. The following is an example.
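A sketch of such a call. The host, credentials, log file name, and position are placeholders for your own values, and the final argument (0) disables SSL for the replication connection:

```sql
CALL mysql.rds_set_external_master (
  'mymasterserver.mydomain.com',  -- host of the source database
  3306,                           -- port
  'repl_user',                    -- replication user
  'password',                     -- replication password
  'mysql-bin-changelog.000031',   -- the MASTER_LOG_FILE value you noted
  107,                            -- the MASTER_LOG_POS value you noted
  0);                             -- 0 = no SSL
```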
Note
Specify credentials other than the prompts shown here as a security best practice.
If the external instance is MariaDB 10.0.24 or higher, connect to the Amazon RDS database as
the master user and identify the source database as the source replication instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in step 2
of the procedure at "To create a backup copy of your existing database" in this topic. The following is
an example.
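A sketch of such a call. The values are placeholders, and you should confirm the exact parameter list against the mysql.rds_set_external_master_gtid reference:

```sql
CALL mysql.rds_set_external_master_gtid (
  'mymasterserver.mydomain.com',  -- host of the source database
  3306,                           -- port
  'repl_user',                    -- replication user
  'password',                     -- replication password
  '0-1-16');                      -- GTID from BINLOG_GTID_POS
```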
6. On the Amazon RDS database, start replication with the following command.
CALL mysql.rds_start_replication;
7. On the Amazon RDS database, run the SHOW REPLICA STATUS command to determine when the
replica is up-to-date with the source replication instance. The results of the SHOW REPLICA STATUS
command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field
returns 0, then the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
8. After the Amazon RDS database is up-to-date, turn on automated backups so you can restore
that database if needed. You can turn on or modify automated backups for your Amazon RDS
database using the Amazon RDS Management Console. For more information, see Working with
backups (p. 591).
To redirect your live application to your MariaDB or MySQL database and stop
replication
1. In the VPC security group for the Amazon RDS database, add the IP address of the server that
hosts the application. For more information on modifying a VPC security group, see Security groups
for your VPC in the Amazon Virtual Private Cloud User Guide.
2. Verify that the Seconds_Behind_Master field in the SHOW REPLICA STATUS command results is 0,
which indicates that the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
3. Close all connections to the source when their transactions complete.
4. Update your application to use the Amazon RDS database. This update typically involves changing the
connection settings to identify the hostname and port of the Amazon RDS database, the user account
and password to connect with, and the database to use.
5. Connect to the DB instance.
6. Stop replication on your Amazon RDS database with the following command.
CALL mysql.rds_stop_replication;
7. Run the mysql.rds_reset_external_master (p. 1769) command on your Amazon RDS database to reset
the replication configuration so this instance is no longer identified as a replica.
CALL mysql.rds_reset_external_master;
8. Turn on additional Amazon RDS features such as Multi-AZ support and read replicas. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492) and Working with DB
instance read replicas (p. 438).
Importing data from any source
We also recommend creating DB snapshots of the target Amazon RDS DB instance before and after the
data load. Amazon RDS DB snapshots are complete backups of your DB instance that can be used to
restore your DB instance to a known state. When you initiate a DB snapshot, I/O operations to your DB
instance are momentarily suspended while your database is backed up.
Creating a DB snapshot immediately before the load makes it possible for you to restore the database
to its state before the load, if you need to. A DB snapshot taken immediately after the load protects
you from having to load the data again in case of a mishap and can also be used to seed new database
instances.
The following list shows the steps to take. Each step is discussed in more detail following.
Whenever possible, order the data by the primary key of the table being loaded. Doing this drastically
improves load times and minimizes disk storage requirements.
The speed and efficiency of this procedure depends on keeping the size of the files small. If the
uncompressed size of any individual file is larger than 1 GiB, split it into multiple files and load each one
separately.
On Unix-like systems (including Linux), use the split command. For example, the following command
splits the sales.csv file into multiple files of less than 1 GiB, splitting only at line breaks (-C 1024m).
The new files are named sales.part_00, sales.part_01, and so on.
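The command itself looks like the following. This runnable sketch generates a small stand-in sales.csv first so the example is self-contained; on a real dump file, only the split line is needed:

```shell
# Generate a small stand-in for the real dump file.
seq 1 200000 > sales.csv

# Split into chunks of at most 1 GiB, breaking only at line boundaries (-C).
# -d produces numeric suffixes: sales.part_00, sales.part_01, and so on.
split -C 1024m -d sales.csv sales.part_

ls sales.part_*
```

Because the stand-in file is well under 1 GiB, this produces a single part, sales.part_00.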
Of course, this might not be possible or practical. If you can't stop applications from accessing the DB
instance before the load, take steps to ensure the availability and integrity of your data. The specific
steps required vary greatly depending upon specific use cases and site requirements.
The example following uses the AWS CLI create-db-snapshot command to create a DB snapshot of
the AcmeRDS instance and give the DB snapshot the identifier "preload".
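The command would look like the following sketch:

```shell
aws rds create-db-snapshot \
    --db-instance-identifier AcmeRDS \
    --db-snapshot-identifier preload
```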
You can also use the restore from DB snapshot functionality to create test DB instances for dry runs or to
undo changes made during the load.
Keep in mind that restoring a database from a DB snapshot creates a new DB instance that, like all
DB instances, has a unique identifier and endpoint. To restore the DB instance without changing the
endpoint, first delete the DB instance so that you can reuse the endpoint.
For example, to create a DB instance for dry runs or other testing, you give the DB instance its own
identifier. In the example, AcmeRDS-2 is the identifier. The example connects to the DB instance using
the endpoint associated with AcmeRDS-2.
To reuse the existing endpoint, first delete the DB instance and then give the restored database the same
identifier.
The preceding example takes a final DB snapshot of the DB instance before deleting it. This is optional
but recommended.
Turning off automated backups erases all existing backups, so point-in-time recovery isn't possible after
automated backups have been turned off. Disabling automated backups is a performance optimization
and isn't required for data loads. Manual DB snapshots aren't affected by turning off automated backups.
All existing manual DB snapshots are still available for restore.
Turning off automated backups reduces load time by about 25 percent and reduces the amount of
storage space required during the load. If you plan to load data into a new DB instance that contains
no data, turning off backups is an easy way to speed up the load and avoid using the additional storage
needed for backups. However, in some cases you might plan to load into a DB instance that already
contains data. If so, weigh the benefits of turning off backups against the impact of losing the ability to
perform point-in-time-recovery.
DB instances have automated backups turned on by default (with a one-day retention period). To turn off
automated backups, set the backup retention period to zero. After the load, you can turn backups back
on by setting the backup retention period to a nonzero value. To turn on or turn off backups, Amazon
RDS shuts the DB instance down and restarts it to turn MariaDB or MySQL logging on or off.
Use the AWS CLI modify-db-instance command to set the backup retention to zero and apply the
change immediately. Setting the retention period to zero requires a DB instance restart, so wait until the
restart has completed before proceeding.
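A sketch of the command:

```shell
aws rds modify-db-instance \
    --db-instance-identifier AcmeRDS \
    --backup-retention-period 0 \
    --apply-immediately
```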
You can check the status of your DB instance with the AWS CLI describe-db-instances command.
The following example displays the DB instance status of the AcmeRDS DB instance.
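A sketch of the command, using the --query option to return just the status field:

```shell
aws rds describe-db-instances \
    --db-instance-identifier AcmeRDS \
    --query "DBInstances[*].DBInstanceStatus"
```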
Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for
CSV files, and the --local option specifies that the incoming data is located on the client. Without the
--local option, the Amazon RDS DB instance looks for the data on the database host, so always specify
the --local option. For the --host option, specify the DB instance endpoint of the RDS for MySQL DB instance.
In the following examples, replace master_user with the master username for your DB instance.
Replace hostname with the endpoint for your DB instance. An example of a DB instance endpoint is
my-db-instance.123456789012.us-west-2.rds.amazonaws.com.
For RDS for MySQL version 8.0.15 and higher, run the following statement before using the mysqlimport
utility.
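The statement grants the privilege that mysqlimport needs to set session variables on these versions. The user name here is a placeholder for your master user:

```sql
GRANT SESSION_VARIABLES_ADMIN ON *.* TO 'master_user'@'%';
```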
mysqlimport --local \
--compress \
--user=master_user \
--password \
--host=hostname \
--fields-terminated-by=',' Acme sales.part_*
For Windows:
mysqlimport --local ^
--compress ^
--user=master_user ^
--password ^
--host=hostname ^
--fields-terminated-by="," Acme sales.part_*
For very large data loads, take additional DB snapshots periodically between loading files and note
which files have been loaded. If a problem occurs, you can easily resume from the point of the last DB
snapshot, avoiding lengthy reloads.
The following example uses the AWS CLI modify-db-instance command to turn on automated
backups for the AcmeRDS DB instance and set the retention period to one day.
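A sketch of the command:

```shell
aws rds modify-db-instance \
    --db-instance-identifier AcmeRDS \
    --backup-retention-period 1 \
    --apply-immediately
```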
Working with MariaDB replication
You can configure replication based on binary log coordinates for a MariaDB DB instance. For
MariaDB instances, you can also configure replication based on global transaction IDs (GTIDs), which
provides better crash safety. For more information, see Configuring GTID-based replication with an
external source instance (p. 1328).
The following are other replication options available with RDS for MariaDB:
• You can set up replication between an RDS for MariaDB DB instance and a MySQL or MariaDB instance
that is external to Amazon RDS. For information about configuring replication with an external source,
see Configuring binary log file position replication with an external source instance (p. 1331).
• You can configure replication to import databases from a MySQL or MariaDB instance that is external
to Amazon RDS, or to export databases to such instances. For more information, see Importing data to
an Amazon RDS MariaDB or MySQL DB instance with reduced downtime (p. 1299) and Exporting data
from a MySQL DB instance by using replication (p. 1728).
For any of these replication options, you can use row-based, statement-based, or mixed
replication. Row-based replication replicates only the changed rows that result from a SQL statement.
Statement-based replication replicates the entire SQL statement. Mixed replication uses statement-
based replication when possible, but switches to row-based replication when SQL statements that are
unsafe for statement-based replication are run. In most cases, mixed replication is recommended. The
binary log format of the DB instance determines whether replication is row-based, statement-based, or
mixed. For information about setting the binary log format, see Binary logging format (p. 907).
Topics
• Working with MariaDB read replicas (p. 1318)
• Configuring GTID-based replication with an external source instance (p. 1328)
• Configuring binary log file position replication with an external source instance (p. 1331)
Topics
• Configuring read replicas with MariaDB (p. 1319)
• Configuring replication filters with MariaDB (p. 1319)
• Configuring delayed replication with MariaDB (p. 1324)
• Updating read replicas with MariaDB (p. 1325)
• Working with Multi-AZ read replica deployments with MariaDB (p. 1325)
• Using cascading read replicas with RDS for MariaDB (p. 1326)
• Monitoring MariaDB read replicas (p. 1326)
• Starting and stopping replication with MariaDB read replicas (p. 1327)
• Troubleshooting a MariaDB read replica problem (p. 1327)
Working with MariaDB read replicas
You can create up to 15 read replicas from one DB instance within the same AWS Region. For replication
to operate effectively, each read replica should have the same amount of compute and storage resources
as the source DB instance. If you scale the source DB instance, also scale the read replicas.
RDS for MariaDB supports cascading read replicas. To learn how to configure cascading read replicas, see
Using cascading read replicas with RDS for MariaDB (p. 1326).
You can run multiple read replica create and delete actions at the same time that reference the same
source DB instance. When you perform these actions, stay within the limit of 15 read replicas for each
source instance.
You can use replication filtering for the following purposes:
• To reduce the size of a read replica. With replication filtering, you can exclude the databases and tables
that aren't needed on the read replica.
• To exclude databases and tables from read replicas for security reasons.
• To replicate different databases and tables for specific use cases at different read replicas. For example,
you might use specific read replicas for analytics or sharding.
• For a DB instance that has read replicas in different AWS Regions, to replicate different databases or
tables in different AWS Regions.
Note
You can also use replication filters to specify which databases and tables are replicated
with a primary MariaDB DB instance that is configured as a replica in an inbound replication
topology. For more information about this configuration, see Configuring binary log file position
replication with an external source instance (p. 1724).
Topics
• Setting replication filtering parameters for RDS for MariaDB (p. 1319)
• Replication filtering limitations for RDS for MariaDB (p. 1320)
• Replication filtering examples for RDS for MariaDB (p. 1320)
• Viewing the replication filters for a read replica (p. 1323)
To configure replication filters, set the following replication filtering parameters on the read replica:
• replicate-do-db – Replicate changes to the specified databases. When you set this parameter for a
read replica, only the databases specified in the parameter are replicated.
• replicate-ignore-db – Don't replicate changes to the specified databases. When the replicate-
do-db parameter is set for a read replica, this parameter isn't evaluated.
• replicate-do-table – Replicate changes to the specified tables. When you set this parameter for a
read replica, only the tables specified in the parameter are replicated. Also, when the replicate-do-db
or replicate-ignore-db parameter is set, the database that includes the specified tables must
be included in replication with the read replica.
• replicate-ignore-table – Don't replicate changes to the specified tables. When the replicate-
do-table parameter is set for a read replica, this parameter isn't evaluated.
• replicate-wild-do-table – Replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-db or
replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-wild-ignore-table – Don't replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-table or
replicate-wild-do-table parameter is set for a read replica, this parameter isn't evaluated.
The parameters are evaluated in the order that they are listed. For more information about how these
parameters work, see the MariaDB documentation.
By default, each of these parameters has an empty value. On each read replica, you can use these
parameters to set, change, and delete replication filters. When you set one of these parameters, separate
each filter from others with a comma.
You can use the % and _ wildcard characters in the replicate-wild-do-table and replicate-
wild-ignore-table parameters. The % wildcard matches any number of characters, and the _
wildcard matches only one character.
The binary logging format of the source DB instance is important for replication because it determines
the record of data changes. The setting of the binlog_format parameter determines whether the
replication is row-based or statement-based. For more information, see Binary logging format (p. 907).
Note
All data definition language (DDL) statements are replicated as statements, regardless of the
binlog_format setting on the source DB instance.
You can set parameters in a parameter group using the AWS Management Console, AWS CLI, or RDS API.
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).
When you set parameters in a parameter group, all of the DB instances associated with the parameter
group use the parameter settings. If you set the replication filtering parameters in a parameter group,
make sure that the parameter group is associated only with read replicas. Leave the replication filtering
parameters empty for source DB instances.
The following examples set the parameters using the AWS CLI. These examples set ApplyMethod to
immediate so that the parameter changes occur immediately after the CLI command completes. If you
want a pending change to be applied after the read replica is rebooted, set ApplyMethod to
pending-reboot.
The following example includes the mydb1 and mydb2 databases in replication. When you set
replicate-do-db for a read replica, only the databases specified in the parameter are replicated.
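The command itself was lost in extraction; a likely reconstruction of the Linux or macOS form, following the pattern of the other parameter-group examples in this section (the parameter group name myparametergroup is a placeholder), is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"
```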
The following example includes the table1 and table2 tables in database mydb1 in replication.
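The Linux or macOS form of this command was lost in extraction; a likely reconstruction, equivalent to the Windows variant shown for this example, is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-do-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"
```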
For Windows:
aws rds modify-db-parameter-group ^
--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-do-table", "ParameterValue":
"mydb1.table1,mydb1.table2", "ApplyMethod":"immediate"}]"
The following example includes tables with names that begin with orders and returns in database
mydb in replication.
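The command was lost in extraction; a likely reconstruction using the replicate-wild-do-table parameter with % wildcards (parameter group name is a placeholder) is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb.orders%,mydb.returns%',ApplyMethod=immediate"
```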
The following example shows you how to use the escape character \ to escape a wildcard character that
is part of a name.
Assume that you have several table names in database mydb1 that start with my_table, and you want
to include these tables in replication. The table names include an underscore, which is also a wildcard
character, so the example escapes the underscore in the table names.
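The command was lost in extraction; a likely reconstruction, escaping the underscore in my\_table so that it is matched literally rather than as a wildcard, is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb1.my\_table%',ApplyMethod=immediate"
```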
The following example excludes the mydb1 and mydb2 databases from replication.
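The command was lost in extraction; a likely reconstruction using the replicate-ignore-db parameter is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-ignore-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"
```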
The following example excludes tables table1 and table2 in database mydb1 from replication.
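The command was lost in extraction; a likely reconstruction using the replicate-ignore-table parameter is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-ignore-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"
```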
The following example excludes tables with names that begin with orders and returns in database
mydb from replication.
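The command was lost in extraction; a likely reconstruction using the replicate-wild-ignore-table parameter is:

```shell
aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-wild-ignore-table,ParameterValue='mydb.orders%,mydb.returns%',ApplyMethod=immediate"
```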
To determine the replication filters configured on a read replica, you can use the following approaches:
• Check the settings of the replication filtering parameters in the parameter group associated with the
read replica.
For instructions, see Viewing parameter values for a DB parameter group (p. 359).
• In a MariaDB client, connect to the read replica and run the SHOW REPLICA STATUS statement.
In the output, the following fields show the replication filters for the read replica:
• Replicate_Do_DB
• Replicate_Ignore_DB
• Replicate_Do_Table
• Replicate_Ignore_Table
• Replicate_Wild_Do_Table
• Replicate_Wild_Ignore_Table
For more information about these fields, see Checking Replication Status in the MySQL
documentation.
Note
Previous versions of MariaDB used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MariaDB version before 10.5, then use SHOW SLAVE STATUS.
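As an illustration (not from the original text), you could check the filter fields from a shell using the mysql client; the endpoint and user name here are placeholders:

```shell
mysql -h myreplica.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW REPLICA STATUS\G" | grep Replicate_
```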
With delayed replication, you specify a minimum period of time, in seconds, to delay replication from
the source DB instance to the read replica. In a disaster recovery scenario, delayed replication gives you
time to do the following:
• Stop replication to the read replica before the change that caused the disaster is sent to it.
Topics
• Configuring delayed replication during read replica creation (p. 1324)
• Modifying delayed replication for an existing read replica (p. 1325)
• Promoting a read replica (p. 1325)
1. Using a MariaDB client, connect to the MariaDB DB instance to be the source for read replicas as the
master user.
2. Run the mysql.rds_set_configuration (p. 1758) stored procedure with the target delay
parameter.
For example, run the following stored procedure to specify that replication is delayed by at least one
hour (3,600 seconds) for any read replica created from the current DB instance.
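The stored procedure call was lost in extraction; based on the surrounding text, it is the following, shown here wrapped in a mysql client invocation with a placeholder endpoint:

```shell
mysql -h mariadb-main.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "call mysql.rds_set_configuration('target delay', 3600);"
```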
Note
After running this stored procedure, any read replica you create using the AWS CLI or
Amazon RDS API is configured with replication delayed by the specified number of seconds.
1. Using a MariaDB client, connect to the read replica as the master user.
2. Use the mysql.rds_stop_replication (p. 1782) stored procedure to stop replication.
3. Run the mysql.rds_set_source_delay (p. 1777) stored procedure.
For example, run the following stored procedure to specify that replication to the read replica is
delayed by at least one hour (3600 seconds).
call mysql.rds_set_source_delay(3600);
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in
another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB
instance is independent of whether the source database is a Multi-AZ DB instance.
With cascading read replicas, your RDS for MariaDB DB instance sends data to the first read replica in the
chain. That read replica then sends data to the second replica in the chain, and so on. The end result is
that all read replicas in the chain have the changes from the RDS for MariaDB DB instance, but without
placing the replication overhead solely on the source DB instance.
You can create a series of up to three read replicas in a chain from a source RDS for MariaDB DB instance.
For example, suppose that you have an RDS for MariaDB DB instance, mariadb-main. You can do the
following:
• Starting with mariadb-main, create the first read replica in the chain, read-replica-1.
• Next, from read-replica-1, create the next read replica in the chain, read-replica-2.
• Finally, from read-replica-2, create the third read replica in the chain, read-replica-3.
You can't create another read replica beyond this third cascading read replica in the series for mariadb-
main. A complete series of instances from an RDS for MariaDB source DB instance through to the end of
a series of cascading read replicas can consist of at most four DB instances.
For cascading read replicas to work, each source RDS for MariaDB DB instance must have automated
backups turned on. To turn on automatic backups on a read replica, first create the read replica, and
then modify the read replica to turn on automatic backups. For more information, see Creating a read
replica (p. 445).
As with any read replica, you can promote a read replica that's part of a cascade. Promoting a read
replica from within a chain of read replicas removes that replica from the chain. For example, suppose
that you want to move some of the workload from your mariadb-main DB instance to a new instance
for use by the accounting department only. Assuming the chain of three read replicas from the example,
you decide to promote read-replica-2. The chain is affected as follows:
For more information about promoting read replicas, see Promoting a read replica to be a standalone DB
instance (p. 447).
Common causes of replication lag for MariaDB include the following:
• A network outage.
• Writing to tables with indexes on a read replica. Writes are possible only when the read_only
parameter is set to 0 on the read replica, and writing to the read replica can break replication.
• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MariaDB.
When the ReplicaLag metric reaches 0, the replica has caught up to the source DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent to
Seconds_Behind_Master = NULL.
If replication is stopped for more than 30 consecutive days, either manually or due to a replication error,
Amazon RDS ends replication between the source DB instance and all read replicas. It does so to prevent
increased storage requirements on the source DB instance and long failover times. The read replica DB
instance is still available. However, replication can't be resumed because the binary logs required by the
read replica are deleted from the source DB instance after replication is ended. You can create a new read
replica for the source DB instance to reestablish replication.
You can do several things to reduce the lag between updates to a source DB instance and the subsequent
updates to the read replica, such as the following:
• Sizing a read replica to have a storage size and DB instance class comparable to the source DB
instance.
• Ensuring that parameter settings in the DB parameter groups used by the source DB instance and
the read replica are compatible. For more information and an example, see the discussion of the
max_allowed_packet parameter later in this section.
Amazon RDS monitors the replication status of your read replicas and updates the Replication State
field of the read replica instance to Error if replication stops for any reason. An example might be if
DML queries run on your read replica conflict with the updates made on the source DB instance.
You can review the details of the associated error thrown by the MariaDB engine by viewing the
Replication Error field. Events that indicate the status of the read replica are also generated,
including RDS-EVENT-0045 (p. 887), RDS-EVENT-0046 (p. 888), and RDS-EVENT-0047 (p. 883). For
more information about events and subscribing to events, see Working with Amazon RDS event
notification (p. 855). If a MariaDB error message is returned, review the error in the MariaDB error
message documentation.
One common issue that can cause replication errors is when the value for the max_allowed_packet
parameter for a read replica is less than the max_allowed_packet parameter for the source DB
instance. The max_allowed_packet parameter is a custom parameter that you can set in a DB
parameter group that is used to specify the maximum size of DML code that can be run on the database.
In some cases, the max_allowed_packet parameter value in the DB parameter group associated with
a read replica is smaller than the max_allowed_packet parameter value in the DB parameter
group associated with the source DB instance. In these cases, the replication process can throw an error
(Packet bigger than 'max_allowed_packet' bytes) and stop replication. You can fix the error by having
the source and read replica use DB parameter groups with the same max_allowed_packet parameter
values.
Other common situations that can cause replication errors include the following:
• Writing to tables on a read replica. If you are creating indexes on a read replica, you need to have the
read_only parameter set to 0 to create the indexes. If you are writing to tables on the read replica, it
might break replication.
• Using a non-transactional storage engine such as MyISAM. Read replicas require a transactional storage
engine. Replication is only supported for the InnoDB storage engine on MariaDB.
• Using unsafe nondeterministic queries such as SYSDATE(). For more information, see Determination
of safe and unsafe statements in binary logging.
If you decide that you can safely skip an error, you can follow the steps described in Skipping the current
replication error (p. 1744). Otherwise, you can delete the read replica and create an instance using the
same DB instance identifier so that the endpoint remains the same as that of your old read replica. If a
replication error is fixed, the Replication State changes to replicating.
For MariaDB DB instances, in some cases read replicas can't be switched to the secondary if some
binary log (binlog) events aren't flushed during the failure. In these cases, manually delete and recreate
the read replicas. You can reduce the chance of this happening by setting the following parameter
values: sync_binlog=1 and innodb_flush_log_at_trx_commit=1. These settings might reduce
performance, so test their impact before implementing the changes in a production environment.
• Monitor failover events for the RDS for MariaDB DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).
• Maintain the binary logs (binlogs) on your source instance until you have verified that they have been
applied to the replica. This maintenance ensures that you can restore your source instance in the event
of a failure.
• Turn on automated backups on your MariaDB DB instance on Amazon RDS. Turning on automated
backups ensures that you can restore your replica to a particular point in time if you need to
resynchronize your source instance and replica. For information on backups and Point-In-Time Restore,
see Backing up and restoring (p. 590).
Note
The permissions required to start replication on a MariaDB DB instance are restricted and
not available to your Amazon RDS master user. Because of this, you must use the Amazon
RDS mysql.rds_set_external_master_gtid (p. 1345) and mysql.rds_start_replication (p. 1780)
commands to set up replication between your live database and your RDS for MariaDB database.
To start replication between an external source instance and a MariaDB DB instance on Amazon RDS, use
the following procedure.
To start replication
1. Make the external MariaDB instance read-only, so that its data doesn't change while you copy it to
the MariaDB DB instance.
2. Get the current GTID of the external MariaDB instance. You can do this by using mysql or the query
editor of your choice to run SELECT @@gtid_current_pos;.
3. Copy the database from the external MariaDB instance to the MariaDB DB instance using
mysqldump. The following is an example.
mysqldump \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-u local_user \
-plocal_password | mysql \
--host=hostname \
--port=3306 \
-u RDS_user_name \
-pRDS_password
For Windows:
mysqldump ^
--databases database_name ^
--single-transaction ^
--compress ^
--order-by-primary ^
-u local_user ^
-plocal_password | mysql ^
--host=hostname ^
--port=3306 ^
-u RDS_user_name ^
-pRDS_password
Note
Make sure that there isn't a space between the -p option and the entered password.
Specify a password other than the prompt shown here as a security best practice.
Use the --host, --user (-u), --port and -p options in the mysql command to specify the host
name, user name, port, and password to connect to your MariaDB DB instance. The host name is the
DNS name from the MariaDB DB instance endpoint, for example myinstance.123456789012.us-
east-1.rds.amazonaws.com. You can find the endpoint value in the instance details in the
Amazon RDS Management Console.
4. Make the source MariaDB instance writeable again.
5. In the Amazon RDS Management Console, add the IP address of the server that hosts the external
MariaDB database to the VPC security group for the MariaDB DB instance. For more information on
modifying a VPC security group, go to Security groups for your VPC in the Amazon Virtual Private
Cloud User Guide.
The IP address can change when the following conditions are met:
• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.
If these conditions are met, verify the IP address before adding it.
You might also need to configure your local network to permit connections from the IP address of
your MariaDB DB instance, so that it can communicate with your external MariaDB instance. To find
the IP address of the MariaDB DB instance, use the host command.
host db_instance_endpoint
The host name is the DNS name from the MariaDB DB instance endpoint.
6. Using the client of your choice, connect to the external MariaDB instance and create a MariaDB user
to be used for replication. This account is used solely for replication and must be restricted to your
domain to improve security. The following is an example.
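The example itself was lost in extraction; a likely form, run from a shell against the external instance (the host name, admin user, repl_user, mydomain.com, and password are all placeholders), is:

```shell
mysql -h external-mariadb.example.com -u admin -p \
    -e "CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';"
```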
Note
Specify a password other than the prompt shown here as a security best practice.
7. For the external MariaDB instance, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. For example, to grant the REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain, issue
the following command.
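The command itself was lost in extraction; a likely form, run against the external instance with placeholder names matching the previous step, is:

```shell
mysql -h external-mariadb.example.com -u admin -p \
    -e "GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';"
```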
8. Make the MariaDB DB instance the replica. Connect to the MariaDB DB instance as the master
user and identify the external MariaDB database as the replication source instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in Step
2. The following is an example.
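The example call was lost in extraction; a sketch based on the mysql.rds_set_external_master_gtid arguments (host, port, replication user, password, GTID, and SSL flag) follows. The endpoint, credentials, and GTID value are placeholders; use the GTID from step 2.

```shell
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "CALL mysql.rds_set_external_master_gtid('source.example.com', 3306, 'repl_user', 'password', '0-1234510749-1', 0);"
```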
Note
Specify a password other than the prompt shown here as a security best practice.
9. On the MariaDB DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.
CALL mysql.rds_start_replication;
Configuring binary log file position replication with an external source instance
Topics
• Before you begin (p. 1331)
• Configuring binary log file position replication with an external source instance (p. 1331)
The permissions required to start replication on an Amazon RDS DB instance are restricted and not
available to your Amazon RDS master user. Because of this, make sure that you use the Amazon RDS
mysql.rds_set_external_master (p. 1769) and mysql.rds_start_replication (p. 1780) commands to set up
replication between your live database and your Amazon RDS database.
To set the binary logging format for a MySQL or MariaDB database, update the binlog_format
parameter. If your DB instance uses the default DB instance parameter group, create a new DB parameter
group to modify binlog_format settings. We recommend that you use the default setting for
binlog_format, which is MIXED. However, you can also set binlog_format to ROW or STATEMENT if
you need a specific binary log (binlog) format. Reboot your DB instance for the change to take effect.
For information about setting the binlog_format parameter, see Configuring MySQL binary
logging (p. 921). For information about the implications of different MySQL replication types,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.
• Monitor failover events for the Amazon RDS DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).
• Maintain the binlogs on your source instance until you have verified that they have been applied to
the replica. This maintenance makes sure that you can restore your source instance in the event of a
failure.
• Turn on automated backups on your Amazon RDS DB instance. Turning on automated backups makes
sure that you can restore your replica to a particular point in time if you need to re-synchronize your
source instance and replica. For information on backups and point-in-time restore, see Backing up and
restoring (p. 590).
1. Make the source MySQL or MariaDB instance read-only, so that its data doesn't change while you
copy it.
2. Run the SHOW MASTER STATUS command on the source MySQL or MariaDB instance to determine
the binlog location. The output is similar to the following.
File                         Position
------------------------------------
mysql-bin-changelog.000031   107
------------------------------------
3. Copy the database from the external instance to the Amazon RDS DB instance using mysqldump.
For very large databases, you might want to use the procedure in Importing data to an Amazon RDS
MariaDB or MySQL database with reduced downtime (p. 1690).
Note
Make sure that there isn't a space between the -p option and the entered password.
To specify the host name, user name, port, and password to connect to your Amazon RDS DB
instance, use the --host, --user (-u), --port and -p options in the mysql command. The
host name is the Domain Name Service (DNS) name from the Amazon RDS DB instance endpoint,
for example myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the
endpoint value in the instance details in the AWS Management Console.
4. Make the source MySQL or MariaDB instance writeable again.
For more information on making backups for use with replication, see the MySQL documentation.
5. In the AWS Management Console, add the IP address of the server that hosts the external database
to the virtual private cloud (VPC) security group for the Amazon RDS DB instance. For more
information on modifying a VPC security group, see Security groups for your VPC in the Amazon
Virtual Private Cloud User Guide.
The IP address can change when the following conditions are met:
• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.
If these conditions are met, verify the IP address before adding it.
You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS DB instance. You do this so that your local network can communicate with your
external MySQL or MariaDB instance. To find the IP address of the Amazon RDS DB instance, use the
host command.
host db_instance_endpoint
The host name is the DNS name from the Amazon RDS DB instance endpoint.
6. Using the client of your choice, connect to the external instance and create a user to use for
replication. Use this account solely for replication and restrict it to your domain to improve security.
The following is an example.
Note
Specify a password other than the prompt shown here as a security best practice.
7. For the external instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.
8. Make the Amazon RDS DB instance the replica. To do so, first connect to the Amazon RDS DB
instance as the master user. Then identify the external MySQL or MariaDB database as the source
instance by using the mysql.rds_set_external_master (p. 1769) command. Use the master log file
name and master log position that you determined in step 2. The following is an example.
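The example call was lost in extraction; a sketch based on the mysql.rds_set_external_master arguments (host, port, user, password, binlog file name, binlog position, and SSL flag), using the file and position shown in step 2 and placeholder credentials, is:

```shell
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "CALL mysql.rds_set_external_master('source.example.com', 3306, 'repl_user', 'password', 'mysql-bin-changelog.000031', 107, 0);"
```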
Note
On RDS for MySQL, you can choose to use delayed replication by running the
mysql.rds_set_external_master_with_delay (p. 1774) stored procedure instead.
On RDS for MySQL, one reason to use delayed replication is to turn on disaster
recovery with the mysql.rds_start_replication_until (p. 1780) stored procedure.
Currently, RDS for MariaDB supports delayed replication but doesn't support the
mysql.rds_start_replication_until procedure.
9. On the Amazon RDS DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.
CALL mysql.rds_start_replication;
Options for MariaDB
SERVER_AUDIT_FILE_PATH
Valid values: /rdsdbdata/log/audit/
Default value: /rdsdbdata/log/audit/
The location of the log file. The log file contains the record of the activity specified in
SERVER_AUDIT_EVENTS. For more information, see Viewing and listing database log files (p. 895)
and MariaDB database log files (p. 902).

SERVER_AUDIT_FILE_ROTATE_SIZE
Valid values: 1–1000000000
Default value: 1000000
The size in bytes that, when reached, causes the file to rotate. For more information, see Log file
size (p. 906).

SERVER_AUDIT_FILE_ROTATIONS
Valid values: 0–100
Default value: 9
The number of log rotations to save when server_audit_output_type=file. If set
to 0, then the log file never rotates. For more information, see Log file size (p. 906) and
Downloading a database log file (p. 896).

SERVER_AUDIT_EVENTS
Valid values: CONNECT, QUERY, TABLE, QUERY_DDL, QUERY_DML, QUERY_DML_NO_SELECT, QUERY_DCL
Default value: CONNECT, QUERY
The types of activity to record in the log. Installing the MariaDB Audit Plugin is itself logged.
• CONNECT: Log successful and unsuccessful connections to the database, and
disconnections from the database.
• QUERY: Log the text of all queries run against the database.
• TABLE: Log tables affected by queries when the queries are run against the database.
SERVER_AUDIT_INCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Include only activity from the specified users. By default, activity is recorded for
all users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are
mutually exclusive. If you add values to SERVER_AUDIT_INCL_USERS,
make sure no values are added to SERVER_AUDIT_EXCL_USERS.

SERVER_AUDIT_EXCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Exclude activity from the specified users. By default, activity is recorded for all
users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are
mutually exclusive. If you add values to SERVER_AUDIT_EXCL_USERS,
make sure no values are added to SERVER_AUDIT_INCL_USERS.

SERVER_AUDIT_LOGGING
Valid values: ON
Default value: ON
Logging is active. The only valid value is ON.
Amazon RDS does not support deactivating
logging. If you want to deactivate logging, remove
the MariaDB Audit Plugin. For more information,
see Removing the MariaDB Audit Plugin (p. 1336).

SERVER_AUDIT_QUERY_LOG_LIMIT
Valid values: 0–2147483647
Default value: 1024
The limit on the length of the query string in a record.
After you add the MariaDB Audit Plugin, you don't need to restart your DB instance. As soon as the
option group is active, auditing begins immediately.
1. Determine the option group you want to use. You can create a new option group or use an existing
option group. If you want to use an existing option group, skip to the next step. Otherwise, create a
custom DB option group. Choose mariadb for Engine, and choose 10.3 or higher for Major engine
version. For more information, see Creating an option group (p. 332).
2. Add the MARIADB_AUDIT_PLUGIN option to the option group, and configure the option settings.
For more information about adding options, see Adding an option to an option group (p. 335). For
more information about each setting, see Audit Plugin option settings (p. 1334).
3. Apply the option group to a new or existing DB instance.
• For a new DB instance, you apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, you apply the option group by modifying the DB instance and
attaching the new option group. For more information, see Modifying an Amazon RDS DB
instance (p. 401).
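The console steps above can also be sketched with the AWS CLI; the option group and DB instance names below are placeholders, not from the original text:

```shell
# Step 2: add the audit plugin option to an existing option group.
aws rds add-option-to-option-group \
    --option-group-name myoptiongroup \
    --options "OptionName=MARIADB_AUDIT_PLUGIN" \
    --apply-immediately

# Step 3: attach the option group to an existing DB instance.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --option-group-name myoptiongroup \
    --apply-immediately
```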
To remove the MariaDB Audit Plugin from a DB instance, do one of the following:
• Remove the MariaDB Audit Plugin option from the option group it belongs to. This change affects all
DB instances that use the option group. For more information, see Removing an option from an option
group (p. 343)
• Modify the DB instance and specify a different option group that doesn't include the plugin. This
change affects a single DB instance. You can specify the default (empty) option group, or a different
custom option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
Parameters for MariaDB
You can view the parameters available for a specific RDS for MariaDB version using the RDS console or
the AWS CLI. For information about viewing the parameters in a MariaDB parameter group in the RDS
console, see Viewing parameter values for a DB parameter group (p. 359).
Using the AWS CLI, you can view the parameters for an RDS for MariaDB version by running the
describe-engine-default-parameters command. Specify one of the following values for the --
db-parameter-group-family option:
• mariadb10.11
• mariadb10.6
• mariadb10.5
• mariadb10.4
• mariadb10.3
For example, to view the parameters for RDS for MariaDB version 10.6, run the following command.
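The command itself was lost in extraction; based on the preceding paragraph, it is:

```shell
aws rds describe-engine-default-parameters \
    --db-parameter-group-family mariadb10.6
```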
{
    "EngineDefaults": {
        "Parameters": [
            {
                "ParameterName": "alter_algorithm",
                "Description": "Specify the alter table algorithm.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "string",
                "AllowedValues": "DEFAULT,COPY,INPLACE,NOCOPY,INSTANT",
                "IsModifiable": true
            },
            {
                "ParameterName": "analyze_sample_percentage",
                "Description": "Percentage of rows from the table ANALYZE TABLE will sample to collect table statistics.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "float",
                "AllowedValues": "0-100",
                "IsModifiable": true
            },
            {
                "ParameterName": "aria_block_size",
                "Description": "Block size to be used for Aria index pages.",
                "Source": "engine-default",
                "ApplyType": "static",
                "DataType": "integer",
                "AllowedValues": "1024-32768",
                "IsModifiable": false
            },
            {
                "ParameterName": "aria_checkpoint_interval",
                "Description": "Interval in seconds between automatic checkpoints.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "integer",
                "AllowedValues": "0-4294967295",
                "IsModifiable": true
            },
            ...
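If you fetch the full default-parameter list, you can also filter for modifiable parameters in code. The following is a minimal sketch that works on sample data shaped like the output above; it does not call the AWS API itself.

```python
import json

# Sample fragment shaped like the describe-engine-default-parameters
# output shown above (illustrative data, not a live API response).
sample = json.loads("""
{
  "EngineDefaults": {
    "Parameters": [
      {"ParameterName": "alter_algorithm", "IsModifiable": true},
      {"ParameterName": "aria_block_size", "IsModifiable": false},
      {"ParameterName": "aria_checkpoint_interval", "IsModifiable": true}
    ]
  }
}
""")

# Keep only the parameters you are allowed to change in a DB parameter group.
modifiable = [
    p["ParameterName"]
    for p in sample["EngineDefaults"]["Parameters"]
    if p["IsModifiable"]
]
print(modifiable)  # ['alter_algorithm', 'aria_checkpoint_interval']
```

The same filtering can be done server-side with the AWS CLI --query option, as shown next.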
To list only the modifiable parameters for RDS for MariaDB version 10.6, run the following command.

For Linux, macOS, or Unix:

aws rds describe-engine-default-parameters \
    --db-parameter-group-family mariadb10.6 \
    --query 'EngineDefaults.Parameters[?IsModifiable==`true`]'

For Windows:

aws rds describe-engine-default-parameters ^
    --db-parameter-group-family mariadb10.6 ^
    --query "EngineDefaults.Parameters[?IsModifiable==`true`]"

MySQL parameters that aren't available

The following MySQL parameters aren't available in RDS for MariaDB:
• bind_address
• binlog_error_action
• binlog_gtid_simple_recovery
• binlog_max_flush_queue_time
• binlog_order_commits
• binlog_row_image
• binlog_rows_query_log_events
• binlogging_impossible_mode
• block_encryption_mode
• core_file
• default_tmp_storage_engine
• div_precision_increment
• end_markers_in_json
• enforce_gtid_consistency
• eq_range_index_dive_limit
• explicit_defaults_for_timestamp
• gtid_executed
• gtid-mode
• gtid_next
• gtid_owned
• gtid_purged
• log_bin_basename
• log_bin_index
• log_bin_use_v1_row_events
• log_slow_admin_statements
• log_slow_slave_statements
• log_throttle_queries_not_using_indexes
• master-info-repository
• optimizer_trace
• optimizer_trace_features
• optimizer_trace_limit
• optimizer_trace_max_mem_size
• optimizer_trace_offset
• relay_log_info_repository
• rpl_stop_slave_timeout
• slave_parallel_workers
• slave_pending_jobs_size_max
• slave_rows_search_algorithms
• storage_engine
• table_open_cache_instances
• timed_mutexes
• transaction_allow_batching
• validate-password
• validate_password_dictionary_file
• validate_password_length
• validate_password_mixed_case_count
• validate_password_number_count
• validate_password_policy
• validate_password_special_char_count
Migrating data from a MySQL DB snapshot to a MariaDB DB instance
Migrating the snapshot doesn't affect the original DB instance from which the snapshot was taken. You
can test and validate the new DB instance before diverting traffic to it as a replacement for the original
DB instance.
After you migrate from MySQL to MariaDB, the MariaDB DB instance is associated with the default DB
parameter group and option group. After you restore the DB snapshot, you can associate a custom DB
parameter group with the new DB instance. However, a MariaDB parameter group has a different set
of configurable system variables. For information about the differences between MySQL and MariaDB
system variables, see System Variable Differences between MariaDB and MySQL. To learn about DB
parameter groups, see Working with parameter groups (p. 347). To learn about option groups, see
Working with option groups (p. 331).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots, and then select the MySQL DB snapshot you want to
migrate.
3. For Actions, choose Migrate snapshot. The Migrate database page appears.
4. For Migrate to DB Engine, choose mariadb.
Amazon RDS selects the DB engine version automatically. You can't change the DB engine version.
5. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
6. Choose Migrate.
AWS CLI
To migrate data from a MySQL DB snapshot to a MariaDB DB instance, use the AWS CLI
restore-db-instance-from-db-snapshot command with the following parameters:

• --db-instance-identifier – The name of the new MariaDB DB instance.
• --db-snapshot-identifier – The identifier of the MySQL DB snapshot.
• --engine – The new DB engine; specify mariadb.

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier newmariadbinstance \
    --db-snapshot-identifier mysqlsnapshot \
    --engine mariadb

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
    --db-instance-identifier newmariadbinstance ^
    --db-snapshot-identifier mysqlsnapshot ^
    --engine mariadb
API
To migrate data from a MySQL DB snapshot to a MariaDB DB instance, call the Amazon RDS API
operation RestoreDBInstanceFromDBSnapshot.
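With the AWS SDK for Python (boto3), the same request takes the parameters sketched below. The identifiers are placeholders, and this sketch only assembles the request; pass it to client.restore_db_instance_from_db_snapshot(**params) to run the migration.

```python
# Request parameters for RestoreDBInstanceFromDBSnapshot when migrating
# a MySQL DB snapshot to MariaDB. Identifier values are placeholders.
params = {
    "DBInstanceIdentifier": "newmariadbinstance",  # new MariaDB DB instance
    "DBSnapshotIdentifier": "mysqlsnapshot",       # existing MySQL DB snapshot
    "Engine": "mariadb",                           # target engine
}

# Amazon RDS selects the engine version automatically, so EngineVersion
# is deliberately omitted from the request.
print(sorted(params))
```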
Incompatibilities between MariaDB and MySQL

SET old_passwords = 0;
UPDATE mysql.user SET plugin = 'mysql_native_password',
Password = PASSWORD('new_password')
WHERE (User, Host) = ('master_user_name', '%');
FLUSH PRIVILEGES;
• If your RDS master user account uses the SHA-256 password hash, make sure to reset the password
using the AWS Management Console, the modify-db-instance AWS CLI command, or the
ModifyDBInstance RDS API operation. For information about modifying a DB instance, see Modifying
an Amazon RDS DB instance (p. 401).
• MariaDB doesn't support the Memcached plugin. However, the data used by the Memcached plugin
is stored as InnoDB tables. After you migrate a MySQL DB snapshot, you can access the data used by
the Memcached plugin using SQL. For more information about the innodb_memcache database, see
InnoDB memcached Plugin Internals.
MariaDB on Amazon RDS SQL reference
You can use the system stored procedures that are available for MySQL DB instances and MariaDB
DB instances. These stored procedures are documented at RDS for MySQL stored procedure
reference (p. 1757). MariaDB DB instances support all of the stored procedures, except for
mysql.rds_start_replication_until and mysql.rds_start_replication_until_gtid.
Additionally, the following system stored procedures are supported only for Amazon RDS DB instances
running MariaDB:
mysql.rds_replica_status
Shows the replication status of a MariaDB read replica.
Call this procedure on the read replica to show status information on essential parameters of the replica
threads.
Syntax
CALL mysql.rds_replica_status;
Usage notes
This procedure is only supported for MariaDB DB instances running MariaDB version 10.5 and
higher. It is the equivalent of the SHOW REPLICA STATUS command, which isn't supported for
MariaDB version 10.5 and higher DB instances.
In prior versions of MariaDB, the equivalent SHOW SLAVE STATUS command required the REPLICATION
SLAVE privilege. In MariaDB version 10.5 and higher, it requires the REPLICATION REPLICA ADMIN
privilege. To protect the RDS management of MariaDB 10.5 and higher DB instances, this new privilege
isn't granted to the RDS master user.
Examples
The following example shows the status of a MariaDB read replica:
call mysql.rds_replica_status;
Connect_Retry: 60
Source_Log_File: mysql-bin-changelog.003988
Read_Source_Log_Pos: 405
Relay_Log_File: relaylog.011024
Relay_Log_Pos: 657
Relay_Source_Log_File: mysql-bin-changelog.003988
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
mysql.rds_sysinfo,mysql.rds_history,mysql.rds_replication_status
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 405
Relay_Log_Space: 1016
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Master: 0
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 807509301
Source_SSL_Crl:
Source_SSL_Crlpath:
Using_Gtid: Slave_Pos
Gtid_IO_Pos: 0-807509301-3980
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Replica_SQL_Running_State: Reading event from the relay log
Replica_DDL_Groups: 15
Replica_Non_Transactional_Groups: 0
Replica_Transactional_Groups: 3658
1 row in set (0.000 sec)
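The procedure's output is a series of "Name: value" lines, so a small client-side parser can pull out fields such as Seconds_Behind_Master for monitoring. A minimal sketch, run here on an abbreviated sample of the output above:

```python
# Abbreviated sample of the mysql.rds_replica_status output shown above.
raw = """\
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Seconds_Behind_Master: 0
Using_Gtid: Slave_Pos
"""

# Parse each "Name: value" line into a dictionary.
status = {}
for line in raw.splitlines():
    key, _, value = line.partition(":")
    status[key.strip()] = value.strip()

# A simple health check: both replica threads running, no measured lag.
healthy = (
    status["Replica_IO_Running"] == "Yes"
    and status["Replica_SQL_Running"] == "Yes"
    and int(status["Seconds_Behind_Master"]) == 0
)
print(healthy)  # True
```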
mysql.rds_set_external_master_gtid
Configures GTID-based replication from a MariaDB instance running external to Amazon RDS to a
MariaDB DB instance. This stored procedure is supported only where the external MariaDB instance
is version 10.0.24 or higher. When setting up replication where one or both instances do not support
MariaDB global transaction identifiers (GTIDs), use mysql.rds_set_external_master (p. 1769).
Using GTIDs for replication provides crash-safety features not offered by binary log replication, so we
recommend it in cases where the replicating instances support it.
Syntax
CALL mysql.rds_set_external_master_gtid(
host_name
, host_port
, replication_user_name
, replication_user_password
, gtid
, ssl_encryption
);
Parameters
host_name
String. The host name or IP address of the MariaDB instance running external to Amazon RDS that
will become the source instance.
host_port
Integer. The port used by the MariaDB instance running external to Amazon RDS to be configured
as the source instance. If your network configuration includes SSH port replication that converts the
port number, specify the port number that is exposed by SSH.
replication_user_name
String. The ID of a user with REPLICATION SLAVE permissions in the MariaDB DB instance to be
configured as the read replica.
replication_user_password
String. The password of the user ID specified in replication_user_name.
gtid
String. The global transaction ID on the source instance that replication should start from.
You can use @@gtid_current_pos to get the current GTID if the source instance has been locked
while you are configuring replication, so the binary log doesn't change between the points when you
get the GTID and when replication starts.
Otherwise, if you are using mysqldump version 10.0.13 or greater to populate the replica instance
prior to starting replication, you can get the GTID position in the output by using the
--master-data or --dump-slave options. If you are not using mysqldump version 10.0.13 or
greater, you can run the SHOW MASTER STATUS command or use those same mysqldump options to get
the binary log file name and position, then convert them to a GTID by running BINLOG_GTID_POS on
the external MariaDB instance.
For more information about the MariaDB implementation of GTIDs, go to Global transaction ID in
the MariaDB documentation.
ssl_encryption
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
Usage notes
The mysql.rds_set_external_master_gtid procedure must be run by the master user. It must be
run on the MariaDB DB instance that you are configuring as the replica of a MariaDB instance running
external to Amazon RDS. Before running mysql.rds_set_external_master_gtid, you must have
configured the instance of MariaDB running external to Amazon RDS as a source instance. For more
information, see Importing data into a MariaDB DB instance (p. 1296).
Warning
Do not use mysql.rds_set_external_master_gtid to manage replication between
two Amazon RDS DB instances. Use it only when replicating with a MariaDB instance running
external to RDS. For information about managing replication between Amazon RDS DB
instances, see Working with DB instance read replicas (p. 438).
When mysql.rds_set_external_master_gtid is called, Amazon RDS records the time, user, and an
action of "set master" in the mysql.rds_history and mysql.rds_replication_status tables.
Examples
When run on a MariaDB DB instance, the following example configures it as the replica of an instance of
MariaDB running external to Amazon RDS.
call mysql.rds_set_external_master_gtid
('Sourcedb.some.com',3306,'ReplicationUser','SomePassW0rd','0-123-456',0);
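Applications that automate this setup often assemble the CALL statement from configuration values. The helper below is a hypothetical sketch (the argument values are placeholders, and real code should use parameter binding in its database driver rather than string formatting):

```python
# Assemble the CALL statement for mysql.rds_set_external_master_gtid from
# its six arguments. For illustration only; prefer driver-level parameter
# binding over string interpolation in production code.
def set_external_master_gtid_sql(host, port, user, password, gtid, ssl):
    # ssl mirrors the ssl_encryption parameter and must be 0 or 1.
    if ssl not in (0, 1):
        raise ValueError("ssl_encryption must be 0 or 1")
    return (
        "CALL mysql.rds_set_external_master_gtid("
        f"'{host}', {port}, '{user}', '{password}', '{gtid}', {ssl});"
    )

sql = set_external_master_gtid_sql(
    "Sourcedb.some.com", 3306, "ReplicationUser", "SomePassW0rd", "0-123-456", 0
)
print(sql)
```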
mysql.rds_kill_query_id
Ends a query running against the MariaDB server.
Syntax
CALL mysql.rds_kill_query_id(queryID);
Parameters
queryID
Integer. The ID of the query to end.
Usage notes
To stop a query running against the MariaDB server, use the mysql.rds_kill_query_id procedure
and pass in the ID of that query. To obtain the query ID, query the information_schema
PROCESSLIST table, as shown following:

SELECT USER, HOST, COMMAND, TIME, STATE, INFO, QUERY_ID
FROM INFORMATION_SCHEMA.PROCESSLIST;
Examples
The following example ends a query with a query ID of 230040:
call mysql.rds_kill_query_id(230040);
Local time zone
To set the local time zone for a DB instance, set the time_zone parameter in the parameter group for
your DB instance to one of the supported values listed later in this section. When you set the time_zone
parameter for a parameter group, all DB instances and read replicas that are using that parameter group
change to use the new local time zone. For information on setting parameters in a parameter group, see
Working with parameter groups (p. 347).
After you set the local time zone, all new connections to the database reflect the change. If you have any
open connections to your database when you change the local time zone, you won't see the local time
zone update until after you close the connection and open a new connection.
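To illustrate what changes for those new connections: the same stored instant renders differently under a new session time zone. A minimal sketch using Python's standard library, with Asia/Tokyo modeled as a fixed UTC+9 offset (accurate for that zone, which has no daylight saving time):

```python
from datetime import datetime, timedelta, timezone

# An instant stored in UTC before the time_zone change.
utc_time = datetime(2024, 1, 15, 23, 30, tzinfo=timezone.utc)

# Asia/Tokyo is UTC+9 year-round, so a fixed offset suffices here.
tokyo = timezone(timedelta(hours=9), name="Asia/Tokyo")

# A new connection with time_zone = Asia/Tokyo sees the same instant
# shifted nine hours forward (crossing midnight in this example).
local_time = utc_time.astimezone(tokyo)
print(local_time.isoformat())  # 2024-01-16T08:30:00+09:00
```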
You can set a different local time zone for a DB instance and one or more of its read replicas. To do this,
use a different parameter group for the DB instance and the replica or replicas and set the time_zone
parameter in each parameter group to a different local time zone.
If you are replicating across AWS Regions, then the source DB instance and the read replica use different
parameter groups (parameter groups are unique to an AWS Region). To use the same local time zone
for each instance, you must set the time_zone parameter in the instance's and read replica's parameter
groups.
When you restore a DB instance from a DB snapshot, the local time zone is set to UTC. You can update
the time zone to your local time zone after the restore is complete. If you restore a DB instance to a
point in time, then the local time zone for the restored DB instance is the time zone setting from the
parameter group of the restored DB instance.
The Internet Assigned Numbers Authority (IANA) publishes new time zones at https://www.iana.org/
time-zones several times a year. Every time RDS releases a new minor maintenance release of MariaDB, it
ships with the latest time zone data at the time of the release. When you use the latest RDS for MariaDB
versions, you have recent time zone data from RDS. To ensure that your DB instance has recent time
zone data, we recommend upgrading to a higher DB engine version. Alternatively, you can modify the
time zone tables in MariaDB DB instances manually. To do so, you can use SQL commands or run the
mysql_tzinfo_to_sql tool in a SQL client. After updating the time zone data manually, reboot your DB
instance so that the changes take effect. RDS doesn't modify or reset the time zone data of running DB
instances. New time zone data is installed only when you perform a database engine version upgrade.
You can set your local time zone to one of the following values.
Africa/Cairo
Africa/Casablanca
Africa/Harare
Africa/Monrovia
Africa/Nairobi
Africa/Tripoli
Africa/Windhoek
America/Araguaina
America/Asuncion
America/Bogota
America/Buenos_Aires
America/Caracas
America/Chihuahua
America/Cuiaba
America/Denver
America/Fortaleza
America/Guatemala
America/Halifax
America/Manaus
America/Matamoros
America/Monterrey
America/Montevideo
America/Phoenix
America/Santiago
America/Tijuana
Asia/Amman
Asia/Ashgabat
Asia/Baghdad
Asia/Baku
Asia/Bangkok
Asia/Beirut
Asia/Calcutta
Asia/Damascus
Asia/Dhaka
Asia/Irkutsk
Asia/Jerusalem
Asia/Kabul
Asia/Karachi
Asia/Kathmandu
Asia/Krasnoyarsk
Asia/Magadan
Asia/Muscat
Asia/Novosibirsk
Asia/Riyadh
Asia/Seoul
Asia/Shanghai
Asia/Singapore
Asia/Taipei
Asia/Tehran
Asia/Tokyo
Asia/Ulaanbaatar
Asia/Vladivostok
Asia/Yakutsk
Asia/Yerevan
Atlantic/Azores
Australia/Adelaide
Australia/Brisbane
Australia/Darwin
Australia/Hobart
Australia/Perth
Australia/Sydney
Brazil/East
Canada/Newfoundland
Canada/Saskatchewan
Canada/Yukon
Europe/Amsterdam
Europe/Athens
Europe/Dublin
Europe/Helsinki
Europe/Istanbul
Europe/Kaliningrad
Europe/Moscow
Europe/Paris
Europe/Prague
Europe/Sarajevo
Pacific/Auckland
Pacific/Fiji
Pacific/Guam
Pacific/Honolulu
Pacific/Samoa
US/Alaska
US/Central
US/Eastern
US/East-Indiana
US/Pacific
UTC
Known issues and limitations for MariaDB
Topics
• MariaDB file size limits in Amazon RDS (p. 1352)
• InnoDB reserved word (p. 1353)
• Custom ports (p. 1353)
• Performance Insights (p. 1353)
There are advantages and disadvantages to using InnoDB file-per-table tablespaces, depending on your
application. To determine the best approach for your application, see File-per-table tablespaces in the
MySQL documentation.
We don't recommend allowing tables to grow to the maximum file size. In general, a better practice is to
partition data into smaller tables, which can improve performance and recovery times.
One option that you can use for breaking up a large table into smaller tables is partitioning. Partitioning
distributes portions of your large table into separate files based on rules that you specify. For example,
if you store transactions by date, you can create partitioning rules that distribute older transactions into
separate files using partitioning. Then periodically, you can archive the historical transaction data that
doesn't need to be readily available to your application. For more information, see Partitioning in the
MySQL documentation.
• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.
Note
For MariaDB 10.6 and higher, this query also returns the size of the InnoDB system
tablespace.
For MariaDB versions earlier than 10.6, you can't determine the size of the InnoDB system
tablespace by querying the system tables. We recommend that you upgrade to a later
version.
SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_SYS_TABLESPACES ORDER BY 3 DESC;
• Use the following SQL command to determine if any of your non-InnoDB user tables are too large.

SELECT TABLE_SCHEMA, TABLE_NAME,
    ROUND(((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024), 2) AS "Approximate size (GB)"
FROM information_schema.TABLES
WHERE ENGINE <> 'InnoDB';
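The size queries above report lengths rounded to gigabytes by dividing a byte count by 1024 three times. The same conversion in Python, for example when post-processing rows fetched by a client:

```python
# Mirror the ROUND(size / 1024 / 1024 / 1024, 2) conversion used in the
# size queries above: bytes to gigabytes, rounded to two decimal places.
def tablespace_size_gb(allocated_size_bytes):
    return round(allocated_size_bytes / 1024 / 1024 / 1024, 2)

# A 5 GiB tablespace:
print(tablespace_size_gb(5 * 1024**3))  # 5.0
```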
1352
Amazon Relational Database Service User Guide
InnoDB reserved word
• To enable InnoDB file-per-table tablespaces, set the innodb_file_per_table parameter to 1 in
the parameter group for the DB instance.
• To disable InnoDB file-per-table tablespaces, set the innodb_file_per_table parameter to 0 in
the parameter group for the DB instance.
For information on updating a parameter group, see Working with parameter groups (p. 347).
When you have enabled or disabled InnoDB file-per-table tablespaces, you can issue an ALTER TABLE
command to move a table from the global tablespace to its own tablespace, or from its own
tablespace to the global tablespace. Following is an example.

ALTER TABLE table_name ENGINE=InnoDB, ALGORITHM=COPY;
Custom ports
Amazon RDS blocks connections to custom port 33060 for the MariaDB engine. Choose a different port
for your MariaDB engine.
Performance Insights
InnoDB counters are not visible in Performance Insights for RDS for MariaDB version 10.11 because the
MariaDB community no longer supports them.
Microsoft SQL Server on Amazon RDS

(Table of supported SQL Server versions — major version, service pack / cumulative update / GDR,
minor version, Knowledge Base article, and release date — omitted here.)
For information about licensing for SQL Server, see Licensing Microsoft SQL Server on Amazon
RDS (p. 1379). For information about SQL Server builds, see this Microsoft support article about the
latest SQL Server builds.
With Amazon RDS, you can create DB instances and DB snapshots, point-in-time restores, and automated
or manual backups. DB instances running SQL Server can be used inside a VPC. You can also use Secure
Sockets Layer (SSL) to connect to a DB instance running SQL Server, and you can use transparent data
encryption (TDE) to encrypt data at rest. Amazon RDS currently supports Multi-AZ deployments for SQL
Server using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs) as a high-
availability, failover solution.
To deliver a managed service experience, Amazon RDS does not provide shell access to DB instances,
and it restricts access to certain system procedures and tables that require advanced privileges. Amazon
RDS supports access to databases on a DB instance using any standard SQL client application such
as Microsoft SQL Server Management Studio. Amazon RDS does not allow direct host access to a DB
instance via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection. When you create a DB
instance, the master user is assigned to the db_owner role for all user databases on that instance, and has
all database-level permissions except for those that are used for backups. Amazon RDS manages backups
for you.
Before creating your first DB instance, you should complete the steps in the setting up section of this
guide. For more information, see Setting up for Amazon RDS (p. 174).
Topics
• Common management tasks for Microsoft SQL Server on Amazon RDS (p. 1355)
• Limitations for Microsoft SQL Server DB instances (p. 1357)
• DB instance class support for Microsoft SQL Server (p. 1358)
• Microsoft SQL Server security (p. 1360)
• Compliance program support for Microsoft SQL Server DB instances (p. 1361)
• SSL support for Microsoft SQL Server DB instances (p. 1362)
• Microsoft SQL Server versions on Amazon RDS (p. 1362)
• Version management in Amazon RDS (p. 1363)
• Microsoft SQL Server features on Amazon RDS (p. 1364)
• Change data capture support for Microsoft SQL Server DB instances (p. 1366)
• Features not supported and features with limited support (p. 1367)
• Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability
groups (p. 1368)
• Using Transparent Data Encryption to encrypt data at rest (p. 1368)
• Functions and stored procedures for Amazon RDS for Microsoft SQL Server (p. 1368)
• Local time zone for Microsoft SQL Server DB instances (p. 1371)
• Licensing Microsoft SQL Server on Amazon RDS (p. 1379)
• Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)
• Working with Active Directory with RDS for SQL Server (p. 1387)
• Updating applications to connect to Microsoft SQL Server DB instances using new SSL/TLS
certificates (p. 1411)
• Upgrading the Microsoft SQL Server DB engine (p. 1414)
• Importing and exporting SQL Server databases using native backup and restore (p. 1419)
• Working with read replicas for Microsoft SQL Server in Amazon RDS (p. 1446)
• Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450)
• Additional features for Microsoft SQL Server on Amazon RDS (p. 1455)
• Options for the Microsoft SQL Server database engine (p. 1514)
• Common DBA tasks for Microsoft SQL Server (p. 1602)
When you create your DB instance, you can configure it to take automated backups. You can also
back up and restore your databases manually by using full backup files (.bak files). For more
information, see Importing and exporting SQL Server databases using native backup and
restore (p. 1419).
There are also advanced administrative tasks for working with SQL Server DB instances. For more
information, see the following documentation:
Limitations
• The maximum number of databases supported on a DB instance depends on the instance class type
and the availability mode—Single-AZ, Multi-AZ Database Mirroring (DBM), or Multi-AZ Availability
Groups (AGs). The Microsoft SQL Server system databases don't count toward this limit.
The following table shows the maximum number of supported databases for each instance class type
and availability mode. Use this table to help you decide if you can move from one instance class type
to another, or from one availability mode to another. If your source DB instance has more databases
than the target instance class type or availability mode can support, modifying the DB instance fails.
You can see the status of your request in the Events pane.
Instance class type              Single-AZ    Multi-AZ with DBM    Multi-AZ with Always On AGs
db.*.large                       30           30                   30
db.*.xlarge to db.*.16xlarge     100          50                   75
For example, let's say that your DB instance runs on a db.*.16xlarge with Single-AZ and that it has 76
databases. You modify the DB instance to upgrade to using Multi-AZ Always On AGs. This upgrade
fails, because your DB instance contains more databases than your target configuration can support. If
you upgrade your instance class type to db.*.24xlarge instead, the modification succeeds.
If the upgrade fails, you see events and messages similar to the following:
• Unable to modify database instance class. The instance has 76 databases, but after conversion it
would only support 75.
• Unable to convert the DB instance to Multi-AZ: The instance has 76 databases, but after conversion
it would only support 75.
If the point-in-time restore or snapshot restore fails, you see events and messages similar to the
following:
• Database instance put into incompatible-restore. The instance has 76 databases, but after
conversion it would only support 75.
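The database-count check described above can be sketched as a lookup against the limits table. The class-type labels below are shorthand for the table's two rows, not real instance class names:

```python
# Maximum supported databases per instance class type and availability
# mode, from the table above. "db.*.xlarge+" is shorthand for the
# db.*.xlarge to db.*.16xlarge row.
MAX_DATABASES = {
    ("db.*.large", "Single-AZ"): 30,
    ("db.*.large", "Multi-AZ DBM"): 30,
    ("db.*.large", "Multi-AZ AGs"): 30,
    ("db.*.xlarge+", "Single-AZ"): 100,
    ("db.*.xlarge+", "Multi-AZ DBM"): 50,
    ("db.*.xlarge+", "Multi-AZ AGs"): 75,
}

def modification_succeeds(db_count, target_class, target_mode):
    # A modification fails when the instance holds more user databases
    # than the target configuration supports.
    return db_count <= MAX_DATABASES[(target_class, target_mode)]

# 76 databases: converting to Multi-AZ AGs fails (limit 75)...
print(modification_succeeds(76, "db.*.xlarge+", "Multi-AZ AGs"))  # False
# ...but Single-AZ on the larger class type succeeds (limit 100).
print(modification_succeeds(76, "db.*.xlarge+", "Single-AZ"))     # True
```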
• Some ports are reserved for Amazon RDS, and you can't use them when you create a DB instance.
• Client connections from IP addresses within the range 169.254.0.0/16 are not permitted. This is the
Automatic Private IP Addressing Range (APIPA), which is used for local-link addressing.
• SQL Server Standard Edition uses only a subset of the available processors if the DB instance
has more processors than the software limits (24 cores, 4 sockets, and 128 GB of RAM). Examples of this are the
db.m5.24xlarge and db.r5.24xlarge instance classes.
For more information, see the table of scale limits under Editions and supported features of SQL
Server 2019 (15.x) in the Microsoft documentation.
• Amazon RDS for SQL Server doesn't support importing data into the msdb database.
• You can't rename databases on a DB instance in a SQL Server Multi-AZ deployment.
• Make sure that you use these guidelines when setting the following DB parameters on RDS for SQL
Server:
• max server memory (mb) >= 256 MB
• max worker threads >= (number of logical CPUs * 7)
For more information on setting DB parameters, see Working with parameter groups (p. 347).
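The two guideline checks above can be expressed directly; a minimal sketch for validating proposed parameter values before applying them:

```python
# Guideline checks for the two RDS for SQL Server DB parameters above.
def valid_max_server_memory(mb):
    # max server memory (mb) must be at least 256 MB.
    return mb >= 256

def valid_max_worker_threads(threads, logical_cpus):
    # max worker threads must be at least 7 per logical CPU.
    return threads >= logical_cpus * 7

print(valid_max_server_memory(1024))       # True
print(valid_max_worker_threads(512, 64))   # True (64 * 7 = 448)
print(valid_max_worker_threads(256, 64))   # False
```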
• The maximum storage size for SQL Server DB instances is the following:
• General Purpose (SSD) storage – 16 TiB for all editions
• Provisioned IOPS storage – 16 TiB for all editions
• Magnetic storage – 1 TiB for all editions
If you have a scenario that requires a larger amount of storage, you can use sharding across multiple
DB instances to get around the limit. This approach requires data-dependent routing logic in
applications that connect to the sharded system. You can use an existing sharding framework, or you
can write custom code to enable sharding. If you use an existing framework, the framework can't
install any components on the same server as the DB instance.
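Data-dependent routing of the kind described above can be as simple as a stable hash of the sharding key. A minimal sketch, with placeholder endpoint names:

```python
import hashlib

# Shard endpoints are placeholders for your DB instance endpoints.
SHARDS = [
    "sqlserver-shard-0.example.amazonaws.com",
    "sqlserver-shard-1.example.amazonaws.com",
    "sqlserver-shard-2.example.amazonaws.com",
]

def shard_for(key: str) -> str:
    # Use a stable hash so every application node routes the same key to
    # the same shard (Python's built-in hash() is randomized per process).
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

# The same key always maps to the same shard endpoint.
print(shard_for("customer-42") == shard_for("customer-42"))  # True
```

Note that a fixed modulo mapping like this makes adding shards disruptive; consistent hashing is a common refinement when the shard count may grow.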
• The minimum storage size for SQL Server DB instances is the following:
• General Purpose (SSD) storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Provisioned IOPS storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Magnetic storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Amazon RDS doesn't support running these services on the same server as your RDS DB instance:
• Data Quality Services
• Master Data Services
To use these features, we recommend that you install SQL Server on an Amazon EC2 instance, or use
an on-premises SQL Server instance. In these cases, the EC2 or SQL Server instance acts as the Master
Data Services server for your SQL Server DB instance on Amazon RDS. You can install SQL Server on an
Amazon EC2 instance with Amazon EBS storage, pursuant to Microsoft licensing policies.
• Because of limitations in Microsoft SQL Server, restoring to a point in time before successfully running
DROP DATABASE might not reflect the state of that database at that point in time. For example,
the dropped database is typically restored to its state up to 5 minutes before the DROP DATABASE
command was issued. This type of restore means that you can't restore the transactions made
during those few minutes on your dropped database. To work around this, you can reissue the DROP
DATABASE command after the restore operation is completed. Dropping a database removes the
transaction logs for that database.
• For SQL Server, you create your databases after you create your DB instance. Database names follow
the usual SQL Server naming rules with the following differences:
• Database names can't start with rdsadmin.
• They can't start or end with a space or a tab.
• They can't contain any of the characters that create a new line.
• They can't contain a single quote (').
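The naming rules above can be checked before issuing CREATE DATABASE. A minimal sketch covering just the RDS-specific differences (SQL Server's general identifier rules still apply and aren't validated here):

```python
# Check a proposed database name against the RDS for SQL Server rules
# listed above. Case-insensitive handling of the rdsadmin prefix is an
# assumption of this sketch.
def valid_rds_db_name(name: str) -> bool:
    if name.lower().startswith("rdsadmin"):
        return False                      # can't start with rdsadmin
    if name[:1] in (" ", "\t") or name[-1:] in (" ", "\t"):
        return False                      # can't start or end with space/tab
    if "\n" in name or "\r" in name:
        return False                      # no newline characters
    if "'" in name:
        return False                      # no single quotes
    return True

print(valid_rds_db_name("SalesDB"))      # True
print(valid_rds_db_name("rdsadmin_db"))  # False
print(valid_rds_db_name("bad'name"))     # False
```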
DB instance class support for Microsoft SQL Server

The following list of DB instance classes supported for Microsoft SQL Server is provided here for your
convenience. For the most current list, see the RDS console: https://console.aws.amazon.com/rds/.
Not all DB instance classes are available on all supported SQL Server minor versions. For example,
some newer DB instance classes such as db.r6i aren't available on older minor versions. You can use the
describe-orderable-db-instance-options AWS CLI command to find out which DB instance classes are
available for your SQL Server edition and version.
SQL 2019 support range 2017 and 2016 support 2014 support range
Server range
edition
Enterprise db.t3.xlarge–db.t3.2xlarge
db.t3.xlarge–db.t3.2xlarge
db.t3.xlarge–db.t3.2xlarge
Edition
db.r5.xlarge–db.r5.24xlarge
db.r3.xlarge–db.r3.8xlarge
db.r3.xlarge–db.r3.8xlarge
db.r5b.xlarge–db.r5b.24xlarge
db.r4.xlarge–db.r4.16xlarge
db.r4.xlarge–db.r4.8xlarge
db.r5d.xlarge–db.r5d.24xlarge
db.r5.xlarge–db.r5.24xlarge
db.r5.xlarge–db.r5.24xlarge
db.r6i.xlarge–db.r6i.32xlarge
db.r5b.xlarge–db.r5b.24xlarge
db.r5b.xlarge–db.r5b.24xlarge
db.m5.xlarge–db.m5.24xlarge
db.r5d.xlarge–db.r5d.24xlarge
db.r5d.xlarge–db.r5d.24xlarge
db.m5d.xlarge–db.m5d.24xlarge
db.r6i.xlarge–db.r6i.32xlarge
db.r6i.xlarge–db.r6i.32xlarge
db.m6i.xlarge–db.m6i.32xlarge
db.m4.xlarge–db.m4.16xlarge
db.m4.xlarge–db.m4.10xlarge
db.x1.16xlarge–db.x1.32xlarge
db.m5.xlarge–db.m5.24xlarge
db.m5.xlarge–db.m5.24xlarge
db.x1e.xlarge–db.x1e.32xlarge
db.m5d.xlarge–db.m5d.24xlarge
db.m5d.xlarge–db.m5d.24xlarge
db.z1d.xlarge–db.z1d.12xlarge
db.m6i.xlarge–db.m6i.32xlarge
db.m6i.xlarge–db.m6i.32xlarge
db.x1.16xlarge–db.x1.32xlarge
db.x1.16xlarge–db.x1.32xlarge
db.x1e.xlarge–db.x1e.32xlarge
db.z1d.xlarge–db.z1d.12xlarge
Standard db.t3.xlarge–db.t3.2xlarge
db.t3.xlarge–db.t3.2xlarge
db.t3.xlarge–db.t3.2xlarge
Edition
db.r5.large–db.r5.24xlarge
db.r4.large–db.r4.16xlarge
db.r3.large–db.r3.8xlarge
db.r5b.large–db.r5b.24xlarge
db.r5.large–db.r5.24xlarge
db.r4.large–db.r4.8xlarge
db.r5d.large–db.r5d.24xlarge
db.r5b.large–db.r5b.24xlarge
db.r5.large–db.r5.24xlarge
db.r6i.large–db.r6i.8xlarge
db.r5d.large–db.r5d.24xlarge
db.r5b.large–db.r5b.24xlarge
db.m5.large–db.m5.24xlarge
db.r6i.large–db.r6i.8xlarge
db.r5d.large–db.r5d.24xlarge
db.m5d.large–db.m5d.24xlarge
db.m4.large–db.m4.16xlarge
db.r6i.large–db.r6i.8xlarge
db.m6i.large–db.m6i.8xlarge
db.m5.large–db.m5.24xlarge
db.m3.medium–db.m3.2xlarge
db.x1.16xlarge–db.x1.32xlarge
db.m5d.large–db.m5d.24xlarge
db.m4.large–db.m4.10xlarge
db.x1e.xlarge–db.x1e.32xlarge
db.m6i.large–db.m6i.8xlarge
db.m5.large–db.m5.24xlarge
db.z1d.large–db.z1d.12xlarge
db.x1.16xlarge–db.x1.32xlarge
db.m5d.large–db.m5d.24xlarge
1359
Amazon Relational Database Service User Guide
Security
SQL 2019 support range 2017 and 2016 support 2014 support range
Server range
edition
db.x1e.xlarge–db.x1e.32xlarge
db.m6i.large–db.m6i.8xlarge
db.z1d.large–db.z1d.12xlarge
db.x1.16xlarge–db.x1.32xlarge
Web Edition
db.t3.small–db.t3.2xlarge
db.t2.small–db.t2.medium
db.t2.small–db.t2.medium
db.r5.large–db.r5.4xlarge
db.t3.small–db.t3.2xlarge
db.t3.small–db.t3.2xlarge
db.r5b.large–db.r5b.4xlarge
db.r4.large–db.r4.2xlarge
db.r3.large–db.r3.2xlarge
db.r5d.large–db.r5d.4xlarge
db.r5.large–db.r5.4xlarge
db.r4.large–db.r4.2xlarge
db.r6i.large–db.r6i.4xlarge
db.r5b.large–db.r5b.4xlarge
db.r5.large–db.r5.4xlarge
db.m5.large–db.m5.4xlarge
db.r5d.large–db.r5d.4xlarge
db.r5b.large–db.r5b.4xlarge
db.m5d.large–db.m5d.4xlarge
db.r6i.large–db.r6i.4xlarge
db.r5d.large–db.r5d.4xlarge
db.m6i.large–db.m6i.4xlarge
db.m4.large–db.m4.4xlarge
db.r6i.large–db.r6i.4xlarge
db.z1d.large–db.z1d.3xlarge
db.m5.large–db.m5.4xlarge
db.m3.medium–db.m3.2xlarge
db.m5d.large–db.m5d.4xlarge
db.m4.large–db.m4.4xlarge
db.m6i.large–db.m6i.4xlarge
db.m5.large–db.m5.4xlarge
db.z1d.large–db.z1d.3xlarge
db.m5d.large–db.m5d.4xlarge
db.m6i.large–db.m6i.4xlarge
Express Edition
db.t3.small–db.t3.xlarge
db.t2.micro–db.t2.medium
db.t2.micro–db.t2.medium
db.t3.small–db.t3.xlarge
db.t3.small–db.t3.xlarge
Security
Any user who creates a database is assigned to the db_owner role for that database and has all
database-level permissions except for those that are used for backups. Amazon RDS manages backups
for you.
The following server-level roles aren't available in Amazon RDS for SQL Server:
• bulkadmin
• dbcreator
• diskadmin
• securityadmin
• serveradmin
• sysadmin
The following server-level permissions aren't available on RDS for SQL Server DB instances:
Compliance programs
Amazon RDS for SQL Server supports HIPAA for the following versions and editions:
To enable HIPAA support on your DB instance, set up the following three components.
Component Details
Transport encryption – To set up transport encryption, force all connections to your DB instance to use
Secure Sockets Layer (SSL). For more information, see Forcing connections to your DB instance to use
SSL (p. 1456).
SSL support
SSL is supported in all AWS Regions and for all supported SQL Server editions. For more information, see
Using SSL with a Microsoft SQL Server DB instance (p. 1456).
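Forcing SSL is done through the rds.force_ssl parameter in the instance's DB parameter group. The following Python sketch assumes boto3 and a custom parameter group named my-sqlserver-params (hypothetical); it only builds the request arguments, and the AWS call itself is left as a comment because it needs credentials:

```python
def force_ssl_params(group_name):
    """Build kwargs for rds.modify_db_parameter_group to force SSL.

    rds.force_ssl is a static parameter, so the change requires a reboot;
    static parameters must use the pending-reboot apply method.
    """
    return {
        "DBParameterGroupName": group_name,
        "Parameters": [
            {
                "ParameterName": "rds.force_ssl",
                "ParameterValue": "1",
                "ApplyMethod": "pending-reboot",
            }
        ],
    }

# Usage (requires AWS credentials):
#   import boto3
#   boto3.client("rds").modify_db_parameter_group(**force_ssl_params("my-sqlserver-params"))
```

After the parameter change takes effect, reboot the DB instance for the static parameter to apply.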
The following table shows the supported versions for all editions and all AWS Regions, except where
noted. You can also use the describe-db-engine-versions AWS CLI command to see a list of
supported versions, as well as defaults for newly created DB instances.
For example, 15.00.4043.16 (CU5) is a supported SQL Server 2019 version.
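The describe-db-engine-versions output can be post-processed programmatically. The sketch below assumes a boto3-style response dict (the API returns versions under the DBEngineVersions key); the sample version values are illustrative only, not an authoritative list:

```python
def list_engine_versions(response):
    """Extract EngineVersion strings from a describe-db-engine-versions
    response dict (versions live under the DBEngineVersions key)."""
    return [v["EngineVersion"] for v in response.get("DBEngineVersions", [])]

# Illustrative response fragment; real output comes from:
#   aws rds describe-db-engine-versions --engine sqlserver-se
sample = {
    "DBEngineVersions": [
        {"Engine": "sqlserver-se", "EngineVersion": "14.00.3401.7.v1"},
        {"Engine": "sqlserver-se", "EngineVersion": "15.00.4043.16.v1"},
    ]
}
```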
Version management
Currently, you manually perform all engine upgrades on your DB instance. For more information, see
Upgrading the Microsoft SQL Server DB engine (p. 1414).
Deprecation schedule
Date Information
July 9, 2024 – Microsoft will stop critical patch updates for SQL Server 2014. For more information, see the Microsoft documentation.
June 1, 2024 – Amazon RDS plans to end support of Microsoft SQL Server 2014 on RDS for SQL Server. Any instances still running SQL Server 2014 are scheduled to migrate to SQL Server 2016 (latest minor version available). For more information, see Announcement: Amazon RDS for SQL Server ending support for SQL Server 2014 major versions.
To avoid an automatic upgrade from Microsoft SQL Server 2014, you can upgrade at a time that is convenient to you. For more information, see Upgrading a DB instance engine version (p. 429).
July 12, 2022 – Microsoft will stop critical patch updates for SQL Server 2012. For more information, see the Microsoft documentation.
June 1, 2022 – Amazon RDS plans to end support of Microsoft SQL Server 2012 on RDS for SQL Server. Any instances still running SQL Server 2012 are scheduled to migrate to SQL Server 2014 (latest minor version available). For more information, see Announcement: Amazon RDS for SQL Server ending support for SQL Server 2012 major versions.
To avoid an automatic upgrade from Microsoft SQL Server 2012, you can upgrade at a time that is convenient to you. For more information, see Upgrading a DB instance engine version (p. 429).
September 1, 2021 – Amazon RDS is starting to disable the creation of new RDS for SQL Server DB instances using SQL Server 2012. For more information, see Announcement: Amazon RDS for SQL Server ending support for SQL Server 2012 major versions.
July 12, 2019 – The Amazon RDS team deprecated support for Microsoft SQL Server 2008 R2 in June 2019. Remaining instances of SQL Server 2008 R2 are migrating to SQL Server 2012 (latest minor version available).
To avoid an automatic upgrade from Microsoft SQL Server 2008 R2, you can upgrade at a time that is convenient to you. For more information, see Upgrading a DB instance engine version (p. 429).
April 25, 2019 – Before the end of April 2019, you will no longer be able to create new Amazon RDS for SQL Server DB instances using SQL Server 2008 R2.
Topics
• Microsoft SQL Server 2019 features (p. 1365)
• Microsoft SQL Server 2017 features (p. 1365)
• Microsoft SQL Server 2016 features (p. 1366)
• Microsoft SQL Server 2014 features (p. 1366)
• Microsoft SQL Server 2012 end of support on Amazon RDS (p. 1366)
• Microsoft SQL Server 2008 R2 end of support on Amazon RDS (p. 1366)
SQL Server 2019 features
• Accelerated database recovery (ADR) – Reduces crash recovery time after a restart or a long-running
transaction rollback.
• Intelligent Query Processing (IQP):
• Row mode memory grant feedback – Automatically corrects excessive memory grants that would
otherwise result in wasted memory and reduced concurrency.
• Batch mode on rowstore – Enables batch mode execution for analytic workloads without requiring
columnstore indexes.
• Table variable deferred compilation – Improves plan quality and overall performance for queries that
reference table variables.
• Intelligent performance:
• OPTIMIZE_FOR_SEQUENTIAL_KEY index option – Improves throughput for high-concurrency
inserts into indexes.
• Improved indirect checkpoint scalability – Helps databases with heavy DML workloads.
• Concurrent Page Free Space (PFS) updates – Enables PFS pages to be updated under a shared latch
rather than an exclusive latch.
• Monitoring improvements:
• WAIT_ON_SYNC_STATISTICS_REFRESH wait type – Shows accumulated instance-level time spent
on synchronous statistics refresh operations.
• Database-scoped configurations – Include LIGHTWEIGHT_QUERY_PROFILING and
LAST_QUERY_PLAN_STATS.
• Dynamic management functions (DMFs) – Include sys.dm_exec_query_plan_stats and
sys.dm_db_page_info.
• Verbose truncation warnings – The data truncation error message defaults to include table and column
names and the truncated value.
• Resumable online index creation – In SQL Server 2017, only resumable online index rebuild is
supported.
For the full list of SQL Server 2019 features, see What's new in SQL Server 2019 (15.x) in the Microsoft
documentation.
For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).
SQL Server 2017 features
For the full list of SQL Server 2017 features, see What's new in SQL Server 2017 in the Microsoft
documentation.
For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).
SQL Server 2016 features
• Always Encrypted
• JSON Support
• Operational Analytics
• Query Store
• Temporal Tables
For the full list of SQL Server 2016 features, see What's new in SQL Server 2016 in the Microsoft
documentation.
For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).
SQL Server 2014 features
SQL Server 2014 supports all the parameters from SQL Server 2012 and uses the same default values.
SQL Server 2014 includes one new parameter, backup checksum default. For more information, see
How to enable the CHECKSUM option if backup utilities do not expose the option in the Microsoft
documentation.
Microsoft SQL Server 2012 end of support on Amazon RDS
RDS is upgrading all existing DB instances that are still using SQL Server 2012 to the latest minor version
of SQL Server 2014. For more information, see Version management in Amazon RDS (p. 1363).
Microsoft SQL Server 2008 R2 end of support on Amazon RDS
RDS is upgrading all existing DB instances that are still using SQL Server 2008 R2 to the latest minor
version of SQL Server 2012. For more information, see Version management in Amazon RDS (p. 1363).
Features not supported and features with limited support
Change data capture (CDC) captures changes that are made to your tables, storing metadata about
each change that you can access later. For more information, see Change data capture in the Microsoft
documentation.
Amazon RDS supports CDC for the following SQL Server editions and versions:
To use CDC with your Amazon RDS DB instances, first enable or disable CDC at the database level by
using RDS-provided stored procedures. After that, any user that has the db_owner role for that database
can use the native Microsoft stored procedures to control CDC on that database. For more information,
see Using change data capture (p. 1614).
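As a sketch of driving those RDS-provided procedures (rds_cdc_enable_db and rds_cdc_disable_db in msdb), the helper below only builds the T-SQL text; the database name is a placeholder, and you run the resulting statement through your usual SQL Server client:

```python
def cdc_toggle_sql(database, enable=True):
    """Return the T-SQL that enables or disables CDC on one database,
    using the RDS-provided stored procedures in msdb."""
    proc = "rds_cdc_enable_db" if enable else "rds_cdc_disable_db"
    return f"EXEC msdb.dbo.{proc} '{database}'"
```

For example, cdc_toggle_sql("mydatabase") produces the EXEC statement for enabling CDC on a database named mydatabase.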
You can use CDC and AWS Database Migration Service to enable ongoing replication from SQL Server DB
instances.
The following Microsoft SQL Server features have limited support on Amazon RDS:
• Distributed queries/linked servers. For more information, see Implement linked servers with Amazon
RDS for Microsoft SQL Server.
• Common language runtime (CLR). On RDS for SQL Server 2016 and lower versions, CLR is supported
in SAFE mode and using assembly bits only. CLR isn't supported on RDS for SQL Server 2017 and
higher versions. For more information, see Common Language Runtime (CLR) Integration in the
Microsoft documentation.
Multi-AZ deployments
Amazon RDS manages failover by actively monitoring your Multi-AZ deployment and initiating a failover
when a problem with your primary occurs. Failover doesn't occur unless the standby and primary are
fully in sync. Amazon RDS actively maintains your Multi-AZ deployment by automatically repairing
unhealthy DB instances and re-establishing synchronous replication. You don't have to manage anything.
Amazon RDS handles the primary, the witness, and the standby instance for you. When you set up SQL
Server Multi-AZ, RDS configures passive secondary instances for all of the databases on the instance.
For more information, see Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450).
Functions and stored procedures
rds_fn_get_system_database_sync_objects
rds_fn_server_object_last_sync_time
rds_show_configuration
To see the values that are set using rds_set_configuration,
see these topics:
Amazon S3 file transfer – rds_delete_from_filesystem. See Deleting files on the RDS DB
instance (p. 1473).
Transparent Data Encryption – rds_backup_tde_certificate, rds_drop_tde_certificate,
rds_restore_tde_certificate, rds_fn_list_user_tde_certificates. See Support for Transparent Data
Encryption in SQL Server (p. 1528).
Microsoft Business Intelligence (MSBI) – rds_msbi_task. This operation is used with SQL Server Analysis
Services (SSAS):
• Deploying SSAS projects on Amazon RDS (p. 1549)
• Adding a domain user as a database administrator (p. 1552)
• Backing up an SSAS database (p. 1556)
• Restoring an SSAS database (p. 1556)
Local time zone
You set the time zone when you first create your DB instance. You can create your DB instance by using
the AWS Management Console, the Amazon RDS API CreateDBInstance action, or the AWS CLI create-db-
instance command.
If your DB instance is part of a Multi-AZ deployment (using SQL Server DBM or AGs), then when you
fail over, your time zone remains the local time zone that you set. For more information, see Multi-AZ
deployments using Microsoft SQL Server Database Mirroring or Always On availability groups (p. 1368).
When you request a point-in-time restore, you specify the time to restore to. The time is shown in your
local time zone. For more information, see Restoring a DB instance to a specified time (p. 660).
The following are limitations to setting the local time zone on your DB instance:
• You can't modify the time zone of an existing SQL Server DB instance.
• You can't restore a snapshot from a DB instance in one time zone to a DB instance in a different time
zone.
• We strongly recommend that you don't restore a backup file from one time zone to a different time
zone. If you restore a backup file from one time zone to a different time zone, you must audit your
queries and applications for the effects of the time zone change. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).
Supported time zones
Argentina Standard Time – (UTC–03:00) City of Buenos Aires. This time zone doesn't observe daylight saving time.
Cape Verde Standard Time – (UTC–01:00) Cabo Verde Is. This time zone doesn't observe daylight saving time.
Central America Standard Time – (UTC–06:00) Central America. This time zone doesn't observe daylight saving time.
Central Pacific Standard Time – (UTC+11:00) Solomon Islands, New Caledonia. This time zone doesn't observe daylight saving time.
SA Pacific Standard Time – (UTC–05:00) Bogota, Lima, Quito, Rio Branco. This time zone doesn't observe daylight saving time.
South Africa Standard Time – (UTC+02:00) Harare, Pretoria. This time zone doesn't observe daylight saving time.
Sri Lanka Standard Time – (UTC+05:30) Sri Jayawardenepura. This time zone doesn't observe daylight saving time.
W. Central Africa Standard Time – (UTC+01:00) West Central Africa. This time zone doesn't observe daylight saving time.
West Asia Standard Time – (UTC+05:00) Ashgabat, Tashkent. This time zone doesn't observe daylight saving time.
West Pacific Standard Time – (UTC+10:00) Guam, Port Moresby. This time zone doesn't observe daylight saving time.
Licensing SQL Server on Amazon RDS
Amazon RDS for SQL Server uses a license-included model. This means that you don't need to purchase
SQL Server licenses separately. AWS holds the license for the SQL Server database software. Amazon
RDS pricing includes the software license, underlying hardware resources, and Amazon RDS management
capabilities.
Amazon RDS supports the following SQL Server editions:
• Enterprise
• Standard
• Web
• Express
Note
Licensing for SQL Server Web Edition supports only public and internet-accessible webpages,
websites, web applications, and web services. This level of support is required for compliance
with Microsoft's usage rights. For more information, see AWS service terms.
Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using
SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). There are no additional
licensing requirements for Multi-AZ deployments. For more information, see Multi-AZ deployments for
Amazon RDS for Microsoft SQL Server (p. 1450).
Connecting to a DB instance running SQL Server
For an example that walks you through the process of creating and connecting to a sample DB instance,
see Creating and connecting to a Microsoft SQL Server DB instance (p. 194).
Before you connect, make sure of the following for your DB instance:
1. Make sure that its status is available. You can check this on the details page for your instance in the
AWS Management Console or by using the describe-db-instances AWS CLI command.
2. Make sure that it is accessible to your source. Depending on your scenario, it may not need to be
publicly accessible. For more information, see Amazon VPC VPCs and Amazon RDS (p. 2688).
3. Make sure that the inbound rules of your VPC security group allow access to your DB instance. For
more information, see Can't connect to Amazon RDS DB instance (p. 2727).
Connecting to your DB instance with SSMS
1. Sign in to the AWS Management Console and open the Amazon RDS console at
https://console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region of your DB instance.
3. Find the Domain Name System (DNS) name (endpoint) and port number for your DB instance:
a. Open the RDS console and choose Databases to display a list of your DB instances.
b. Choose the SQL Server DB instance name to display its details.
c. On the Connectivity & security tab, copy the endpoint.
database-2.cg034itsfake.us-east-1.rds.amazonaws.com,1433
If you can't connect to your DB instance, see Security group considerations (p. 1385) and
Troubleshooting connections to your SQL Server DB instance (p. 1385).
4. Your SQL Server DB instance comes with SQL Server's standard built-in system databases (master,
model, msdb, and tempdb). To explore the system databases, do the following:
5. Your SQL Server DB instance also comes with a database named rdsadmin. Amazon RDS uses this
database to store the objects that it uses to manage your database. The rdsadmin database also
includes stored procedures that you can run to perform advanced tasks. For more information, see
Common DBA tasks for Microsoft SQL Server (p. 1602).
6. You can now start creating your own databases and running queries against your DB instance and
databases as usual. To run a test query against your DB instance, do the following:
a. In SSMS, on the File menu, point to New, and then choose Query with Current Connection.
b. Enter the following SQL query.
select @@VERSION
c. Run the query. SSMS returns the SQL Server version of your Amazon RDS DB instance.
Connecting to your DB instance with SQL Workbench/J
SQL Workbench/J uses JDBC to connect to your DB instance. You also need the JDBC driver for SQL
Server. To download this driver, see Microsoft JDBC drivers 4.1 (preview) and 4.0 for SQL Server.
1. Open SQL Workbench/J. The Select Connection Profile dialog box appears.
2. In the first box at the top of the dialog box, enter a name for the profile.
3. For Driver, choose SQL JDBC 4.0.
4. For URL, enter jdbc:sqlserver://, then enter the endpoint of your DB instance. For example, the
URL value might be the following.
jdbc:sqlserver://sqlsvr-pdz.abcd12340.us-west-2.rds.amazonaws.com:1433
5. For Username, enter the master user name for the DB instance.
6. For Password, enter the password for the master user.
7. Choose the save icon in the dialog toolbar.
8. Choose OK. After a few moments, SQL Workbench/J connects to your DB instance. If you can't
connect to your DB instance, see Security group considerations (p. 1385) and Troubleshooting
connections to your SQL Server DB instance (p. 1385).
9. In the query pane, enter the following SQL query.
select @@VERSION
The query returns the version information for your DB instance.
Security group considerations
In some cases, you might need to create a new security group to make access possible. For instructions
on creating a new security group, see Controlling access with security groups (p. 2680). For a topic that
walks you through the process of setting up rules for your VPC security group, see Tutorial: Create a VPC
for use with a DB instance (IPv4 only) (p. 2706).
After you have created the new security group, modify your DB instance to associate it with the security
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
You can enhance security by using SSL to encrypt connections to your DB instance. For more information,
see Using SSL with a Microsoft SQL Server DB instance (p. 1456).
Could not open a connection to SQL Server – Microsoft SQL Server, Error: 53
Make sure that you specified the server name correctly. For Server name, enter the DNS name and port
number of your sample DB instance, separated by a comma.
Important
If you have a colon between the DNS name and port number, change the colon to a comma.
Your server name should look like the following example.
sample-instance.cg034itsfake.us-east-1.rds.amazonaws.com,1433
No connection could be made because the target machine actively refused it
You were able to reach the DB instance but the connection was refused. This issue is usually caused by
specifying the user name or password incorrectly. Verify the user name and password, then retry.
A network-related or instance-specific error occurred while establishing a connection to SQL Server.
The server was not found or was not accessible... The wait operation timed out – Microsoft SQL Server,
Error: 258
The access rules enforced by your local firewall and the IP addresses authorized to access your DB
instance might not match. The problem is most likely the inbound rules in your security group. For more
information, see Security in Amazon RDS (p. 2565).
Your database instance must be publicly accessible. To connect to it from outside of the VPC, the
instance must have a public IP address assigned.
Note
For more information on connection issues, see Can't connect to Amazon RDS DB
instance (p. 2727).
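Because SSMS expects a comma between host and port while other tools use a colon, the conversion described above can be sketched as a small helper (endpoint values are illustrative):

```python
def ssms_server_name(endpoint, port=1433):
    """Format an RDS endpoint for SSMS's Server name box.

    SSMS expects 'host,port' with a comma, not the 'host:port' form
    used by many other clients; a colon-separated input is tolerated
    and converted.
    """
    if ":" in endpoint:  # tolerate host:port input and split it
        endpoint, _, port = endpoint.partition(":")
    return f"{endpoint},{port}"
```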
Working with Active Directory with RDS for SQL Server
You can authenticate domain users using NTLM authentication with Self Managed Active Directory. You
can use Kerberos and NTLM authentication with AWS Managed Active Directory.
In the following sections, you can find information about working with Self Managed Active Directory
and AWS Managed Active Directory for Microsoft SQL Server on Amazon RDS.
Topics
• Working with Self Managed Active Directory with an Amazon RDS for SQL Server DB
instance (p. 1388)
• Working with AWS Managed Active Directory with RDS for SQL Server (p. 1401)
Working with Self Managed Active Directory with a SQL Server DB instance
Topics
• Region and version availability (p. 1388)
• Requirements (p. 1388)
• Limitations (p. 1390)
• Overview of setting up Self Managed Active Directory (p. 1391)
• Setting up Self Managed Active Directory (p. 1391)
• Managing a DB instance in a self-managed Active Directory Domain (p. 1397)
• Understanding self-managed Active Directory Domain membership (p. 1398)
• Troubleshooting self-managed Active Directory (p. 1398)
• Restoring a SQL Server DB instance and then adding it to a self-managed Active Directory
domain (p. 1400)
Requirements
Make sure you've met the following requirements before joining an RDS for SQL Server DB instance to
your self-managed AD domain.
Topics
• Configure your on-premises AD (p. 1388)
• Configure your network connectivity (p. 1389)
• Configure your AD domain service account (p. 1390)
• If you have Active Directory sites defined, make sure the subnets in the VPC associated with your RDS
for SQL Server DB instance are defined in your Active Directory site. Confirm there aren't any conflicts
between the subnets in your VPC and the subnets in your other AD sites.
• Your AD domain controller has a domain functional level of Windows Server 2008 R2 or higher.
• Your AD domain name can't be in Single Label Domain (SLD) format. RDS for SQL Server does not
support SLD domains.
• The fully qualified domain name (FQDN) and organizational unit (OU) for your AD can't exceed 64
characters.
• Connectivity configured between the Amazon VPC where you want to create the RDS for SQL Server
DB instance and your self-managed Active Directory. You can set up connectivity using AWS Direct
Connect, AWS VPN, VPC peering, or AWS Transit Gateway.
• For VPC security groups, the default security group for your default Amazon VPC is already added
to your RDS for SQL Server DB instance in the console. Ensure that the security group and the VPC
network ACLs for the subnet(s) where you're creating your RDS for SQL Server DB instance allow traffic
on the required Active Directory ports, in the required directions.
• Generally, the domain DNS servers are located in the AD domain controllers. You do not need to
configure the VPC DHCP option set to use this feature. For more information, see DHCP option sets in
the Amazon VPC User Guide.
Important
If you're using VPC network ACLs, you must also allow outbound traffic on dynamic ports
(49152-65535) from your RDS for SQL Server DB instance. Ensure that these traffic rules are
also mirrored on the firewalls that apply to each of the AD domain controllers, DNS servers, and
RDS for SQL Server DB instances.
While VPC security groups require ports to be opened only in the direction that network traffic
is initiated, most Windows firewalls and VPC network ACLs require ports to be open in both
directions.
• Make sure that you have a service account in your self-managed AD domain with delegated
permissions to join computers to the domain. A domain service account is a user account in your self-
managed AD that has been delegated permission to perform certain tasks.
• The domain service account needs to be delegated the following permissions in the Organizational
Unit (OU) that you're joining your RDS for SQL Server DB instance to:
• Validated ability to write to the DNS host name
• Validated ability to write to the service principal name
• Create and delete computer objects
These represent the minimum set of permissions that are required to join computer objects to your
self-managed Active Directory. For more information, see Errors when attempting to join computers to
a domain in the Microsoft Windows Server documentation.
Important
Do not move computer objects that RDS for SQL Server creates in the Organizational Unit after
your DB instance is created. Moving the associated objects will cause your RDS for SQL Server
DB instance to become misconfigured. If you need to move the computer objects created by
Amazon RDS, use the ModifyDBInstance RDS API operation to modify the domain parameters
with the desired location of the computer objects.
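The outbound dynamic-port requirement above can be expressed as a network ACL entry in the shape that boto3's EC2 create_network_acl_entry call expects. The ACL ID, rule number, and CIDR below are placeholders; only the argument dict is built here, so it can be checked without AWS:

```python
def dynamic_port_egress_rule(acl_id, rule_number, cidr):
    """Outbound allow rule for the Windows dynamic port range
    (49152-65535), as required on the network ACL for the subnet of an
    RDS for SQL Server instance joined to a self-managed AD domain."""
    return {
        "NetworkAclId": acl_id,
        "RuleNumber": rule_number,
        "Protocol": "6",           # TCP
        "RuleAction": "allow",
        "Egress": True,            # outbound from the DB instance's subnet
        "CidrBlock": cidr,         # CIDR covering your AD domain controllers
        "PortRange": {"From": 49152, "To": 65535},
    }

# Usage (requires AWS credentials):
#   import boto3
#   boto3.client("ec2").create_network_acl_entry(
#       **dynamic_port_egress_rule("acl-0123456789abcdef0", 110, "10.0.0.0/16"))
```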
Limitations
The following limitations apply for Self Managed AD for SQL Server.
• NTLM is the only supported authentication type. Kerberos authentication is not supported. If you need
to use Kerberos authentication, you can use AWS Managed AD instead of self-managed AD.
• The Microsoft Distributed Transaction Coordinator (MSDTC) service isn't supported, as it requires
Kerberos authentication.
• Your RDS for SQL Server DB instances do not use the Network Time Protocol (NTP) server of your self-
managed AD domain. They use an AWS NTP service instead.
• SQL Server linked servers must use SQL authentication to connect to other RDS for SQL Server DB
instances joined to your self-managed AD domain.
• Microsoft Group Policy Object (GPO) settings from your self-managed AD domain are not applied to
RDS for SQL Server DB instances.
In your AD domain:
Topics
• Step 1: Create an Organizational Unit in your AD (p. 1391)
• Step 2: Create an AD domain user in your AD (p. 1392)
• Step 3: Delegate control to the AD user (p. 1392)
• Step 4: Create an AWS KMS key (p. 1392)
• Step 5: Create an AWS secret (p. 1393)
• Step 6: Create or modify a SQL Server DB instance (p. 1394)
• Step 7: Create Windows Authentication SQL Server logins (p. 1396)
To create an OU in your AD
1. Open Active Directory Users and Computers and select the domain and OU where you want to
create your user.
2. Right-click the Users object and choose New, then User.
3. Enter a first name, last name, and logon name for the user. Click Next.
4. Enter a password for the user. Don't select "User must change password at next logon". Don't select
"Account is disabled". Click Next.
5. Click OK. Your new user will appear under your domain.
1. Open Active Directory Users and Computers MMC snap-in and select the domain where you want
to create your user.
2. Right-click the OU that you created earlier and choose Delegate Control.
3. On the Delegation of Control Wizard, click Next.
4. On the Users or Groups section, click Add.
5. On the Select Users, Computers, or Groups section, enter the AD user you created and click Check
Names. If your AD user check is successful, click OK.
6. On the Users or Groups section, confirm your AD user was added and click Next.
7. On the Tasks to Delegate section, choose Create a custom task to delegate and click Next.
8. On the Active Directory Object Type section:
{
"Sid": "Allow use of the KMS key on behalf of RDS",
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "kms:Decrypt",
"Resource": "*"
}
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Principal":
{
"Service": "rds.amazonaws.com"
},
"Action": "secretsmanager:GetSecretValue",
"Resource": "*",
"Condition":
{
"StringEquals":
{
"aws:sourceAccount": "123456789012"
},
"ArnLike":
{
"aws:sourceArn": "arn:aws:rds:us-west-2:123456789012:db:*"
}
}
}
]
}
• Create a new SQL Server DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.
• Restore a SQL Server DB instance to a point-in-time using the console, the restore-db-instance-to-
point-in-time CLI command, or the RestoreDBInstanceToPointInTime RDS API operation.
When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the self-managed Active Directory domain that you created:
• For the --domain-fqdn parameter, use the fully qualified domain name (FQDN) of your self-managed
Active Directory.
• For the --domain-ou parameter, use the OU that you created in your self-managed AD.
• For the --domain-auth-secret-arn parameter, use the value of the Secret ARN that you created in
a previous step.
• For the --domain-dns-ips parameter, use the primary and secondary IPv4 addresses of the DNS
servers for your self-managed AD. If you don't have a secondary DNS server IP address, enter the
primary IP address twice.
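These CLI parameters correspond to the DomainFqdn, DomainOu, DomainAuthSecretArn, and DomainDnsIps arguments of the CreateDBInstance and ModifyDBInstance API operations. A sketch that assembles them for a boto3 call (all values are placeholders); it also applies the rule above of repeating the primary DNS IP when no secondary exists:

```python
def self_managed_ad_kwargs(fqdn, ou, secret_arn, dns_ips):
    """kwargs for boto3 rds.create_db_instance / modify_db_instance to
    join a SQL Server DB instance to a self-managed AD domain."""
    if len(dns_ips) == 1:
        dns_ips = dns_ips * 2  # doc: repeat the primary IP if no secondary
    return {
        "DomainFqdn": fqdn,
        "DomainOu": ou,
        "DomainAuthSecretArn": secret_arn,
        "DomainDnsIps": dns_ips,
    }

# Usage (requires AWS credentials; values are placeholders):
#   import boto3
#   boto3.client("rds").modify_db_instance(
#       DBInstanceIdentifier="mydbinstance",
#       **self_managed_ad_kwargs("corp.example.com",
#                                "OU=RDS,DC=corp,DC=example,DC=com",
#                                "arn:aws:secretsmanager:us-west-2:123456789012:secret:ad-secret",
#                                ["10.11.12.13", "10.11.12.14"]))
```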
The following example CLI commands show how to create, modify, and remove an RDS for SQL Server
DB instance with a self-managed AD domain.
Important
If you modify a DB instance to join it to or remove it from a self-managed AD domain, a reboot
of the DB instance is required for the modification to take effect. You can choose to apply
the changes immediately or wait until the next maintenance window. Choosing the Apply
Immediately option will cause downtime for a single-AZ DB instance. A multi-AZ DB instance
will perform a failover before completing a reboot. For more information, see Using the Apply
Immediately setting (p. 402).
The following CLI command creates a new RDS for SQL Server DB instance and joins it to a self-managed
AD domain.
For Windows:
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --engine sqlserver-se ^
    --db-instance-class db.m5.xlarge ^
    --allocated-storage 100 ^
    --master-username my-master-username ^
    --master-user-password my-master-password ^
    --domain-fqdn my-AD-test.my-AD.mydomain ^
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" ^
    --domain-dns-ips "10.11.12.13" "10.11.12.14"
The following CLI command modifies an existing RDS for SQL Server DB instance to use a self-managed
Active Directory domain.
For Windows:
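A sketch of such a modify command follows. The instance identifier is a placeholder, and the domain values mirror the create example; substitute your own values.

```
aws rds modify-db-instance ^
    --db-instance-identifier my-sql-server-instance ^
    --domain-fqdn my-AD-test.my-AD.mydomain ^
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" ^
    --domain-dns-ips "10.11.12.13" "10.11.12.14" ^
    --apply-immediately
```

Omitting --apply-immediately defers the change, and the required reboot, to the next maintenance window.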
The following CLI command removes an RDS for SQL Server DB instance from a self-managed Active
Directory domain.
For Windows:
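A sketch of such a remove command follows; the instance identifier is a placeholder.

```
aws rds modify-db-instance ^
    --db-instance-identifier my-sql-server-instance ^
    --disable-domain ^
    --apply-immediately
```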
In order for a self-managed AD user to authenticate with SQL Server, a SQL Server Windows login must
exist for the self-managed AD user or a self-managed Active Directory group that the user is a member
of. Fine-grained access control is handled through granting and revoking permissions on these SQL
Server logins. A self-managed AD user that doesn't have a SQL Server login or belong to a self-managed
AD group with such a login can't access the SQL Server DB instance.
The ALTER ANY LOGIN permission is required to create a self-managed AD SQL Server login. If you
haven't created any logins with this permission, connect as the DB instance's master user using SQL
Server Authentication and create your self-managed AD SQL Server logins under the context of the
master user.
You can run a data definition language (DDL) command such as the following to create a SQL Server
login for a self-managed AD user or group.
Note
Specify users and groups using the pre-Windows 2000 login name in the format
my_AD_domain\my_AD_domain_user. You can't use a user principal name (UPN) in the format
my_AD_domain_user@my_AD_domain.
USE [master]
GO
CREATE LOGIN [my_AD_domain\my_AD_domain_user] FROM WINDOWS WITH DEFAULT_DATABASE =
[master], DEFAULT_LANGUAGE = [us_english];
GO
For more information, see CREATE LOGIN (Transact-SQL) in the Microsoft Developer Network
documentation.
Users (both humans and applications) from your domain can now connect to the RDS for SQL Server
instance from a self-managed AD domain-joined client machine using Windows authentication.
For example, using the AWS CLI or Amazon RDS API, you can do the following:
• To reattempt a self-managed domain join for a failed membership, use the ModifyDBInstance API
operation and specify the same set of parameters:
• --domain-fqdn
• --domain-dns-ips
• --domain-ou
• --domain-auth-secret-arn
• To remove a DB instance from a self-managed domain, use the ModifyDBInstance API operation
and specify --disable-domain for the domain parameter.
• To move a DB instance from one self-managed domain to another, use the ModifyDBInstance API
operation and specify the domain parameters for the new domain:
• --domain-fqdn
• --domain-dns-ips
• --domain-ou
• --domain-auth-secret-arn
• To list self-managed AD domain membership for each DB instance, use the DescribeDBInstances API
operation.
A request to become a member of a self-managed AD domain can fail because of a network connectivity
issue. For example, you might create a new DB instance or modify an existing one, and the attempt
for the DB instance to become a member of a self-managed AD domain might fail. In this case, either
reissue the command to create or modify the DB instance, or modify the newly created instance to join
the self-managed AD domain.
The following errors can occur when a request to join a self-managed AD domain fails:

Error 2 / 0x2: "The system cannot find the file specified."
Cause: The format or location of the Organizational Unit (OU) specified with the --domain-ou
parameter is invalid, or the domain service account specified via AWS Secrets Manager lacks the
permissions required to join the OU.
Resolution: Review the --domain-ou parameter, and ensure the domain service account has the
correct permissions on the OU. For more information, see Configure your AD domain service
account (p. 1390).
Error 87 / 0x57: "The parameter is incorrect."
Cause: The domain service account specified via AWS Secrets Manager doesn't have the correct
permissions. The user profile may also be corrupted.
Resolution: Review the requirements for the domain service account. For more information, see
Configure your AD domain service account (p. 1390).
Error 234 / 0xEA: "Specified Organizational Unit (OU) does not exist."
Cause: The OU specified with the --domain-ou parameter doesn't exist in your self-managed AD.
Resolution: Review the --domain-ou parameter and ensure the specified OU exists in your
self-managed AD.
Error 1326 / 0x52E: "The user name or password is incorrect."
Cause: The domain service account credentials provided in AWS Secrets Manager contain an unknown
user name or a bad password. The domain account may also be disabled in your self-managed AD.
Resolution: Ensure the credentials provided in AWS Secrets Manager are correct and that the domain
account is enabled in your self-managed Active Directory.
Error 1355 / 0x54B: "The specified domain either does not exist or could not be contacted."
Cause: The domain is down, the specified set of DNS IPs is unreachable, or the specified FQDN is
unreachable.
Resolution: Review the --domain-dns-ips and --domain-fqdn parameters to ensure they're correct.
Review the networking configuration of your RDS for SQL Server DB instance and ensure your
self-managed AD is reachable. For more information, see Configure your network
connectivity (p. 1389).
Error 1772 / 0x6BA: "The RPC server is unavailable."
Cause: There was an issue reaching the RPC service of your AD domain. This might be a service or
network issue.
Resolution: Validate that the RPC service is running on your domain controllers and that TCP ports
135 and 49152-65535 are reachable on your domain from your RDS for SQL Server DB instance.
Error 2224 / 0x8B0: "The user account already exists."
Cause: The computer account that's being added to your self-managed AD already exists.
Resolution: Identify the computer account by running SELECT @@SERVERNAME on your RDS for SQL
Server DB instance, and then carefully remove it from your self-managed AD.
Error 2242 / 0x8C2: "The password of this user has expired."
Cause: The password for the domain service account specified via AWS Secrets Manager has expired.
Resolution: Update the password for the domain service account used to join your RDS for SQL
Server DB instance to your self-managed AD.
Working with AWS Managed Active Directory with RDS for SQL Server
For information on version and Region availability, see Kerberos authentication with RDS for SQL Server.
To get Windows Authentication using an on-premises or self-hosted Microsoft Active Directory, create
a forest trust. The trust can be one-way or two-way. For more information on setting up forest trusts
using AWS Directory Service, see When to create a trust relationship in the AWS Directory Service
Administration Guide.
To set up Windows authentication for a SQL Server DB instance, do the following steps, explained in
greater detail in Setting up Windows Authentication for SQL Server DB instances (p. 1402):
1. Use AWS Managed Microsoft AD, either from the AWS Management Console or AWS Directory Service
API, to create an AWS Managed Microsoft AD directory.
2. If you use the AWS CLI or Amazon RDS API to create your SQL Server DB instance, create
an AWS Identity and Access Management (IAM) role. This role uses the managed IAM policy
AmazonRDSDirectoryServiceAccess and allows Amazon RDS to make calls to your directory. If
you use the console to create your SQL Server DB instance, AWS creates the IAM role for you.
For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated in
the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS Regions,
and you can use them without any further actions. For more information, see Managing AWS STS in an
AWS Region in the IAM User Guide.
3. Create and configure users and groups in the AWS Managed Microsoft AD directory using the
Microsoft Active Directory tools. For more information about creating users and groups in your Active
Directory, see Manage users and groups in AWS Managed Microsoft AD in the AWS Directory Service
Administration Guide.
4. If you plan to locate the directory and the DB instance in different VPCs, enable cross-VPC traffic.
5. Use Amazon RDS to create a new SQL Server DB instance either from the console, AWS CLI, or Amazon
RDS API. In the create request, you provide the domain identifier ("d-*" identifier) that was generated
when you created your directory and the name of the role you created. You can also modify an
existing SQL Server DB instance to use Windows Authentication by setting the domain and IAM role
parameters for the DB instance.
6. Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any
other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you
can provision SQL Server logins and users from the Active Directory users and groups in their domain.
(These are known as SQL Server "Windows" logins.) Database permissions are managed through
standard SQL Server permissions granted and revoked to these Windows logins.
If you want to make sure your connection is using Kerberos, run the following query:
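A query along the following lines, run against your own session, reports the transport and authentication scheme; a value of KERBEROS in the auth_scheme column confirms that the connection is using Kerberos. This is a sketch using the standard SQL Server DMV sys.dm_exec_connections.

```sql
SELECT net_transport, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```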
Step 1: Create a directory using the AWS Directory Service for Microsoft Active Directory
AWS Directory Service creates a fully managed Microsoft Active Directory in the AWS Cloud. When you
create an AWS Managed Microsoft AD directory, AWS Directory Service creates two domain controllers
and Domain Name Service (DNS) servers on your behalf. The directory servers are created in two subnets
in two different Availability Zones within a VPC. This redundancy helps ensure that your directory
remains accessible even if a failure occurs.
When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:
When you launch an AWS Directory Service for Microsoft Active Directory, AWS creates an Organizational
Unit (OU) that contains all your directory's objects. This OU, which has the NetBIOS name that you typed
when you created your directory, is located in the domain root. The domain root is owned and managed
by AWS.
The admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU:
The admin account also has rights to perform the following domain-wide activities:
• Manage DNS configurations (add, remove, or update records, zones, and forwarders).
• View DNS event logs.
• View security event logs.
1. In the AWS Directory Service console navigation pane, choose Directories and choose Set up
directory.
2. Choose AWS Managed Microsoft AD. This is the only option currently supported for use with
Amazon RDS.
3. Choose Next.
4. On the Enter directory information page, provide the following information:
Edition
Directory DNS name
The fully qualified name for the directory, such as corp.example.com. Names longer than 47
characters aren't supported by SQL Server.
Directory NetBIOS name
Admin password
The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.
The directory administrator password can't include the word "admin". The password is
case-sensitive and must be 8–64 characters in length. It must also contain at least one character
from three of the following four categories:
• Lowercase letters (a-z)
• Uppercase letters (A-Z)
• Numbers (0-9)
• Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
Confirm password
5. Choose Next.
6. On the Choose VPC and subnets page, provide the following information:
VPC
Subnets
Choose the subnets for the directory servers. The two subnets must be in different Availability
Zones.
7. Choose Next.
8. Review the directory information. If changes are needed, choose Previous. When the information is
correct, choose Create directory.
It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.
To see information about your directory, choose the directory ID in the directory listing. Make a note of
the Directory ID. You need this value when you create or modify your SQL Server DB instance.
If you are using a custom policy for joining a domain, rather than using the AWS-
managed AmazonRDSDirectoryServiceAccess policy, make sure that you allow the
ds:GetAuthorizedApplicationDetails action. This requirement is effective starting July 2019, due
to a change in the AWS Directory Service API.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ds:DescribeDirectories",
"ds:AuthorizeApplication",
"ds:UnauthorizeApplication",
"ds:GetAuthorizedApplicationDetails"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.
You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.
In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For Windows Authentication, make
sure to include the DB instances, as shown in the following example.
Example trust relationship with global condition context key for Windows Authentication
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceArn": [
"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
]
}
}
}
]
}
Create an IAM role using this IAM policy and trust relationship. For more information about creating IAM
roles, see Creating customer managed policies in the IAM User Guide.
To create users and groups in an AWS Directory Service directory, you must be connected to a Windows
EC2 instance that is a member of the AWS Directory Service directory. You must also be logged in as
a user that has privileges to create users and groups. For more information, see Add users and groups
(Simple AD and AWS Managed Microsoft AD) in the AWS Directory Service Administration Guide.
Step 4: Enable cross-VPC traffic between the directory and the DB instance
If you plan to locate the directory and the DB instance in the same VPC, skip this step and move on to
Step 5: Create or modify a SQL Server DB instance (p. 1407).
If you plan to locate the directory and the DB instance in different VPCs, configure cross-VPC traffic using
VPC peering or AWS Transit Gateway.
The following procedure enables traffic between VPCs using VPC peering. Follow the instructions in
What is VPC peering? in the Amazon Virtual Private Cloud Peering Guide.
1. Set up appropriate VPC routing rules to ensure that network traffic can flow both ways.
2. Ensure that the DB instance's security group can receive inbound traffic from the directory's security
group.
3. Ensure that there is no network access control list (ACL) rule to block traffic.
If a different AWS account owns the directory, you must share the directory.
1. Start sharing the directory with the AWS account that the DB instance will be created in by following
the instructions in Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2
domain-join in the AWS Directory Service Administration Guide.
2. Sign in to the AWS Directory Service console using the account for the DB instance, and ensure that
the domain has the SHARED status before proceeding.
3. While signed into the AWS Directory Service console using the account for the DB instance, note the
Directory ID value. You use this directory ID to join the DB instance to the domain.
• Create a new SQL Server DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.
For the DB instance to be able to use the domain directory that you created, the following is required:
• For Directory, you must choose the domain identifier (d-ID) generated when you created the
directory.
• Make sure that the VPC security group has an outbound rule that lets the DB instance communicate
with the directory.
When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the directory that you created:
• For the --domain parameter, use the domain identifier (d-ID) generated when you created the
directory.
• For the --domain-iam-role-name parameter, use the role that you created that uses the managed
IAM policy AmazonRDSDirectoryServiceAccess.
For example, the following CLI command modifies a DB instance to use a directory.
For Windows:
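A sketch of such a command follows. The instance identifier, directory ID, and role name are placeholders; the role is one created with the AmazonRDSDirectoryServiceAccess managed policy, as described earlier.

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --domain d-1234567890 ^
    --domain-iam-role-name my-directory-iam-role ^
    --apply-immediately
```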
Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.
After the DB instance is joined to the AWS Managed Microsoft AD domain, you can provision SQL
Server logins and users. You do this from the Active Directory users and groups in your domain.
Database permissions are managed through standard SQL Server permissions granted and revoked to
these Windows logins.
For an Active Directory user to authenticate with SQL Server, a SQL Server Windows login must exist for
the user or a group that the user is a member of. Fine-grained access control is handled through granting
and revoking permissions on these SQL Server logins. A user that doesn't have a SQL Server login or
belong to a group with such a login can't access the SQL Server DB instance.
The ALTER ANY LOGIN permission is required to create an Active Directory SQL Server login. If you
haven't created any logins with this permission, connect as the DB instance's master user using SQL
Server Authentication.
Run a data definition language (DDL) command such as the following example to create a SQL Server
login for an Active Directory user or group.
Note
Specify users and groups using the pre-Windows 2000 login name in the format
domainName\login_name. You can't use a user principal name (UPN) in the format
login_name@DomainName.
USE [master]
GO
CREATE LOGIN [mydomain\myuser] FROM WINDOWS WITH DEFAULT_DATABASE = [master],
DEFAULT_LANGUAGE = [us_english];
GO
For more information, see CREATE LOGIN (Transact-SQL) in the Microsoft Developer Network
documentation.
Users (both humans and applications) from your domain can now connect to the RDS for SQL Server
instance from a domain-joined client machine using Windows authentication.
For example, using the Amazon RDS API, you can do the following:
• To reattempt a domain join for a failed membership, use the ModifyDBInstance API operation and
specify the current membership's directory ID.
• To update the IAM role name for membership, use the ModifyDBInstance API operation and specify
the current membership's directory ID and the new IAM role.
• To remove a DB instance from a domain, use the ModifyDBInstance API operation and specify none
as the domain parameter.
• To move a DB instance from one domain to another, use the ModifyDBInstance API operation and
specify the domain identifier of the new domain as the domain parameter.
• To list membership for each DB instance, use the DescribeDBInstances API operation.
A request to become a member of a domain can fail because of a network connectivity issue or an
incorrect IAM role. For example, you might create a new DB instance or modify an existing one, and
the attempt for the DB instance to become a member of a domain might fail. In this case, either
reissue the command to create or modify the DB instance, or modify the newly created instance to
join the domain.
Updating applications for new SSL/TLS certificates
This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances. If they do, you can further check whether those applications require certificate verification to
connect.
Note
Some applications are configured to connect to SQL Server DB instances only if they can
successfully verify the certificate on the server.
For such applications, you must update your client application trust stores to include the new CA
certificates.
After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.
For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with Microsoft SQL Server DB instances, see
Using SSL with a Microsoft SQL Server DB instance (p. 1456).
Topics
• Determining whether any applications are connecting to your Microsoft SQL Server DB instance
using SSL (p. 1411)
• Determining whether a client requires certificate verification in order to connect (p. 1412)
• Updating your application trust store (p. 1413)
Run the following query to get the current encryption option for all the open connections to a DB
instance. The column ENCRYPT_OPTION returns TRUE if the connection is encrypted.
select SESSION_ID,
ENCRYPT_OPTION,
NET_TRANSPORT,
AUTH_SCHEME
from SYS.DM_EXEC_CONNECTIONS
This query shows only the current connections. It doesn't show whether applications that have
connected and disconnected in the past have used SSL.
Determining whether a client requires certificate verification in order to connect
For more information about SQL Server Management Studio, see Use SQL Server Management Studio.
Sqlcmd
The following example with the sqlcmd client shows how to check a script's SQL Server connection
to determine whether successful connections require a valid certificate. For more information, see
Connecting with sqlcmd in the Microsoft SQL Server documentation.
When using sqlcmd, an SSL connection requires verification against the server certificate if you use
the -N command argument to encrypt connections, as in the following example.
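A sketch of such an invocation follows; the endpoint, database, and credentials are placeholders. With -N and without -C, the connection succeeds only if the server certificate can be verified against the client's trust store.

```
sqlcmd -N -S mydbinstance.123456789012.us-east-1.rds.amazonaws.com -d master -U admin -P 'mypassword'
```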
Note
If sqlcmd is invoked with the -C option, it trusts the server certificate, even if that doesn't
match the client-side trust store.
ADO.NET
In the following example, the application connects using SSL, and the server certificate must be verified.
...
{
connection.Open();
...
}
Java
In the following example, the application connects using SSL, and the server certificate must be verified.
String connectionUrl =
"jdbc:sqlserver://dbinstance.rds.amazon.com;" +
"databaseName=ExampleDB;integratedSecurity=true;" +
"encrypt=true;trustServerCertificate=false";
To enable SSL encryption for clients that connect using JDBC, you might need to add the Amazon RDS
certificate to the Java CA certificate store. For instructions, see Configuring the client for encryption
in the Microsoft SQL Server documentation. You can also provide the trusted CA certificate file name
directly by appending trustStore=path-to-certificate-trust-store-file to the connection
string.
Note
If you use TrustServerCertificate=true (or its equivalent) in the connection string, the
connection process skips the trust chain validation. In this case, the application connects even if
the certificate can't be verified. Using TrustServerCertificate=false enforces certificate
validation and is a best practice.
If you are using an operating system other than Microsoft Windows, see the documentation of your
SSL/TLS implementation for information about adding a new root CA certificate. For example,
OpenSSL and GnuTLS are popular options. Use that implementation's method to add trust to the
RDS root CA certificate. Microsoft provides instructions for configuring certificates on some systems.
For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).
For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.
Upgrading the SQL Server DB engine
Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you test the upgrade by following the steps described in
Testing an upgrade (p. 1417).
In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance.
Alternatively, you can enable the Auto minor version upgrade option when creating or modifying a
DB instance. Doing so means that your DB instance is automatically upgraded after Amazon RDS tests
and approves the new version. You can confirm whether the minor version upgrade will be automatic by
using the describe-db-engine-versions AWS CLI command. For example:
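A command along the following lines produces the response shown; the engine and engine version are assumptions matching that response, so substitute your own values.

```
aws rds describe-db-engine-versions ^
    --engine sqlserver-se ^
    --engine-version 14.00.3281.6.v1 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget"
```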
In the following example, the CLI command returns a response showing AutoUpgrade is true, indicating
that upgrades are automatic.
...
"ValidUpgradeTarget": [
{
"Engine": "sqlserver-se",
"EngineVersion": "14.00.3281.6.v1",
"Description": "SQL Server 2017 14.00.3281.6.v1",
"AutoUpgrade": true,
"IsMajorVersionUpgrade": false
}
...
For more information about performing upgrades, see Upgrading a SQL Server DB instance (p. 1418).
For information about what SQL Server versions are available on Amazon RDS, see Amazon RDS for
Microsoft SQL Server (p. 1354).
Topics
• Overview of upgrading (p. 1415)
• Major version upgrades (p. 1415)
• Multi-AZ and in-memory optimization considerations (p. 1417)
• Read replica considerations (p. 1417)
• Option group considerations (p. 1417)
• Parameter group considerations (p. 1417)
• Testing an upgrade (p. 1417)
• Upgrading a SQL Server DB instance (p. 1418)
• Upgrading deprecated DB instances before support ends (p. 1418)
Overview of upgrading
Amazon RDS takes two DB snapshots during the upgrade process. The first DB snapshot is of the DB
instance before any upgrade changes have been made. The second DB snapshot is taken after the
upgrade finishes.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).
After an upgrade is completed, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore from the DB snapshot that was taken before the upgrade
to create a new DB instance.
During a minor or major version upgrade of SQL Server, the Free Storage Space and Disk Queue Depth
metrics will display -1. After the upgrade is completed, both metrics will return to normal.
You can upgrade your existing DB instance to SQL Server 2017 or 2019 from any version except SQL
Server 2008. To upgrade from SQL Server 2008, first upgrade to one of the other versions.
You can use an AWS CLI query, such as the following example, to find the available upgrades for a
particular database engine version.
Example
For Windows:
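A query along the following lines produces the table of available upgrade targets shown; the engine and version values are assumptions matching that output.

```
aws rds describe-db-engine-versions ^
    --engine sqlserver-se ^
    --engine-version 14.00.3281.6 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" ^
    --output table
```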
The output shows that you can upgrade version 14.00.3281.6 to the latest available SQL Server 2017 or
2019 versions.
--------------------------
|DescribeDBEngineVersions|
+------------------------+
| EngineVersion |
+------------------------+
| 14.00.3294.2.v1 |
| 14.00.3356.20.v1 |
| 14.00.3381.3.v1 |
| 14.00.3401.7.v1 |
| 14.00.3421.10.v1 |
| 14.00.3451.2.v1 |
| 15.00.4043.16.v1 |
| 15.00.4073.23.v1 |
| 15.00.4153.1.v1 |
| 15.00.4198.2.v1 |
| 15.00.4236.7.v1 |
+------------------------+
When you upgrade your DB instance, all existing databases remain at their original compatibility level.
For example, if you upgrade from SQL Server 2014 to SQL Server 2016, all existing databases have a
compatibility level of 120. Any new database created after the upgrade has a compatibility level of 130.
You can change the compatibility level of a database by using the ALTER DATABASE command. For
example, to change a database named customeracct to be compatible with SQL Server 2014, issue the
following command:
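A sketch follows; SQL Server 2014 corresponds to compatibility level 120, as noted above.

```sql
ALTER DATABASE customeracct
SET COMPATIBILITY_LEVEL = 120;
```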
Multi-AZ and in-memory optimization considerations
If your DB instance is in a Multi-AZ deployment, both the primary and standby instances are upgraded.
Amazon RDS does rolling upgrades. You have an outage only for the duration of a failover.
SQL Server 2014 through 2019 Enterprise Edition supports in-memory optimization.
When you perform a database version upgrade of the primary DB instance, all of its read replicas are
also upgraded automatically. Amazon RDS upgrades all of the read replicas simultaneously before
upgrading the primary DB instance. Read replicas might not be available until the database version
upgrade on the primary DB instance is complete.
For more information, see Creating an option group (p. 332) or Copying an option group (p. 334).
For example, when you upgrade to a new major version, you must specify a new parameter group. We
recommend that you create a new parameter group, and configure the parameters as in your existing
custom parameter group.
For more information, see Creating a DB parameter group (p. 350) or Copying a DB parameter
group (p. 356).
Testing an upgrade
Before you perform a major version upgrade on your DB instance, you should thoroughly test your
database, and all applications that access the database, for compatibility with the new version. We
recommend that you use the following procedure.
1. Review Upgrade SQL Server in the Microsoft documentation for the new version of the database
engine to see if there are compatibility issues that might affect your database or applications.
2. If your DB instance uses a custom option group, create a new option group compatible with the new
version you are upgrading to. For more information, see Option group considerations (p. 1417).
3. If your DB instance uses a custom parameter group, create a new parameter group compatible
with the new version you are upgrading to. For more information, see Parameter group
considerations (p. 1417).
4. Create a DB snapshot of the DB instance to be upgraded. For more information, see Creating a DB
snapshot (p. 613).
5. Restore the DB snapshot to create a new test DB instance. For more information, see Restoring from
a DB snapshot (p. 615).
6. Modify this new test DB instance to upgrade it to the new version, by using one of the following
methods:
Important
If you have any snapshots that are encrypted using AWS KMS, we recommend that you initiate
an upgrade before support ends.
If you need to restore a deprecated DB instance, you can do point-in-time recovery (PITR) or restore
a snapshot. Doing this gives you temporary access to a DB instance that uses the version that is being
deprecated. However, after a major version is fully deprecated, these DB instances are also
automatically upgraded to a supported version.
Importing and exporting SQL Server databases using native backup and restore
For example, you can create a full backup from your local server, store it on S3, and then restore it onto
an existing Amazon RDS DB instance. You can also make backups from RDS, store them on S3, and then
restore them wherever you want.
Native backup and restore is available in all AWS Regions for Single-AZ and Multi-AZ DB instances,
including Multi-AZ DB instances with read replicas. Native backup and restore is available for all editions
of Microsoft SQL Server supported on Amazon RDS.
Using native .bak files to back up and restore databases is usually the fastest way to back up and restore
databases. There are many additional advantages to using native backup and restore. For example, you
can do the following:
Contents
• Limitations and recommendations (p. 1420)
• Setting up for native backup and restore (p. 1421)
• Manually creating an IAM role for native backup and restore (p. 1422)
• Using native backup and restore (p. 1425)
• Backing up a database (p. 1425)
• Usage (p. 1425)
• Examples (p. 1427)
• Restoring a database (p. 1428)
• Usage (p. 1428)
• Examples (p. 1429)
• Restoring a log (p. 1430)
• Usage (p. 1430)
• Examples (p. 1431)
• Finishing a database restore (p. 1431)
• Usage (p. 1432)
• Working with partially restored databases (p. 1432)
• Dropping a partially restored database (p. 1432)
• Snapshot restore and point-in-time recovery behavior for partially restored
databases (p. 1432)
• Canceling a task (p. 1432)
• Usage (p. 1432)
• Tracking the status of tasks (p. 1432)
• Usage (p. 1432)
• Examples (p. 1433)
• Response (p. 1433)
• Compressing backup files (p. 1435)
• Troubleshooting (p. 1435)
• Importing and exporting SQL Server data using other methods (p. 1437)
• Importing data into RDS for SQL Server by using a snapshot (p. 1437)
• Import the data (p. 1440)
• Generate and Publish Scripts Wizard (p. 1440)
• Import and Export Wizard (p. 1441)
• Bulk copy (p. 1441)
• Exporting data from RDS for SQL Server (p. 1442)
• SQL Server Import and Export Wizard (p. 1442)
• SQL Server Generate and Publish Scripts Wizard and bcp utility (p. 1444)
Limitations and recommendations
The following limitations and recommendations apply to native backup and restore:
• You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your
Amazon RDS DB instance.
• You can't restore a database with the same name as an existing database. Database names are unique.
• We strongly recommend that you don't restore backups from one time zone to a different time zone.
If you restore backups from one time zone to a different time zone, you must audit your queries and
applications for the effects of the time zone change.
• Amazon S3 has a size limit of 5 TB per file. For native backups of larger databases, you can use
multifile backup.
• The maximum database size that can be backed up to S3 depends on the available memory, CPU, I/O,
and network resources on the DB instance. The larger the database, the more memory the backup
agent consumes. Our testing shows that you can make a compressed backup of a 16-TB database on
our newest-generation instance types from 2xlarge instance sizes and larger, given sufficient system
resources.
• You can't back up to or restore from more than 10 backup files at the same time.
• A differential backup is based on the last full backup. For differential backups to work, you can't take
a snapshot between the last full backup and the differential backup. If you want a differential backup,
but a manual or automated snapshot exists, then do another full backup before proceeding with the
differential backup.
• Differential and log restores aren't supported for databases with files that have their file_guid (unique
identifier) set to NULL.
• You can run up to two backup or restore tasks at the same time.
• You can't perform native log backups from SQL Server on Amazon RDS.
• RDS supports native restores of databases up to 16 TB. Native restores of databases on SQL Server
Express Edition are limited to 10 GB.
• You can't do a native backup during the maintenance window, or any time Amazon RDS is in the
process of taking a snapshot of the database. If a native backup task overlaps with the RDS daily
backup window, the native backup task is canceled.
• On Multi-AZ DB instances, you can only natively restore databases that are backed up in the full
recovery model.
• Restoring from differential backups on Multi-AZ instances isn't supported.
• Calling the RDS procedures for native backup and restore within a transaction isn't supported.
• Use a symmetric encryption AWS KMS key to encrypt your backups. Amazon RDS doesn't support
asymmetric KMS keys. For more information, see Creating symmetric encryption KMS keys in the AWS
Key Management Service Developer Guide.
• Native backup files are encrypted with the specified KMS key using the "Encryption-Only" crypto
mode. When you are restoring encrypted backup files, be aware that they were encrypted with the
"Encryption-Only" crypto mode.
• You can't restore a database that contains a FILESTREAM file group.
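Two of the numeric limits above, the 5-TB S3 object size and the 10-file cap for multifile backups, can be combined into a quick feasibility check. The following Python sketch is illustrative only; the helper name and error handling are our own, not part of any AWS tool:

```python
import math

MAX_S3_FILE_TB = 5     # Amazon S3 size limit per file, from the limitations above
MAX_BACKUP_FILES = 10  # maximum number of files in a multifile backup

def min_backup_files(backup_size_tb: float) -> int:
    """Smallest @number_of_files value that keeps each chunk within the S3 limit."""
    n = max(1, math.ceil(backup_size_tb / MAX_S3_FILE_TB))
    if n > MAX_BACKUP_FILES:
        raise ValueError("backup too large for 10 files of 5 TB each; "
                         "compress the backup or reduce the database size")
    return n

print(min_backup_files(16))  # a 16-TB backup needs at least 4 files
```

For example, a compressed 16-TB backup fits comfortably when split across 4 files.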
If your database can be offline while the backup file is created, copied, and restored, we recommend that
you use native backup and restore to migrate it to RDS. If your on-premises database can't be offline, we
recommend that you use the AWS Database Migration Service to migrate your database to Amazon RDS.
For more information, see What is AWS Database Migration Service?
Native backup and restore isn't intended to replace the data recovery capabilities of the cross-region
snapshot copy feature. We recommend that you use snapshot copy to copy your database snapshot
to another AWS Region for cross-region disaster recovery in Amazon RDS. For more information, see
Copying a DB snapshot (p. 619).
To set up for native backup and restore, you need the following three components:
1. An Amazon S3 bucket to store your backup files.
You must have an S3 bucket to use for your backup files, and you must upload any backups that you
want to migrate to RDS. If you already have an Amazon S3 bucket, you can use that. If you don't, you
can create a bucket. Alternatively, you can choose to have a new bucket created for you when you add
the SQLSERVER_BACKUP_RESTORE option by using the AWS Management Console.
For information on using Amazon S3, see the Amazon Simple Storage Service User Guide.
2. An AWS Identity and Access Management (IAM) role to access the bucket.
If you already have an IAM role, you can use that. You can choose to have a new IAM role created
for you when you add the SQLSERVER_BACKUP_RESTORE option by using the AWS Management
Console. Alternatively, you can create a new one manually.
If you want to create a new IAM role manually, take the approach discussed in the next section. Do the
same if you want to attach trust relationships and permissions policies to an existing IAM role.
3. The SQLSERVER_BACKUP_RESTORE option added to an option group on your DB instance.
To enable native backup and restore on your DB instance, you add the SQLSERVER_BACKUP_RESTORE
option to an option group on your DB instance. For more information and instructions, see Support
for native backup and restore in SQL Server (p. 1525).
For the native backup and restore feature, use trust relationships and permissions policies similar
to the examples in this section. In the following example, we use the service principal name
rds.amazonaws.com as an alias for all service accounts. In the other examples, we specify an Amazon
Resource Name (ARN) to identify another account, user, or role that we're granting access to in the trust
policy.
We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.
You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.
In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
ARN of the resources accessing the role. For native backup and restore, make sure to include both the DB
option group and the DB instances, as shown in the following example.
Example trust relationship with global condition context key for native backup and restore
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "rds.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceArn": [
            "arn:aws:rds:Region:my_account_ID:db:db_instance_identifier",
            "arn:aws:rds:Region:my_account_ID:og:option_group_name"
          ]
        }
      }
    }
  ]
}
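To illustrate the rule that the account ID in each aws:SourceArn value must match aws:SourceAccount when both keys are used, here is a small Python sketch. It is a hypothetical helper for reasoning about the policy, not an AWS validation API:

```python
def arn_account_id(arn: str) -> str:
    # General ARN format: arn:partition:service:region:account-id:resource
    return arn.split(":")[4]

def source_keys_consistent(source_account: str, source_arns: list) -> bool:
    """True if every aws:SourceArn value carries the same account as aws:SourceAccount."""
    return all(arn_account_id(arn) == source_account for arn in source_arns)

arns = [
    "arn:aws:rds:us-east-1:123456789012:db:db_instance_identifier",
    "arn:aws:rds:us-east-1:123456789012:og:option_group_name",
]
print(source_keys_consistent("123456789012", arns))  # True
```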
The following example uses an ARN to specify a resource. For more information on using ARNs, see
Amazon resource names (ARNs).
Example permissions policy for native backup and restore without encryption support
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::bucket_name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectAttributes",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}
Example permissions policy for native backup and restore with encryption support
If you want to encrypt your backup files, include an encryption key in your permissions policy. For more
information about encryption keys, see Getting started in the AWS Key Management Service Developer
Guide.
Note
You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't
support asymmetric KMS keys. For more information, see Creating symmetric encryption KMS
keys in the AWS Key Management Service Developer Guide.
The IAM role must also be a key user and key administrator for the KMS key, that is, it must be
specified in the key policy. For more information, see Creating symmetric encryption KMS keys
in the AWS Key Management Service Developer Guide.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:region:account-id:key/key-id"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::bucket_name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectAttributes",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}
Using native backup and restore
Some of the stored procedures require that you provide an Amazon Resource
Name (ARN) to your Amazon S3 bucket and file. The format for your ARN is
arn:aws:s3:::bucket_name/file_name.extension. Amazon S3 doesn't require an account
number or AWS Region in ARNs.
If you also provide an optional KMS key, the format for the ARN of the key is
arn:aws:kms:region:account-id:key/key-id. For more information, see Amazon resource
names (ARNs) and AWS service namespaces. You must use a symmetric encryption KMS key to encrypt
your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see Creating
symmetric encryption KMS keys in the AWS Key Management Service Developer Guide.
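The two ARN formats above can be assembled programmatically. This Python sketch simply formats the strings described in the text; the helper names are our own:

```python
def s3_file_arn(bucket_name: str, file_name: str) -> str:
    # S3 ARNs don't include an account number or AWS Region
    return f"arn:aws:s3:::{bucket_name}/{file_name}"

def kms_key_arn(region: str, account_id: str, key_id: str) -> str:
    return f"arn:aws:kms:{region}:{account_id}:key/{key_id}"

print(s3_file_arn("mybucket", "backup1.bak"))
# arn:aws:s3:::mybucket/backup1.bak
```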
Note
Whether or not you use a KMS key, the native backup and restore tasks enable server-side
Advanced Encryption Standard (AES) 256-bit encryption by default for files uploaded to S3.
For instructions on how to call each stored procedure, see the following topics:
Backing up a database
To back up your database, use the rds_backup_database stored procedure.
Note
You can't back up a database during the maintenance window, or while Amazon RDS is taking a
snapshot.
Usage
exec msdb.dbo.rds_backup_database
@source_db_name='database_name',
@s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name.extension',
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@overwrite_s3_backup_file=0|1],
[@type='DIFFERENTIAL|FULL'],
[@number_of_files=n];
• @source_db_name – The name of the database to back up.
• @s3_arn_to_backup_to – The ARN indicating the Amazon S3 bucket, plus the name of the backup
file to create. The file can have any extension, but .bak is usually used.
• @kms_master_key_arn – The ARN for the symmetric encryption KMS key to use to encrypt the item.
• You can't use the default encryption key. If you use the default key, the database won't be backed
up.
• If you don't specify a KMS key identifier, the backup file won't be encrypted. For more information,
see Encrypting Amazon RDS resources.
• When you specify a KMS key, client-side encryption is used.
• Amazon RDS doesn't support asymmetric KMS keys. For more information, see Creating symmetric
encryption KMS keys in the AWS Key Management Service Developer Guide.
• @overwrite_s3_backup_file – A value that indicates whether to overwrite an existing backup file.
• 0 – Doesn't overwrite an existing file. This value is the default.
• 1 – Overwrites an existing file that has the specified name.
A differential backup is based on the last full backup. For differential backups to work, you can't take
a snapshot between the last full backup and the differential backup. If you want a differential backup,
but a snapshot exists, then do another full backup before proceeding with the differential backup.
You can look for the last full backup or snapshot using the following example SQL query:
select top 1
database_name
, backup_start_date
, backup_finish_date
from msdb.dbo.backupset
where database_name='mydatabase'
and type = 'D'
order by backup_start_date desc;
• @number_of_files – The number of files into which the backup will be divided (chunked). The
maximum number is 10.
• Multifile backup is supported for both full and differential backups.
• If you enter a value of 1 or omit the parameter, a single backup file is created.
Provide the prefix that the files have in common, then suffix that with an asterisk (*). The asterisk
can be anywhere in the file_name part of the S3 ARN. The asterisk is replaced by a series of
alphanumeric strings in the generated files, starting with 1-of-number_of_files.
For example, if the file names in the S3 ARN are backup*.bak and you set @number_of_files=4,
the backup files generated are backup1-of-4.bak, backup2-of-4.bak, backup3-of-4.bak, and
backup4-of-4.bak.
• If any of the file names already exists, and @overwrite_s3_backup_file is 0, an error is returned.
• Multifile backups can only have one asterisk in the file_name part of the S3 ARN.
• Single-file backups can have any number of asterisks in the file_name part of the S3 ARN.
Asterisks aren't removed from the generated file name.
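The multifile naming scheme described above can be mimicked in a few lines of Python. This is an illustration of the naming rules, not code that RDS runs:

```python
def expand_backup_file_names(file_name: str, number_of_files: int) -> list:
    """Expand backup*.bak into the 1-of-N names that RDS generates."""
    if number_of_files <= 1:
        return [file_name]  # single-file backup: asterisks stay in the name
    if file_name.count("*") != 1:
        raise ValueError("multifile backups allow exactly one asterisk")
    return [file_name.replace("*", f"{i}-of-{number_of_files}")
            for i in range(1, number_of_files + 1)]

print(expand_backup_file_names("backup*.bak", 4))
# ['backup1-of-4.bak', 'backup2-of-4.bak', 'backup3-of-4.bak', 'backup4-of-4.bak']
```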
Examples
Example of differential backup
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@overwrite_s3_backup_file=1,
@type='DIFFERENTIAL';
Example of full backup with encryption
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_backup_file=1,
@type='FULL';
Example of multifile backup
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4;
Example of multifile differential backup
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@type='DIFFERENTIAL',
@number_of_files=4;
Example of multifile backup with encryption
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@number_of_files=4;
Example of multifile backup with S3 overwrite
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@number_of_files=4;
Example of single-file backup with the multifile ARN syntax
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=1;
Restoring a database
To restore your database, call the rds_restore_database stored procedure. Amazon RDS creates an
initial snapshot of the database after the restore task is complete and the database is open.
Usage
exec msdb.dbo.rds_restore_database
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name.extension',
@with_norecovery=0|1,
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@type='DIFFERENTIAL|FULL'];
• @restore_db_name – The name of the database to restore. Database names are unique. You can't
restore a database with the same name as an existing database.
• @s3_arn_to_restore_from – The ARN indicating the Amazon S3 prefix and names of the backup
files used to restore the database.
• For a single-file backup, provide the entire file name.
• For a multifile backup, provide the prefix that the files have in common, then suffix that with an
asterisk (*).
• If @s3_arn_to_restore_from is empty, the following error message is returned: S3 ARN prefix
cannot be empty.
The following parameter is required for differential restores, but optional for full restores:
• @with_norecovery – The recovery clause to use for the restore operation. Set it to 0 to restore with
RECOVERY (the database is online after the restore task completes), or to 1 to restore with
NORECOVERY (the database remains in the RESTORING state, which allows later differential or log
restores).
The following parameter is optional:
• @kms_master_key_arn – If you encrypted the backup file, the KMS key to use to decrypt the file.
Note
For differential restores, either the database must be in the RESTORING state or a task must
already exist that restores with NORECOVERY.
You can't restore later differential backups while the database is online.
You can't submit a restore task for a database that already has a pending restore task with
RECOVERY.
Full restores with NORECOVERY and differential restores aren't supported on Multi-AZ instances.
Restoring a database on a Multi-AZ instance with read replicas is similar to restoring a database
on a Multi-AZ instance. You don't have to take any additional actions to restore a database on a
replica.
Examples
Example of single-file restore
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
Example of multifile restore
To avoid errors when restoring multiple files, make sure that all the backup files have the same prefix,
and that no other files use that prefix.
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*';
The following three examples perform the same task, full restore with RECOVERY.
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL';
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=0;
Example of restoring an encrypted backup
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
Example of full restore with NORECOVERY
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=1;
Example of differential restore with NORECOVERY
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=1;
Example of differential restore with RECOVERY
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=0;
Restoring a log
To restore your log, call the rds_restore_log stored procedure.
Usage
exec msdb.dbo.rds_restore_log
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/log_file_name.extension',
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@with_norecovery=0|1],
[@stopat='datetime'];
• @kms_master_key_arn – If you encrypted the log, the KMS key to use to decrypt the log.
• @with_norecovery – The recovery clause to use for the restore operation. This value defaults to 1.
• Set it to 0 to restore with RECOVERY. In this case, the database is online after the restore. You can't
restore further log backups while the database is online.
• Set it to 1 to restore with NORECOVERY. In this case, the database remains in the RESTORING state
after restore task completion. With this approach, you can do later log restores.
• @stopat – A value that specifies that the database is restored to its state at the date and time
specified (in datetime format). Only transaction log records written before the specified date and time
are applied to the database.
If this parameter isn't specified (it is NULL), the complete log is restored.
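The @stopat behavior can be pictured as a filter over timestamped log records. The following Python sketch is a conceptual model only, not how SQL Server actually applies the transaction log:

```python
from datetime import datetime

def apply_stopat(log_records, stopat=None):
    """Apply only log records written before @stopat; None restores the complete log.
    Each record is a (timestamp, description) tuple."""
    if stopat is None:
        return list(log_records)
    return [rec for rec in log_records if rec[0] < stopat]

records = [(datetime(2019, 12, 1, 3, 50), "insert"),
           (datetime(2019, 12, 1, 4, 10), "delete")]
print(apply_stopat(records, datetime(2019, 12, 1, 3, 57, 9)))  # only the 03:50 record
```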
Note
For log restores, either the database must be in a state of restoring or a task must already exist
that restores with NORECOVERY.
Examples
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
The following two examples perform the same task, log restore with NORECOVERY.
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=1;
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
The following example performs a log restore with RECOVERY:
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0;
The following example restores the log to a specific point in time:
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0,
@stopat='2019-12-01 03:57:09';
Finishing a database restore
If the last restore task on the database was performed with NORECOVERY, the database is left in the
RESTORING state. To open the database for normal operation, call the rds_finish_restore stored
procedure.
Usage
exec msdb.dbo.rds_finish_restore @db_name='database_name';
Note
To use this approach, the database must be in the RESTORING state without any pending
restore tasks.
The rds_finish_restore procedure isn't supported on Multi-AZ instances.
To finish restoring the database, use the master login. Or use the user login that most recently
restored the database or log with NORECOVERY.
Dropping a partially restored database
To drop a database that is in the RESTORING state, call the rds_drop_database stored procedure:
exec msdb.dbo.rds_drop_database @db_name='database_name';
Note
You can't submit a DROP database request for a database that already has a pending restore or
finish restore task.
To drop the database, use the master login. Or use the user login that most recently restored the
database or log with NORECOVERY.
Canceling a task
To cancel a backup or restore task, call the rds_cancel_task stored procedure.
Note
You can't cancel a FINISH_RESTORE task.
Usage
exec msdb.dbo.rds_cancel_task @task_id=ID_number;
• @task_id – The ID of the task to cancel. You can get the task ID by calling rds_task_status.
Tracking the status of tasks
To track the status of your backup and restore tasks, call the rds_task_status stored procedure.
Usage
exec msdb.dbo.rds_task_status
[@db_name='database_name'],
[@task_id=ID_number];
• @db_name – The name of the database to show the task status for.
• @task_id – The ID of the task to show the task status for.
Examples
Example of listing the status for a specific task
exec msdb.dbo.rds_task_status
@db_name='my_database',
@task_id=5;
Example of listing all tasks and their statuses on the current instance
exec msdb.dbo.rds_task_status;
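In practice, you call rds_task_status repeatedly until the task reaches a terminal state. The loop below is a Python sketch; fetch_status stands in for however you run the stored procedure against your instance, and the terminal status names are assumptions for illustration:

```python
import time

def poll_task_status(fetch_status, poll_seconds=0, max_polls=100):
    """Poll a status-returning callable until the task finishes.
    fetch_status stands in for executing msdb.dbo.rds_task_status."""
    terminal = {"SUCCESS", "ERROR", "CANCELLED"}  # assumed terminal lifecycle values
    for _ in range(max_polls):
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)  # wait before polling again
    raise TimeoutError("task did not reach a terminal state")

statuses = iter(["CREATED", "IN_PROGRESS", "SUCCESS"])  # stub status source
print(poll_task_status(lambda: next(statuses)))  # SUCCESS
```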
Response
The rds_task_status stored procedure returns the following columns.
Note
Amazon RDS creates an initial snapshot of the database after it is open on completion of the
following restore tasks: RESTORE_DB, RESTORE_DB_DIFFERENTIAL, RESTORE_DB_LOG, and
FINISH_RESTORE.
Column – Description
database_name – The name of the database that the task is associated with.
lifecycle – The status of the task.
last_updated – The date and time that the task status was last updated. The status is updated
after every 5 percent of progress.
created_at – The date and time that the task was created.
S3_object_arn – The ARN indicating the Amazon S3 prefix and the name of the file that is being
backed up or restored.
overwrite_s3_backup_file – The value of the @overwrite_s3_backup_file parameter specified when
calling a backup task. For more information, see Backing up a database (p. 1425).
KMS_master_key_arn – The ARN for the KMS key used for encryption (for backup) and decryption (for
restore).
Compressing backup files
Compressing your backup files is supported for the following database editions:
• Microsoft SQL Server Enterprise Edition
• Microsoft SQL Server Standard Edition
To turn on compression for your backup files, run the following code:
exec rdsadmin..rds_set_configuration 'S3 backup compression', 'true';
To turn off compression for your backup files, run the following code:
exec rdsadmin..rds_set_configuration 'S3 backup compression', 'false';
Troubleshooting
The following are issues you might encounter when you use native backup and restore.
Database backup/restore option is not enabled yet or is in the process of being enabled. Please try
again later. – Make sure that you have added the SQLSERVER_BACKUP_RESTORE option to the DB
option group associated with your DB instance. For more information, see Adding the native backup
and restore option (p. 1525).
Access Denied – The backup or restore process can't access the backup file. This is usually caused by
permission or configuration issues with the S3 bucket, the IAM role, or the file itself.
BACKUP DATABASE WITH COMPRESSION isn't supported on <edition_name> Edition – Compressing
your backup files is only supported for Microsoft SQL Server Enterprise Edition and Standard Edition.
For more information, see Compressing backup files (p. 1435).
Key <ARN> does not exist – You attempted to restore an encrypted backup, but didn't provide a
valid encryption key. Check your encryption key and retry.
Please reissue task with correct type and overwrite property – If you attempt to back up your
database and provide the name of a file that already exists, but set the overwrite property to false,
the save operation fails. To fix this error, either provide the name of a file that doesn't already exist,
or set the overwrite property to true.
It's also possible that you intended to restore your database, but called the rds_backup_database
stored procedure accidentally. In that case, call the rds_restore_database stored procedure instead.
For more information, see Using native backup and restore (p. 1425).
Please specify a bucket that is in the same region as RDS instance – You can't back up to, or restore
from, an Amazon S3 bucket in a different AWS Region from your Amazon RDS DB instance. You can
use Amazon S3 replication to copy the backup file to the correct AWS Region.
The specified bucket does not exist – Verify that you have provided the correct ARN for your bucket
and file, in the correct format. For more information, see Using native backup and restore (p. 1425).
User <ARN> is not authorized to perform <kms action> on resource <ARN> – You requested an
encrypted operation, but didn't provide correct AWS KMS permissions. Verify that you have the
correct permissions, or add them.
The Restore task is unable to restore from more than 10 backup file(s). Please reduce the number
of files matched and try again. – Reduce the number of files that you're trying to restore from. You
can make each individual file larger if necessary.
Database 'database_name' already exists. Two databases that differ only by case or accent are not
allowed. Choose a different database name. – You can't restore a database with the same name as an
existing database. Database names are unique.
Importing and exporting SQL Server data using other methods
If your scenario supports it, it's easier to move data in and out of Amazon RDS by using the native backup
and restore functionality. For more information, see Importing and exporting SQL Server databases
using native backup and restore (p. 1419).
Note
Amazon RDS for Microsoft SQL Server doesn't support importing data into the msdb database.
1. Create a DB instance. For more information, see Creating an Amazon RDS DB instance (p. 300).
2. Stop applications from accessing the destination DB instance.
If you prevent access to your DB instance while you are importing data, data transfer is faster.
Additionally, you don't need to worry about conflicts while data is being loaded if other applications
cannot write to the DB instance at the same time. If something goes wrong and you have to roll
back to an earlier database snapshot, the only changes that you lose are the imported data. You can
import this data again after you resolve the issue.
For information about controlling access to your DB instance, see Controlling access with security
groups (p. 2680).
3. Create a snapshot of the target database.
If the target database is already populated with data, we recommend that you take a snapshot of
the database before you import the data. If something goes wrong with the data import or you want
to discard the changes, you can restore the database to its previous state by using the snapshot. For
information about database snapshots, see Creating a DB snapshot (p. 613).
Note
When you take a database snapshot, I/O operations to the database are suspended for a
moment (milliseconds) while the backup is in progress.
4. Disable automated backups on the target database.
Disabling automated backups on the target DB instance improves performance while you are
importing your data because Amazon RDS doesn't log transactions when automatic backups are
disabled. However, there are some things to consider. Automated backups are required to perform
a point-in-time recovery. Thus, you can't restore the database to a specific point in time while you
are importing data. Additionally, any automated backups that were created on the DB instance are
erased unless you choose to retain them.
Choosing to retain the automated backups can help protect you against accidental deletion of
data. Amazon RDS also saves the database instance properties along with each automated backup
to make it easy to recover. With this option, you can restore a deleted database instance to a specified
point in time within the backup retention period, even after deleting it.
are automatically deleted at the end of the specified backup window, just as they are for an active
database instance.
You can also use previous snapshots to recover the database, and any snapshots that you have taken
remain available. For information about automated backups, see Working with backups (p. 591).
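As a sketch of how to disable automated backups from the AWS CLI (`mydbinstance` is a placeholder identifier), set the backup retention period to 0:

```sh
# Disable automated backups on the target DB instance by setting
# the retention period to 0; apply the change immediately.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --backup-retention-period 0 \
    --apply-immediately
```

To re-enable automated backups after the import, run the same command with `--backup-retention-period` set back to a nonzero number of days.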
1437
Amazon Relational Database Service User Guide
Importing and exporting SQL Server data using other methods
If you need to disable foreign key constraints, you can do so with the following script.
DECLARE @table_name SYSNAME;
DECLARE table_cursor CURSOR FOR
  SELECT name FROM sys.tables;
OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table_name;
WHILE @@FETCH_STATUS = 0
BEGIN
  -- Reconstructed loop body: disable all foreign key constraints on each table
  EXEC ('ALTER TABLE [' + @table_name + '] NOCHECK CONSTRAINT ALL');
  FETCH NEXT FROM table_cursor INTO @table_name;
END
CLOSE table_cursor;
DEALLOCATE table_cursor;
GO
If you need to disable triggers, you can do so with the following script.
DECLARE @trigger SYSNAME;
DECLARE @table SYSNAME;
DECLARE trigger_cursor CURSOR FOR
  SELECT trig.name, tab.name
  FROM sys.triggers trig
  INNER JOIN sys.tables tab ON trig.parent_id = tab.object_id;
OPEN trigger_cursor;
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
WHILE @@FETCH_STATUS = 0
BEGIN
  -- Reconstructed loop body: disable each trigger on its table
  EXEC ('DISABLE TRIGGER [' + @trigger + '] ON [' + @table + ']');
  FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
END
CLOSE trigger_cursor;
DEALLOCATE trigger_cursor;
GO
8. Query the source SQL Server instance for any logins that you want to import to the destination DB
instance.
SQL Server stores logins and passwords in the master database. Because Amazon RDS doesn't
grant access to the master database, you cannot directly import logins and passwords into your
destination DB instance. Instead, you must query the master database on the source SQL Server
instance to generate a data definition language (DDL) file. This file should include all logins and
passwords that you want to add to the destination DB instance. This file also should include role
memberships and permissions that you want to transfer.
For information about querying the master database, see How to transfer the logins and the
passwords between instances of SQL Server 2005 and SQL Server 2008 in the Microsoft Knowledge
Base.
The output of the script is another script that you can run on the destination DB instance. The script
in the Knowledge Base article filters logins with a condition of the following form:
p.type IN
To copy only SQL Server authenticated logins, change the condition to the following:
p.type = 'S'
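As an illustration only, a generated login script typically contains CREATE LOGIN statements that preserve the hashed password and the SID. The login name, hash, and SID below are placeholders, not real output:

```sql
-- Hypothetical example of generated login DDL; the hash and SID are placeholders.
CREATE LOGIN [AppUser]
    WITH PASSWORD = 0x0200A1B2C3D4 HASHED,  -- hashed password copied from the source
    SID = 0x1A2B3C4D5E6F;                   -- matching SID keeps database users mapped
```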
9. Import the data using the method in Import the data (p. 1440).
10. Grant applications access to the target DB instance.
When your data import is complete, you can grant access to the DB instance to those applications
that you blocked during the import. For information about controlling access to your DB instance,
see Controlling access with security groups (p. 2680).
11. Enable automated backups on the target DB instance.
For information about automated backups, see Working with backups (p. 591).
12. Enable foreign key constraints.
If you disabled foreign key constraints earlier, you can now enable them with the following script.
DECLARE @table_name SYSNAME;
DECLARE table_cursor CURSOR FOR
  SELECT name FROM sys.tables;
OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table_name;
WHILE @@FETCH_STATUS = 0
BEGIN
  -- Reconstructed loop body: re-enable and validate all foreign key constraints
  EXEC ('ALTER TABLE [' + @table_name + '] WITH CHECK CHECK CONSTRAINT ALL');
  FETCH NEXT FROM table_cursor INTO @table_name;
END
CLOSE table_cursor;
DEALLOCATE table_cursor;
If you disabled triggers earlier, you can now enable them with the following script.
DECLARE @trigger SYSNAME;
DECLARE @table SYSNAME;
DECLARE trigger_cursor CURSOR FOR
  SELECT trig.name, tab.name
  FROM sys.triggers trig
  INNER JOIN sys.tables tab ON trig.parent_id = tab.object_id;
OPEN trigger_cursor;
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
WHILE @@FETCH_STATUS = 0
BEGIN
  -- Reconstructed loop body: re-enable each trigger on its table
  EXEC ('ENABLE TRIGGER [' + @trigger + '] ON [' + @table + ']');
  FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
END
CLOSE trigger_cursor;
DEALLOCATE trigger_cursor;
SQL Server Management Studio includes the following tools, which are useful in importing data to a SQL
Server DB instance:
The Generate and Publish Scripts Wizard creates a script that contains the schema of a database, the
data itself, or both. You can generate a script for a database in your local SQL Server deployment. You
can then run the script to transfer the information that it contains to an Amazon RDS DB instance.
Note
For databases of 1 GiB or larger, it's more efficient to script only the database schema. You then
use the Import and Export Wizard or the bulk copy feature of SQL Server to transfer the data.
For detailed information about the Generate and Publish Scripts Wizard, see the Microsoft SQL Server
documentation.
In the wizard, pay particular attention to the advanced options on the Set Scripting Options page to
ensure that everything you want your script to include is selected. For example, by default, database
triggers are not included in the script.
When the script is generated and saved, you can use SQL Server Management Studio to connect to your
DB instance and then run the script.
The Import and Export Wizard creates a special Integration Services package, which you can use to copy
data from your local SQL Server database to the destination DB instance. The wizard can filter which
tables and even which tuples within a table are copied to the destination DB instance.
Note
The Import and Export Wizard works well for large datasets, but it might not be the fastest way
to remotely export data from your local deployment. For an even faster way, consider the SQL
Server bulk copy feature.
For detailed information about the Import and Export Wizard, see the Microsoft SQL Server
documentation.
When the wizard prompts you for connection information for your DB instance, do the following:
• For Server Name, type the name of the endpoint for your DB instance.
• For the server authentication mode, choose Use SQL Server Authentication.
• For User name and Password, type the credentials for the master user that you created for the DB
instance.
Bulk copy
The SQL Server bulk copy feature is an efficient means of copying data from a source database to your
DB instance. Bulk copy writes the data that you specify to a data file, such as an ASCII file. You can then
run bulk copy again to write the contents of the file to the destination DB instance.
This section uses the bcp utility, which is included with all editions of SQL Server. For detailed
information about bulk import and export operations, see the Microsoft SQL Server documentation.
Note
Before you use bulk copy, you must first import your database schema to the destination DB
instance. The Generate and Publish Scripts Wizard, described earlier in this topic, is an excellent
tool for this purpose.
The following command connects to the local SQL Server instance. It generates a tab-delimited file of a
specified table in the C:\ root directory of your existing SQL Server deployment. The table is specified by
its fully qualified name, and the text file has the same name as the table that is being copied.
bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U user_name -P password -b 10000
• -n specifies that the bulk copy uses the native data types of the data to be copied.
• -S specifies the SQL Server instance that the bcp utility connects to.
• -U specifies the user name of the account to log in to the SQL Server instance.
• -P specifies the password for the user specified by -U.
• -b specifies the number of rows per batch of imported data.
Note
There might be other parameters that are important to your import situation. For example,
you might need the -E parameter that pertains to identity values. For more information, see
the full description of the command line syntax for the bcp utility in the Microsoft SQL Server
documentation.
For example, suppose that a database named store that uses the default schema, dbo, contains a table
named customers. The user account admin, with the password insecure, copies the customers table, in
batches of 10,000 rows, to a file named customers.txt. The command is as follows.
bcp store.dbo.customers out C:\customers.txt -n -S localhost -U admin -P insecure -b 10000
After you generate the data file, you can upload the data to your DB instance by using a similar
command. Beforehand, create the database and schema on the target DB instance. Then use the in
argument to specify an input file instead of out to specify an output file. Instead of using localhost to
specify the local SQL Server instance, specify the endpoint of your DB instance. If you use a port other
than 1433, specify that too. The user name and password are the master user and password for your DB
instance. The syntax is as follows.
bcp dbname.schema_name.table_name in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000
To continue the previous example, suppose that the master user name is admin, and the
password is insecure. The endpoint for the DB instance is rds.ckz2kqd4qsn1.us-
east-1.rds.amazonaws.com, and you use port 4080. The command is as follows.
bcp store.dbo.customers in C:\customers.txt -n -S rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com,4080 -U admin -P insecure -b 10000
Note
Specify a password other than the prompt shown here as a security best practice.
• Native database backup using a full backup file (.bak) – Using .bak files to back up databases is
heavily optimized, and is usually the fastest way to export data. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).
• SQL Server Import and Export Wizard – For more information, see SQL Server Import and Export
Wizard (p. 1442).
• SQL Server Generate and Publish Scripts Wizard and bcp utility – For more information, see SQL
Server Generate and Publish Scripts Wizard and bcp utility (p. 1444).
The SQL Server Import and Export Wizard is available as part of Microsoft SQL Server Management
Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the
Express Edition. SQL Server Management Studio is available only as a Windows-based application.
SQL Server Management Studio Express is available from Microsoft as a free download. To find this
download, see the Microsoft website.
To use the SQL Server Import and Export Wizard to export data
1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details
on how to do this, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
2. In Object Explorer, expand Databases, open the context (right-click) menu for the source database,
choose Tasks, and then choose Export Data. The wizard appears.
3. On the Choose a Data Source page, do the following:
If you choose New, see Create database in the SQL Server documentation for details on the
database information to provide.
e. Choose Next.
5. On the Table Copy or Query page, choose Copy data from one or more tables or views or Write a
query to specify the data to transfer. Choose Next.
6. If you chose Write a query to specify the data to transfer, you see the Provide a Source Query
page. Type or paste in a SQL query, and then choose Parse to verify it. Once the query validates,
choose Next.
7. On the Select Source Tables and Views page, do the following:
a. Select the tables and views that you want to export, or verify that the query you provided is
selected.
b. Choose Edit Mappings and specify database and column mapping information. For more
information, see Column mappings in the SQL Server documentation.
c. (Optional) To see a preview of data to be exported, select the table, view, or query, and then
choose Preview.
d. Choose Next.
8. On the Run Package page, verify that Run immediately is selected. Choose Next.
9. On the Complete the Wizard page, verify that the data export details are as you expect. Choose
Finish.
10. On the The execution was successful page, choose Close.
SQL Server Generate and Publish Scripts Wizard and bcp utility
You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database
or just selected objects. You can run these scripts on a target SQL Server DB instance to recreate the
scripted objects. You can then use the bcp utility to bulk export the data for the selected objects to the
target DB instance. This choice is best if you want to move a whole database (including objects other
than tables) or large quantities of data between two SQL Server DB instances. For a full description of
the bcp command-line syntax, see bcp utility in the Microsoft SQL Server documentation.
The SQL Server Generate and Publish Scripts Wizard is available as part of Microsoft SQL Server
Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions
except the Express Edition. SQL Server Management Studio is available only as a Windows-based
application. SQL Server Management Studio Express is available from Microsoft as a free download.
To use the SQL Server Generate and Publish Scripts Wizard and the bcp utility to export data
1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details
on how to do this, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
2. In Object Explorer, expand the Databases node and select the database you want to script.
3. Follow the instructions in Generate and publish scripts Wizard in the SQL Server documentation to
create a script file.
4. In SQL Server Management Studio, connect to your target SQL Server DB instance.
5. With the target SQL Server DB instance selected in Object Explorer, on the File menu choose Open,
choose File, and then open the script file.
6. If you have scripted the entire database, review the CREATE DATABASE statement in the script. Make
sure that the database is being created in the location and with the parameters that you want. For
more information, see CREATE DATABASE in the SQL Server documentation.
7. If you are creating database users in the script, check to see if server logins exist on the target DB
instance for those users. If not, create logins for those users; the scripted commands to create
the database users fail otherwise. For more information, see Create a login in the SQL Server
documentation.
8. Choose !Execute on the SQL Editor menu to run the script file and create the database objects.
When the script finishes, verify that all database objects exist as expected.
9. Use the bcp utility to export data from the RDS for SQL Server DB instance into files. Open a
command prompt and type the following command.
bcp dbname.schema_name.table_name out data_file -n -S endpoint -U user_name -P password
• table_name is the name of one of the tables that you've recreated in the target database and now
want to populate with data.
• data_file is the full path and name of the data file to be created.
• -n specifies that the bulk copy uses the native data types of the data to be copied.
• -S specifies the SQL Server DB instance to export from.
• -U specifies the user name to use when connecting to the SQL Server DB instance.
• -P specifies the password for the user specified by -U.
Repeat this step until you have data files for all of the tables you want to export.
10. Prepare your target DB instance for bulk import of data by following the instructions at Basic
guidelines for bulk importing data in the SQL Server documentation.
11. Decide on a bulk import method to use after considering performance and other concerns discussed
in About bulk import and bulk export operations in the SQL Server documentation.
12. Bulk import the data from the data files that you created using the bcp utility. To do so, follow the
instructions at either Import and export bulk data by using the bcp utility or Import bulk data by
using BULK INSERT or OPENROWSET(BULK...) in the SQL Server documentation, depending on what
you decided in step 11.
Working with SQL Server read replicas
In this section, you can find specific information about working with read replicas on Amazon RDS for
SQL Server.
Topics
• Configuring read replicas for SQL Server (p. 1446)
• Read replica limitations with SQL Server (p. 1446)
• Option considerations for RDS for SQL Server replicas (p. 1447)
• Synchronizing database users and objects with a SQL Server read replica (p. 1448)
• Troubleshooting a SQL Server read replica problem (p. 1449)
Creating a SQL Server read replica doesn't require an outage for the primary DB instance. Amazon RDS
sets the necessary parameters and permissions for the source DB instance and the read replica without
any service interruption. A snapshot is taken of the source DB instance, and this snapshot becomes the
read replica. No outage occurs when you delete a read replica.
You can create up to 15 read replicas from one source DB instance. For replication to operate effectively,
we recommend that you configure each read replica with the same amount of compute and storage
resources as the source DB instance. If you scale the source DB instance, also scale the read replicas.
The SQL Server DB engine version of the source DB instance and all of its read replicas must be the same.
Amazon RDS upgrades the read replicas immediately after upgrading the source DB instance, regardless
of a replica's maintenance window. For more information about upgrading the DB engine version, see
Upgrading the Microsoft SQL Server DB engine (p. 1414).
For a read replica to receive and apply changes from the source, it should have sufficient compute
and storage resources. If a read replica reaches compute, network, or storage resource capacity, the
read replica stops receiving or applying changes from its source. You can modify the storage and CPU
resources of a read replica independently from its source and other read replicas.
• Read replicas are only available on the SQL Server Enterprise Edition (EE) engine.
• Read replicas are available for SQL Server versions 2016–2019.
• The source DB instance to be replicated must be a Multi-AZ deployment with Always On AGs.
• You can create up to 15 read replicas from one source DB instance.
• Read replicas are only available for DB instances running on DB instance classes with four or more
vCPUs.
Option considerations
• If your SQL Server replica is in the same Region as its source DB instance, make sure that it belongs
to the same option group as the source DB instance. Modifications to the source option group or
source option group membership propagate to replicas. These changes are applied to the replicas
immediately after they are applied to the source DB instance, regardless of the replica's maintenance
window.
For more information about option groups, see Working with option groups (p. 331).
• When you create a SQL Server cross-Region replica, Amazon RDS creates a dedicated option group for
it.
You can't remove a SQL Server cross-Region replica from its dedicated option group. No other DB
instances can use the dedicated option group for a SQL Server cross-Region replica.
The following options are replicated options. To add a replicated option to a SQL Server cross-Region
replica, add the option to the source DB instance's option group. The option is then also installed on all
of the source DB instance's replicas.
• TDE
The following options are non-replicated options. You can add or remove non-replicated options from
a dedicated option group.
• MSDTC
• SQLSERVER_AUDIT
• To enable the SQLSERVER_AUDIT option on a cross-Region read replica, add the SQLSERVER_AUDIT
option to both the dedicated option group of the cross-Region read replica and the source instance's
option group. Adding the SQLSERVER_AUDIT option on the source instance of a SQL Server
cross-Region read replica lets you create server-level audit objects and server-level audit
specifications on each of the cross-Region read replicas of the source instance. To allow the cross-
Region read replicas to upload the completed audit logs to an Amazon S3 bucket, add the
SQLSERVER_AUDIT option to the dedicated option group and configure the option settings. The
Amazon S3 bucket that you use as a target for audit files must be in the same Region as the cross-
Region read replica. You can modify the option settings of the SQLSERVER_AUDIT option for each
cross-Region read replica independently, so each replica can access an Amazon S3 bucket in its own
Region.
The following options are not supported for cross-Region read replicas.
• SSRS
• SSAS
• SSIS
The following options are partially supported for cross-Region read replicas.
• SQLSERVER_BACKUP_RESTORE
• The source DB instance of a SQL Server cross-Region replica can have the
SQLSERVER_BACKUP_RESTORE option, but you can't perform native restores on the
source DB instance until you delete all of its cross-Region replicas. Any existing native restore
tasks are canceled during the creation of a cross-Region replica. You can't add the
SQLSERVER_BACKUP_RESTORE option to a dedicated option group.
For more information on native backup and restore, see Importing and exporting SQL Server
databases using native backup and restore (p. 1419).
When you promote a SQL Server cross-Region read replica, the promoted replica behaves the same as
other SQL Server DB instances, including the management of its options. For more information about
option groups, see Working with option groups (p. 331).
Database users are automatically replicated from the primary DB instance to the read replica. Because
the read replica database is in read-only mode, the security identifier (SID) of a database user can't be
updated in the database. Therefore, when you create SQL logins in the read replica, it's essential that
the SID of each login matches the SID of the corresponding SQL login in the primary DB instance. If
you don't synchronize the SIDs of the SQL logins, the logins can't access the database in the read
replica. Windows Active Directory (AD) authenticated logins don't experience this issue, because
SQL Server obtains the SID from Active Directory.
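One way to check for a SID mismatch, sketched here with TestLogin1 as a placeholder login name, is to compare the login's SID on the primary DB instance and on the read replica:

```sql
-- Run on both the primary DB instance and the read replica;
-- the two sid values must match for the login to map to the database user.
SELECT name, sid
FROM sys.server_principals
WHERE type = 'S'            -- SQL Server authenticated logins only
  AND name = 'TestLogin1';  -- placeholder login name
```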
To synchronize a SQL login from the primary DB instance to the read replica
USE [master]
GO
CREATE LOGIN TestLogin1
WITH PASSWORD = 'REPLACE WITH PASSWORD';
Note
Specify a password other than the prompt shown here as a security best practice.
3. Create a new database user for the SQL login in the database.
4. Check the SID of the newly created SQL login in primary DB instance.
CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=[REPLACE WITH sid
FROM STEP #4];
Alternately, if you have access to the read replica database, you can fix the orphaned user as
follows:
CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=[REPLACE WITH sid
FROM STEP #2];
Example:
Note
Specify a password other than the prompt shown here as a security best practice.
If replication lag is too long, you can use the following query to get information about the lag.
SELECT AR.replica_server_name
, DB_NAME (ARS.database_id) 'database_name'
, AR.availability_mode_desc
, ARS.synchronization_health_desc
, ARS.last_hardened_lsn
, ARS.last_redone_lsn
, ARS.secondary_lag_seconds
FROM sys.dm_hadr_database_replica_states ARS
INNER JOIN sys.availability_replicas AR ON ARS.replica_id = AR.replica_id
--WHERE DB_NAME(ARS.database_id) = 'database_name'
ORDER BY AR.replica_server_name;
Multi-AZ for RDS for SQL Server
Amazon RDS supports Multi-AZ deployments for Microsoft SQL Server by using either SQL Server
Database Mirroring (DBM) or Always On Availability Groups (AGs). Amazon RDS monitors and maintains
the health of your Multi-AZ deployment. If problems occur, RDS automatically repairs unhealthy DB
instances, reestablishes synchronization, and initiates failovers. Failover only occurs if the standby and
primary are fully in sync. You don't have to manage anything.
When you set up SQL Server Multi-AZ, RDS automatically configures all databases on the instance to
use DBM or AGs. Amazon RDS handles the primary, the witness, and the secondary DB instance for you.
Because configuration is automatic, RDS selects DBM or Always On AGs based on the version of SQL
Server that you deploy.
Amazon RDS supports Multi-AZ with Always On AGs for the following SQL Server versions and editions:
Amazon RDS supports Multi-AZ with DBM for the following SQL Server versions and editions, except for
the versions noted previously:
You can use the following SQL query to determine whether your SQL Server DB instance is Single-AZ,
Multi-AZ with DBM, or Multi-AZ with Always On AGs.
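The guide's exact query isn't reproduced here; as a sketch only, using system catalog views directly (which may differ from the query this guide refers to), you can check for mirroring and availability-group state like this:

```sql
-- Sketch: report the high-availability mode of the instance.
SELECT CASE
         WHEN EXISTS (SELECT 1 FROM sys.database_mirroring
                      WHERE mirroring_guid IS NOT NULL)
           THEN 'Multi-AZ (Mirroring)'
         WHEN EXISTS (SELECT 1 FROM sys.dm_hadr_database_replica_states)
           THEN 'Multi-AZ (AlwaysOn)'
         ELSE 'Single-AZ'
       END AS [high_availability];
```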
The output resembles the following for a Multi-AZ instance that uses Always On AGs.
high_availability
Multi-AZ (AlwaysOn)
Adding Multi-AZ to a SQL Server DB instance
When you modify an existing SQL Server DB instance using the console, you can add Multi-AZ with DBM
or AGs by choosing Yes (Mirroring / Always On) from Multi-AZ deployment on the Modify DB instance
page. For more information, see Modifying an Amazon RDS DB instance (p. 401).
Note
If your DB instance is running Database Mirroring (DBM), not Always On Availability Groups
(AGs), you might need to disable in-memory optimization before you add Multi-AZ. Specifically,
disable in-memory optimization first if your DB instance runs SQL Server 2014, 2016, or 2017
Enterprise Edition with DBM and has in-memory optimization enabled.
If your DB instance is running AGs, this step isn't required.
If you need a higher limit, request an increase by contacting AWS Support. Open the AWS Support
Center page, sign in if necessary, and choose Create case. Choose Service limit increase. Complete
and submit the form.
Limitations, notes, and recommendations
The following are some notes about working with Multi-AZ deployments on RDS for SQL Server DB
instances:
• Amazon RDS exposes the Always On AGs availability group listener endpoint. The endpoint is visible
in the console, and is returned by the DescribeDBInstances API operation as an entry in the
endpoints field.
• Amazon RDS supports availability group multisubnet failovers.
• To use SQL Server Multi-AZ with a SQL Server DB instance in a virtual private cloud (VPC), first create a
DB subnet group that has subnets in at least two distinct Availability Zones. Then assign the DB subnet
group to the primary replica of the SQL Server DB instance.
• When a DB instance is modified to be a Multi-AZ deployment, during the modification it has a status
of modifying. Amazon RDS creates the standby, and makes a backup of the primary DB instance. After
the process is complete, the status of the primary DB instance becomes available.
• Multi-AZ deployments maintain all databases on the same node. If a database on the primary host
fails over, all your SQL Server databases fail over as one atomic unit to your standby host. Amazon RDS
provisions a new healthy host, and replaces the unhealthy host.
• Multi-AZ with DBM or AGs supports a single standby replica.
• Users, logins, and permissions are automatically replicated for you on the secondary. You don't need to
recreate them. User-defined server roles are only replicated in DB instances that use Always On AGs for
Multi-AZ deployments.
• In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the secondary
host when the job replication feature is turned on. For more information, see Turning on SQL Server
Agent job replication (p. 1617).
• You might observe elevated latencies compared to a standard DB instance deployment (in a single
Availability Zone) because of the synchronous data replication.
• Failover times are affected by the time it takes to complete the recovery process. Large transactions
increase the failover time.
• In SQL Server Multi-AZ deployments, reboot with failover reboots only the primary DB instance. After
the failover, the primary DB instance becomes the new secondary DB instance. Parameters might not
be updated for Multi-AZ instances. For reboot without failover, both the primary and secondary DB
instances reboot, and parameters are updated after the reboot. If the DB instance is unresponsive, we
recommend reboot without failover.
The following are some recommendations for working with Multi-AZ deployments on RDS for Microsoft
SQL Server DB instances:
• Don't use the Set Partner Off command when working with Multi-AZ instances. For example, don't
do the following.
--Don't do this
ALTER DATABASE db1 SET PARTNER off
• Don't set the recovery mode to simple. For example, don't do the following.
--Don't do this
ALTER DATABASE db1 SET RECOVERY simple
• Don't use the DEFAULT_DATABASE parameter when creating new logins on Multi-AZ DB instances,
because these settings can't be applied to the standby mirror. For example, don't do the following.
--Don't do this
CREATE LOGIN [test_dba] WITH PASSWORD=foo, DEFAULT_DATABASE=[db2]
--Don't do this
ALTER LOGIN [test_dba] SET DEFAULT_DATABASE=[db3]
Migrating to Always On AGs
You can also view the Availability Zone of the secondary using the AWS CLI command describe-db-
instances or RDS API operation DescribeDBInstances. The output shows the secondary AZ where
the standby mirror is located.
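For example, assuming a DB instance named mydbinstance (a placeholder identifier), the following AWS CLI command returns just the secondary Availability Zone:

```sh
# Query only the SecondaryAvailabilityZone field of the instance description.
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].SecondaryAvailabilityZone' \
    --output text
```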
To migrate from Database Mirroring (DBM) to AGs, first check your version. If you are using a DB instance
with a version prior to Enterprise Edition 13.00.5216.0, modify the instance to patch it to 13.00.5216.0
or later. If you are using a DB instance with a version prior to Enterprise Edition 14.00.3049.1, modify the
instance to patch it to 14.00.3049.1 or later.
If you want to upgrade a mirrored DB instance to use AGs, run the upgrade first, modify the instance to
remove Multi-AZ, and then modify it again to add Multi-AZ. This converts your instance to use Always On
AGs.
Additional features for SQL Server
Topics
• Using SSL with a Microsoft SQL Server DB instance (p. 1456)
• Configuring security protocols and ciphers (p. 1459)
• Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464)
• Using Database Mail on Amazon RDS for SQL Server (p. 1478)
• Instance store support for the tempdb database on Amazon RDS for SQL Server (p. 1489)
• Using extended events with Amazon RDS for Microsoft SQL Server (p. 1491)
• Access to transaction log backups with RDS for SQL Server (p. 1494)
Using SSL with a SQL Server DB instance
When you create a SQL Server DB instance, Amazon RDS creates an SSL certificate for it. The SSL
certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard
against spoofing attacks.
There are two ways to use SSL to connect to your SQL Server DB instance:
• Force SSL for all connections — this happens transparently to the client, and the client doesn't have to
do any work to use SSL.
• Encrypt specific connections — this sets up an SSL connection from a specific client computer, and you
must do work on the client to encrypt connections.
For information about Transport Layer Security (TLS) support for SQL Server, see TLS 1.2 support for
Microsoft SQL Server.
If you want to force SSL, use the rds.force_ssl parameter. By default, the rds.force_ssl
parameter is set to 0 (off). Set the rds.force_ssl parameter to 1 (on) to force connections to use
SSL. The rds.force_ssl parameter is static, so after you change the value, you must reboot your DB
instance for the change to take effect.
a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the top right corner of the Amazon RDS console, choose the AWS Region of your DB instance.
c. In the navigation pane, choose Databases, and then choose the name of your DB instance to
show its details.
d. Choose the Configuration tab. Find the Parameter group in the section.
2. If necessary, create a new parameter group. If your DB instance uses the default parameter group,
you must create a new parameter group. If your DB instance uses a nondefault parameter group, you
can choose to edit the existing parameter group or to create a new parameter group. If you edit an
existing parameter group, the change affects all DB instances that use that parameter group.
To create a new parameter group, follow the instructions in Creating a DB parameter group (p. 350).
3. Edit your new or existing parameter group to set the rds.force_ssl parameter to 1 (on). To
edit the parameter group, follow the instructions in Modifying parameters in a DB parameter
group (p. 352).
4. If you created a new parameter group, modify your DB instance to attach the new parameter group.
Modify the DB Parameter Group setting of the DB instance. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
5. Reboot your DB instance. For more information, see Rebooting a DB instance (p. 436).
1456
Amazon Relational Database Service User Guide
Using SSL with a SQL Server DB instance
To encrypt connections from a specific client, you need a certificate on that client. Download the
certificate to your client computer. You can download a root certificate that works for all AWS Regions,
a certificate bundle that contains both the old and new root certificates, or Region-specific
intermediate certificates. For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).
After you have downloaded the appropriate certificate, import it into your Microsoft Windows
operating system by using the following procedure.
1. On the Start menu, type Run in the search box and press Enter.
2. In the Open box, type MMC and then choose OK.
3. In the MMC console, on the File menu, choose Add/Remove Snap-in.
4. In the Add or Remove Snap-ins dialog box, for Available snap-ins, select Certificates, and then
choose Add.
5. In the Certificates snap-in dialog box, choose Computer account, and then choose Next.
6. In the Select computer dialog box, choose Finish.
7. In the Add or Remove Snap-ins dialog box, choose OK.
8. In the MMC console, expand Certificates, open the context (right-click) menu for Trusted Root
Certification Authorities, choose All Tasks, and then choose Import.
9. On the first page of the Certificate Import Wizard, choose Next.
10. On the second page of the Certificate Import Wizard, choose Browse. In the browse window, change
the file type to All files (*.*) because .pem is not a standard certificate extension. Locate the .pem
file that you downloaded previously.
11. Choose Open to select the certificate file, and then choose Next.
12. On the third page of the Certificate Import Wizard, choose Next.
13. On the fourth page of the Certificate Import Wizard, choose Finish. A dialog box appears indicating
that the import was successful.
14. In the MMC console, expand Certificates, expand Trusted Root Certification Authorities, and then
choose Certificates. Locate the certificate to confirm that it exists.
For SQL Server Management Studio, use the following procedure. For more information about SQL
Server Management Studio, see Use SQL Server Management Studio in the Microsoft documentation.
1. Append encrypt=true to your connection string. This string might be available as an option, or as
a property on the connection page in GUI tools.
Note
To enable SSL encryption for clients that connect using JDBC, you might need to add the
Amazon RDS SQL certificate to the Java CA certificate (cacerts) store. You can do this by
using the keytool utility.
2. Confirm that your connection is encrypted by running the following query. Verify that the query
returns true for encrypt_option.
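The verification query isn't reproduced above. A common form, which reads the encryption flag for the current session from the standard sys.dm_exec_connections dynamic management view, is the following sketch:

```sql
-- Shows TRUE in encrypt_option when the current connection uses SSL/TLS
SELECT ENCRYPT_OPTION
FROM SYS.DM_EXEC_CONNECTIONS
WHERE SESSION_ID = @@SPID;
```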
Configuring security protocols and ciphers
For parameters other than rds.fips, the value of default means that the operating system default
value is used, whether it is enabled or disabled.
Note
You can't disable TLS 1.2, because Amazon RDS uses it internally.
rds.diffie-hellman-min-key-bit-length (default, 1024, 2048, 4096): Minimum bit length for
Diffie-Hellman keys.
Note
For more information on the default values for SQL Server security protocols and ciphers,
see Protocols in TLS/SSL (Schannel SSP) and Cipher Suites in TLS/SSL (Schannel SSP) in the
Microsoft documentation.
For more information on viewing and setting these values in the Windows Registry, see
Transport Layer Security (TLS) best practices with the .NET Framework in the Microsoft
documentation.
Use the following process to configure the security protocols and ciphers:
1. Create a new DB parameter group.
2. Modify the parameters in the new group.
3. Associate the DB parameter group with your DB instance.
For more information on DB parameter groups, see Working with parameter groups (p. 347).
Console
The following procedure creates a parameter group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:
CLI
The following procedure creates a parameter group for SQL Server Standard Edition 2016.
Example
For Windows:
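The command itself isn't shown here. A likely form, assuming the parameter group name sqlserver-ciphers-se-13 used later in this procedure and the sqlserver-se-13.0 parameter group family for SQL Server Standard Edition 2016:

```shell
rem Family name sqlserver-se-13.0 is an assumption based on the engine edition
aws rds create-db-parameter-group ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --db-parameter-group-family "sqlserver-se-13.0" ^
    --description "TLS and cipher parameter group for SQL Server SE 2016"
```

For Linux, macOS, or Unix, replace the carets (^) with backslashes (\).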
Console
The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016. This example turns off TLS version 1.0.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as sqlserver-ciphers-se-13.
4. Under Parameters, filter the parameter list for rds.
5. Choose Edit parameters.
6. Choose rds.tls10.
7. For Values, choose disabled.
8. Choose Save changes.
CLI
The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016. This example turns off TLS version 1.0.
Example
For Windows:
aws rds modify-db-parameter-group ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --parameters
 "ParameterName='rds.tls10',ParameterValue='disabled',ApplyMethod=pending-reboot"
Console
You can associate the parameter group with a new or existing DB instance:
• For a new DB instance, associate it when you launch the instance. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate it by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
CLI
You can associate the parameter group with a new or existing DB instance.
• Specify the same DB engine type and major version as you used when creating the parameter group.
Example
For Windows:
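The original command isn't reproduced here. A sketch of creating an instance with the parameter group attached might look like the following; the instance identifier, instance class, storage size, and credentials are placeholder assumptions:

```shell
rem All identifier, class, storage, and credential values are placeholders
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-instance-class db.m5.large ^
    --engine sqlserver-se ^
    --allocated-storage 100 ^
    --master-username admin ^
    --master-user-password mypassword ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --license-model license-included
```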
Note
Specify a password other than the prompt shown here as a security best practice.
Example
For Windows:
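For an existing instance, the modification would likely take this form; the use of --apply-immediately is an assumption about the intended timing:

```shell
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --apply-immediately
```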
Amazon S3 integration
• Files in the D:\S3 folder are deleted on the standby replica after a failover on Multi-AZ instances. For
more information, see Multi-AZ limitations for S3 integration (p. 1476).
• The DB instance and the S3 bucket must be in the same AWS Region.
• If you run more than one S3 integration task at a time, the tasks run sequentially, not in parallel.
Note
S3 integration tasks share the same queue as native backup and restore tasks. At maximum,
you can have only two tasks in progress at any time in this queue. Therefore, two running
native backup and restore tasks will block any S3 integration tasks.
• You must re-enable the S3 integration feature on restored instances. S3 integration isn't propagated
from the source instance to the restored instance. Files in D:\S3 are deleted on a restored instance.
• Downloading to the DB instance is limited to 100 files. In other words, there can't be more than 100
files in D:\S3\.
• Only files without file extensions or with the following file extensions are supported for
download: .abf, .asdatabase, .bcp, .configsettings, .csv, .dat, .deploymentoptions, .deploymenttargets,
.fmt, .info, .ispac, .lst, .tbl, .txt, .xml, and .xmla.
• The S3 bucket must have the same owner as the related AWS Identity and Access Management (IAM)
role. Therefore, cross-account S3 integration isn't supported.
• The S3 bucket can't be open to the public.
• The file size for uploads from RDS to S3 is limited to 50 GB per file.
• The file size for downloads from S3 to RDS is limited to the maximum supported by S3.
Topics
• Prerequisites for integrating RDS for SQL Server with S3 (p. 1465)
• Enabling RDS for SQL Server integration with S3 (p. 1470)
• Transferring files between RDS for SQL Server and Amazon S3 (p. 1471)
• Listing files on the RDS DB instance (p. 1473)
• Deleting files on the RDS DB instance (p. 1473)
• Monitoring the status of a file transfer task (p. 1474)
• Canceling a task (p. 1476)
• Multi-AZ limitations for S3 integration (p. 1476)
• Disabling RDS for SQL Server integration with S3 (p. 1476)
For more information on working with files in Amazon S3, see Getting started with Amazon Simple
Storage Service.
Console
• ListAllMyBuckets – required
• ListBucket – required
• GetBucketACL – required
• GetBucketLocation – required
• GetObject – required for downloading files from S3 to D:\S3\
• PutObject – required for uploading files from D:\S3\ to S3
• ListMultipartUploadParts – required for uploading files from D:\S3\ to S3
• AbortMultipartUpload – required for uploading files from D:\S3\ to S3
5. For Resources, the options that display depend on which actions you choose in the previous step.
You might see options for bucket, object, or both. For each of these, add the appropriate Amazon
Resource Name (ARN).
For bucket, add the ARN for the bucket that you want to use. For example, if your bucket is named
example-bucket, set the ARN to arn:aws:s3:::example-bucket.
For object, enter the ARN for the bucket and then choose one of the following:
• To grant access to all files in the specified bucket, choose Any for both Bucket name and Object
name.
• To grant access to specific files or folders in the bucket, provide ARNs for the specific buckets and
objects that you want SQL Server to access.
6. Follow the instructions in the console until you finish creating the policy.
The preceding is an abbreviated guide to setting up a policy. For more detailed instructions on
creating IAM policies, see Creating IAM policies in the IAM User Guide.
To create an IAM role that uses the IAM policy from the previous procedure
• AWS service
• RDS
• RDS – Add Role to Database
4. Follow the instructions in the console until you finish creating the role.
The preceding is an abbreviated guide to setting up a role. If you want more detailed instructions on
creating roles, see IAM roles in the IAM User Guide.
AWS CLI
To grant Amazon RDS access to an Amazon S3 bucket, use the following process:
For more information, see Creating a role to delegate permissions to an IAM user in the IAM User
Guide.
3. Attach the IAM policy that you created to the IAM role that you created.
Include the appropriate actions to grant the access your DB instance requires:
• ListAllMyBuckets – required
• ListBucket – required
• GetBucketACL – required
• GetBucketLocation – required
• GetObject – required for downloading files from S3 to D:\S3\
• PutObject – required for uploading files from D:\S3\ to S3
• ListMultipartUploadParts – required for uploading files from D:\S3\ to S3
• AbortMultipartUpload – required for uploading files from D:\S3\ to S3
1. The following AWS CLI command creates an IAM policy named rds-s3-integration-policy
with these options. It grants access to a bucket named bucket_name.
Example
aws iam create-policy \
    --policy-name rds-s3-integration-policy \
    --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Effect": "Allow",
        "Action": "s3:ListAllMyBuckets",
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetBucketACL", "s3:GetBucketLocation"],
        "Resource": "arn:aws:s3:::bucket_name"
    },
    {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"],
        "Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
    }
    ]
}'
For Windows:
Make sure to change the line endings to the ones supported by your interface (^ instead of \). Also,
in Windows, you must escape all double quotes with a \. To avoid the need to escape the quotes in
the JSON, you can save it to a file instead and pass that in as a parameter.
First, create the policy.json file with the following permission policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketACL",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
}
]
}
2. After the policy is created, note the Amazon Resource Name (ARN) of the policy. You need the ARN
for a later step.
• The following AWS CLI command creates the rds-s3-integration-role IAM role for this
purpose.
Example
For Windows:
Make sure to change the line endings to the ones supported by your interface (^ instead of \). Also,
in Windows, you must escape all double quotes with a \. To avoid the need to escape the quotes in
the JSON, you can save it to a file instead and pass that in as a parameter.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
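One way to create the role, assuming the trust policy above is saved to a file named trust-policy.json (the file name is an assumption):

```shell
aws iam create-role ^
    --role-name rds-s3-integration-role ^
    --assume-role-policy-document file://trust-policy.json
```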
Example of using the global condition context key to create the IAM role
We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys
in resource-based policies to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.
You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn
value must use the same account ID when used in the same policy statement.
In the policy, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For S3 integration, make sure to
include the DB instance ARNs, as shown in the following example.
aws iam create-role \
    --role-name rds-s3-integration-role \
    --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["rds.amazonaws.com"]},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {
            "aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
        }}
    }]
}'
For Windows:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
}
}
}
]
}
• The following AWS CLI command attaches the policy to the role named rds-s3-integration-
role. Replace your-policy-arn with the policy ARN that you noted in a previous step.
Example
For Windows:
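The attach command would likely take this form; replace your-policy-arn with the policy ARN that you noted earlier:

```shell
aws iam attach-role-policy ^
    --role-name rds-s3-integration-role ^
    --policy-arn your-policy-arn
```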
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the RDS for SQL Server DB instance name to display its details.
3. On the Connectivity & security tab, in the Manage IAM roles section, choose the IAM role to add for
Add IAM roles to this instance.
4. For Feature, choose S3_INTEGRATION.
AWS CLI
To add the IAM role to the RDS for SQL Server DB instance
• The following AWS CLI command adds your IAM role to an RDS for SQL Server DB instance named
mydbinstance.
Example
For Windows:
Replace your-role-arn with the role ARN that you noted in a previous step. S3_INTEGRATION
must be specified for the --feature-name option.
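Based on the parameters described above, the command takes a form like this:

```shell
aws rds add-role-to-db-instance ^
    --db-instance-identifier mydbinstance ^
    --feature-name S3_INTEGRATION ^
    --role-arn your-role-arn
```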
The files that you download from and upload to S3 are stored in the D:\S3 folder. This is the only folder
that you can use to access your files. You can organize your files into subfolders, which are created for
you when you include the destination folder during download.
Some of the stored procedures require that you provide an Amazon Resource Name (ARN) to your S3
bucket and file. The format for your ARN is arn:aws:s3:::bucket_name/file_name. Amazon S3
doesn't require an account number or AWS Region in ARNs.
S3 integration tasks run sequentially and share the same queue as native backup and restore tasks.
At maximum, you can have only two tasks in progress at any time in this queue. It can take up to five
minutes for the task to begin processing.
@overwrite_file: 0 = don't overwrite, 1 = overwrite
You can download files without a file extension and files with the following file
extensions: .bcp, .csv, .dat, .fmt, .info, .lst, .tbl, .txt, and .xml.
Note
Files with the .ispac file extension are supported for download when SQL Server Integration
Services is enabled. For more information on enabling SSIS, see SQL Server Integration
Services (p. 1562).
Files with the following file extensions are supported for download when SQL Server Analysis
Services is enabled: .abf, .asdatabase, .configsettings, .deploymentoptions, .deploymenttargets,
and .xmla. For more information on enabling SSAS, see SQL Server Analysis Services (p. 1543).
The following example shows the stored procedure to download files from S3.
exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/bulk_data.csv',
@rds_file_path='D:\S3\seed_data\data.csv',
@overwrite_file=1;
@overwrite_file: 0 = don't overwrite, 1 = overwrite
The following example uploads the file named data.csv from the specified location in D:
\S3\seed_data\ to a file new_data.csv in the S3 bucket specified by the ARN.
exec msdb.dbo.rds_upload_to_s3
@rds_file_path='D:\S3\seed_data\data.csv',
@s3_arn_of_file='arn:aws:s3:::bucket_name/new_data.csv',
@overwrite_file=1;
If the file previously existed in S3, it's overwritten because the @overwrite_file parameter is set to 1.
exec msdb.dbo.rds_gather_file_details;
The stored procedure returns the ID of the task. Like other tasks, this stored procedure runs
asynchronously. As soon as the status of the task is SUCCESS, you can use the task ID in the
rds_fn_list_file_details function to list the existing files and directories in D:\S3\, as shown
following.
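The listing query would look like the following; 42 is a placeholder for your actual task ID:

```sql
-- Replace 42 with the task ID returned by rds_gather_file_details
SELECT * FROM msdb.dbo.rds_fn_list_file_details(42);
```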
@force_delete: 1 = delete a directory
To delete a directory, the @rds_file_path must end with a backslash (\) and @force_delete must be
set to 1.
exec msdb.dbo.rds_delete_from_filesystem
@rds_file_path='D:\S3\delete_me.txt';
exec msdb.dbo.rds_delete_from_filesystem
@rds_file_path='D:\S3\example_folder\',
@force_delete=1;
To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.
To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.
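The two calls described above would look like this; 42 is a placeholder task ID:

```sql
-- All tasks
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL, 0);

-- A specific task
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL, 42);
```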
last_updated The date and time that the task status was last
updated.
created_at The date and time that the task was created.
Canceling a task
To cancel S3 integration tasks, use the msdb.dbo.rds_cancel_task stored procedure with the
task_id parameter. Delete and list tasks that are in progress can't be canceled. The following example
shows a request to cancel a task.
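Assuming a task ID of 42, the cancellation request looks like this:

```sql
exec msdb.dbo.rds_cancel_task @task_id = 42;
```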
To get an overview of all tasks and their task IDs, use the rds_fn_task_status function as described
in Monitoring the status of a file transfer task (p. 1474).
To determine the last failover time, you can use the msdb.dbo.rds_failover_time stored procedure.
For more information, see Determining the last failover time (p. 1612).
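The procedure takes no parameters, so the call is simply:

```sql
exec msdb.dbo.rds_failover_time;
```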
This example shows the output when there is no recent failover in the error logs. No failover has
happened since 2020-04-29 23:59:00.01.
Therefore, all files downloaded after that time that haven't been deleted using the
rds_delete_from_filesystem stored procedure are still accessible on the current host. Files
downloaded before that time might also be available.
errorlog_available_from recent_failover_time
This example shows the output when there is a failover in the error logs. The most recent failover was at
2020-05-05 18:57:51.89.
All files downloaded after that time that haven't been deleted using the
rds_delete_from_filesystem stored procedure are still accessible on the current host.
errorlog_available_from recent_failover_time
Note
To remove an IAM role from a DB instance, the status of the DB instance must be available.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the RDS for SQL Server DB instance name to display its details.
3. On the Connectivity & security tab, in the Manage IAM roles section, choose the IAM role to
remove.
4. Choose Delete.
AWS CLI
To remove the IAM role from the RDS for SQL Server DB instance
• The following AWS CLI command removes the IAM role from an RDS for SQL Server DB instance
named mydbinstance.
Example
For Windows:
Replace your-role-arn with the appropriate IAM role ARN for the --feature-name option.
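Based on the parameters described, the command likely takes this form:

```shell
aws rds remove-role-from-db-instance ^
    --db-instance-identifier mydbinstance ^
    --feature-name S3_INTEGRATION ^
    --role-arn your-role-arn
```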
Using Database Mail
• Configuration and security objects – These objects create profiles and accounts, and are stored in the
msdb database.
• Messaging objects – These objects include the sp_send_dbmail stored procedure used to send
messages, and data structures that hold information about messages. They're stored in the msdb
database.
• Logging and auditing objects – Database Mail writes logging information to the msdb database and
the Microsoft Windows application event log.
• Database Mail executable – DatabaseMail.exe reads from a queue in the msdb database and sends
email messages.
RDS supports Database Mail for all SQL Server versions on the Web, Standard, and Enterprise Editions.
Limitations
The following limitations apply to using Database Mail on your SQL Server DB instance:
Console
The following example creates a parameter group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:
CLI
The following example creates a parameter group for SQL Server Standard Edition 2016.
Example
For Windows:
Console
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
CLI
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.
Example
For Windows:
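A sketch of the command, assuming a hypothetical parameter group named sqlserver-dbmail-se-13 and the database mail xps parameter that enables Database Mail on RDS:

```shell
rem The parameter group name is a placeholder; use the group you created earlier
aws rds modify-db-parameter-group ^
    --db-parameter-group-name sqlserver-dbmail-se-13 ^
    --parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"
```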
Console
You can associate the Database Mail parameter group with a new or existing DB instance.
• For a new DB instance, associate it when you launch the instance. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate it by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
CLI
You can associate the Database Mail parameter group with a new or existing DB instance.
• Specify the same DB engine type and major version as you used when creating the parameter group.
Example
For Windows:
Example
For Windows:
Note
To configure Database Mail, make sure that you have execute permission on the stored
procedures in the msdb database.
USE msdb
GO
EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'Notifications',
@description = 'Profile used for sending outgoing notifications using
Amazon SES.';
GO
• @email_address – An Amazon SES verified identity. For more information, see Verified identities in
Amazon SES.
• @mailserver_name – An Amazon SES SMTP endpoint. For more information, see Connecting to an
Amazon SES SMTP endpoint.
• @username – An Amazon SES SMTP user name. For more information, see Obtaining Amazon SES
SMTP credentials.
USE msdb
GO
EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'SES',
@email_address = 'verified-sender@example.com',
@mailserver_name = 'email-smtp.us-west-2.amazonaws.com',
@port = 587,
@enable_ssl = 1,
@username = 'your_SES_SMTP_username',
@password = 'your_SES_SMTP_password';
GO
Note
Specify credentials other than the prompts shown here as a security best practice.
USE msdb
GO
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'Notifications',
@account_name = 'SES',
@sequence_number = 1;
GO
USE msdb
GO
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'Notifications',
@principal_name = 'public',
@is_default = 1;
GO
Procedure/Function Description
rds_sysmail_delete_mailitems_sp Deletes email messages sent by all users from the Database Mail
internal tables.
Usage
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'profile_name',
@recipients = '[email protected][; recipient2; ... recipientn]',
@subject = 'subject',
@body = 'message_body',
[@body_format = 'HTML'],
[@file_attachments = 'file_path1; file_path2; ... file_pathn'],
[@query = 'SQL_query'],
[@attach_query_result_as_file = 0|1];
• @profile_name – The name of the Database Mail profile from which to send the message.
• @recipients – The semicolon-delimited list of email addresses to which to send the message.
• @subject – The subject of the message.
• @body – The body of the message. You can also use a declared variable as the body.
• @body_format – This parameter is used with a declared variable to send email in HTML format.
• @file_attachments – The semicolon-delimited list of message attachments. File paths must be
absolute paths.
• @query – A SQL query to run. The query results can be attached as a file or included in the body of the
message.
• @attach_query_result_as_file – Whether to attach the query result as a file. Set to 0 for no, 1
for yes. The default is 0.
Examples
The following examples demonstrate how to send email messages.
USE msdb
GO
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = '[email protected]',
@subject = 'Automated DBMail message - 1',
@body = 'Database Mail configuration was successful.';
GO
USE msdb
GO
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = '[email protected];[email protected]',
@subject = 'Automated DBMail message - 2',
@body = 'This is a message.';
GO
USE msdb
GO
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = '[email protected]',
@subject = 'Test SQL query',
@body = 'This is a SQL query test.',
@query = 'SELECT * FROM abc.dbo.test',
@attach_query_result_as_file = 1;
GO
USE msdb
GO
DECLARE @HTML_Body as NVARCHAR(500) = 'Hi, <h4> Heading </h4> </br> See the report. <b>
Regards </b>';
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = '[email protected]',
@subject = 'Test HTML message',
@body = @HTML_Body,
@body_format = 'HTML';
GO
Example of sending a message using a trigger when a specific event occurs in the database
USE AdventureWorks2017
GO
IF OBJECT_ID ('Production.iProductNotification', 'TR') IS NOT NULL
DROP TRIGGER Production.iProductNotification
GO
CREATE TRIGGER iProductNotification ON Production.Product
FOR INSERT
AS
DECLARE @ProductInformation NVARCHAR(255);
SELECT @ProductInformation = 'A new product, ' + Name + ', is now available.'
FROM INSERTED;
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = '[email protected]',
@subject = 'New product information',
@body = @ProductInformation;
GO
Deleting messages
You use the rds_sysmail_delete_mailitems_sp stored procedure to delete messages.
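For example, the following call deletes all messages sent before a given date; the cutoff date shown is a placeholder:

```sql
EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = '2020-07-12 00:00:00';
```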
Note
RDS automatically deletes mail table items when DBMail history data reaches 1 GB in size, with
a retention period of at least 24 hours.
If you want to keep mail items for a longer period, you can archive them. For more information,
see Create a SQL Server Agent job to archive Database Mail messages and event logs in the
Microsoft documentation.
Database Mail uses the Microsoft Windows security context of the current user to control access to files.
Users who log in with SQL Server Authentication can't attach files using the @file_attachments
parameter with the sp_send_dbmail stored procedure. Windows doesn't allow SQL Server to provide
credentials from a remote computer to another remote computer. Therefore, Database Mail can't attach
files from a network share when the command is run from a computer other than the computer running
SQL Server.
However, you can use SQL Server Agent jobs to attach files. For more information on SQL Server Agent,
see Using SQL Server Agent (p. 1617) and SQL Server Agent in the Microsoft documentation.
If you create a read replica from your Multi-AZ instance that has Database Mail configured, the replica
inherits the configuration, but without the password to the SMTP server. Update the Database Mail
account with the password.
Instance store support for tempdb
By placing tempdb data files and tempdb log files on the instance store, you can achieve lower read and
write latencies compared to standard storage based on Amazon EBS.
Note
SQL Server database files and database log files aren't placed on the instance store.
• db.m5d
• db.r5d
• Create a SQL Server DB instance using one of these instance types. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• Modify an existing SQL Server DB instance to use one of them. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
The instance store is available in all AWS Regions where one or more of these instance types are
supported. For more information on the db.m5d and db.r5d instance classes, see DB instance
classes (p. 11). For more information on the instance classes supported by Amazon RDS for SQL Server,
see DB instance class support for Microsoft SQL Server (p. 1358).
On instances with an instance store, RDS stores the tempdb data and log files in the T:\rdsdbdata
\DATA directory.
When tempdb has only one data file (tempdb.mdf) and one log file (templog.ldf), templog.ldf
starts at 8 MB by default and tempdb.mdf starts at 80% or more of the instance's storage capacity.
Twenty percent of the storage capacity or 200 GB, whichever is less, is kept free to start. Multiple
tempdb data files split the 80% disk space evenly, while log files always have an 8-MB initial size.
For example, if you modify your DB instance class from db.m5.2xlarge to db.m5d.2xlarge, the size
of tempdb data files increases from 8 MB each to 234 GB in total.
Note
Besides the tempdb data and log files on the instance store (T:\rdsdbdata\DATA), you can
still create extra tempdb data and log files on the data volume (D:\rdsdbdata\DATA). Those
files always have an 8 MB initial size.
Backup considerations
You might need to retain backups for long periods, incurring costs over time. The tempdb data and log
blocks can change very often depending on the workload. This can greatly increase the DB snapshot size.
When tempdb is on the instance store, snapshots don't include temporary files. This means that
snapshot sizes are smaller and consume less of the free backup allocation compared to EBS-only storage.
You can do one or more of the following when the instance store is full:
Using extended events
Extended events are turned on automatically for users with master user privileges in Amazon RDS for
SQL Server.
Topics
• Limitations and recommendations (p. 1491)
• Configuring extended events on RDS for SQL Server (p. 1491)
• Considerations for Multi-AZ deployments (p. 1492)
• Querying extended event files (p. 1493)
Limitations and recommendations
• Extended events are supported only for the Enterprise and Standard Editions.
• You can't alter default extended event sessions.
• Make sure to set the session memory partition mode to NONE.
• Session event retention mode can be either ALLOW_SINGLE_EVENT_LOSS or
ALLOW_MULTIPLE_EVENT_LOSS.
• Event Tracing for Windows (ETW) targets aren't supported.
• Make sure that file targets are in the D:\rdsdbdata\log directory.
• For pair matching targets, set the respond_to_memory_pressure property to 1.
• Ring buffer target memory can't be greater than 4 MB.
• The following actions aren't supported:
• debug_break
• create_dump_all_threads
• create_dump_single_thread
• The rpc_completed event is supported on the following versions and later: 15.0.4083.2, 14.0.3370.1,
13.0.5865.1, 12.0.6433.1, 11.0.7507.2.
Note
Setting xe_file_retention to zero causes .xel files to be removed automatically after the
lock on these files is released by SQL Server. The lock is released whenever an .xel file reaches
the size limit set in xe_file_target_size.
You can use the rdsadmin.dbo.rds_show_configuration stored procedure to show the current
values of these parameters. For example, use the following SQL statement to view the current setting of
xe_session_max_memory.
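The call might look like the following sketch; it assumes rds_show_configuration accepts the configuration name as its argument, as it does for other RDS for SQL Server settings:

```sql
-- Show the current value of the xe_session_max_memory parameter.
exec rdsadmin.dbo.rds_show_configuration 'xe_session_max_memory';
```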
You can use the rdsadmin.dbo.rds_set_configuration stored procedure to modify them. For
example, use the following SQL statement to set xe_session_max_memory to 4 MB.
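A corresponding call might look like the following sketch; it assumes the name and value are passed as the two arguments, and that the value is expressed in megabytes:

```sql
-- Set xe_session_max_memory to 4 MB (value assumed to be in megabytes).
exec rdsadmin.dbo.rds_set_configuration 'xe_session_max_memory', 4;
```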
Considerations for Multi-AZ deployments
You can also use a SQL Server Agent job to track the standby replica and start the sessions if the standby
becomes the primary. For example, use the following query in your SQL Server Agent job step to restart
event sessions on a primary DB instance.
BEGIN
  IF (DATABASEPROPERTYEX('rdsadmin','Updateability')='READ_WRITE'
      AND DATABASEPROPERTYEX('rdsadmin','status')='ONLINE'
      AND (DATABASEPROPERTYEX('rdsadmin','Collation') IS NOT NULL OR
           DATABASEPROPERTYEX('rdsadmin','IsAutoClose')=1)
     )
  BEGIN
    IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe1')
      ALTER EVENT SESSION xe1 ON SERVER STATE=START
    IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe2')
      ALTER EVENT SESSION xe2 ON SERVER STATE=START
  END
END
This query restarts the event sessions xe1 and xe2 on a primary DB instance if these sessions are in a
stopped state. You can also add a schedule with a convenient interval to this query.
Querying extended event files
Extended event file targets can only write files to the D:\rdsdbdata\log directory on RDS for SQL Server.
As an example, use the following SQL query to list the contents of all files of extended event sessions
whose names start with xe.
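One way to write such a query is with the built-in sys.fn_xe_file_target_read_file function, pointing it at the D:\rdsdbdata\log directory with a wildcard. The xe* pattern below assumes your session file targets were named to start with xe:

```sql
-- Read all events from extended event session files whose names start with "xe".
SELECT *
FROM sys.fn_xe_file_target_read_file('d:\rdsdbdata\log\xe*.xel', NULL, NULL, NULL);
```

Each returned row includes the file name, offset, and the event payload as XML in the event_data column.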
Access to transaction log backups
Access to transaction log backups provides the following capabilities and benefits:
• List and view the metadata of available transaction log backups for a database on an RDS for SQL
Server DB instance.
• Copy available transaction log backups from RDS for SQL Server to a target Amazon S3 bucket.
• Perform point-in-time restores of databases without the need to restore an entire DB instance. For
more information on restoring a DB instance to a point in time, see Restoring a DB instance to a
specified time (p. 660).
Requirements
The following requirements must be met before enabling access to transaction log backups:
• Automated backups must be enabled on the DB instance and the backup retention must be set to a
value of one or more days. For more information on enabling automated backups and configuring a
retention policy, see Enabling automated backups (p. 593).
• An Amazon S3 bucket must exist in the same account and Region as the source DB instance. Before
enabling access to transaction log backups, choose an existing Amazon S3 bucket or create a new
bucket to use for your transaction log backup files.
• An Amazon S3 bucket permissions policy must be configured as follows to allow Amazon RDS to copy
transaction log files into it:
1. Set the Object Ownership property on the bucket to Bucket owner preferred.
2. Add the following bucket policy. A new bucket has no policy by default, so edit the bucket policy in
the bucket's permissions settings and add it.
The following example uses an ARN to specify a resource. We recommend using the SourceArn
and SourceAccount global condition context keys in resource-based trust relationships to limit the
service's permissions to a specific resource. For more information on working with ARNs, see Amazon
resource names (ARNs) and Working with Amazon Resource Names (ARNs) in Amazon RDS (p. 471).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Only allow writes to my bucket with bucket owner full control",
"Effect": "Allow",
"Principal": {
"Service": "backups.rds.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::{customer_bucket}/{customer_path}/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control",
"aws:sourceAccount": "{customer_account}",
"aws:sourceArn": "{db_instance_arn}"
}
}
}
]
}
• An AWS Identity and Access Management (IAM) role to access the Amazon S3 bucket. If you already
have an IAM role, you can use that. You can choose to have a new IAM role created for you when you
add the SQLSERVER_BACKUP_RESTORE option by using the AWS Management Console. Alternatively,
you can create a new one manually. For more information on creating and configuring an IAM role
with SQLSERVER_BACKUP_RESTORE, see Manually creating an IAM role for native backup and
restore (p. 1422).
• The SQLSERVER_BACKUP_RESTORE option must be added to an option group on your DB instance.
For more information on adding the SQLSERVER_BACKUP_RESTORE option, see Support for native
backup and restore in SQL Server (p. 1525).
Note
If your DB instance has storage encryption enabled, the AWS KMS actions and key must be
provided in the IAM role used in the native backup and restore option group.
Optionally, if you intend to use the rds_restore_log stored procedure to perform point-in-time
database restores, we recommend using the same Amazon S3 path for the native backup and restore
option group and access to transaction log backups. This method ensures that when Amazon RDS
assumes the role from the option group to perform the restore log functions, it has access to retrieve
transaction log backups from the same Amazon S3 path.
• If the DB instance is encrypted, regardless of encryption type (AWS managed key or Customer
managed key), you must provide a Customer managed KMS key in the IAM role and in the
rds_tlog_backup_copy_to_S3 stored procedure.
• You can list and copy up to the last seven days of transaction log backups for any DB instance that has
backup retention configured between one and 35 days.
• The Amazon S3 bucket used for access to transaction log backups must exist in the same account and
Region as the source DB instance. Cross-account and cross-Region copying is not supported.
• Only one Amazon S3 bucket can be configured as a target to copy transaction log backups into. You
can choose a new target Amazon S3 bucket with the rds_tlog_copy_setup stored procedure. For
more information on choosing a new target Amazon S3 bucket, see Setting up access to transaction
log backups (p. 1496).
• You cannot specify the KMS key when using the rds_tlog_backup_copy_to_S3 stored procedure if
your RDS instance is not enabled for storage encryption.
• Multi-account copying is not supported. The IAM role used for copying will only permit write access to
Amazon S3 buckets within the owner account of the DB instance.
• Only two concurrent tasks of any type may be run on an RDS for SQL Server DB instance.
• Only one copy task can run for a single database at a given time. If you want to copy transaction log
backups for multiple databases on the DB instance, use a separate copy task for each database.
• If you copy a transaction log backup that already exists with the same name in the Amazon S3 bucket,
the existing transaction log backup will be overwritten.
• You can only run the stored procedures that are provided with access to transaction log backups on the
primary DB instance. You can’t run these stored procedures on an RDS for SQL Server read replica or
on a secondary instance of a Multi-AZ DB cluster.
• If the RDS for SQL Server DB instance is rebooted while the rds_tlog_backup_copy_to_S3 stored
procedure is running, the task will automatically restart from the beginning when the DB instance is
back online. Any transaction log backups that had been copied to the Amazon S3 bucket while the task
was running before the reboot will be overwritten.
• The Microsoft SQL Server system databases and the RDSAdmin database cannot be configured for
access to transaction log backups.
• Copying to buckets encrypted by SSE-KMS isn't supported.
Setting up access to transaction log backups
To set up access to transaction log backups, run the rds_tlog_copy_setup stored procedure.
Example usage:
exec msdb.dbo.rds_tlog_copy_setup
@target_s3_arn='arn:aws:s3:::mybucket/myfolder';
• @target_s3_arn – The ARN of the target Amazon S3 bucket to copy transaction log backup files to.
To modify access to transaction log backups to point to a different Amazon S3 bucket, you can view the
current Amazon S3 bucket value and re-run the rds_tlog_copy_setup stored procedure using a new
value for the @target_s3_arn.
Example of viewing the existing Amazon S3 bucket configured for access to transaction log
backups
After you have enabled access to transaction log backups, you can start using it to list and copy available
transaction log backup files.
To list all transaction log backups available for an individual database, call the
rds_fn_list_tlog_backup_metadata function. You can use an ORDER BY or a WHERE clause when
calling the function.
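For example, a call ordering the newest backups first might look like the following; the database name is a placeholder, and the backup_file_time_utc column name is taken from the column descriptions in this section:

```sql
-- List available transaction log backups for a database, newest first.
SELECT *
FROM   msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
ORDER BY backup_file_time_utc DESC;
```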
Column                 Data type       Description

backup_file_epoch      bigint          The epoch time that a transaction log backup file was
                                       generated.

backup_file_time_utc   datetime        The UTC time-converted value for the
                                       backup_file_epoch value.

starting_lsn           numeric(25,0)   The log sequence number of the first or oldest log
                                       record of a transaction log backup file.

ending_lsn             numeric(25,0)   The log sequence number of the last or next log
                                       record of a transaction log backup file.
Copying transaction log backups
To copy transaction log backups to your Amazon S3 bucket, call the
rds_tlog_backup_copy_to_S3 stored procedure. Example usage:
exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='mydatabasename',
[@kms_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@backup_file_start_time='2022-09-01 01:00:15'],
[@backup_file_end_time='2022-09-01 21:30:45'],
[@starting_lsn=149000000112100001],
[@ending_lsn=149000000120400001],
[@rds_backup_starting_seq_id=5],
[@rds_backup_ending_seq_id=10];
Parameter                    Description

@db_name                     The name of the database to copy transaction log backups for.

@ending_lsn                  The log sequence number (LSN) as provided from the [ending_lsn]
                             column of the rds_fn_list_tlog_backup_metadata function.

@rds_backup_starting_seq_id  The sequence ID as provided from the [rds_backup_seq_id]
                             column of the rds_fn_list_tlog_backup_metadata function.

@rds_backup_ending_seq_id    The sequence ID as provided from the [rds_backup_seq_id]
                             column of the rds_fn_list_tlog_backup_metadata function.
You can specify a set of either the time, LSN, or sequence ID parameters. Only one set of parameters is
required.
You can also specify just a single parameter in any of the sets. For example, by providing a value for only
the @backup_file_end_time parameter, all available transaction log backup files prior to that time
within the seven-day limit are copied to your Amazon S3 bucket.
Following are the valid input parameter combinations for the rds_tlog_backup_copy_to_S3 stored
procedure.
exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @backup_file_start_time='2022-08-23 00:00:00',
    @backup_file_end_time='2022-08-30 00:00:00';

Copies transaction log backups from the last seven days that exist between the provided range of
backup_file_start_time and backup_file_end_time. In this example, the stored procedure copies
transaction log backups that were generated between '2022-08-23 00:00:00' and
'2022-08-30 00:00:00'.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @backup_file_start_time='2022-08-23 00:00:00';

Copies transaction log backups from the last seven days, beginning from the provided
backup_file_start_time. In this example, the stored procedure copies transaction log backups from
'2022-08-23 00:00:00' up to the latest transaction log backup.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @backup_file_end_time='2022-08-30 00:00:00';

Copies transaction log backups from the last seven days, up to the provided
backup_file_end_time. In this example, the stored procedure copies transaction log backups from
the start of the seven-day window up to '2022-08-30 00:00:00'.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @starting_lsn=1490000000040007,
    @ending_lsn=1490000000050009;

Copies transaction log backups that are available from the last seven days and are between the
provided range of starting_lsn and ending_lsn. In this example, the stored procedure copies
transaction log backups from the last seven days with an LSN range between 1490000000040007 and
1490000000050009.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @starting_lsn=1490000000040007;

Copies transaction log backups that are available from the last seven days, beginning from the
provided starting_lsn. In this example, the stored procedure copies transaction log backups from
LSN 1490000000040007 up to the latest transaction log backup.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @ending_lsn=1490000000050009;

Copies transaction log backups that are available from the last seven days, up to the provided
ending_lsn. In this example, the stored procedure copies transaction log backups beginning from
the last seven days up to LSN 1490000000050009.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @rds_backup_starting_seq_id=2000,
    @rds_backup_ending_seq_id=5000;

Copies transaction log backups that are available from the last seven days and exist between the
provided range of rds_backup_starting_seq_id and rds_backup_ending_seq_id. In this example,
the stored procedure copies transaction log backups within the provided RDS backup sequence ID
range, starting from seq_id 2000 up to seq_id 5000.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @rds_backup_starting_seq_id=2000;

Copies transaction log backups that are available from the last seven days, beginning from the
provided rds_backup_starting_seq_id. In this example, the stored procedure copies transaction
log backups beginning from seq_id 2000, up to the latest transaction log backup.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @rds_backup_ending_seq_id=5000;

Copies transaction log backups that are available from the last seven days, up to the provided
rds_backup_ending_seq_id. In this example, the stored procedure copies transaction log backups
beginning from the last seven days, up to seq_id 5000.

exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='testdb1',
    @rds_backup_starting_seq_id=2000,
    @rds_backup_ending_seq_id=2000;

Copies a single transaction log backup with the provided rds_backup_starting_seq_id, if
available within the last seven days. In this example, the stored procedure copies a single
transaction log backup that has a seq_id of 2000, if it exists within the last seven days.
To manually validate the log chain before copying transaction log backups, call the
rds_fn_list_tlog_backup_metadata function and review the values in the
is_log_chain_broken column. A value of "1" indicates the log chain was broken between the current
log backup and the previous log backup.
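A sketch of such a check might look like the following; the database name is a placeholder, and the column names are the ones referenced in this section:

```sql
-- Flag any backup whose log chain is broken relative to the previous backup.
SELECT rds_sequence_id,
       first_lsn,
       last_lsn,
       is_log_chain_broken
FROM   msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
WHERE  is_log_chain_broken = 1;
```

An empty result set indicates an unbroken log chain for the listed backups.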
The following example shows a broken log chain in the output from the
rds_fn_list_tlog_backup_metadata function.
In a normal log chain, the log sequence number (LSN) value for first_lsn for a given rds_sequence_id
should match the value of last_lsn in the preceding rds_sequence_id. In this example, the
rds_sequence_id of 45 has a first_lsn value of 90987, which does not match the last_lsn value of
90985 for the preceding rds_sequence_id 44.
For more information about SQL Server transaction log architecture and log sequence numbers, see
Transaction Log Logical Architecture in the Microsoft SQL Server documentation.
• A new folder is created under the target_s3_arn path for each database, with the naming structure
{db_id}.{family_guid}.
• Within the folder, transaction log backups have the filename structure {db_id}.{family_guid}.
{rds_backup_seq_id}.{backup_file_epoch}.
The following example shows the folder and file structure of a set of transaction log backups within an
Amazon S3 bucket.
Tracking the status of tasks
To track the status of copy tasks, call the rds_task_status stored procedure. Example usage:
exec msdb.dbo.rds_task_status
    @db_name='database_name',
    @task_id=ID_number;
• @db_name – The name of the database to show the task status for.
• @task_id – The ID of the task to show the task status for.
exec msdb.dbo.rds_task_status @db_name='my_database', @task_id=5;
Example of listing all tasks and their status for a specific database:
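Based on the parameters described above, omitting @task_id returns all tasks for the named database, so the call might look like this:

```sql
-- List all tasks and their status for the database my_database.
exec msdb.dbo.rds_task_status @db_name='my_database';
```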
Example of listing all tasks and their status on the current DB instance:
exec msdb.dbo.rds_task_status;
Canceling a task
To cancel a running task, call the rds_cancel_task stored procedure.
Example usage:
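Based on the parameter described below, the call might look like the following:

```sql
-- Cancel the running task with ID 5.
exec msdb.dbo.rds_cancel_task @task_id=5;
```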
• @task_id – The ID of the task to cancel. You can view the task ID by calling the rds_task_status
stored procedure.
For more information on viewing and canceling running tasks, see Importing and exporting SQL Server
databases using native backup and restore (p. 1419).
The following errors can occur when you run the stored procedures for access to transaction log
backups, along with their likely causes and solutions.

rds_tlog_copy_setup
Error: Backups are disabled.
Cause: Automated backups are not enabled for the DB instance.
Solution: DB instance backup retention must be enabled, with a retention period of one or more days.

rds_tlog_copy_setup
Error: Error running the rds_tlog_copy_setup stored procedure. Reconnect to the RDS endpoint and
try again.
Cause: An internal error occurred.
Solution: Reconnect to the RDS endpoint and run the rds_tlog_copy_setup stored procedure again.
rds_tlog_copy_setup
Error: Running the rds_tlog_copy_setup stored procedure inside a transaction is not supported.
Verify that the session has no open transactions and try again.
Cause: The stored procedure was attempted within a transaction using BEGIN and END.
Solution: Avoid using BEGIN and END when running the rds_tlog_copy_setup stored procedure.

rds_tlog_copy_setup
Error: The S3 bucket name for the input parameter @target_s3_arn should contain at least one
character other than a space.
Cause: An incorrect value was provided for the input parameter @target_s3_arn.
Solution: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.
rds_tlog_copy_setup
Error: The SQLSERVER_BACKUP_RESTORE option isn't enabled or is in the process of being enabled.
Enable the option or try again later.
Cause: The SQLSERVER_BACKUP_RESTORE option is not enabled on the DB instance, or was just enabled
and is pending internal activation.
Solution: Enable the SQLSERVER_BACKUP_RESTORE option as specified in the Requirements section.
Wait a few minutes and run the rds_tlog_copy_setup stored procedure again.

rds_tlog_copy_setup
Error: The target S3 arn for the input parameter @target_s3_arn can't be empty or null.
Cause: A NULL value was provided for the input parameter @target_s3_arn, or the value wasn't
provided.
Solution: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.

rds_tlog_copy_setup
Error: The target S3 arn for the input parameter @target_s3_arn must begin with arn:aws.
Cause: The input parameter @target_s3_arn was provided without arn:aws at the front.
Solution: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.

rds_tlog_copy_setup
Error: The target S3 ARN is already set to the provided value.
Cause: The rds_tlog_copy_setup stored procedure previously ran and was configured with an
Amazon S3 bucket ARN.
Solution: To modify the Amazon S3 bucket value for access to transaction log backups, provide a
different target S3 ARN.

rds_tlog_copy_setup
Error: Unable to generate credentials for enabling Access to Transaction Log Backups. Confirm the
S3 path ARN provided with rds_tlog_copy_setup, and try again later.
Cause: There was an unspecified error while generating credentials to enable access to transaction
log backups.
Solution: Review your setup configuration and try again.
rds_tlog_copy_setup
Error: You cannot run the rds_tlog_copy_setup stored procedure while there are pending tasks. Wait
for the pending tasks to complete and try again.
Cause: Only two tasks may run at any time. There are pending tasks awaiting completion.
Solution: View pending tasks and wait for them to complete. For more information on monitoring task
status, see Tracking the status of tasks (p. 1504).

rds_tlog_backup_copy_to_S3
Error: A T-log backup file copy task has already been issued for database: %s with task Id: %d,
please try again later.
Cause: Only one copy task may run at any time for a given database. There is a pending copy task
awaiting completion.
Solution: View pending tasks and wait for them to complete. For more information on monitoring task
status, see Tracking the status of tasks (p. 1504).

rds_tlog_backup_copy_to_S3
Error: At least one of these three parameter sets must be provided. SET-1:
(@backup_file_start_time, @backup_file_end_time) | SET-2: (@starting_lsn, @ending_lsn) | SET-3:
(@rds_backup_starting_seq_id, @rds_backup_ending_seq_id)
Cause: None of the three parameter sets were provided, or a provided parameter set is missing a
required parameter.
Solution: You can specify either the time, LSN, or sequence ID parameters. One set from these three
sets of parameters is required. For more information on required parameters, see Copying
transaction log backups (p. 1499).

rds_tlog_backup_copy_to_S3
Error: Backups are disabled on your instance. Please enable backups and try again in some time.
Cause: Automated backups are not enabled for the DB instance.
Solution: For more information on enabling automated backups and configuring backup retention, see
Backup retention period (p. 593).

rds_tlog_backup_copy_to_S3
Error: Cannot find the given database %s.
Cause: The value provided for input parameter @db_name does not match a database name on the DB
instance.
Solution: Use the correct database name. To list all databases by name, run SELECT * from
sys.databases.
rds_tlog_backup_copy_to_S3
Error: Cannot run the rds_tlog_backup_copy_to_S3 stored procedure for SQL Server system databases
or the rdsadmin database.
Cause: The value provided for input parameter @db_name matches a SQL Server system database name
or the RDSAdmin database.
Solution: The following databases can't be used with access to transaction log backups: master,
model, msdb, tempdb, RDSAdmin.

rds_tlog_backup_copy_to_S3
Error: Database name for the input parameter @db_name can't be empty or null.
Cause: The value provided for input parameter @db_name was empty or NULL.
Solution: Use the correct database name. To list all databases by name, run SELECT * from
sys.databases.

rds_tlog_backup_copy_to_S3
Error: DB instance backup retention period must be set to at least 1 to run the
rds_tlog_backup_copy_setup stored procedure.
Cause: Automated backups are not enabled for the DB instance.
Solution: For more information on enabling automated backups and configuring backup retention, see
Backup retention period (p. 593).

rds_tlog_backup_copy_to_S3
Error: Error running the stored procedure rds_tlog_backup_copy_to_S3. Reconnect to the RDS
endpoint and try again.
Cause: An internal error occurred.
Solution: Reconnect to the RDS endpoint and run the rds_tlog_backup_copy_to_S3 stored procedure
again.

rds_tlog_backup_copy_to_S3
Error: Only one of these three parameter sets can be provided. SET-1: (@backup_file_start_time,
@backup_file_end_time) | SET-2: (@starting_lsn, @ending_lsn) | SET-3:
(@rds_backup_starting_seq_id, @rds_backup_ending_seq_id)
Cause: Multiple parameter sets were provided.
Solution: You can specify either the time, LSN, or sequence ID parameters. One set from these three
sets of parameters is required. For more information on required parameters, see Copying
transaction log backups (p. 1499).
rds_tlog_backup_copy_to_S3
Error: Running the rds_tlog_backup_copy_to_S3 stored procedure inside a transaction is not
supported. Verify that the session has no open transactions and try again.
Cause: The stored procedure was attempted within a transaction using BEGIN and END.
Solution: Avoid using BEGIN and END when running the rds_tlog_backup_copy_to_S3 stored procedure.

rds_tlog_backup_copy_to_S3
Error: The provided parameters fall outside of the transaction backup log retention period. To list
available transaction log backup files, run the rds_fn_list_tlog_backup_metadata function.
Cause: There are no available transaction log backups for the provided input parameters that fit
in the copy retention window.
Solution: Try again with a valid set of parameters. For more information on required parameters,
see Copying transaction log backups (p. 1499).

rds_tlog_backup_copy_to_S3
Error: There was a permissions error in processing the request. Ensure the bucket is in the same
Account and Region as the DB Instance, and confirm the S3 bucket policy permissions against the
template in the public documentation.
Cause: There was an issue detected with the provided S3 bucket or its policy permissions.
Solution: Confirm your setup for access to transaction log backups is correct. For more information
on setup requirements for your S3 bucket, see Requirements (p. 1494).
rds_tlog_backup_copy_to_S3
Error: Running the rds_tlog_backup_copy_to_S3 stored procedure on an RDS read replica instance
isn't permitted.
Cause: The stored procedure was attempted on an RDS read replica instance.
Solution: Connect to the RDS primary DB instance to run the rds_tlog_backup_copy_to_S3 stored
procedure.

rds_tlog_backup_copy_to_S3
Error: The LSN for the input parameter @starting_lsn must be less than @ending_lsn.
Cause: The value provided for input parameter @starting_lsn was greater than the value provided
for input parameter @ending_lsn.
Solution: Ensure the value provided for input parameter @starting_lsn is less than the value
provided for input parameter @ending_lsn.

rds_tlog_backup_copy_to_S3
Error: The rds_tlog_backup_copy_to_S3 stored procedure can only be performed by the members of
db_owner role in the source database.
Cause: The db_owner role has not been granted to the account attempting to run the
rds_tlog_backup_copy_to_S3 stored procedure on the provided db_name.
Solution: Ensure the account running the stored procedure has the db_owner role for the provided
db_name.

rds_tlog_backup_copy_to_S3
Error: The sequence ID for the input parameter @rds_backup_starting_seq_id must be less than or
equal to @rds_backup_ending_seq_id.
Cause: The value provided for input parameter @rds_backup_starting_seq_id was greater than the
value provided for input parameter @rds_backup_ending_seq_id.
Solution: Ensure the value provided for input parameter @rds_backup_starting_seq_id is less than
or equal to the value provided for input parameter @rds_backup_ending_seq_id.

rds_tlog_backup_copy_to_S3
Error: The SQLSERVER_BACKUP_RESTORE option isn't enabled or is in the process of being enabled.
Enable the option or try again later.
Cause: The SQLSERVER_BACKUP_RESTORE option is not enabled on the DB instance, or was just enabled
and is pending internal activation.
Solution: Enable the SQLSERVER_BACKUP_RESTORE option as specified in the Requirements section.
Wait a few minutes and run the rds_tlog_backup_copy_to_S3 stored procedure again.
rds_tlog_backup_copy_to_S3
Error: The start time for the input parameter @backup_file_start_time must be less than
@backup_file_end_time.
Cause: The value provided for input parameter @backup_file_start_time was greater than the value
provided for input parameter @backup_file_end_time.
Solution: Ensure the value provided for input parameter @backup_file_start_time is less than the
value provided for input parameter @backup_file_end_time.

rds_tlog_backup_copy_to_S3
Error: We were unable to process the request due to a lack of access. Please check your setup and
permissions for the feature.
Cause: There may be an issue with the Amazon S3 bucket permissions, or the Amazon S3 bucket
provided is in another account or Region.
Solution: Ensure the Amazon S3 bucket policy allows RDS access, and ensure the Amazon S3 bucket is
in the same account and Region as the DB instance.

rds_tlog_backup_copy_to_S3
Error: You cannot provide a KMS Key ARN as input parameter to the stored procedure for instances
that are not storage-encrypted.
Cause: When storage encryption is not enabled on the DB instance, the input parameter
@kms_key_arn should not be provided.
Solution: Do not provide an input parameter for @kms_key_arn.

rds_tlog_backup_copy_to_S3
Error: You must provide a KMS Key ARN as input parameter to the stored procedure for storage
encrypted instances.
Cause: When storage encryption is enabled on the DB instance, the input parameter @kms_key_arn
must be provided.
Solution: Provide an input parameter for @kms_key_arn with the ARN of the KMS key to use for the
transaction log backup copy.
rds_tlog_backup_copy_to_S3
Error: You must run the rds_tlog_copy_setup stored procedure and set the @target_s3_arn, before
running the rds_tlog_backup_copy_to_S3 stored procedure.
Cause: The setup procedure for access to transaction log backups was not completed before
attempting to run the rds_tlog_backup_copy_to_S3 stored procedure.
Solution: Run the rds_tlog_copy_setup stored procedure before running the
rds_tlog_backup_copy_to_S3 stored procedure. For more information on running the setup procedure
for access to transaction log backups, see Setting up access to transaction log backups (p. 1496).
Options for SQL Server
If you're looking for optional features that aren't added through RDS option groups (such as SSL,
Microsoft Windows Authentication, and Amazon S3 integration), see Additional features for Microsoft
SQL Server on Amazon RDS (p. 1455).
Amazon RDS supports the following options for Microsoft SQL Server DB instances.
Option                                       Option ID      Required editions

Linked Servers with Oracle OLEDB (p. 1517)   OLEDB_ORACLE   SQL Server Enterprise Edition

SQL Server Analysis Services (p. 1543)       SSAS           SQL Server Enterprise Edition

SQL Server Integration Services (p. 1562)    SSIS           SQL Server Enterprise Edition

SQL Server Reporting Services (p. 1577)      SSRS           SQL Server Enterprise Edition
The following example shows the options and option settings for SQL Server 2019 Enterprise Edition.
The --engine-name option is required.
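The output below can be produced with the AWS CLI describe-option-group-options command; the engine name sqlserver-ee and major version 15.00 match the SQL Server 2019 Enterprise Edition output shown:

```shell
# List the options and option settings available for SQL Server 2019 Enterprise Edition.
aws rds describe-option-group-options \
    --engine-name sqlserver-ee \
    --major-engine-version 15.00
```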
{
"OptionGroupOptions": [
{
"Name": "MSDTC",
"Description": "Microsoft Distributed Transaction Coordinator",
"EngineName": "sqlserver-ee",
"MajorEngineVersion": "15.00",
"MinimumRequiredMinorEngineVersion": "4043.16.v1",
"PortRequired": true,
"DefaultPort": 5000,
"OptionsDependedOn": [],
"OptionsConflictsWith": [],
"Persistent": false,
"Permanent": false,
"RequiresAutoMinorEngineVersionUpgrade": false,
"VpcOnly": false,
"OptionGroupOptionSettings": [
{
"SettingName": "ENABLE_SNA_LU",
"SettingDescription": "Enable support for SNA LU protocol",
"DefaultValue": "true",
"ApplyType": "DYNAMIC",
"AllowedValues": "true,false",
"IsModifiable": true,
"IsRequired": false,
"MinimumEngineVersionPerAllowedValue": []
},
...
{
"Name": "TDE",
"Description": "SQL Server - Transparent Data Encryption",
"EngineName": "sqlserver-ee",
"MajorEngineVersion": "15.00",
"MinimumRequiredMinorEngineVersion": "4043.16.v1",
"PortRequired": false,
"OptionsDependedOn": [],
"OptionsConflictsWith": [],
"Persistent": true,
"Permanent": false,
"RequiresAutoMinorEngineVersionUpgrade": false,
"VpcOnly": false,
"OptionGroupOptionSettings": []
}
]
}
Linked Servers with Oracle OLEDB
You can activate one or more linked servers for Oracle on either an existing or new RDS for SQL Server
DB instance. Then you can integrate external Oracle data sources with your DB instance.
Contents
• Supported versions and Regions (p. 1517)
• Limitations and recommendations (p. 1517)
• Activating linked servers with Oracle (p. 1518)
• Creating the option group for OLEDB_ORACLE (p. 1518)
• Adding the OLEDB_ORACLE option to the option group (p. 1519)
• Associating the option group with your DB instance (p. 1520)
• Modifying OLEDB provider properties (p. 1521)
• Modifying OLEDB driver properties (p. 1522)
• Deactivating linked servers with Oracle (p. 1523)
Linked servers with Oracle OLEDB is supported for the following Oracle Database versions:
Limitations and recommendations
• Allow network traffic by adding the applicable TCP port in the security group for each RDS for SQL
Server DB instance. For example, if you’re configuring a linked server between an EC2 Oracle DB
instance and an RDS for SQL Server DB instance, then you must allow traffic from the IP address of
the EC2 Oracle DB instance. You also must allow traffic on the port that SQL Server is using to listen
for database communication. For more information on security groups, see Controlling access with
security groups (p. 2680).
• Perform a reboot of the RDS for SQL Server DB instance after turning on, turning off, or modifying the
OLEDB_ORACLE option in your option group. The option group status displays pending_reboot for
these events, and a reboot is required for the change to take effect.
• Only simple authentication, with a user name and password, is supported for the Oracle data source.
• Open Database Connectivity (ODBC) drivers are not supported. Only the latest version of the OLEDB
driver is supported.
• Distributed transactions (XA) are supported. To activate distributed transactions, turn on the MSDTC
option in the Option Group for your DB instance and make sure XA transactions are turned on. For
more information, see Support for Microsoft Distributed Transaction Coordinator in RDS for SQL
Server (p. 1590).
• Creating data source names (DSNs) to use as a shortcut for a connection string is not supported.
• OLEDB driver tracing is not supported. You can use SQL Server Extended Events to trace OLEDB
events. For more information, see Set up Extended Events in RDS for SQL Server.
• Access to the catalogs folder for an Oracle linked server is not supported using SQL Server
Management Studio (SSMS).
Creating the option group for OLEDB_ORACLE

Console
The following procedure creates an option group for SQL Server Standard Edition 2019.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:
a. For Name, enter a name for the option group that is unique within your AWS account, such as
oracle-oledb-se-2019. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as OLEDB_ORACLE option
group for SQL Server SE 2019. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 15.00.
5. Choose Create.
CLI
The following procedure creates an option group for SQL Server Standard Edition 2019.
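The following sketch shows a create-option-group call that matches the console procedure, reusing the same sample group name and description. On Windows, replace the line-continuation character (\) with a caret (^):

```shell
# Create an option group for SQL Server Standard Edition 2019.
aws rds create-option-group \
    --option-group-name oracle-oledb-se-2019 \
    --engine-name sqlserver-se \
    --major-engine-version 15.00 \
    --option-group-description "OLEDB_ORACLE option group for SQL Server SE 2019"
```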
Adding the OLEDB_ORACLE option to the option group

Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, which is oracle-oledb-se-2019 in this example.
4. Choose Add option.
5. Under Option details, choose OLEDB_ORACLE for Option name.
6. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
7. Choose Add option.
CLI
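A sketch of the corresponding add-option-to-option-group call, using the sample group name from the previous steps. Omit --apply-immediately to add the option during the next maintenance window instead:

```shell
# Add the OLEDB_ORACLE option to the option group immediately.
aws rds add-option-to-option-group \
    --option-group-name oracle-oledb-se-2019 \
    --options OptionName=OLEDB_ORACLE \
    --apply-immediately
```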
Associating the option group with your DB instance

Console
To finish activating linked servers for Oracle, associate your OLEDB_ORACLE option group with a new or
existing DB instance:
• For a new DB instance, associate it when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate it by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
CLI
You can associate the OLEDB_ORACLE option group and parameter group with a new or existing DB
instance.
To create an instance with the OLEDB_ORACLE option group and parameter group
• Specify the same DB engine type and major version that you used when creating the option group.
Example
For Linux, macOS, or Unix:
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.m5.2xlarge \
    --engine sqlserver-se \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password mysecretpassword \
    --storage-type gp2 \
    --license-model li \
    --domain-iam-role-name my-directory-iam-role \
    --domain my-domain-id \
    --option-group-name oracle-oledb-se-2019 \
    --db-parameter-group-name my-parameter-group-name
Modifying OLEDB provider properties
USE [master]
GO
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'DynamicParameters', 0
GO
Level zero only (default 0): Only base-level OLEDB interfaces are called against the provider.
Allow inprocess (default 1): If turned on, Microsoft SQL Server allows the provider to be instantiated as an in-process server. Set this property to 1 to use Oracle linked servers.
Index as access path (default False): If non-zero, SQL Server attempts to use indexes of the provider to fetch data.
Disallow adhoc access (default False): If set, SQL Server does not allow running pass-through queries against the OLEDB provider. While this option can be checked, it is sometimes appropriate to run pass-through queries.
Supports LIKE operator (default 1): Indicates that the provider supports queries using the LIKE keyword.
Example: To create a linked server and change the OLEDB driver FetchSize property
EXEC master.dbo.sp_addlinkedserver
@server = N'Oracle_link2',
@srvproduct=N'Oracle',
@provider=N'OraOLEDB.Oracle',
@datasrc=N'my-oracle-test.cnetsipka.us-west-2.rds.amazonaws.com:1521/ORCL',
@provstr='FetchSize=200'
GO
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'Oracle_link2',
@useself=N'False',
@locallogin=NULL,
@rmtuser=N'master',
@rmtpassword='Test#1234'
GO
Note
As a security best practice, specify a password other than the one shown here.
Deactivating linked servers with Oracle

Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the OLEDB_ORACLE option (oracle-oledb-se-2019 in the previous
examples).
4. Choose Delete option.
5. Under Deletion options, choose OLEDB_ORACLE for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it during
the next maintenance window.
7. Choose Delete.
CLI
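A sketch of the corresponding remove-option-from-option-group call, using the sample group name from the previous examples:

```shell
# Remove the OLEDB_ORACLE option from the option group immediately.
aws rds remove-option-from-option-group \
    --option-group-name oracle-oledb-se-2019 \
    --options OLEDB_ORACLE \
    --apply-immediately
```

After the option is removed, reboot the DB instance for the change to take effect.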
Native backup and restore
Amazon RDS supports native backup and restore for Microsoft SQL Server databases by using differential
and full backup files (.bak files).
The SQLSERVER_BACKUP_RESTORE option must have as its option setting a valid Amazon Resource Name (ARN) in the format
arn:aws:iam::account-id:role/role-name. For more information, see Amazon Resource
Names (ARNs) in the AWS General Reference.
The IAM role must also have a trust relationship and a permissions policy attached. The trust
relationship allows RDS to assume the role, and the permissions policy defines the actions that the
role can perform. For more information, see Manually creating an IAM role for native backup and
restore (p. 1422).
4. Associate the option group with the DB instance.
After you add the native backup and restore option, you don't need to restart your DB instance. As soon
as the option group is active, you can begin backing up and restoring immediately.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Create a new option group or use an existing option group. For information on how to create a
custom DB option group, see Creating an option group (p. 332).
• To use an existing IAM role and Amazon S3 settings, choose an existing IAM role for IAM Role. If
you use an existing IAM role, RDS uses the Amazon S3 settings configured for this role.
• To create a new role and configure Amazon S3 settings, do the following:
1. For IAM role, choose Create a new role.
This prefix can include a file path but doesn't have to. If you provide a prefix, RDS attaches that
prefix to all backup files. RDS then uses the prefix during a restore to identify related files and
ignore irrelevant files. For example, you might use the S3 bucket for purposes besides holding
backup files. In this case, you can use the prefix to have RDS perform native backup and restore
only on a particular folder and its subfolders.
If you leave the prefix blank, then RDS doesn't use a prefix to identify backup files or files to
restore. As a result, during a multiple-file restore, RDS attempts to restore every file in every
folder of the S3 bucket.
4. Choose the Enable encryption check box to encrypt the backup file. Leave the check box
cleared (the default) to have the backup file unencrypted.
If you chose Enable encryption, choose an encryption key for AWS KMS key. For more
information about encryption keys, see Getting started in the AWS Key Management Service
Developer Guide.
6. Choose Add option.
7. Apply the option group to a new or existing DB instance:
• For a new DB instance, apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, apply the option group by modifying the instance and attaching the
new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
CLI

This procedure assumes the following:
• You're adding the SQLSERVER_BACKUP_RESTORE option to an option group that already exists. For
more information about adding options, see Adding an option to an option group (p. 335).
• You're associating the option with an IAM role that already exists and has access to an S3 bucket to
store the backups.
• You're applying the option group to a DB instance that already exists. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
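A sketch of adding the option with its IAM role setting. The option group name and role ARN are placeholders for your own values:

```shell
# Add the native backup and restore option, referencing the IAM role
# that has access to the S3 bucket for backups.
aws rds add-option-to-option-group \
    --option-group-name myoptiongroup \
    --options "OptionName=SQLSERVER_BACKUP_RESTORE,OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::123456789012:role/backup-restore-role}]" \
    --apply-immediately
```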
Note
When using the Windows command prompt, you must escape double quotes (") in JSON
code by prefixing them with a backslash (\).
2. Apply the option group to the DB instance.
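A sketch of applying the option group by modifying the instance. The instance identifier and group name are placeholders:

```shell
# Attach the option group that contains SQLSERVER_BACKUP_RESTORE.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --option-group-name myoptiongroup \
    --apply-immediately
```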
To remove the native backup and restore option from a DB instance, do one of the following:
• Remove the option from the option group it belongs to. This change affects all DB instances that use
the option group. For more information, see Removing an option from an option group (p. 343).
• Modify the DB instance and specify a different option group that doesn't include the native backup
and restore option. This change affects a single DB instance. You can specify the default (empty)
option group, or a different custom option group. For more information, see Modifying an Amazon
RDS DB instance (p. 401).
Transparent Data Encryption
Amazon RDS supports TDE for the following SQL Server versions and editions:
Transparent Data Encryption for SQL Server provides encryption key management by using a two-tier
key architecture. A certificate, which is generated from the database master key, is used to protect the
data encryption keys. The database encryption key performs the actual encryption and decryption of
data on the user database. Amazon RDS backs up and manages the database master key and the TDE
certificate.
Transparent Data Encryption is used in scenarios where you need to encrypt sensitive data. For example,
you might want to provide data files and backups to a third party, or address security-related regulatory
compliance issues. You can't encrypt the system databases for SQL Server, such as the model or master
databases.
A detailed discussion of Transparent Data Encryption is beyond the scope of this guide, but make sure
that you understand the security strengths and weaknesses of each encryption algorithm and key. For
information about Transparent Data Encryption for SQL Server, see Transparent Data Encryption (TDE) in
the Microsoft documentation.
Topics
• Turning on TDE for RDS for SQL Server (p. 1528)
• Encrypting data on RDS for SQL Server (p. 1529)
• Backing up and restoring TDE certificates on RDS for SQL Server (p. 1530)
• Backing up and restoring TDE certificates for on-premises databases (p. 1533)
• Turning off TDE for RDS for SQL Server (p. 1535)
1. Determine whether your DB instance is already associated with an option group that has the TDE
option. To view the option group that a DB instance is associated with, use the RDS console, the
describe-db-instance AWS CLI command, or the API operation DescribeDBInstances.
2. If the DB instance isn't associated with an option group that has TDE turned on, you have two choices.
You can create an option group and add the TDE option, or you can modify the associated option
group to add it.
Note
In the RDS console, the option is named TRANSPARENT_DATA_ENCRYPTION. In the AWS CLI
and RDS API, it's named TDE.
For information about creating or modifying an option group, see Working with option
groups (p. 331). For information about adding an option to an option group, see Adding an option to
an option group (p. 335).
3. Associate the DB instance with the option group that has the TDE option. For information about
associating a DB instance with an option group, see Modifying an Amazon RDS DB instance (p. 401).
Because the TDE option is a persistent option, you can have a conflict between the option group and an
associated DB instance. You can have a conflict in the following situations:
• The current option group has the TDE option, and you replace it with an option group that doesn't
have the TDE option.
• You restore from a DB snapshot to a new DB instance that doesn't have an option group that contains
the TDE option. For more information about this scenario, see Option group considerations (p. 624).
Performance for unencrypted databases can also be degraded if the databases are on a DB instance
that has at least one encrypted database. As a result, we recommend that you keep encrypted and
unencrypted databases on separate DB instances.
The following example uses the RDS-created certificate called RDSTDECertificateName to encrypt a
database called myDatabase.
USE [myDatabase]
GO
-- Create a database encryption key (DEK) using one of the certificates from the previous step
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE [RDSTDECertificateName]
GO
-- Turn on encryption for the database
ALTER DATABASE [myDatabase] SET ENCRYPTION ON
GO
The time that it takes to encrypt a SQL Server database using TDE depends on several factors. These
include the size of the DB instance, whether the instance uses Provisioned IOPS storage, the amount of
data, and other factors.
User TDE certificates are used to restore on-premises databases that have TDE turned on to RDS for
SQL Server. These certificates have the prefix UserTDECertificate_. After restoring databases,
and before making them available to use, RDS modifies the databases that have TDE turned on to use
RDS-generated TDE certificates. These certificates have the prefix RDSTDECertificate.
User TDE certificates remain on the RDS for SQL Server DB instance, unless you drop them using the
rds_drop_tde_certificate stored procedure. For more information, see Dropping restored TDE
certificates (p. 1533).
You can use a user TDE certificate to restore other databases from the source DB instance. The databases
to restore must use the same TDE certificate and have TDE turned on. You don't have to import (restore)
the same certificate again.
Topics
• Prerequisites (p. 1530)
• Limitations (p. 1531)
• Backing up a TDE certificate (p. 1531)
• Restoring a TDE certificate (p. 1532)
• Viewing restored TDE certificates (p. 1533)
• Dropping restored TDE certificates (p. 1533)
Prerequisites
Before you can back up or restore TDE certificates on RDS for SQL Server, make sure to perform the
following tasks. The first three are described in Setting up for native backup and restore (p. 1421).
1. Create an Amazon S3 bucket to store the TDE certificate backup files. We recommend that you use
separate buckets for database backups and for TDE certificate backups.
2. Create an IAM role for backing up and restoring files.
The IAM role must be both a user and an administrator for the AWS KMS key.
In addition to the permissions required for SQL Server native backup and restore, the IAM role also
requires the following permissions:
• s3:GetBucketACL, s3:GetBucketLocation, and s3:ListBucket on the S3 bucket resource
• s3:ListAllMyBuckets on the * resource
For more information on enabling Amazon S3 integration, see Integrating an Amazon RDS for SQL
Server DB instance with Amazon S3 (p. 1464).
Limitations
Using stored procedures to back up and restore TDE certificates has the following limitations:
EXECUTE msdb.dbo.rds_backup_tde_certificate
@certificate_name='UserTDECertificate_certificate_name | RDSTDECertificatetimestamp',
@certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
@private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
@kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id',
[@overwrite_s3_files=0|1];
• @certificate_file_s3_arn – The destination Amazon Resource Name (ARN) for the certificate
backup file in Amazon S3.
• @private_key_file_s3_arn – The destination S3 ARN of the private key file that secures the TDE
certificate.
• @kms_password_key_arn – The ARN of the symmetric KMS key used to encrypt the private key
password.
• @overwrite_s3_files – Indicates whether to overwrite the existing certificate and private key files
in S3:
• 0 – Doesn't overwrite the existing files. This value is the default.
EXECUTE msdb.dbo.rds_backup_tde_certificate
@certificate_name='RDSTDECertificate20211115T185333',
@certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
@private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
@kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_files=1;
EXECUTE msdb.dbo.rds_restore_tde_certificate
@certificate_name='UserTDECertificate_certificate_name',
@certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
@private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
@kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id';
• @certificate_name – The name of the TDE certificate to restore. The name must start with the
UserTDECertificate_ prefix.
• @certificate_file_s3_arn – The S3 ARN of the backup file used to restore the TDE certificate.
• @private_key_file_s3_arn – The S3 ARN of the private key backup file of the TDE certificate to
be restored.
• @kms_password_key_arn – The ARN of the symmetric KMS key used to encrypt the private key
password.
EXECUTE msdb.dbo.rds_restore_tde_certificate
@certificate_name='UserTDECertificate_myTDEcertificate',
@certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
@private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
@kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE';
The output resembles the following. Not all columns are shown here.

name: UserTDECertificate_tde_cert
certificate_id: 343
principal_id: 1
pvt_key_encryption_type_desc: ENCRYPTED_BY_MASTER_KEY
issuer_name: AnyCompany Shipping
subject: AnyCompany Shipping
cert_serial_number: 79 3e 69 fd 1d 9e 47 2c 32 67 1d 9c ca af
thumbprint: 0x6BB218B34110388680BFE1BA2D86C695096485B5
start_date: 2022-04-05 19:49:45.0000000
expiry_date: 2023-04-05 19:49:45.0000000
pvt_key_last_backup_date: NULL
EXECUTE msdb.dbo.rds_drop_tde_certificate
@certificate_name='UserTDECertificate_certificate_name';
You can drop only restored (imported) TDE certificates. You can't drop RDS-created certificates.
EXECUTE msdb.dbo.rds_drop_tde_certificate
@certificate_name='UserTDECertificate_myTDEcertificate';
The following procedure backs up a TDE certificate and private key. The private key is encrypted using a
data key generated from your symmetric encryption KMS key.
1. Generate the data key using the AWS CLI generate-data-key command.
aws kms generate-data-key \
    --key-id my_KMS_key_ID \
    --key-spec AES_256
{
"CiphertextBlob": "AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/
XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybHDc
2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==",
"Plaintext": "U/fpGtmzGCYBi8A2+0/9qcRQRK2zmG/aOn939ZnKi/0=",
"KeyId": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-00ee-99ff-88dd-aa11bb22cc33"
}
You use the plain text output in the next step as the private key password.
2. Back up your TDE certificate using the rds_backup_tde_certificate stored procedure, as shown
earlier in Backing up a TDE certificate (p. 1531).
Then add the following metadata to the private key backup file in Amazon S3:
• Key – x-amz-meta-rds-tde-pwd
• Value – The CiphertextBlob value from generating the data key, as in the following example.
AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/
XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybH
2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==
The following procedure restores an RDS for SQL Server TDE certificate to an on-premises DB instance.
You copy and restore the TDE certificate on your destination DB instance using the certificate backup,
corresponding private key file, and data key. The restored certificate is encrypted by the database master
key of the new server.
1. Copy the TDE certificate backup file and private key file from Amazon S3 to the destination instance.
For more information on copying files from Amazon S3, see Transferring files between RDS for SQL
Server and Amazon S3 (p. 1471).
2. Use your KMS key to decrypt the output cipher text to retrieve the plain text of the data key. The
cipher text is located in the S3 metadata of the private key backup file.
You use the plain text output in the next step as the private key password.
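A sketch of the decrypt call for this step. It assumes you saved the binary-decoded metadata value to a local file named ciphertext.bin (a hypothetical name); the Plaintext output is the base64 data key to use as the private key password:

```shell
# Decrypt the ciphertext from the S3 metadata to recover the data key.
aws kms decrypt \
    --ciphertext-blob fileb://ciphertext.bin \
    --output text \
    --query Plaintext
```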
3. Use the following SQL command to restore your TDE certificate.
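On an on-premises SQL Server, the restore uses the standard CREATE CERTIFICATE statement. The following is a sketch with hypothetical certificate and file names, using the decrypted data key as the private key password:

```sql
USE [master]
GO
-- File paths below are hypothetical; point them at the certificate
-- and private key files copied from Amazon S3.
CREATE CERTIFICATE [UserTDECertificate_myTDEcertificate]
FROM FILE = 'C:\tde_certs\mycertfile.cer'
WITH PRIVATE KEY (
    FILE = 'C:\tde_certs\mykeyfile.pvk',
    DECRYPTION BY PASSWORD = 'data_key_plain_text'
)
GO
```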
For more information on KMS decryption, see decrypt in the KMS section of the AWS CLI Command
Reference.
After the TDE certificate is restored on the destination DB instance, you can restore encrypted databases
with that certificate.
Note
You can use the same TDE certificate to encrypt multiple SQL Server databases on the source
DB instance. To migrate multiple databases to a destination instance, copy the TDE certificate
associated with them to the destination instance only once.
The following example removes the TDE encryption from a database called customerDatabase.
USE [customerDatabase]
GO
-- Turn off encryption for the database
ALTER DATABASE [customerDatabase] SET ENCRYPTION OFF
GO
-- Wait until the encryption state of the database becomes 1. The state is 5 (Decryption in
-- progress) for a while
SELECT db_name(database_id) as DatabaseName, * FROM sys.dm_database_encryption_keys
GO
-- Drop the database encryption key (DEK)
DROP DATABASE ENCRYPTION KEY
GO
-- Alter to SIMPLE Recovery mode so that your encrypted log gets truncated
USE [master]
GO
ALTER DATABASE [customerDatabase] SET RECOVERY SIMPLE
GO
To turn off TDE for the DB instance, do one of the following:
1. Modify the DB instance to be associated with an option group without the TDE option.
2. Remove the TDE option from the option group.
SQL Server Audit
RDS uploads the completed audit logs to your S3 bucket, using the IAM role that you provide. If you
enable retention, RDS keeps your audit logs on your DB instance for the configured period of time.
For more information, see SQL Server Audit (database engine) in the Microsoft SQL Server
documentation.
Topics
• Support for SQL Server Audit (p. 1536)
• Adding SQL Server Audit to the DB instance options (p. 1537)
• Using SQL Server Audit (p. 1538)
• Viewing audit logs (p. 1538)
• Using SQL Server Audit with Multi-AZ instances (p. 1539)
• Configuring an S3 bucket (p. 1539)
• Manually creating an IAM role for SQL Server Audit (p. 1540)
RDS supports configuring the following option settings for SQL Server Audit.
S3_BUCKET_ARN: A valid ARN in the format arn:aws:s3:::bucket-name or
arn:aws:s3:::bucket-name/key-prefix. This is the ARN for the S3 bucket where you want to
store your audit logs.
RDS supports SQL Server Audit in all AWS Regions except Middle East (Bahrain).
After you add the SQL Server Audit option, you don't need to restart your DB instance. As soon as the
option group is active, you can create audits and store audit logs in your S3 bucket.
• For IAM role, if you already have an IAM role with the required policies, you can choose that role.
To create a new IAM role, choose Create a New Role. For information about the required policies,
see Manually creating an IAM role for SQL Server Audit (p. 1540).
• For Select S3 destination, if you already have an S3 bucket that you want to use, choose it. To
create an S3 bucket, choose Create a New S3 Bucket.
• For Enable Compression, leave this option chosen to compress audit files. Compression is enabled
by default. To disable compression, clear Enable Compression.
• For Audit log retention, to keep audit records on the DB instance, choose this option. Specify a
retention time in hours. The maximum retention time is 35 days.
3. Apply the option group to a new or existing DB instance. Choose one of the following:
• If you are creating a new DB instance, apply the option group when you launch the instance.
• On an existing DB instance, apply the option group by modifying the instance and then attaching
the new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
To remove auditing
1. Disable all of the audit settings inside SQL Server. To learn where audits are running, query the SQL
Server security catalog views. For more information, see Security catalog views in the Microsoft SQL
Server documentation.
2. Delete the SQL Server Audit option from the DB instance. Choose one of the following:
• Delete the SQL Server Audit option from the option group that the DB instance uses. This change
affects all DB instances that use the same option group. For more information, see Removing an
option from an option group (p. 343).
• Modify the DB instance, and then choose an option group without the SQL Server Audit option.
This change affects only the DB instance that you modify. You can specify the default (empty)
option group, or a different custom option group. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
3. After you delete the SQL Server Audit option from the DB instance, you don't need to restart the
instance. Remove unneeded audit files from your S3 bucket.
Creating audits
You create server audits in the same way that you create them for on-premises database servers. For
information about how to create server audits, see CREATE SERVER AUDIT in the Microsoft SQL Server
documentation.
• Don't exceed the maximum of 50 server audits per instance.
• Instruct SQL Server to write data to a binary file.
• Don't use RDS_ as a prefix in the server audit name.
• For FILEPATH, specify D:\rdsdbdata\SQLAudit.
• For MAXSIZE, specify a size between 2 MB and 50 MB.
• Don't configure MAX_ROLLOVER_FILES or MAX_FILES.
• Don't configure SQL Server to shut down the DB instance if it fails to write the audit record.
To avoid errors, don't use RDS_ as a prefix in the name of the database audit specification or server audit
specification.
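Putting the requirements above together, a minimal sketch of a server audit that stays within the RDS constraints. The audit name, specification name, and action group are hypothetical examples:

```sql
USE [master]
GO
-- Binary file target under the required path, MAXSIZE within 2-50 MB,
-- no MAX_ROLLOVER_FILES/MAX_FILES, and continue on write failure.
CREATE SERVER AUDIT [MyAudit]
TO FILE (FILEPATH = 'D:\rdsdbdata\SQLAudit', MAXSIZE = 50 MB)
WITH (ON_FAILURE = CONTINUE)
GO
CREATE SERVER AUDIT SPECIFICATION [MyAuditSpec]
FOR SERVER AUDIT [MyAudit]
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON)
GO
ALTER SERVER AUDIT [MyAudit] WITH (STATE = ON)
GO
```

Note that neither the audit nor the specification name uses the reserved RDS_ prefix.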
After SQL Server finishes writing to an audit log file—when the file reaches its size limit—Amazon RDS
uploads the file to your S3 bucket. If retention is enabled, Amazon RDS moves the file into the retention
folder: D:\rdsdbdata\SQLAudit\transmitted.
For information about configuring retention, see Adding SQL Server Audit to the DB instance
options (p. 1537).
Audit records are kept on the DB instance until the audit log file is uploaded. You can view the audit
records by running the following command.
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
('D:\rdsdbdata\SQLAudit\*.sqlaudit'
, default
, default )
You can use the same command to view audit records in your retention folder by changing the filter to
D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit.
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
('D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit'
, default
, default )
Configuring an S3 bucket
The audit log files are automatically uploaded from the DB instance to your S3 bucket. The following
restrictions apply to the S3 bucket that you use as a target for audit files:
The target key that is used to store the data follows this naming schema: bucket-name/key-prefix/
instance-name/audit-name/node_file-name.ext
Note
You set both the bucket name and the key prefix values with the S3_BUCKET_ARN option
setting.
You can use the examples in this section to create the trust relationships and permissions policies you
need.
The following example shows a trust relationship for SQL Server Audit. It uses the service principal
rds.amazonaws.com to allow RDS to write to the S3 bucket. A service principal is an identifier that is
used to grant permissions to a service. Anytime you allow access to rds.amazonaws.com in this way,
you are allowing RDS to perform an action on your behalf. For more information about service principals,
see AWS JSON policy elements: Principal.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.
If you use both global condition context keys and the aws:SourceArn value contains the
account ID, then the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.
In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For SQL Server Audit, make sure to
include both the DB option group and the DB instances, as shown in the following example.
Example trust relationship with global condition context key for SQL Server Audit
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceArn": [
"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier",
"arn:aws:rds:Region:my_account_ID:og:option_group_name"
]
}
}
}
]
}
In the following example of a permissions policy for SQL Server Audit, we specify an ARN for the Amazon
S3 bucket. You can use ARNs to identify a specific account, user, or role that you want to grant access to. For
more information about using ARNs, see Amazon resource names (ARNs).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketACL",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
}
]
}
Note
The s3:ListAllMyBuckets action is required for verifying that the same AWS account owns
both the S3 bucket and the SQL Server DB instance. The action lists the names of the buckets in
the account.
S3 bucket namespaces are global. If you accidentally delete your bucket, another user can create
a bucket with the same name in a different account. Then the SQL Server Audit data is written
to the new bucket.
SQL Server Analysis Services
You can turn on SSAS for existing or new DB instances. It's installed on the same DB instance as your
database engine. For more information on SSAS, see the Microsoft Analysis Services documentation.
Amazon RDS supports SSAS for SQL Server Standard and Enterprise Editions on the following versions:
• Tabular mode:
• SQL Server 2019, version 15.00.4043.16.v1 and higher
• SQL Server 2017, version 14.00.3223.3.v1 and higher
• SQL Server 2016, version 13.00.5426.0.v1 and higher
• Multidimensional mode:
• SQL Server 2017, version 14.00.3381.3.v1 and higher
• SQL Server 2016, version 13.00.5882.1.v1 and higher
Contents
• Limitations (p. 1543)
• Turning on SSAS (p. 1544)
• Creating an option group for SSAS (p. 1544)
• Adding the SSAS option to the option group (p. 1545)
• Associating the option group with your DB instance (p. 1547)
• Allowing inbound access to your VPC security group (p. 1548)
• Enabling Amazon S3 integration (p. 1548)
• Deploying SSAS projects on Amazon RDS (p. 1549)
• Monitoring the status of a deployment task (p. 1549)
• Using SSAS on Amazon RDS (p. 1551)
• Setting up a Windows-authenticated user for SSAS (p. 1551)
• Adding a domain user as a database administrator (p. 1552)
• Creating an SSAS proxy (p. 1553)
• Scheduling SSAS database processing using SQL Server Agent (p. 1554)
• Revoking SSAS access from the proxy (p. 1555)
• Backing up an SSAS database (p. 1556)
• Restoring an SSAS database (p. 1556)
• Restoring a DB instance to a specified time (p. 1557)
• Changing the SSAS mode (p. 1557)
• Turning off SSAS (p. 1558)
• Troubleshooting SSAS issues (p. 1559)
Limitations
The following limitations apply to using SSAS on RDS for SQL Server:
• RDS for SQL Server supports running SSAS in Tabular or Multidimensional mode. For more
information, see Comparing tabular and multidimensional solutions in the Microsoft documentation.
• You can only use one SSAS mode at a time. Before changing modes, make sure to delete all of the
SSAS databases.
For more information, see Changing the SSAS mode (p. 1557).
• Multidimensional mode isn't supported on SQL Server 2019.
• Multi-AZ instances aren't supported.
• Instances must use self-managed Active Directory or AWS Directory Service for Microsoft Active
Directory for SSAS authentication. For more information, see Working with Active Directory with RDS
for SQL Server (p. 1387).
• Users aren't given SSAS server administrator access, but they can be granted database-level
administrator access.
• The only supported port for accessing SSAS is 2383.
• You can't deploy projects directly. We provide an RDS stored procedure to do this. For more
information, see Deploying SSAS projects on Amazon RDS (p. 1549).
• Processing during deployment isn't supported.
• Using .xmla files for deployment isn't supported.
• SSAS project input files and database backup output files can only be in the D:\S3 folder on the DB
instance.
Turning on SSAS
Use the following process to turn on SSAS for your DB instance:
Creating an option group for SSAS
Console
The following console procedure creates an option group for SQL Server Standard Edition 2017.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:
a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssas-se-2017. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSAS option group
for SQL Server SE 2017. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 14.00.
5. Choose Create.
CLI
The following CLI example creates an option group for SQL Server Standard Edition 2017.
Example
For Windows:
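A sketch of the likely command, using the same values as the console procedure (the option group name, description, engine, and major version are the earlier examples):

```shell
aws rds create-option-group ^
    --option-group-name ssas-se-2017 ^
    --option-group-description "SSAS option group for SQL Server SE 2017" ^
    --engine-name sqlserver-se ^
    --major-engine-version 14.00
```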
Adding the SSAS option to the option group
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created.
4. Choose Add option.
5. Under Option details, choose SSAS for Option name.
6. Under Option settings, do the following:
a. For Max memory, enter a value.
Max memory specifies the upper threshold above which SSAS begins releasing memory more
aggressively to make room for requests that are running, and also new high-priority requests.
The number is a percentage of the total memory of the DB instance. The allowed values are 10–
80, and the default is 45.
b. For Mode, choose the SSAS server mode, Tabular or Multidimensional.
If you don't see the Mode option setting, it means that Multidimensional mode isn't supported
in your AWS Region. For more information, see Limitations (p. 1543).
Note
The port for accessing SSAS, 2383, is prepopulated.
7. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
8. Choose Add option.
CLI
1. Create a JSON file, for example ssas-option.json, with the following parameters:
• OptionGroupName – The name of the option group that you created or chose previously (ssas-se-2017 in the following example).
• Port – The port that you use to access SSAS. The only supported port is 2383.
• VpcSecurityGroupMemberships – Memberships for VPC security groups for your RDS DB
instance.
• MAX_MEMORY – The upper threshold above which SSAS should begin releasing memory more
aggressively to make room for requests that are running, and also new high-priority requests. The
number is a percentage of the total memory of the DB instance. The allowed values are 10–80,
and the default is 45.
• MODE – The SSAS server mode, either Tabular or Multidimensional. Tabular is the default.
If you receive an error that the MODE option setting isn't valid, it means that Multidimensional
mode isn't supported in your AWS Region. For more information, see Limitations (p. 1543).
{
"OptionGroupName": "ssas-se-2017",
"OptionsToInclude": [
{
"OptionName": "SSAS",
"Port": 2383,
"VpcSecurityGroupMemberships": ["sg-0abcdef123"],
"OptionSettings": [{"Name":"MAX_MEMORY","Value":"60"},
{"Name":"MODE","Value":"Multidimensional"}]
}],
"ApplyImmediately": true
}
Example
For Windows:
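Assuming the JSON file shown above is saved as ssas-option.json in the current directory, a sketch of the command that adds the option using that file:

```shell
aws rds add-option-to-option-group ^
    --cli-input-json file://ssas-option.json
```

Because ApplyImmediately is set to true in the JSON file, the option is added immediately rather than at the next maintenance window.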
Associating the option group with your DB instance
Console
• For a new DB instance, associate the option group with the DB instance when you launch the instance.
For more information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, modify the instance and associate the new option group with it. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
Note
If you use an existing instance, it must already have an Active Directory domain and AWS
Identity and Access Management (IAM) role associated with it. If you create a new instance,
specify an existing Active Directory domain and IAM role. For more information, see Working
with Active Directory with RDS for SQL Server (p. 1387).
CLI
You can associate your option group with a new or existing DB instance.
Note
If you use an existing instance, it must already have an Active Directory domain and IAM role
associated with it. If you create a new instance, specify an existing Active Directory domain
and IAM role. For more information, see Working with Active Directory with RDS for SQL
Server (p. 1387).
• Specify the same DB engine type and major version that you used when creating the option group.
Example
aws rds create-db-instance \
    --db-instance-identifier myssasinstance \
    --db-instance-class db.m5.2xlarge \
    --engine sqlserver-se \
    --engine-version 14.00.3223.3.v1 \
    --allocated-storage 100 \
    --manage-master-user-password \
    --master-username admin \
    --storage-type gp2 \
    --license-model li \
    --domain-iam-role-name my-directory-iam-role \
    --domain my-domain-id \
    --option-group-name ssas-se-2017
For Windows:
Example
For Windows:
Deploying SSAS projects on Amazon RDS
• Amazon S3 integration is turned on. For more information, see Integrating an Amazon RDS for SQL
Server DB instance with Amazon S3 (p. 1464).
• The Processing Option configuration setting is set to Do Not Process. This setting means that
no processing happens after deployment.
• You have both the myssasproject.asdatabase and myssasproject.deploymentoptions files.
They're automatically generated when you build the SSAS project.
1. Download the .asdatabase (SSAS model) file from your S3 bucket to your DB instance, as shown
in the following example. For more information on the download parameters, see Downloading files
from an Amazon S3 bucket to a SQL Server DB instance (p. 1471).
exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.asdatabase',
[@rds_file_path='D:\S3\myssasproject.asdatabase'],
[@overwrite_file=1];
exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.deploymentoptions',
[@rds_file_path='D:\S3\myssasproject.deploymentoptions'],
[@overwrite_file=1];
2. Run the following stored procedure to deploy the SSAS project.
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_DEPLOY_PROJECT',
@file_path='D:\S3\myssasproject.asdatabase';
Monitoring the status of a deployment task
To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.
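A sketch of that query (rds_fn_task_status takes a database name, here NULL, and a task ID):

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```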
To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.
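For example, with an illustrative task ID of 42:

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```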
task_type – For SSAS, tasks can have the following task types:
• SSAS_DEPLOY_PROJECT
• SSAS_ADD_DB_ADMIN_MEMBER
• SSAS_BACKUP_DB
• SSAS_RESTORE_DB
last_updated – The date and time that the task status was last updated.
created_at – The date and time that the task was created.
Using SSAS on Amazon RDS
1. In SSMS, connect to SSAS using the user name and password for the Active Directory domain.
2. Expand Databases. The newly deployed SSAS database appears.
3. Locate the connection string, and update the user name and password to give access to the source
SQL database. Doing this is required for processing SSAS objects.
Depending on the size of the input data, the processing operation might take several minutes to
complete.
Topics
• Setting up a Windows-authenticated user for SSAS (p. 1551)
• Adding a domain user as a database administrator (p. 1552)
• Creating an SSAS proxy (p. 1553)
• Scheduling SSAS database processing using SQL Server Agent (p. 1554)
• Revoking SSAS access from the proxy (p. 1555)
credentials, and work with the SQL Server Agent proxy. For more information, see Credentials (database
engine) and Create a SQL Server Agent proxy in the Microsoft documentation.
You can grant some or all of the following permissions as needed to Windows-authenticated users.
Example
USE [msdb]
GO
GRANT EXEC ON msdb.dbo.rds_msbi_task TO [mydomain\user_name] with grant option
GRANT SELECT ON msdb.dbo.rds_fn_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_cancel_task TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_download_from_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_upload_to_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_delete_from_filesystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_gather_file_details TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_add_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_update_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_grant_login_to_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_revoke_login_from_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_delete_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_login_for_proxy to [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_proxy_for_subsystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_sqlagent_proxy TO [mydomain\user_name] with grant option
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [mydomain\user_name]
GO
Adding a domain user as a database administrator
• A database administrator can use SSMS to create a role with admin privileges, then add users to that role.
• You can use the following stored procedure.
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_ADD_DB_ADMIN_MEMBER',
@database_name='myssasdb',
@ssas_role_name='exampleRole',
@ssas_role_member='domain_name\domain_user_name';
Creating an SSAS proxy
• Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.
USE [master]
GO
CREATE CREDENTIAL [SSAS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET = N'mysecret'
GO
Note
IDENTITY must be a domain-authenticated login. Replace mysecret with the password for
the domain-authenticated login.
1. Use the following SQL statement to create the proxy.
USE [msdb]
GO
EXEC msdb.dbo.sp_add_proxy
@proxy_name=N'SSAS_Proxy',@credential_name=N'SSAS_Credential',@description=N''
GO
2. Use the following SQL statement to grant access to the proxy to other users.
USE [msdb]
GO
EXEC msdb.dbo.sp_grant_login_to_proxy
@proxy_name=N'SSAS_Proxy',@login_name=N'mydomain\user_name'
GO
3. Use the following SQL statement to give the SSAS subsystem access to the proxy.
USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
GO
1. Use the following SQL statement to view the grantees of the proxy.
USE [msdb]
GO
EXEC sp_help_proxy
GO
2. Use the following SQL statement to view the subsystems granted to the proxy.
USE [msdb]
GO
EXEC msdb.dbo.sp_enum_proxy_for_subsystem
GO
Scheduling SSAS database processing using SQL Server Agent
• Use SSMS or T-SQL to create the SQL Server Agent job. The following example uses T-SQL. You can further configure its job schedule through SSMS or T-SQL.
• The @command parameter outlines the XML for Analysis (XMLA) command to be run by the SQL
Server Agent job. This example configures SSAS Multidimensional database processing.
• The @server parameter outlines the target SSAS server name of the SQL Server Agent job.
To call the SSAS service within the same RDS DB instance where the SQL Server Agent job resides,
use localhost:2383.
To call the SSAS service from outside the RDS DB instance, use the RDS endpoint. You can also
use the Kerberos Active Directory (AD) endpoint (your-DB-instance-name.your-AD-domain-
name) if the RDS DB instances are joined by the same domain. For external DB instances, make
sure to properly configure the VPC security group associated with the RDS DB instance for a secure
connection.
You can further edit the query to support various XMLA operations. Make edits either by directly
modifying the T-SQL query or by using the SSMS UI following SQL Server Agent job creation.
USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_job @job_name=N'SSAS_Job',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@category_name=N'[Uncategorized (Local)]',
@job_id = @jobId OUTPUT
GO
EXEC msdb.dbo.sp_add_jobserver
@job_name=N'SSAS_Job',
@server_name = N'(local)'
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'SSAS_Job', @step_name=N'Process_SSAS_Object',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'ANALYSISCOMMAND',
@command=N'<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300"
xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300" xmlns:ddl400="http://schemas.microsoft.com/analysisservices/2012/engine/400"
xmlns:ddl400_400="http://schemas.microsoft.com/analysisservices/2012/engine/400/400" xmlns:ddl500="http://schemas.microsoft.com/analysisservices/2013/engine/500"
xmlns:ddl500_500="http://schemas.microsoft.com/analysisservices/2013/engine/500/500">
<Object>
<DatabaseID>Your_SSAS_Database_ID</DatabaseID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>',
@server=N'localhost:2383',
@database_name=N'master',
@flags=0,
@proxy_name=N'SSAS_Proxy'
GO
Revoking SSAS access from the proxy
USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_revoke_login_from_proxy
@proxy_name=N'SSAS_Proxy',@name=N'mydomain\user_name'
GO
USE [msdb]
GO
EXEC dbo.sp_delete_proxy @proxy_name = N'SSAS_Proxy'
GO
Backing up an SSAS database
• A domain user with the admin role for a particular database can use SSMS to back up the database to the D:\S3 folder.
For more information, see Adding a domain user as a database administrator (p. 1552).
• You can use the following stored procedure. This stored procedure doesn't support encryption.
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_BACKUP_DB',
@database_name='myssasdb',
@file_path='D:\S3\ssas_db_backup.abf',
[@ssas_apply_compression=1],
[@ssas_overwrite_file=1];
Restoring an SSAS database
You can't restore a database if there is an existing SSAS database with the same name. The stored
procedure for restoring doesn't support encrypted backup files.
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_RESTORE_DB',
@database_name='mynewssasdb',
@file_path='D:\S3\ssas_db_backup.abf';
Restoring a DB instance to a specified time
1. Back up your SSAS databases to the D:\S3 folder on the source instance.
2. Transfer the backup files to the S3 bucket.
3. Transfer the backup files from the S3 bucket to the D:\S3 folder on the restored instance.
4. Run the stored procedure to restore the SSAS databases onto the restored instance.
You can also reprocess the SSAS project to restore the databases.
Changing the SSAS mode
Console
The following Amazon RDS console procedure changes the SSAS mode to Tabular and sets the
MAX_MEMORY parameter to 70 percent.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSAS option that you want to modify (ssas-se-2017 in the
previous examples).
4. Choose Modify option.
5. Change the option settings:
AWS CLI
The following AWS CLI example changes the SSAS mode to Tabular and sets the MAX_MEMORY parameter
to 70 percent.
For the CLI command to work, make sure to include all of the required parameters, even if you're not
modifying them.
Example
For Windows:
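A sketch of the change, assuming the settings are supplied in a JSON file (the file name and security group ID are illustrative; Port and VpcSecurityGroupMemberships are included because all required parameters must be present even when unchanged):

```json
{
  "OptionGroupName": "ssas-se-2017",
  "OptionsToInclude": [
    {
      "OptionName": "SSAS",
      "Port": 2383,
      "VpcSecurityGroupMemberships": ["sg-0abcdef123"],
      "OptionSettings": [{"Name":"MAX_MEMORY","Value":"70"},
                         {"Name":"MODE","Value":"Tabular"}]
    }],
  "ApplyImmediately": true
}
```

```shell
aws rds add-option-to-option-group ^
    --cli-input-json file://ssas-option.json
```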
Turning off SSAS
To turn off SSAS, remove the SSAS option from its option group.
Important
Before you remove the SSAS option, delete your SSAS databases.
We highly recommend that you back up your SSAS databases before deleting them and
removing the SSAS option.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSAS option that you want to remove (ssas-se-2017 in the
previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSAS for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.
AWS CLI
Example
For Windows:
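A sketch of the removal command, using the option group name from the earlier examples:

```shell
aws rds remove-option-from-option-group ^
    --option-group-name ssas-se-2017 ^
    --options SSAS ^
    --apply-immediately
```

Omit --apply-immediately to remove the option at the next maintenance window instead.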
Troubleshooting SSAS issues

Issue (RDS event): Unable to configure the SSAS option. The requested SSAS mode is new_mode, but the current DB instance has number current_mode databases. Delete the existing databases before switching to new_mode mode. To regain access to current_mode mode for database deletion, either update the current DB option group, or attach a new option group with %s as the MODE option setting value for the SSAS option.
Solution: You can't change the SSAS mode if you still have SSAS databases that use the current mode. Delete the SSAS databases, then try again.

Issue (RDS event): Unable to remove the SSAS option because there are number existing mode databases. The SSAS option can't be removed until all SSAS databases are deleted. Add the SSAS option again, delete all SSAS databases, and try again.
Solution: You can't turn off SSAS if you still have SSAS databases. Delete the SSAS databases, then try again.

Issue (RDS stored procedure): The SSAS option isn't enabled or is in the process of being enabled. Try again later.
Solution: You can't run SSAS stored procedures when the option is turned off, or when it's being turned on.

Issue (RDS stored procedure): The SSAS option is configured incorrectly. Make sure that the option group membership status is "in-sync", and review the RDS event logs for relevant SSAS configuration error messages. Following these investigations, try again. If errors continue to occur, contact AWS Support.
Solution: You can't run SSAS stored procedures when your option group membership isn't in the in-sync status. This puts the SSAS option in an incorrect configuration state. If your option group membership status changes to failed due to SSAS option modification, there are two possible reasons:
Issue (RDS stored procedure): Deployment failed. The change can only be deployed on a server running in deployment_file_mode mode. The current server mode is current_mode.
Solution: You can't deploy a Tabular database to a Multidimensional server, or a Multidimensional database to a Tabular server. Make sure that you're using files with the correct mode, and verify that the MODE option setting is set to the appropriate value.

Issue (RDS stored procedure): The restore failed. The backup file can only be restored on a server running in restore_file_mode mode. The current server mode is current_mode.
Solution: You can't restore a Tabular database to a Multidimensional server, or a Multidimensional database to a Tabular server. Make sure that you're using files with the correct mode, and verify that the MODE option setting is set to the appropriate value.

Issue (RDS stored procedure): The restore failed. The backup file and the RDS DB instance versions are incompatible.
Solution: You can't restore an SSAS database with a version that is incompatible with the SQL Server instance version.

Issue (RDS stored procedure): The restore failed. The backup file specified in the restore operation is damaged or is not an SSAS backup file. Make sure that @rds_file_path is correctly formatted.
Solution: You can't restore an SSAS database with a damaged file. Make sure that the file isn't damaged or corrupted.
Issue (RDS stored procedure): The restore failed. The restored database name can't contain any reserved words or invalid characters: . , ; ' ` : / \\ * | ? \" & % $ ! + = ( ) [ ] { } < >, or be longer than 100 characters.
Solution: The restored database name can't contain any reserved words or characters that aren't valid, or be longer than 100 characters. For SSAS object naming conventions, see Object naming rules in the Microsoft documentation.

Issue (RDS stored procedure): An invalid role name was provided. The role name can't contain any reserved strings.
Solution: The role name can't contain any reserved strings. For SSAS object naming conventions, see Object naming rules in the Microsoft documentation.

Issue (RDS stored procedure): An invalid role name was provided. The role name can't contain any of the following reserved characters: . , ; ' ` : / \\ * | ? \" & % $ ! + = ( ) [ ] { } < >
Solution: The role name can't contain any reserved characters. For SSAS object naming conventions, see Object naming rules in the Microsoft documentation.
SQL Server Integration Services
SSIS projects are organized into packages saved as XML-based .dtsx files. Packages can contain control
flows and data flows. You use data flows to represent ETL operations. After deployment, packages are
stored in SQL Server in the SSISDB database. SSISDB is an online transaction processing (OLTP) database
in the full recovery mode.
Amazon RDS for SQL Server supports running SSIS directly on an RDS DB instance. You can enable SSIS
on an existing or new DB instance. SSIS is installed on the same DB instance as your database engine.
RDS supports SSIS for SQL Server Standard and Enterprise Editions on the following versions:
Contents
• Limitations and recommendations (p. 1562)
• Enabling SSIS (p. 1564)
• Creating the option group for SSIS (p. 1564)
• Adding the SSIS option to the option group (p. 1565)
• Creating the parameter group for SSIS (p. 1566)
• Modifying the parameter for SSIS (p. 1567)
• Associating the option group and parameter group with your DB instance (p. 1567)
• Enabling S3 integration (p. 1569)
• Administrative permissions on SSISDB (p. 1569)
• Setting up a Windows-authenticated user for SSIS (p. 1569)
• Deploying an SSIS project (p. 1570)
• Monitoring the status of a deployment task (p. 1571)
• Using SSIS (p. 1572)
• Setting database connection managers for SSIS projects (p. 1573)
• Creating an SSIS proxy (p. 1573)
• Scheduling an SSIS package using SQL Server Agent (p. 1574)
• Revoking SSIS access from the proxy (p. 1575)
• Disabling SSIS (p. 1575)
• Dropping the SSISDB database (p. 1576)
Limitations and recommendations
• The DB instance must have an associated parameter group with the clr enabled parameter set to 1.
For more information, see Modifying the parameter for SSIS (p. 1567).
Note
If you enable the clr enabled parameter on SQL Server 2017 or 2019, you can't use the
common language runtime (CLR) on your DB instance. For more information, see Features not
supported and features with limited support (p. 1367).
• The following control flow tasks are supported:
• Analysis Services Execute DDL Task
• Analysis Services Processing Task
• Bulk Insert Task
• Check Database Integrity Task
• Data Flow Task
• Data Mining Query Task
• Data Profiling Task
• Execute Package Task
• Execute SQL Server Agent Job Task
• Execute SQL Task
• Execute T-SQL Statement Task
• Notify Operator Task
• Rebuild Index Task
• Reorganize Index Task
• Shrink Database Task
• Transfer Database Task
• Transfer Jobs Task
• Transfer Logins Task
• Transfer SQL Server Objects Task
• Update Statistics Task
• Only project deployment is supported.
• Running SSIS packages by using SQL Server Agent is supported.
• SSIS log records can be inserted only into user-created databases.
• Use only the D:\S3 folder for working with files. Files placed in any other directory are deleted. Be
aware of a few other file location details:
• Place SSIS project input and output files in the D:\S3 folder.
• For the Data Flow Task, change the location for BLOBTempStoragePath and
BufferTempStoragePath to a file inside the D:\S3 folder. The file path must start with D:\S3\.
• Ensure that all parameters, variables, and expressions used for file connections point to the D:\S3
folder.
• On Multi-AZ instances, files created by SSIS in the D:\S3 folder are deleted after a failover. For more
information, see Multi-AZ limitations for S3 integration (p. 1476).
• Upload the files created by SSIS in the D:\S3 folder to your Amazon S3 bucket to make them
durable.
• Import Column and Export Column transformations and the Script component on the Data Flow Task
aren't supported.
• You can't enable dump on running SSIS packages, and you can't add data taps on SSIS packages.
• The SSIS Scale Out feature isn't supported.
• You can't deploy projects directly. We provide RDS stored procedures to do this. For more information,
see Deploying an SSIS project (p. 1570).
• Build SSIS project (.ispac) files with the DoNotSavePasswords protection mode for deploying on
RDS.
• SSIS isn't supported on Always On instances with read replicas.
• You can't back up the SSISDB database that is associated with the SSIS option.
• Importing and restoring the SSISDB database from other instances of SSIS isn't supported.
• You can connect to other SQL Server DB instances or to an Oracle data source. Connecting to other
database engines, such as MySQL or PostgreSQL, isn't supported for SSIS on RDS for SQL Server.
For more information on connecting to an Oracle data source, see Linked Servers with Oracle
OLEDB (p. 1517).
Enabling SSIS
You enable SSIS by adding the SSIS option to your DB instance. Use the following process:
Note
If a database with the name SSISDB or a reserved SSIS login already exists on the DB instance,
you can't enable SSIS on the instance.
Console
The following procedure creates an option group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:
a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssis-se-2016. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSIS option group
for SQL Server SE 2016. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 13.00.
5. Choose Create.
CLI
The following procedure creates an option group for SQL Server Standard Edition 2016.
Example
For Windows:
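The command body for this example is missing from the text; a create-option-group call along the following lines matches the console steps above (the name, engine, and version values come from this section, and the description is illustrative):

```shell
aws rds create-option-group ^
    --option-group-name ssis-se-2016 ^
    --engine-name sqlserver-se ^
    --major-engine-version 13.00 ^
    --option-group-description "SSIS option group for SQL Server SE 2016"
```

For Linux, macOS, or Unix, replace the caret (^) line continuations with backslashes (\).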
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, ssis-se-2016 in this example.
4. Choose Add option.
5. Under Option details, choose SSIS for Option name.
6. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
7. Choose Add option.
CLI
Example
aws rds add-option-to-option-group \
    --option-group-name ssis-se-2016 \
    --options OptionName=SSIS \
    --apply-immediately
For Windows:
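The Windows form of the same add-option-to-option-group call uses caret (^) line continuations:

```shell
aws rds add-option-to-option-group ^
    --option-group-name ssis-se-2016 ^
    --options OptionName=SSIS ^
    --apply-immediately
```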
Console
The following procedure creates a parameter group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:
CLI
The following procedure creates a parameter group for SQL Server Standard Edition 2016.
Example
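For Linux, macOS, or Unix, a create-db-parameter-group call along these lines matches the Windows flags shown in this section (the group name and family come from this section):

```shell
aws rds create-db-parameter-group \
    --db-parameter-group-name ssis-sqlserver-se-13 \
    --db-parameter-group-family "sqlserver-se-13.0" \
    --description "clr enabled parameter group"
```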
For Windows:
aws rds create-db-parameter-group ^
    --db-parameter-group-name ssis-sqlserver-se-13 ^
    --db-parameter-group-family "sqlserver-se-13.0" ^
    --description "clr enabled parameter group"
Console
The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as ssis-sqlserver-se-13.
4. Under Parameters, filter the parameter list for clr.
5. Choose clr enabled.
6. Choose Edit parameters.
7. From Values, choose 1.
8. Choose Save changes.
CLI
The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016.
Example
For Windows:
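The command body is missing here; a modify-db-parameter-group call along the following lines sets clr enabled to 1, matching the console steps above (the parameter group name comes from this section):

```shell
aws rds modify-db-parameter-group ^
    --db-parameter-group-name ssis-sqlserver-se-13 ^
    --parameters "ParameterName='clr enabled',ParameterValue=1,ApplyMethod=immediate"
```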
Associating the option group and parameter group with your DB instance
To associate the SSIS option group and parameter group with your DB instance, use the AWS
Management Console or the AWS CLI.
Note
If you use an existing instance, it must already have an Active Directory domain and AWS
Identity and Access Management (IAM) role associated with it. If you create a new instance,
specify an existing Active Directory domain and IAM role. For more information, see Working
with Active Directory with RDS for SQL Server (p. 1387).
Console
To finish enabling SSIS, associate your SSIS option group and parameter group with a new or existing DB
instance:
• For a new DB instance, associate them when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate them by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
CLI
You can associate the SSIS option group and parameter group with a new or existing DB instance.
To create an instance with the SSIS option group and parameter group
• Specify the same DB engine type and major version as you used when creating the option group.
Example
For Windows:
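The command body is missing here. A create-db-instance call along the following lines illustrates the association; the instance identifier, instance class, credentials, storage size, domain, and IAM role values are placeholders to replace with your own, while the option group and parameter group names come from this section:

```shell
aws rds create-db-instance ^
    --db-instance-identifier myssisinstance ^
    --db-instance-class db.m5.2xlarge ^
    --engine sqlserver-se ^
    --allocated-storage 100 ^
    --master-username admin ^
    --master-user-password mypassword ^
    --option-group-name ssis-se-2016 ^
    --db-parameter-group-name ssis-sqlserver-se-13 ^
    --domain d-1234567890 ^
    --domain-iam-role-name my-directory-iam-role
```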
To modify an instance and associate the SSIS option group and parameter group
Example
For Windows:
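The command body is missing here; a modify-db-instance call along the following lines associates the option group and parameter group from this section (the instance identifier is a placeholder):

```shell
aws rds modify-db-instance ^
    --db-instance-identifier myssisinstance ^
    --option-group-name ssis-se-2016 ^
    --db-parameter-group-name ssis-sqlserver-se-13 ^
    --apply-immediately
```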
Enabling S3 integration
To download SSIS project (.ispac) files to your host for deployment, use S3 file integration. For more
information, see Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464).
Setting up a Windows-authenticated user for SSIS
Because the master user is a SQL-authenticated user, you can't use the master user to run SSIS
packages. However, the master user can create new SSISDB users and add them to the ssis_admin
and ssis_logreader roles. Doing this is useful for giving your domain users access to SSIS.
Example
-- Create a server-level SQL login for the domain user, if it doesn't already exist
USE [master]
GO
CREATE LOGIN [mydomain\user_name] FROM WINDOWS
GO
-- Create a database-level account for the domain user, if it doesn't already exist
USE [SSISDB]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]
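The new database user can then be added to the SSIS roles mentioned above. A sketch, using the same example domain user (role names are the standard SSISDB roles referenced in this section):

```sql
-- Add the domain user to the SSISDB roles for running and monitoring packages
USE [SSISDB]
GO
ALTER ROLE [ssis_admin] ADD MEMBER [mydomain\user_name]
ALTER ROLE [ssis_logreader] ADD MEMBER [mydomain\user_name]
GO
```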
Deploying an SSIS project
To run the deployment stored procedures, log in as any user that you granted permission to run
them. For more information, see Setting up a Windows-authenticated user for SSIS (p. 1569).
exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/ssisproject.ispac',
[@rds_file_path='D:\S3\ssisproject.ispac'],
[@overwrite_file=1];
exec msdb.dbo.rds_msbi_task
@task_type='SSIS_DEPLOY_PROJECT',
@folder_name='DEMO',
@project_name='ssisproject',
@file_path='D:\S3\ssisproject.ispac';
To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.
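The example is missing from the text; assuming the rds_fn_task_status function that RDS provides for task monitoring, the call looks like this:

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```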
To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.
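The example is missing from the text; assuming the rds_fn_task_status function, with 42 standing in for the task ID returned when you created the task:

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```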
task_type – SSIS_DEPLOY_PROJECT
last_updated – The date and time that the task status was last updated.
created_at – The date and time that the task was created.
Using SSIS
After deploying the SSIS project into the SSIS catalog, you can run packages directly from SSMS or
schedule them by using SQL Server Agent. You must use a Windows-authenticated login for executing
SSIS packages. For more information, see Setting up a Windows-authenticated user for SSIS (p. 1569).
Topics
• Setting database connection managers for SSIS projects (p. 1573)
• Creating an SSIS proxy (p. 1573)
• Scheduling an SSIS package using SQL Server Agent (p. 1574)
• Revoking SSIS access from the proxy (p. 1575)
Setting database connection managers for SSIS projects
• For local database connections using AWS Managed Active Directory, you can use
SQL authentication or Windows authentication. For Windows authentication, use
DB_instance_name.fully_qualified_domain_name as the server name of the connection string.
Creating an SSIS proxy
• Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.
USE [master]
GO
CREATE CREDENTIAL [SSIS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET =
N'mysecret'
GO
Note
IDENTITY must be a domain-authenticated login. Replace mysecret with the password for
the domain-authenticated login.
Whenever the SSISDB primary host is changed, alter the SSIS proxy credentials to allow the
new host to access them.
1. Use the following SQL statement to create the proxy.
USE [msdb]
GO
EXEC msdb.dbo.sp_add_proxy
@proxy_name=N'SSIS_Proxy',@credential_name=N'SSIS_Credential',@description=N''
GO
2. Use the following SQL statement to grant access to the proxy to other users.
USE [msdb]
GO
EXEC msdb.dbo.sp_grant_login_to_proxy
@proxy_name=N'SSIS_Proxy',@login_name=N'mydomain\user_name'
GO
3. Use the following SQL statement to give the SSIS subsystem access to the proxy.
USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
GO
1. Use the following SQL statement to view the grantees of the proxy.
USE [msdb]
GO
EXEC sp_help_proxy
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_enum_proxy_for_subsystem
GO
Scheduling an SSIS package using SQL Server Agent
• You can use SSMS or T-SQL for creating the SQL Server Agent job. The following example uses T-SQL.
USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_job @job_name=N'MYSSISJob',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=2,
@notify_level_page=2,
@delete_level=0,
@category_name=N'[Uncategorized (Local)]',
@job_id = @jobId OUTPUT
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'MYSSISJob',@server_name=N'(local)'
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'MYSSISJob',@step_name=N'ExecuteSSISPackage',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_fail_action=2,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0,
@subsystem=N'SSIS',
@command=N'/ISSERVER "\"\SSISDB\MySSISFolder\MySSISProject\MySSISPackage.dtsx\"" /
SERVER "\"my-rds-ssis-instance.corp-ad.company.com/\""
/Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par
"\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
@database_name=N'master',
@flags=0,
@proxy_name=N'SSIS_Proxy'
GO
Revoking SSIS access from the proxy
USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_revoke_login_from_proxy
@proxy_name=N'SSIS_Proxy',@name=N'mydomain\user_name'
GO
USE [msdb]
GO
EXEC dbo.sp_delete_proxy @proxy_name = N'SSIS_Proxy'
GO
Disabling SSIS
To disable SSIS, remove the SSIS option from its option group.
Important
Removing the option doesn't delete the SSISDB database, so you can safely remove the option
without losing the SSIS projects.
You can re-enable the SSIS option after removal to reuse the SSIS projects that were previously
deployed to the SSIS catalog.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSIS option (ssis-se-2016 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSIS for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.
CLI
Example
For Windows:
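The command body is missing here; a remove-option-from-option-group call along the following lines removes the SSIS option, using the option group name from this section:

```shell
aws rds remove-option-from-option-group ^
    --option-group-name ssis-se-2016 ^
    --options SSIS ^
    --apply-immediately
```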
Dropping the SSISDB database
To drop the SSISDB database, remove the SSIS option first, and then run the following stored
procedure.
USE [msdb]
GO
EXEC dbo.rds_drop_ssis_database
GO
After dropping the SSISDB database, if you re-enable the SSIS option you get a fresh SSISDB catalog.
SQL Server Reporting Services
Amazon RDS for SQL Server supports running SSRS directly on RDS DB instances. You can use SSRS with
existing or new DB instances.
RDS supports SSRS for SQL Server Standard and Enterprise Editions on the following versions:
Contents
• Limitations and recommendations (p. 1577)
• Turning on SSRS (p. 1578)
• Creating an option group for SSRS (p. 1578)
• Adding the SSRS option to your option group (p. 1579)
• Associating your option group with your DB instance (p. 1581)
• Allowing inbound access to your VPC security group (p. 1583)
• Report server databases (p. 1583)
• SSRS log files (p. 1583)
• Accessing the SSRS web portal (p. 1583)
• Using SSL on RDS (p. 1583)
• Granting access to domain users (p. 1584)
• Accessing the web portal (p. 1583)
• Deploying reports to SSRS (p. 1584)
• Configuring the report data source (p. 1585)
• Using SSRS Email to send reports (p. 1585)
• Revoking system-level permissions (p. 1586)
• Monitoring the status of a task (p. 1587)
• Turning off SSRS (p. 1588)
• Deleting the SSRS databases (p. 1589)
Make sure to use the databases that are created when the SSRS option is added to the RDS DB
instance. For more information, see Report server databases (p. 1583).
• You can't configure SSRS to listen on the default SSL port (443). The allowed values are 1150–49511,
except 1234, 1434, 3260, 3343, 3389, and 47001.
• Subscriptions through a Microsoft Windows file share aren't supported.
• Using Reporting Services Configuration Manager isn't supported.
• Creating and modifying roles isn't supported.
• Modifying report server properties isn't supported.
• System administrator and system user roles aren't granted.
• You can't edit system-level role assignments through the web portal.
Turning on SSRS
Use the following process to turn on SSRS for your DB instance:
Console
The following procedure creates an option group for SQL Server Standard Edition 2017.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group pane, do the following:
a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssrs-se-2017. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSRS option group
for SQL Server SE 2017. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 14.00.
5. Choose Create.
CLI
The following procedure creates an option group for SQL Server Standard Edition 2017.
Example
For Windows:
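The command body is missing here; a create-option-group call along the following lines matches the console steps above (the name, engine, and version values come from this section, and the description is illustrative):

```shell
aws rds create-option-group ^
    --option-group-name ssrs-se-2017 ^
    --engine-name sqlserver-se ^
    --major-engine-version 14.00 ^
    --option-group-description "SSRS option group for SQL Server SE 2017"
```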
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, then choose Add option.
4. Under Option details, choose SSRS for Option name.
5. Under Option settings, do the following:
a. Enter the port for the SSRS service to listen on. The default is 8443. For a list of allowed values,
see Limitations and recommendations (p. 1577).
b. Enter a value for Max memory.
Max memory specifies the upper threshold above which no new memory allocation requests are
granted to report server applications. The number is a percentage of the total memory of the
DB instance. The allowed values are 10–80.
c. For Security groups, choose the VPC security group to associate with the option. Use the same
security group that is associated with your DB instance.
6. To use SSRS Email to send reports, choose the Configure email delivery options check box under
Email delivery in reporting services, and then do the following:
a. For Sender email address, enter the email address to use in the From field of messages sent by
SSRS Email.
Specify a user account that has permission to send mail from the SMTP server.
b. For SMTP server, specify the SMTP server or gateway to use.
It can be an IP address, the NetBIOS name of a computer on your corporate intranet, or a fully
qualified domain name.
c. For SMTP port, enter the port to use to connect to the mail server. The default is 25.
d. To use SMTP authentication, provide the ARN of the AWS Secrets Manager secret that
contains the SMTP credentials, in the following format:
arn:aws:secretsmanager:Region:AccountId:secret:SecretName-6RandomCharacters
For example:
arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3
For more information on creating the secret, see Using SSRS Email to send
reports (p. 1585).
e. Select the Use Secure Sockets Layer (SSL) check box to encrypt email messages using SSL.
7. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
8. Choose Add option.
CLI
1. Create a JSON file, for example ssrs-option.json, with the following parameters.
• OptionGroupName – The name of option group that you created or chose previously (ssrs-
se-2017 in the following example).
• Port – The port for the SSRS service to listen on. The default is 8443. For a list of allowed
values, see Limitations and recommendations (p. 1577).
• VpcSecurityGroupMemberships – VPC security group memberships for your RDS DB
instance.
• MAX_MEMORY – The upper threshold above which no new memory allocation requests are
granted to report server applications. The number is a percentage of the total memory of the
DB instance. The allowed values are 10–80.
b. (Optional) Set the following parameters to use SSRS Email:
• SMTP_EMAIL_CREDENTIALS_SECRET_ARN – The ARN of the Secrets Manager secret that contains
the SMTP user name and password, in the following format:
arn:aws:secretsmanager:Region:AccountId:secret:SecretName-6RandomCharacters
For more information on creating the secret, see Using SSRS Email to send reports (p. 1585).
• SMTP_USE_ANONYMOUS_AUTHENTICATION – Set to true and don't include
SMTP_EMAIL_CREDENTIALS_SECRET_ARN if you don't want to use authentication.
The following example includes the SSRS Email parameters, using the secret ARN.
{
"OptionGroupName": "ssrs-se-2017",
"OptionsToInclude": [
{
"OptionName": "SSRS",
"Port": 8443,
"VpcSecurityGroupMemberships": ["sg-0abcdef123"],
"OptionSettings": [
{"Name": "MAX_MEMORY","Value": "60"},
{"Name": "SMTP_ENABLE_EMAIL","Value": "true"},
{"Name": "SMTP_SENDER_EMAIL_ADDRESS","Value": "[email protected]"},
{"Name": "SMTP_SERVER","Value": "email-smtp.us-west-2.amazonaws.com"},
{"Name": "SMTP_PORT","Value": "25"},
{"Name": "SMTP_USE_SSL","Value": "true"},
{"Name": "SMTP_EMAIL_CREDENTIALS_SECRET_ARN","Value":
"arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3"}
]
}],
"ApplyImmediately": true
}
Example
For Windows:
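The command body is missing here. Assuming the JSON above was saved as ssrs-option.json (an illustrative file name), the option can be added with the --cli-input-json flag:

```shell
aws rds add-option-to-option-group ^
    --cli-input-json file://ssrs-option.json
```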
Associating your option group with your DB instance
If you use an existing DB instance, it must already have an Active Directory domain and AWS Identity and
Access Management (IAM) role associated with it. If you create a new instance, specify an existing Active
Directory domain and IAM role. For more information, see Working with Active Directory with RDS for
SQL Server (p. 1387).
Console
You can associate your option group with a new or existing DB instance:
• For a new DB instance, associate the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, modify the instance and associate the new option group. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
CLI
You can associate your option group with a new or existing DB instance.
• Specify the same DB engine type and major version as you used when creating the option group.
Example
For Windows:
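The command body is missing here. A create-db-instance call along the following lines illustrates the association; the instance identifier, instance class, credentials, storage size, domain, and IAM role values are placeholders to replace with your own, while the option group name comes from this section:

```shell
aws rds create-db-instance ^
    --db-instance-identifier myssrsinstance ^
    --db-instance-class db.m5.2xlarge ^
    --engine sqlserver-se ^
    --allocated-storage 100 ^
    --master-username admin ^
    --master-user-password mypassword ^
    --option-group-name ssrs-se-2017 ^
    --domain d-1234567890 ^
    --domain-iam-role-name my-directory-iam-role
```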
Example
For Windows:
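The command body is missing here; a modify-db-instance call along the following lines associates the option group from this section with an existing instance (the instance identifier is a placeholder):

```shell
aws rds modify-db-instance ^
    --db-instance-identifier myssrsinstance ^
    --option-group-name ssrs-se-2017 ^
    --apply-immediately
```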
Report server databases
When the SSRS option is added, report server databases are created on your DB instance. RDS owns
and manages these databases, so database operations on them, such as ALTER and DROP, aren't
permitted. However, you can perform read operations on the rdsadmin_ReportServer database.
SSRS log files
For existing SSRS instances, restarting the SSRS service might be necessary to access report server logs.
You can restart the service by updating the SSRS option.
For more information, see Working with Microsoft SQL Server logs (p. 1619).
Using SSL on RDS
For more information on SSL certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL with SQL Server, see Using SSL with a
Microsoft SQL Server DB instance (p. 1456).
Granting access to domain users
To grant a domain user or user group access to the SSRS web portal, run the following stored
procedure.
exec msdb.dbo.rds_msbi_task
@task_type='SSRS_GRANT_PORTAL_PERMISSION',
@ssrs_group_or_username=N'AD_domain\user';
The domain user or user group is granted the RDS_SSRS_ROLE system role. This role has the following
system-level tasks granted to it:
• Run reports
• Manage jobs
• Manage shared schedules
• View shared schedules
The item-level role of Content Manager on the root folder is also granted.
Accessing the web portal
1. Open a browser and go to the following URL.
https://rds_endpoint:port/Reports
• rds_endpoint – The endpoint for the RDS DB instance that you're using with SSRS.
You can find the endpoint on the Connectivity & security tab for your DB instance. For more
information, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
• port – The listener port for SSRS that you set in the SSRS option.
https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/Reports
2. Log in with the credentials for a domain user that you granted access with the
SSRS_GRANT_PORTAL_PERMISSION task.
Deploying reports to SSRS
To deploy reports to SSRS using SQL Server Data Tools (SSDT), make sure of the following:
• The user who launched SSDT has access to the SSRS web portal.
• The TargetServerURL value in the SSRS project properties is set to the HTTPS endpoint of the RDS
DB instance suffixed with ReportServer, for example:
https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/ReportServer
Configuring the report data source
• For RDS for SQL Server DB instances joined to AWS Directory Service for Microsoft Active Directory,
use the fully qualified domain name (FQDN) as the data source name of the connection string. An
example is myssrsinstance.corp-ad.example.com, where myssrsinstance is the DB instance
name and corp-ad.example.com is the fully qualified domain name.
• For RDS for SQL Server DB instances joined to self-managed Active Directory, use . or LocalHost as
the data source name of the connection string.
Using SSRS Email to send reports
To configure SSRS Email, use the SSRS option settings. For more information, see Adding the SSRS
option to your option group (p. 1579).
After configuring SSRS Email, you can subscribe to reports on the report server. For more information,
see Email delivery in Reporting Services in the Microsoft documentation.
Integration with AWS Secrets Manager is required for SSRS Email to function on RDS. To integrate with
Secrets Manager, you create a secret.
Note
If you change the secret later, you also have to update the SSRS option in the option group.
1. Follow the steps in Create a secret in the AWS Secrets Manager User Guide.
• SMTP_USERNAME – Enter a user with permission to send mail from the SMTP server.
• SMTP_PASSWORD – Enter a password for the SMTP user.
c. For Encryption key, don't use the default AWS KMS key. Use your own existing key, or create a
new one.
The KMS key policy must allow the kms:Decrypt action, for example:
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": [
"kms:Decrypt"
],
"Resource": "*"
}
2. Follow the steps in Attach a permissions policy to a secret in the AWS Secrets Manager User
Guide. The permissions policy gives the secretsmanager:GetSecretValue action to the
rds.amazonaws.com service principal.
We recommend that you use the aws:sourceAccount and aws:sourceArn conditions in the
policy to avoid the confused deputy problem. Use your AWS account for aws:sourceAccount and
the option group ARN for aws:sourceArn. For more information, see Preventing cross-service
confused deputy problems (p. 2640).
{
"Version" : "2012-10-17",
"Statement" : [ {
"Effect" : "Allow",
"Principal" : {
"Service" : "rds.amazonaws.com"
},
"Action" : "secretsmanager:GetSecretValue",
"Resource" : "*",
"Condition" : {
"StringEquals" : {
"aws:sourceAccount" : "123456789012"
},
"ArnLike" : {
"aws:sourceArn" : "arn:aws:rds:us-west-2:123456789012:og:ssrs-se-2017"
}
}
} ]
}
For more examples, see Permissions policy examples for AWS Secrets Manager in the AWS Secrets
Manager User Guide.
Revoking system-level permissions
To revoke a domain user's or user group's access to the SSRS web portal, run the following stored
procedure.
exec msdb.dbo.rds_msbi_task
@task_type='SSRS_REVOKE_PORTAL_PERMISSION',
@ssrs_group_or_username=N'AD_domain\user';
Doing this deletes the user from the RDS_SSRS_ROLE system role. It also deletes the user from the
Content Manager item-level role if the user has it.
Monitoring the status of a task
To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.
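The example is missing from the text; assuming the rds_fn_task_status function that RDS provides for task monitoring, the call looks like this:

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```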
To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.
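The example is missing from the text; assuming the rds_fn_task_status function, with 42 standing in for the task ID returned when you created the task:

```sql
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```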
task_type – For SSRS, tasks can have the following task types:
• SSRS_GRANT_PORTAL_PERMISSION
• SSRS_REVOKE_PORTAL_PERMISSION
last_updated – The date and time that the task status was last updated.
created_at – The date and time that the task was created.
Turning off SSRS
To turn off SSRS, remove the SSRS option from its option group. Removing the option doesn't delete the
SSRS databases. For more information, see Deleting the SSRS databases (p. 1589).
You can turn SSRS on again by adding back the SSRS option. If you have also deleted the SSRS
databases, readding the option on the same DB instance creates new report server databases.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSRS option (ssrs-se-2017 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSRS for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.
CLI
Example
For Linux, macOS, or Unix:
aws rds remove-option-from-option-group \
    --option-group-name ssrs-se-2017 \
    --options SSRS \
    --apply-immediately
For Windows:
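The Windows form of the same remove-option-from-option-group call uses caret (^) line continuations:

```shell
aws rds remove-option-from-option-group ^
    --option-group-name ssrs-se-2017 ^
    --options SSRS ^
    --apply-immediately
```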
Deleting the SSRS databases
To delete the report server databases, be sure to remove the SSRS option first.
exec msdb.dbo.rds_drop_ssrs_databases
Microsoft Distributed Transaction Coordinator
In RDS, starting with SQL Server 2012 (version 11.00.5058.0.v1 and later), all editions of RDS for SQL
Server support distributed transactions. The support is provided using Microsoft Distributed Transaction
Coordinator (MSDTC). For in-depth information about MSDTC, see Distributed Transaction Coordinator in
the Microsoft documentation.
Contents
• Limitations (p. 1590)
• Enabling MSDTC (p. 1591)
• Creating the option group for MSDTC (p. 1591)
• Adding the MSDTC option to the option group (p. 1592)
• Creating the parameter group for MSDTC (p. 1594)
• Modifying the parameter for MSDTC (p. 1594)
• Associating the option group and parameter group with the DB instance (p. 1595)
• Using distributed transactions (p. 1597)
• Using XA transactions (p. 1597)
• Using transaction tracing (p. 1598)
• Modifying the MSDTC option (p. 1599)
• Disabling MSDTC (p. 1599)
• Troubleshooting MSDTC for RDS for SQL Server (p. 1600)
Limitations
The following limitations apply to using MSDTC on RDS for SQL Server:
• MSDTC isn't supported on instances using SQL Server Database Mirroring. For more information, see
Transactions - availability groups and database mirroring.
• The in-doubt xact resolution parameter must be set to 1 or 2. For more information, see
Modifying the parameter for MSDTC (p. 1594).
• MSDTC requires all hosts participating in distributed transactions to be resolvable using their host
names. RDS automatically maintains this functionality for domain-joined instances. However, for
standalone instances make sure to configure the DNS server manually.
• Java Database Connectivity (JDBC) XA transactions are supported for SQL Server 2017 version
14.00.3223.3 and higher, and SQL Server 2019.
• Distributed transactions that depend on client dynamic link libraries (DLLs) on RDS instances aren't
supported.
• Using custom XA dynamic link libraries isn't supported.
Enabling MSDTC
Use the following process to enable MSDTC for your DB instance:
Console
The following procedure creates an option group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group pane, do the following:
a. For Name, enter a name for the option group that is unique within your AWS account, such as
msdtc-se-2016. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as MSDTC option group
for SQL Server SE 2016. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 13.00.
5. Choose Create.
CLI
The following example creates an option group for SQL Server Standard Edition 2016.
Example
For Windows:
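The command body is missing here; a create-option-group call along the following lines matches the console steps above (the name, engine, and version values come from this section, and the description is illustrative):

```shell
aws rds create-option-group ^
    --option-group-name msdtc-se-2016 ^
    --engine-name sqlserver-se ^
    --major-engine-version 13.00 ^
    --option-group-description "MSDTC option group for SQL Server SE 2016"
```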
• Port – The port that you use to access MSDTC. Allowed values are 1150–49151 except for 1234, 1434,
3260, 3343, 3389, and 47001. The default value is 5000.
Make sure that the port you want to use is enabled in your firewall rules. Also, make sure as needed
that this port is enabled in the inbound and outbound rules for the security group associated with your
DB instance. For more information, see Can't connect to Amazon RDS DB instance (p. 2727).
• Security groups – The VPC security group memberships for your RDS DB instance.
• Authentication type – The authentication mode between hosts. The following authentication types
are supported:
• Mutual – The RDS instances are mutually authenticated to each other using integrated
authentication. If this option is selected, all instances associated with this option group must be
domain-joined.
• None – No authentication is performed between hosts. We don't recommend using this mode in
production environments.
• Transaction log size – The size of the MSDTC transaction log. Allowed values are 4–1024 MB. The
default size is 4 MB.
• Enable inbound connections – Whether to allow inbound MSDTC connections to instances associated
with this option group.
• Enable outbound connections – Whether to allow outbound MSDTC connections from instances
associated with this option group.
• Enable XA – Whether to allow XA transactions. For more information on the XA protocol, see XA
specification.
• Enable SNA LU – Whether to allow the SNA LU protocol to be used for distributed transactions. For
more information on SNA LU protocol support, see Managing IBM CICS LU 6.2 transactions in the
Microsoft documentation.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created.
a. For Port, enter the port number for accessing MSDTC. The default is 5000.
b. For Security groups, choose the VPC security group to associate with the option.
c. For Authentication type, choose Mutual or None.
d. For Transaction log size, enter a value from 4–1024. The default is 4.
7. Under Additional configuration, do the following:
a. For Connections, as needed choose Enable inbound connections and Enable outbound
connections.
b. For Allowed protocols, as needed choose Enable XA and Enable SNA LU.
8. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
9. Choose Add option.
CLI
1. Create a JSON file, for example msdtc-option.json, with the following required parameters.
{
"OptionGroupName":"msdtc-se-2016",
"OptionsToInclude": [
{
"OptionName":"MSDTC",
"Port":5000,
"VpcSecurityGroupMemberships":["sg-0abcdef123"],
"OptionSettings":[{"Name":"AUTHENTICATION","Value":"MUTUAL"},
{"Name":"TRANSACTION_LOG_SIZE","Value":"4"}]
}],
"ApplyImmediately": true
}
Example
For Windows:
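The CLI command itself is not shown above. Assuming the msdtc-option.json file from step 1 is in the current directory, step 2 would be a call to the add-option-to-option-group command, sketched here (ApplyImmediately is already set in the JSON file, so no separate flag is needed):

```sh
aws rds add-option-to-option-group ^
    --cli-input-json file://msdtc-option.json
```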
No reboot is required.
Console
The following example creates a parameter group for SQL Server Standard Edition 2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:
a. For Parameter group family, choose sqlserver-se-13.0.
b. For Group name, enter a name such as msdtc-sqlserver-se-13.
c. For Description, enter a short description.
5. Choose Create.
CLI
The following example creates a parameter group for SQL Server Standard Edition 2016.
Example
For Windows:
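A sketch of the command, assuming the parameter group name msdtc-sqlserver-se-13 that is used later in this section; sqlserver-se-13.0 is the parameter group family for SQL Server 2016 Standard Edition, and the description text is illustrative:

```sh
aws rds create-db-parameter-group ^
    --db-parameter-group-name msdtc-sqlserver-se-13 ^
    --db-parameter-group-family sqlserver-se-13.0 ^
    --description "MSDTC parameter group for SQL Server SE 2016"
```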
For MSDTC, set the in-doubt xact resolution parameter to one of the following:
• 1 – Presume commit. Any MSDTC in-doubt transactions are presumed to have committed.
• 2 – Presume abort. Any MSDTC in-doubt transactions are presumed to have stopped.
For more information, see in-doubt xact resolution server configuration option in the Microsoft
documentation.
Console
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as msdtc-sqlserver-se-13.
4. Under Parameters, filter the parameter list for xact.
5. Choose in-doubt xact resolution.
6. Choose Edit parameters.
7. Enter 1 or 2.
8. Choose Save changes.
CLI
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.
Example
For Linux, macOS, or Unix:
For Windows:
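A sketch of the Windows variant of the command, assuming the parameter group created earlier. The value 1 (presume commit) is used here, and ApplyMethod=immediate is an assumption based on in-doubt xact resolution being a dynamic parameter:

```sh
aws rds modify-db-parameter-group ^
    --db-parameter-group-name msdtc-sqlserver-se-13 ^
    --parameters "ParameterName='in-doubt xact resolution',ParameterValue=1,ApplyMethod=immediate"
```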
Associating the option group and parameter group with the DB instance
You can use the AWS Management Console or the AWS CLI to associate the MSDTC option group and
parameter group with the DB instance.
Console
You can associate the MSDTC option group and parameter group with a new or existing DB instance.
• For a new DB instance, associate them when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate them by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
Note
If you use an existing domain-joined DB instance, it must already have an Active Directory
domain and AWS Identity and Access Management (IAM) role associated with it. If you create
a new domain-joined instance, specify an existing Active Directory domain and IAM role.
For more information, see Working with AWS Managed Active Directory with RDS for SQL
Server (p. 1401).
CLI
You can associate the MSDTC option group and parameter group with a new or existing DB instance.
Note
If you use an existing domain-joined DB instance, it must already have an Active Directory
domain and IAM role associated with it. If you create a new domain-joined instance, specify an
existing Active Directory domain and IAM role. For more information, see Working with AWS
Managed Active Directory with RDS for SQL Server (p. 1401).
To create a DB instance with the MSDTC option group and parameter group
• Specify the same DB engine type and major version as you used when creating the option group.
Example
For Windows:
--db-parameter-group-name msdtc-sqlserver-se-13
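The surviving fragment above suggests a create-db-instance call similar to the following sketch. The instance identifier, class, credentials, and storage values are placeholders; the option group and parameter group names come from the earlier examples:

```sh
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-instance-class db.m5.large ^
    --engine sqlserver-se ^
    --master-username admin ^
    --master-user-password mypassword ^
    --allocated-storage 100 ^
    --license-model license-included ^
    --option-group-name msdtc-se-2016 ^
    --db-parameter-group-name msdtc-sqlserver-se-13
```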
To modify a DB instance and associate the MSDTC option group and parameter group
Example
For Windows:
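A sketch of the modify command, using a placeholder instance identifier and the option group and parameter group names from the earlier examples:

```sh
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --option-group-name msdtc-se-2016 ^
    --db-parameter-group-name msdtc-sqlserver-se-13 ^
    --apply-immediately
```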
In this case, promotion is automatic and doesn't require any intervention on your part. If there's only
one resource manager within the transaction, no promotion is performed. For more information about
implicit transaction scopes, see Implementing an implicit transaction using transaction scope in the
Microsoft documentation.
Using XA transactions
Starting with RDS for SQL Server 2017 version 14.00.3223.3, you can control distributed transactions
using JDBC. When you set the Enable XA option setting to true in the MSDTC option, RDS
automatically enables JDBC transactions and grants the SqlJDBCXAUser role to the guest user. This
allows executing distributed transactions through JDBC. For more information, including a code example,
see Understanding XA transactions in the Microsoft documentation.
To start a new transaction tracing session, run the following example statement.
Note
Only one transaction tracing session can be active at one time. If a new tracing session START
command is issued while a tracing session is active, an error is returned and the active tracing
session remains unchanged.
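A sketch of the start and stop statements, assuming the msdb.dbo.rds_msdtc_transaction_tracing stored procedure and its optional trace flags (verify the exact parameter names against the current RDS documentation):

```sql
-- Start a tracing session; the optional flags shown are assumptions
exec msdb.dbo.rds_msdtc_transaction_tracing 'START',
    @traceall = 0,
    @traceaborted = 1,
    @tracelonglived = 1;

-- Stop the active tracing session and save the trace data to the log directory
exec msdb.dbo.rds_msdtc_transaction_tracing 'STOP';
```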
This statement stops the active transaction tracing session and saves the transaction trace data into the
log directory on the RDS DB instance. The first row of the output contains the overall result, and the
following lines indicate details of the operation.
You can use the detailed information to query the name of the generated log file. For more information
about downloading log files from the RDS DB instance, see Monitoring Amazon RDS log files (p. 895).
The trace session logs remain on the instance for 35 days. Any older trace session logs are automatically
deleted.
To trace the status of a transaction tracing session, run the following statement.
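Assuming the same stored procedure, the status check would look like the following sketch:

```sql
exec msdb.dbo.rds_msdtc_transaction_tracing 'STATUS';
```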
This statement outputs the following as separate rows of the result set.
OK
SessionStatus: <Started|Stopped>
TraceAll: <True|False>
TraceAborted: <True|False>
TraceLongLived: <True|False>
The first line indicates the overall result of the operation: OK, or ERROR with details, if applicable. The
subsequent lines indicate details about the tracing session status.
Disabling MSDTC
To disable MSDTC, remove the MSDTC option from its option group.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the MSDTC option (msdtc-se-2016 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose MSDTC for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.
CLI
Example
For Windows:
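A sketch of the removal command; remove-option-from-option-group is the standard AWS CLI command for this, with the option group name from the earlier examples:

```sh
aws rds remove-option-from-option-group ^
    --option-group-name msdtc-se-2016 ^
    --options MSDTC ^
    --apply-immediately
```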
Verify the following:
• The inbound rules for the security group associated with the DB instance are configured correctly. For
more information, see Can't connect to Amazon RDS DB instance (p. 2727).
• Your client computer is configured correctly.
• The MSDTC firewall rules on your client computer are enabled.
Or, in Server Manager, choose Tools, and then choose Component Services.
2. Expand Component Services, expand Computers, expand My Computer, and then expand
Distributed Transaction Coordinator.
3. Open the context (right-click) menu for Local DTC and choose Properties.
4. Choose the Security tab.
5. Choose one of the following authentication options:
• Mutual Authentication Required – The client machine is joined to the same domain as other
nodes participating in the distributed transaction, or there is a trust relationship configured
between domains.
• No Authentication Required – All other cases.
7. Choose OK to save your changes.
8. If prompted to restart the service, choose Yes.
Or, in Server Manager, choose Tools, and then choose Windows Firewall with Advanced Security.
Note
Depending on your operating system, Windows Firewall might be called Windows Defender
Firewall.
2. Choose Inbound Rules in the left pane.
3. Enable the following firewall rules, if they are not already enabled:
Common DBA tasks for SQL Server
Topics
• Accessing the tempdb database on Microsoft SQL Server DB instances on Amazon RDS (p. 1603)
• Analyzing your database workload on an Amazon RDS for SQL Server DB instance with Database
Engine Tuning Advisor (p. 1605)
• Collations and character sets for Microsoft SQL Server (p. 1607)
• Creating a database user (p. 1611)
• Determining a recovery model for your Microsoft SQL Server database (p. 1611)
• Determining the last failover time (p. 1612)
• Disabling fast inserts during bulk loading (p. 1612)
• Dropping a Microsoft SQL Server database (p. 1613)
• Renaming a Microsoft SQL Server database in a Multi-AZ deployment (p. 1613)
• Resetting the db_owner role password (p. 1613)
• Restoring license-terminated DB instances (p. 1614)
• Transitioning a Microsoft SQL Server database from OFFLINE to ONLINE (p. 1614)
• Using change data capture (p. 1614)
• Using SQL Server Agent (p. 1617)
• Working with Microsoft SQL Server logs (p. 1619)
• Working with trace and dump files (p. 1620)
Accessing the tempdb database
The master user for your DB instance is granted CONTROL access to tempdb so that this user can modify
the tempdb database options. The master user isn't the database owner of the tempdb database. If
necessary, the master user can grant CONTROL access to other users so that they can also modify the
tempdb database options.
Note
You can't run Database Console Commands (DBCC) on the tempdb database.
Database options such as the maximum file size options are persistent after you restart your DB instance.
You can modify the database options to optimize performance when importing data, and to prevent
running out of storage.
The following example demonstrates setting the size to 100 GB and file growth to 10 percent.
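A sketch of the statement, assuming the tempdb data file's default logical name tempdev:

```sql
alter database tempdb
modify file (NAME = 'tempdev', MAXSIZE = 100GB, FILEGROWTH = 10%);
```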
• @target_size – int, null (optional). The new size for the file, in megabytes.
The following example gets the names of the files for the tempdb database.
use tempdb;
GO
select name, physical_name from sys.database_files;
GO
The following example shrinks a tempdb database file named test_file, and requests a new size of 10
megabytes:
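A sketch using the rds_shrink_tempdbfile procedure; the file name test_file comes from the preceding sentence:

```sql
exec msdb.dbo.rds_shrink_tempdbfile
    @temp_filename = N'test_file',
    @target_size = 10;
```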
The following example demonstrates setting the SIZE property to 1024 MB.
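A sketch of the statement, again assuming the logical file name tempdev:

```sql
alter database tempdb
modify file (NAME = 'tempdev', SIZE = 1024MB);
```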
The tempdb database can't be replicated. No data that you store on your primary instance is replicated
to your secondary instance.
If you modify any database options on the tempdb database, you can capture those changes on the
secondary by using one of the following methods:
• First modify your DB instance and turn Multi-AZ off, then modify tempdb, and finally turn Multi-AZ
back on. This method doesn't involve any downtime.
For more information, see Modifying an Amazon RDS DB instance (p. 401).
• First modify tempdb in the original primary instance, then fail over manually, and finally modify
tempdb in the new primary instance. This method involves downtime.
Analyzing database workload with Database Engine Tuning Advisor
This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process
for capturing a workload because Amazon RDS restricts host access to the SQL Server instance. For more
information, see Database Engine Tuning Advisor in the Microsoft documentation.
To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set
of Transact-SQL statements that run against a database or databases that you want to tune. Database
Engine Tuning Advisor uses trace files, trace tables, Transact-SQL scripts, or XML files as workload input
when tuning databases. When working with Amazon RDS, a workload can be a file on a client computer
or a database table on an Amazon RDS for SQL Server DB instance accessible to your client computer. The file or
the table must contain queries against the databases you want to tune in a format suitable for replay.
For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate
a workload file or table by performing a trace against your DB instance. While a trace is running, you can
either simulate a load on your DB instance or run your applications with a normal load.
There are two types of traces: client-side and server-side. A client-side trace is easier to set up and you
can watch trace events being captured in real-time in SQL Server Profiler. A server-side trace is more
complex to set up and requires some Transact-SQL scripting. In addition, because the trace is written to
a file on the Amazon RDS DB instance, storage space is consumed by the trace. It is important to keep
track of how much storage space a running server-side trace uses, because the DB instance could enter a
storage-full state and would no longer be available if it runs out of storage space.
For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server
Profiler, you can then generate the workload file by saving the trace to either a file on your local
computer or in a database table on a DB instance that is available to your client computer. The main
disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy
loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning
Advisor. If you need to run a trace under heavy loads and you want to ensure that it captures every query
during a trace session, you should use a server-side trace.
For a server-side trace, you must get the trace files on the DB instance into a suitable workload file or
you can save the trace to a table on the DB instance after the trace completes. You can use the SQL
Server Profiler to save the trace to a file on your local computer or have the Tuning Advisor read from the
trace table on the DB instance.
1. Start SQL Server Profiler. It is installed in the Performance Tools folder of your SQL Server instance
folder. You must load or define a trace definition template to start a client-side trace.
2. In the SQL Server Profiler File menu, choose New Trace. In the Connect to Server dialog box, enter
the DB instance endpoint, port, master user name, and password of the database you would like to
run a trace on.
3. In the Trace Properties dialog box, enter a trace name and choose a trace definition template. A
default template, TSQL_Replay, ships with the application. You can edit this template to define your
trace. Edit events and event information under the Events Selection tab of the Trace Properties
dialog box.
For more information about trace definition templates and using the SQL Server Profiler to specify a
client-side trace, see Database Engine Tuning Advisor in the Microsoft documentation.
4. Start the client-side trace and watch SQL queries in real-time as they run against your DB instance.
5. Select Stop Trace from the File menu when you have completed the trace. Save the results as a file
or as a trace table on your DB instance.
The following is an abridged example script that starts a server-side trace and captures details to a
workload file. The trace initially saves to the file RDSTrace.trc in the D:\RDSDBDATA\Log directory and
rolls over every 100 MB, so subsequent trace files are named RDSTrace_1.trc, RDSTrace_2.trc, and so on.
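The script itself is not reproduced above. An abridged sketch using the standard sp_trace_create and sp_trace_setstatus procedures (option value 2 enables file rollover; event and column registration via sp_trace_setevent is omitted):

```sql
declare @rc int, @TraceID int, @maxfilesize bigint;
set @maxfilesize = 100;  -- roll over every 100 MB

-- Create the trace; SQL Server appends the .trc extension to the file name
exec @rc = sp_trace_create @TraceID output, 2,
    N'D:\RDSDBDATA\Log\RDSTrace', @maxfilesize, NULL;

-- ... register events and columns with sp_trace_setevent here ...

-- Start the trace
exec sp_trace_setstatus @TraceID, 1;
```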
The following example is a script that stops a trace. Note that a trace created by the previous script
continues to run until you explicitly stop the trace or the process runs out of disk space.
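A sketch of stopping and closing a server-side trace; the trace ID here is a placeholder that you would look up in sys.traces:

```sql
-- Replace 2 with the ID of your trace (see sys.traces)
exec sp_trace_setstatus @traceid = 2, @status = 0;  -- stop the trace
exec sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete its definition
```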
You can save server-side trace results to a database table and use the database table as the workload
for the Tuning Advisor by using the fn_trace_gettable function. The following commands load the
results of all files named RDSTrace.trc in the D:\rdsdbdata\Log directory, including all rollover files like
RDSTrace_1.trc, into a table named RDSTrace in the current database.
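A sketch of loading the trace files into a table with fn_trace_gettable; passing default as the file number reads the named file and all of its rollover files:

```sql
select * into RDSTrace
from fn_trace_gettable(N'D:\rdsdbdata\Log\RDSTrace.trc', default);
```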
To save a specific rollover file to a table, for example the RDSTrace_1.trc file, specify the name of the
rollover file and substitute 1 instead of default as the last parameter to fn_trace_gettable.
The following code example demonstrates using the dta.exe command line utility against an Amazon
RDS DB instance with an endpoint of dta.cnazcmklsdei.us-east-1.rds.amazonaws.com. The
example includes the master user name admin and the master user password test. The example
database to tune is named RDSDTA, and the input workload file on the local machine is named
C:\RDSTrace.trc. The example command line code also specifies a trace session named RDSTrace1 and
output files on the local machine: RDSTrace.sql for the SQL output script, RDSTrace.txt for a result
file, and RDSTrace.xml for an XML file of the analysis. There is also an error table named
RDSTraceErrors specified on the RDSDTA database.
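A sketch of the command line, built from the values in the preceding paragraph (check the flag spellings against the dta utility reference):

```sh
dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -if C:\RDSTrace.trc -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors
```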
Here is the same example command line code except the input workload is a table on the remote
Amazon RDS instance named RDSTrace which is on the RDSDTA database.
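The same sketch with a table workload, assuming -it is the input-table flag (again, verify against the dta utility reference):

```sh
dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -it RDSDTA.dbo.RDSTrace -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors
```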
For a full list of dta utility command-line parameters, see dta Utility in the Microsoft documentation.
Collations and character sets
Topics
• Server-level collation for Microsoft SQL Server (p. 1607)
• Database-level collation for Microsoft SQL Server (p. 1610)
The default server-level collation for Amazon RDS for SQL Server is SQL_Latin1_General_CP1_CI_AS.
The server collation is applied by default to all databases and database objects.
Note
You can't change the collation when you restore from a DB snapshot.
• If you're using the Amazon RDS console, when creating a new DB instance choose Additional
configuration, then enter the collation in the Collation field. For more information, see Creating an
Amazon RDS DB instance (p. 300).
• If you're using the AWS CLI, use the --character-set-name option with the create-db-instance
command. For more information, see create-db-instance.
• If you're using the Amazon RDS API, use the CharacterSetName parameter with the
CreateDBInstance operation. For more information, see CreateDBInstance.
For example, the following query would change the default collation for the AccountName column to
Mohawk_100_CI_AS
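The query is not shown above. A sketch, where the table name dbo.Accounts and the column type nvarchar(100) are hypothetical:

```sql
alter table dbo.Accounts
alter column AccountName nvarchar(100) collate Mohawk_100_CI_AS;
```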
The Microsoft SQL Server DB engine supports Unicode through the built-in NCHAR, NVARCHAR, and
NTEXT data types. For example, if you need CJK support, use these Unicode data types for character
storage and override the default server collation when creating your databases and tables. For more
information about collation and Unicode support for SQL Server, see the Microsoft documentation.
Creating a database user
For an example of adding a database user to a role, see Adding a user to the SQLAgentUser
role (p. 1618).
Note
If you get permission errors when adding a user, you can restore privileges by modifying the
DB instance master user password. For more information, see Resetting the db_owner role
password (p. 1613).
Determining a recovery model for your Microsoft SQL Server database
It's important to understand the consequences before making a change to one of these settings. Each
setting can affect the others. For example:
• If you change a database's recovery model to SIMPLE or BULK_LOGGED while backup retention is
enabled, Amazon RDS resets the recovery model to FULL within five minutes. This also results in RDS
taking a snapshot of the DB instance.
• If you set backup retention to 0 days, RDS sets the recovery model to SIMPLE.
• If you change a database's recovery model from SIMPLE to any other option while backup retention is
set to 0 days, RDS resets the recovery model to SIMPLE.
Important
Never change the recovery model on Multi-AZ instances, even if it seems that you can do so (for
example, by using ALTER DATABASE). Backup retention, and therefore the FULL recovery model, is
required for Multi-AZ. If you alter the recovery model, RDS immediately changes it back to FULL.
This automatic reset forces RDS to completely rebuild the mirror. During this rebuild, the
availability of the database is degraded for about 30-90 minutes until the mirror is ready for
failover. The DB instance also experiences performance degradation in the same way it does
during a conversion from Single-AZ to Multi-AZ. How long performance is degraded depends on
the database storage size—the bigger the stored database, the longer the degradation.
For more information on SQL Server recovery models, see Recovery models (SQL Server) in the Microsoft
documentation.
Determining the last failover time
execute msdb.dbo.rds_failover_time;
• errorlog_available_from – Shows the time from when error logs are available in the log directory.
Note
The stored procedure searches all of the available SQL Server error logs in the log directory to
retrieve the most recent failover time. If the failover messages have been overwritten by SQL
Server, then the procedure doesn't retrieve the failover time.
This example shows the output when there is no recent failover in the error logs. No failover has
happened since 2020-04-29 23:59:00.01.
errorlog_available_from recent_failover_time
This example shows the output when there is a failover in the error logs. The most recent failover was at
2020-05-05 18:57:51.89.
errorlog_available_from recent_failover_time
Disabling fast inserts during bulk loading
However, with fast inserts, bulk loads with small batch sizes can lead to an increased amount of unused
space consumed by objects. If increasing the batch size isn't feasible, enabling trace flag 692 can help
reduce unused reserved space, at the expense of performance. Enabling this trace flag disables fast
inserts while bulk loading data into heap or clustered indexes.
You enable trace flag 692 as a startup parameter using DB parameter groups. For more information, see
Working with parameter groups (p. 347).
Trace flag 692 is supported for Amazon RDS on SQL Server 2016 and later. For more information on
trace flags, see DBCC TRACEON - trace flags in the Microsoft documentation.
Dropping a Microsoft SQL Server database
--replace your-database-name with the name of the database you want to drop
EXECUTE msdb.dbo.rds_drop_database N'your-database-name'
Note
Use straight single quotes in the command. Smart quotes will cause an error.
After you use this procedure to drop the database, Amazon RDS drops all existing connections to the
database and removes the database's backup history.
Renaming a Microsoft SQL Server database in a Multi-AZ deployment
For more information, see Adding Multi-AZ to a Microsoft SQL Server DB instance (p. 1451).
Note
If your instance doesn't use Multi-AZ, you don't need to change any settings before or after
running rdsadmin.dbo.rds_modify_db_name.
Restoring license-terminated DB instances
You can restore from a snapshot of either Standard Edition or Enterprise Edition to either Standard
Edition or Enterprise Edition.
To restore from a SQL Server snapshot after Amazon RDS has created a final snapshot of
your instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the snapshot of your SQL Server DB instance. Amazon RDS creates a final snapshot of your
DB instance. The name of the terminated instance snapshot is in the format instance_name-
final-snapshot. For example, if your DB instance name is mytest.cdxgahslksma.us-
east-1.rds.com, the final snapshot is called mytest-final-snapshot and is located in the
same AWS Region as the original DB instance.
4. For Actions, choose Restore Snapshot.
For more information about restoring from a snapshot, see Restoring from a DB snapshot (p. 615).
Using change data capture
Before you use CDC with your Amazon RDS DB instances, enable it in the database by running
msdb.dbo.rds_cdc_enable_db. You must have master user privileges to enable CDC in the Amazon
RDS DB instance. After CDC is enabled, any user who is db_owner of that database can enable or disable
CDC on tables in that database.
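A sketch of enabling CDC, first at the database level with the RDS procedure named above, then on a hypothetical table dbo.MyTable with the native SQL Server procedure:

```sql
-- Enable CDC for the database (requires master user privileges)
exec msdb.dbo.rds_cdc_enable_db 'myDatabase';

-- Enable CDC on a table within that database
use myDatabase;
go
exec sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'MyTable',
    @role_name = NULL;
```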
Important
During restores, CDC will be disabled. All of the related metadata is automatically removed from
the database. This applies to snapshot restores, point-in-time restores, and SQL Server Native
restores from S3. After performing one of these types of restores, you can re-enable CDC and
re-specify tables to track.
Topics
• Tracking tables with change data capture (p. 1615)
• Change data capture jobs (p. 1616)
• Change data capture for Multi-AZ instances (p. 1616)
For more information on CDC tables, functions, and stored procedures in SQL Server documentation, see
the following:
To control the behavior of CDC in a database, use native SQL Server procedures such as
sp_cdc_enable_table and sp_cdc_start_job. To change CDC job parameters, such as maxtrans and
maxscans, you can use sp_cdc_change_job.
To get more information about the CDC jobs, you can query the following dynamic management
views:
• sys.dm_cdc_errors
• sys.dm_cdc_log_scan_sessions
• sysjobs
• sysjobhistory
Although this process runs quickly, it's still possible that the CDC jobs might run before RDS can correct
them. Here are three ways to force parameters to be consistent between primary and secondary replicas:
• Use the same job parameters for all the databases that have CDC enabled.
• Before you change the CDC job configuration, convert the Multi-AZ instance to Single-AZ.
• Manually transfer the parameters whenever you change them on the principal.
To view and define the CDC parameters that are used to recreate the CDC jobs after a failover, use
rds_show_configuration and rds_set_configuration.
The following example returns the value set for cdc_capture_maxtrans. For any parameter that is set
to RDS_DEFAULT, RDS automatically configures the value.
-- Show configuration for each parameter on either primary and secondary replicas.
exec rdsadmin.dbo.rds_show_configuration 'cdc_capture_maxtrans';
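A sketch of the corresponding set call; the value 1000 is illustrative:

```sql
-- Set cdc_capture_maxtrans; RDS applies it when recreating CDC jobs after failover
exec rdsadmin.dbo.rds_set_configuration 'cdc_capture_maxtrans', 1000;
```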
To set the CDC job parameters on the principal, use sys.sp_cdc_change_job instead.
Using SQL Server Agent
When you create a SQL Server DB instance, the master user is enrolled in the SQLAgentUserRole role.
SQL Server Agent can run a job on a schedule, in response to a specific event, or on demand. For more
information, see SQL Server Agent in the Microsoft documentation.
Note
Avoid scheduling jobs to run during the maintenance and backup windows for your DB instance.
The maintenance and backup processes that are launched by AWS could interrupt a job or cause
it to be canceled.
In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the
secondary host when the job replication feature is turned on. For more information, see Turning
on SQL Server Agent job replication (p. 1617).
Multi-AZ deployments have a limit of 100 SQL Server Agent jobs. If you need a higher limit,
request an increase by contacting AWS Support. Open the AWS Support Center page, sign in
if necessary, and choose Create case. Choose Service limit increase. Complete and submit the
form.
To view the history of an individual SQL Server Agent job in SQL Server Management Studio (SSMS),
open Object Explorer, right-click the job, and then choose View History.
Because SQL Server Agent is running on a managed host in a DB instance, some actions aren't supported:
• Running replication jobs and running command-line scripts by using ActiveX, Windows command shell,
or Windows PowerShell aren't supported.
• You can't manually start, stop, or restart SQL Server Agent.
• Email notifications through SQL Server Agent aren't available from a DB instance.
• SQL Server Agent alerts and operators aren't supported.
• Using SQL Server Agent to create backups isn't supported. Use Amazon RDS to back up your DB
instance.
You can run the stored procedure on all SQL Server versions supported by Amazon RDS for SQL Server.
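The stored procedure itself is not named above. A sketch assuming rds_set_system_database_sync_objects, the RDS procedure for turning on SQL Server Agent job replication (verify the name against the current RDS documentation):

```sql
exec msdb.dbo.rds_set_system_database_sync_objects @object_types = 'SQLAgentJob';
```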
Jobs in the following categories are replicated:
• [Uncategorized (Local)]
• [Uncategorized (Multi-Server)]
• [Uncategorized]
• Data Collector
• Database Engine Tuning Advisor
• Database Maintenance
• Full-Text
Only jobs that use T-SQL job steps are replicated. Jobs with step types such as SQL Server Integration
Services (SSIS), SQL Server Reporting Services (SSRS), Replication, and PowerShell aren't replicated. Jobs
that use Database Mail and server-level objects aren't replicated.
Important
The primary host is the source of truth for replication. Before turning on job replication, make
sure that your SQL Server Agent jobs are on the primary. If you don't do this, it could lead to the
deletion of your SQL Server Agent jobs if you turn on the feature when newer jobs are on the
secondary host.
You can use the following function to confirm whether replication is turned on.
The T-SQL query returns the following if SQL Server Agent jobs are replicating. If they're not replicating,
it returns nothing for object_class.
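A sketch of the check, assuming the rds_fn_get_system_database_sync_objects function:

```sql
select * from msdb.dbo.rds_fn_get_system_database_sync_objects();
```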
You can use the following function to find the last time objects were synchronized in UTC time.
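A sketch, assuming the rds_fn_server_object_last_sync_time function:

```sql
select * from msdb.dbo.rds_fn_server_object_last_sync_time();
```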
For example, suppose that you modify a SQL Server Agent job at 01:00. You expect the most recent
synchronization time to be after 01:00, indicating that synchronization has taken place.
After synchronization, the values returned for date_created and date_modified on the secondary
node are expected to match.
For example, suppose that your master user name is admin and you want to give access to SQL Server
Agent to a user named theirname with a password theirpassword. In that case, you can use the
following procedure.
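A sketch of the procedure using standard T-SQL; the login and password are the placeholders from the preceding sentence:

```sql
-- Create the login, create a matching user in msdb, and grant the Agent role
create login theirname with password = 'theirpassword';
go
use msdb;
go
create user theirname for login theirname;
go
alter role SQLAgentUserRole add member theirname;
go
```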
You can't use SSMS to delete SQL Server Agent jobs. If you try to do so, you get an error message similar
to the following:
As a managed service, RDS is restricted from running procedures that access the Windows registry. When
you use SSMS, it tries to run a process (xp_regread) for which RDS isn't authorized.
Note
On RDS for SQL Server, only members of the sysadmin role are allowed to update or delete jobs
owned by a different login.
Working with Microsoft SQL Server logs
Only the latest log is active for watching. For example, suppose you have the following logs:
Only log/ERROR, as the most recent log, is being actively updated. You can choose to watch others, but
they are static and will not update.
• @index – the version of the log to retrieve. The default value is 0, which retrieves the current error log.
Specify 1 to retrieve the previous log, specify 2 to retrieve the one before that, and so on.
• @type – the type of log to retrieve. Specify 1 to retrieve an error log. Specify 2 to retrieve an agent
log.
Example
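A sketch using the rds_read_error_log procedure with the parameters described above; this retrieves the current error log:

```sql
exec rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;
```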
For more information on SQL Server errors, see Database engine errors in the Microsoft documentation.
Working with trace and dump files
To view the current trace and dump file retention period, use the rds_show_configuration
procedure, as shown in the following example.
exec rdsadmin..rds_show_configuration;
To modify the retention period for trace files, use the rds_set_configuration procedure and set the
tracefile retention in minutes. The following example sets the trace file retention period to 24
hours.
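The example isn't shown above. A sketch, assuming rds_set_configuration takes the setting name and a value in minutes:

```sql
-- 24 hours = 1440 minutes
exec rdsadmin..rds_set_configuration 'tracefile retention', 1440;
```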
To modify the retention period for dump files, use the rds_set_configuration procedure and set the
dumpfile retention in minutes. The following example sets the dump file retention period to 3 days.
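The example isn't shown above. A sketch along the same lines, again assuming a value in minutes:

```sql
-- 3 days = 4320 minutes
exec rdsadmin..rds_set_configuration 'dumpfile retention', 4320;
```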
For security reasons, you cannot delete a specific trace or dump file on a SQL Server DB instance. To
delete all unused trace or dump files, set the retention period for the files to 0.
Amazon RDS supports DB instances running the following versions of MySQL:
• MySQL 8.0
• MySQL 5.7
For more information about minor version support, see MySQL on Amazon RDS versions (p. 1627).
To create an Amazon RDS for MySQL DB instance, use the Amazon RDS management tools or interfaces.
You can then do the following:
To store and access the data in your DB instance, you use standard MySQL utilities and applications.
Amazon RDS for MySQL is compliant with many industry standards. For example, you can use RDS for
MySQL databases to build HIPAA-compliant applications. You can use RDS for MySQL databases to
store healthcare related information, including protected health information (PHI) under a Business
Associate Agreement (BAA) with AWS. Amazon RDS for MySQL also meets Federal Risk and Authorization
Management Program (FedRAMP) security requirements. In addition, Amazon RDS for MySQL has
received a FedRAMP Joint Authorization Board (JAB) Provisional Authority to Operate (P-ATO) at the
FedRAMP HIGH Baseline within the AWS GovCloud (US) Regions. For more information on supported
compliance standards, see AWS cloud compliance.
For information about the features in each version of MySQL, see The main features of MySQL in the
MySQL documentation.
Before creating a DB instance, complete the steps in Setting up for Amazon RDS (p. 174). When you
create a DB instance, the RDS master user gets DBA privileges, with some limitations. Use this account
for administrative tasks such as creating additional database accounts.
• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups
You can use DB instances running MySQL inside a virtual private cloud (VPC) based on Amazon VPC. You
can also add features to your MySQL DB instance by turning on various options. Amazon RDS supports
Multi-AZ deployments for MySQL as a high-availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced
privileges. You can access your database using standard SQL clients such as the mysql client.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).
Topics
• MySQL feature support on Amazon RDS (p. 1624)
• MySQL on Amazon RDS versions (p. 1627)
• Connecting to a DB instance running the MySQL database engine (p. 1630)
• Securing MySQL DB instance connections (p. 1637)
• Improving query performance for RDS for MySQL with Amazon RDS Optimized Reads (p. 1656)
• Improving write performance with Amazon RDS Optimized Writes for MySQL (p. 1659)
• Upgrading the MySQL DB engine (p. 1664)
• Importing data into a MySQL DB instance (p. 1674)
• Working with MySQL replication in Amazon RDS (p. 1708)
• Exporting data from a MySQL DB instance by using replication (p. 1728)
• Options for MySQL DB instances (p. 1732)
• Parameters for MySQL (p. 1742)
• Common DBA tasks for MySQL DB instances (p. 1744)
• Local time zone for MySQL DB instances (p. 1749)
• Known issues and limitations for Amazon RDS for MySQL (p. 1752)
• RDS for MySQL stored procedure reference (p. 1757)
MySQL feature support
You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as MySQL 2022.
Note
The following lists are not exhaustive.
Topics
• Supported storage engines for RDS for MySQL (p. 1624)
• Using memcached and other options with MySQL on Amazon RDS (p. 1624)
• InnoDB cache warming for MySQL on Amazon RDS (p. 1625)
• MySQL features not supported by Amazon RDS (p. 1625)
The Federated Storage Engine is currently not supported by Amazon RDS for MySQL.
For user-created schemas, the MyISAM storage engine does not support reliable recovery and can result
in lost or corrupt data when MySQL is restarted after a recovery, preventing Point-In-Time restore or
snapshot restore from working as intended. However, if you still choose to use MyISAM with Amazon
RDS, snapshots can be helpful under some conditions.
Note
System tables in the mysql schema can be in MyISAM storage.
If you want to convert existing MyISAM tables to InnoDB tables, you can use the ALTER TABLE
command (for example, alter table TABLE_NAME engine=innodb;). Bear in mind that MyISAM
and InnoDB have different strengths and weaknesses, so you should fully evaluate the impact of making
this switch on your applications before doing so.
MySQL 5.1, 5.5, and 5.6 are no longer supported in Amazon RDS. However, you can restore existing
MySQL 5.1, 5.5, and 5.6 snapshots. When you restore a MySQL 5.1, 5.5, or 5.6 snapshot, the DB instance
is automatically upgraded to MySQL 5.7.
InnoDB cache warming
RDS for MySQL DB instances support InnoDB cache warming. To enable InnoDB cache warming, set
the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup
parameters to 1 in the parameter group for your DB instance. Changing these parameter values in a
parameter group will affect all MySQL DB instances that use that parameter group. To enable InnoDB
cache warming for specific MySQL DB instances, you might need to create a new parameter group for
those instances. For information on parameter groups, see Working with parameter groups (p. 347).
InnoDB cache warming primarily provides a performance benefit for DB instances that use standard
storage. If you use PIOPS storage, you do not commonly see a significant performance benefit.
Important
If your MySQL DB instance does not shut down normally, such as during a failover, then the
buffer pool state will not be saved to disk. In this case, MySQL loads whatever buffer pool file is
available when the DB instance is restarted. No harm is done, but the restored buffer pool might
not reflect the most recent state of the buffer pool prior to the restart. To ensure that you have
a recent state of the buffer pool available to warm the InnoDB cache on startup, we recommend
that you periodically dump the buffer pool "on demand."
You can create an event to dump the buffer pool automatically and on a regular interval. For
example, the following statement creates an event named periodic_buffer_pool_dump
that dumps the buffer pool every hour.
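The statement itself isn't shown above. A sketch using the mysql.rds_innodb_buffer_pool_dump_now stored procedure referenced later in this section:

```sql
CREATE EVENT periodic_buffer_pool_dump
ON SCHEDULE EVERY 1 HOUR
DO CALL mysql.rds_innodb_buffer_pool_dump_now();
```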
For more information on MySQL events, see Event syntax in the MySQL documentation.
• To dump the current state of the buffer pool to disk, call the
mysql.rds_innodb_buffer_pool_dump_now (p. 1784) stored procedure.
• To load the saved state of the buffer pool from disk, call the
mysql.rds_innodb_buffer_pool_load_now (p. 1784) stored procedure.
• To cancel a load operation in progress, call the mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
stored procedure.
• Authentication Plugin
Features not supported
Note
Global transaction IDs are supported for all RDS for MySQL 5.7 versions, and for RDS for MySQL
8.0.26 and higher 8.0 versions.
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB instances. It
also restricts access to certain system procedures and tables that require advanced privileges. Amazon
RDS supports access to databases on a DB instance using any standard SQL client application. Amazon
RDS doesn't allow direct host access to a DB instance by using Telnet, Secure Shell (SSH), or Windows
Remote Desktop Connection. When you create a DB instance, you are assigned to the db_owner role for
all databases on that instance, and you have all database-level permissions except for those used for
backups. Amazon RDS manages backups for you.
MySQL versions
Topics
• Supported MySQL minor versions on Amazon RDS (p. 1627)
• Supported MySQL major versions on Amazon RDS (p. 1629)
• Deprecated versions for Amazon RDS for MySQL (p. 1629)
MySQL engine version    Community release date    RDS release date    RDS end of standard support date
8.0
5.7
Supported MySQL minor versions
* Amazon RDS Extended Support eligible minor engine version. For more information, see Using Amazon
RDS Extended Support (p. 565).
You can specify any currently supported MySQL version when creating a new DB instance. You can
specify the major version (such as MySQL 5.7), and any supported minor version for the specified major
version. If no version is specified, Amazon RDS defaults to a supported version, typically the most recent
version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent release
of the major version you have specified. To see a list of supported versions, as well as defaults for newly
created DB instances, use the describe-db-engine-versions AWS CLI command.
For example, to list the supported engine versions for RDS for MySQL, run the following CLI command:
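The command itself isn't shown above. A sketch (the --query projection is illustrative, not the only valid form):

```shell
aws rds describe-db-engine-versions --engine mysql \
    --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" \
    --output text
```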
The default MySQL version might vary by AWS Region. To create a DB instance with a specific minor
version, specify the minor version during DB instance creation. You can determine the default minor
version for an AWS Region using the following AWS CLI command:
Replace major-engine-version with the major engine version, and replace region with the AWS
Region. For example, the following AWS CLI command returns the default MySQL minor engine version
for the 5.7 major version and the US West (Oregon) AWS Region (us-west-2):
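The command isn't shown above. A sketch for the 5.7 major version in us-west-2, matching the description in the preceding paragraph (the --query projection is illustrative):

```shell
aws rds describe-db-engine-versions --default-only --engine mysql \
    --engine-version 5.7 --region us-west-2 \
    --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" \
    --output text
```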
With Amazon RDS, you control when to upgrade your MySQL instance to a new major version supported
by Amazon RDS. You can maintain compatibility with specific MySQL versions, test new versions with
your application before deploying in production, and perform major version upgrades at times that best
fit your schedule.
When automatic minor version upgrade is enabled, your DB instance will be upgraded automatically
to new MySQL minor versions as they are supported by Amazon RDS. This patching occurs during your
scheduled maintenance window. You can modify a DB instance to enable or disable automatic minor
version upgrades.
If you opt out of automatically scheduled upgrades, you can manually upgrade to a supported
minor version release by following the same procedure as you would for a major version update. For
information, see Upgrading a DB instance engine version (p. 429).
Amazon RDS currently supports major version upgrades from MySQL version 5.6 to version 5.7,
and from MySQL version 5.7 to version 8.0. Because major version upgrades involve some compatibility
risk, they do not occur automatically; you must make a request to modify the DB instance. You should
thoroughly test any upgrade before upgrading your production instances. For information about
upgrading a MySQL DB instance, see Upgrading the MySQL DB engine (p. 1664).
You can test a DB instance against a new version before upgrading by creating a DB snapshot of your
existing DB instance, restoring from the DB snapshot to create a new DB instance, and then initiating a
version upgrade for the new DB instance. You can then experiment safely on the upgraded clone of your
DB instance before deciding whether or not to upgrade your original DB instance.
Supported MySQL major versions
You can use the following dates to plan your testing and upgrade cycles.
Note
Dates with only a month and a year are approximate and are updated with an exact date when
it’s known.
For information about the Amazon RDS deprecation policy for MySQL, see Amazon RDS FAQs.
Connecting to a DB instance running MySQL
To authenticate to your RDS DB instance, you can use one of the authentication methods for MySQL and
AWS Identity and Access Management (IAM) database authentication:
• To learn how to authenticate to MySQL using one of the authentication methods for MySQL, see
Authentication method in the MySQL documentation.
• To learn how to authenticate to MySQL using IAM database authentication, see IAM database
authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).
You can connect to a MySQL DB instance by using tools like the MySQL command-line client. For more
information on using the MySQL command-line client, see mysql - the MySQL command-line client in
the MySQL documentation. One GUI-based application you can use to connect is MySQL Workbench. For
more information, see the Download MySQL Workbench page. For information about installing MySQL
(including the MySQL command-line client), see Installing and upgrading MySQL.
Most Linux distributions include the MariaDB client instead of the Oracle MySQL client. To install the
MySQL command-line client on Amazon Linux 2023, run the following command:
To install the MySQL command-line client on Amazon Linux 2, run the following command:
To install the MySQL command-line client on most DEB-based Linux distributions, run the following
command:
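The commands themselves aren't shown above. A sketch for each distribution; package names vary by release, so treat these as assumptions to verify against your distribution's repositories:

```shell
# Amazon Linux 2023 (ships the MariaDB client; package name is an assumption)
sudo dnf install mariadb105

# Amazon Linux 2
sudo yum install mariadb

# Most DEB-based distributions (Debian, Ubuntu)
sudo apt-get install default-mysql-client
```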
To check the version of your MySQL command-line client, run the following command:
mysql --version
To read the MySQL documentation for your current client version, run the following command:
man mysql
To connect to a DB instance from outside of its Amazon VPC, the DB instance must be publicly
accessible, access must be granted using the inbound rules of the DB instance's security group,
and other requirements must be met. For more information, see Can't connect to Amazon RDS DB
instance (p. 2727).
You can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption on connections
to a MySQL DB instance. For information, see Using SSL/TLS with a MySQL DB instance (p. 1639). If
Finding the connection information
you are using AWS Identity and Access Management (IAM) database authentication, make sure to use
an SSL/TLS connection. For information, see IAM database authentication for MariaDB, MySQL, and
PostgreSQL (p. 2642).
You can also connect to a DB instance from a web server. For more information, see Tutorial: Create a
web server and an Amazon RDS DB instance (p. 249).
Note
For information on connecting to a MariaDB DB instance, see Connecting to a DB instance
running the MariaDB database engine (p. 1269).
Topics
• Finding the connection information for a MySQL DB instance (p. 1631)
• Connecting from the MySQL command-line client (unencrypted) (p. 1633)
• Connecting from MySQL Workbench (p. 1634)
• Connecting with the Amazon Web Services JDBC Driver for MySQL (p. 1635)
• Troubleshooting connections to your MySQL DB instance (p. 1636)
To connect to a DB instance, use any client for the MySQL DB engine. For example, you might use the
MySQL command-line client or MySQL Workbench.
To find the connection information for a DB instance, you can use the AWS Management Console, the
AWS CLI describe-db-instances command, or the Amazon RDS API DescribeDBInstances operation to list
its details.
Console
To find the connection information for a DB instance in the AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the MySQL DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.
5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.
AWS CLI
To find the connection information for a MySQL DB instance by using the AWS CLI, call the describe-db-instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.
Connecting from the MySQL command-line client (unencrypted)
For Windows:
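The command isn't shown above. A sketch that would produce output like the following (the query fields are inferred from that output):

```shell
aws rds describe-db-instances ^
    --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]" ^
    --output json
```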
[
[
"mydb1",
"mydb1.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"mydb2",
"mydb2.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
]
]
RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.
To connect to a DB instance using the MySQL command-line client, enter the following command at
the command prompt. For the -h parameter, substitute the DNS name (endpoint) for your DB instance.
For the -P parameter, substitute the port for your DB instance. For the -u parameter, substitute the user
name of a valid database user, such as the master user. Enter the master user password when prompted.
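The command isn't shown above. A sketch with a placeholder endpoint, port, and user name:

```shell
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
```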
After you enter the password for the user, you should see output similar to the following.
Connecting from MySQL Workbench
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Connecting with the AWS JDBC Driver for MySQL
You can use the features of MySQL Workbench to customize connections. For example, you can use
the SSL tab to configure SSL/TLS connections. For information about using MySQL Workbench, see
the MySQL Workbench documentation. For information about encrypting client connections to MySQL
DB instances with SSL/TLS, see Encrypting client connections to MySQL DB instances with SSL/TLS (p. 1639).
6. Optionally, choose Test Connection to confirm that the connection to the DB instance is successful.
7. Choose Close.
8. From Database, choose Connect to Database.
9. From Stored Connection, choose your connection.
10. Choose OK.
The driver is drop-in compatible with the MySQL Connector/J driver. To install or upgrade your
connector, replace the MySQL connector .jar file (located in the application CLASSPATH) with the
AWS JDBC Driver for MySQL .jar file, and update the connection URL prefix from jdbc:mysql:// to
jdbc:mysql:aws://.
The AWS JDBC Driver for MySQL supports IAM database authentication. For more information, see
AWS IAM Database Authentication in the AWS JDBC Driver for MySQL GitHub repository. For more
information about IAM database authentication, see IAM database authentication for MariaDB, MySQL,
and PostgreSQL (p. 2642).
Troubleshooting
• The DB instance was created using a security group that doesn't authorize connections from the device
or Amazon EC2 instance where the MySQL application or utility is running. The DB instance must have
a VPC security group that authorizes the connections. For more information, see Amazon VPC VPCs
and Amazon RDS (p. 2688).
You can add or edit an inbound rule in the security group. For Source, choose My IP. This allows access
to the DB instance from the IP address detected in your browser.
• The DB instance was created using the default port of 3306, and your company has firewall rules
blocking connections to that port from devices in your company network. To fix this failure, recreate
the instance with a different port.
For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).
Securing MySQL connections
Topics
• MySQL security on Amazon RDS (p. 1637)
• Using the Password Validation Plugin for RDS for MySQL (p. 1638)
• Encrypting client connections to MySQL DB instances with SSL/TLS (p. 1639)
• Updating applications to connect to MySQL DB instances using new SSL/TLS certificates (p. 1642)
• Using Kerberos authentication for MySQL (p. 1645)
• AWS Identity and Access Management controls who can perform Amazon RDS management actions
on DB instances. When you connect to AWS using IAM credentials, your IAM account must have IAM
policies that grant the permissions required to perform Amazon RDS management operations. For
more information, see Identity and access management for Amazon RDS (p. 2606).
• When you create a DB instance, you use a VPC security group to control which devices and Amazon
EC2 instances can open connections to the endpoint and port of the DB instance. These connections
can be made using Secure Sockets Layer (SSL) and Transport Layer Security (TLS). In addition, firewall
rules at your company can control whether devices running at your company can open connections to
the DB instance.
• To authenticate login and permissions for a MySQL DB instance, you can take either of the following
approaches, or a combination of them.
You can take the same approach as with a stand-alone instance of MySQL. Commands such as CREATE
USER, RENAME USER, GRANT, REVOKE, and SET PASSWORD work just as they do in on-premises
databases, as does directly modifying database schema tables. For information, see Access control and
account management in the MySQL documentation.
You can also use IAM database authentication. With IAM database authentication, you authenticate
to your DB instance by using an IAM user or IAM role and an authentication token. An authentication
token is a unique value that is generated using the Signature Version 4 signing process. By using IAM
database authentication, you can use the same credentials to control access to your AWS resources
and your databases. For more information, see IAM database authentication for MariaDB, MySQL, and
PostgreSQL (p. 2642).
Another option is Kerberos authentication for RDS for MySQL. The DB instance works with AWS
Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) to enable Kerberos
authentication. When users authenticate with a MySQL DB instance joined to the trusting domain,
authentication requests are forwarded. Forwarded requests go to the domain directory that you
create with AWS Directory Service. For more information, see Using Kerberos authentication for
MySQL (p. 1645).
When you create an Amazon RDS DB instance, the master user has the following default privileges:
• alter
• alter routine
• create
• create routine
Password Validation Plugin
Note
Although it is possible to delete the master user on the DB instance, it is not recommended.
To recreate the master user, use the ModifyDBInstance RDS API operation or the modify-db-
instance AWS CLI command and specify a new master user password with the appropriate
parameter. If the master user does not exist in the instance, the master user is created with the
specified password.
To provide management services for each DB instance, the rdsadmin user is created when the DB
instance is created. Attempting to drop or rename the rdsadmin account, or to change its password or
privileges, results in an error.
To allow management of the DB instance, the standard kill and kill_query commands have been
restricted. The Amazon RDS commands rds_kill and rds_kill_query are provided to allow you to
end user sessions or queries on DB instances.
Encrypting with SSL/TLS
2. Configure the parameters for the plugin in the DB parameter group used by the DB instance.
For more information about the parameters, see Password Validation Plugin options and variables in
the MySQL documentation.
For more information about modifying DB instance parameters, see Modifying parameters in a DB
parameter group (p. 352).
After installing and enabling the validate_password plugin, reset existing passwords to comply with
your new validation policies.
Amazon RDS doesn't validate passwords. The MySQL DB instance performs password validation. If you
set a user password with the AWS Management Console, the modify-db-instance AWS CLI command,
or the ModifyDBInstance RDS API operation, the change can succeed even if the new password
doesn't satisfy your password policies. However, a new password is set in the MySQL DB instance only if
it satisfies the password policies. In this case, Amazon RDS records the following event.
"RDS-EVENT-0067" - An attempt to reset the master password for the DB instance has failed.
For more information about Amazon RDS events, see Working with Amazon RDS event
notification (p. 855).
Topics
• Using SSL/TLS with a MySQL DB instance (p. 1639)
• Requiring SSL/TLS for all connections to a MySQL DB instance (p. 1640)
• Connecting from the MySQL command-line client with SSL/TLS (encrypted) (p. 1640)
An SSL/TLS certificate created by Amazon RDS is the trusted root entity and should work in most cases
but might fail if your application does not accept certificate chains. If your application does not accept
certificate chains, you might need to use an intermediate certificate to connect to your AWS Region. For
example, you must use an intermediate certificate to connect to the AWS GovCloud (US) Regions using
SSL/TLS.
For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL/TLS with MySQL, see Updating applications to
connect to MySQL DB instances using new SSL/TLS certificates (p. 1642).
MySQL uses OpenSSL for secure connections. Amazon RDS for MySQL supports Transport Layer Security
(TLS) versions 1.0, 1.1, 1.2, and 1.3. TLS support depends on the MySQL version. The following table
shows the TLS support for MySQL versions.
MySQL version TLS 1.0 TLS 1.1 TLS 1.2 TLS 1.3
You can require SSL/TLS connections for specific user accounts. For example, you can use one of the
following statements, depending on your MySQL version, to require SSL/TLS connections on the user
account encrypted_user.
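The statements themselves aren't shown above. Sketches for the two syntax generations (the host pattern '%' is a placeholder):

```sql
-- MySQL 5.7 and 8.0
ALTER USER 'encrypted_user'@'%' REQUIRE SSL;

-- MySQL 5.6 and earlier (older GRANT-based syntax)
GRANT USAGE ON *.* TO 'encrypted_user'@'%' REQUIRE SSL;
```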
For more information on SSL/TLS connections with MySQL, see Using encrypted connections in the
MySQL documentation.
You can set the require_secure_transport parameter value by updating the DB parameter group
for your DB instance. You don't need to reboot your DB instance for the change to take effect.
If a client attempts to connect without SSL/TLS while require_secure_transport is set to ON, the
connection attempt fails with an error similar to the following:
MySQL Error 3159 (HY000): Connections using insecure transport are prohibited while --
require_secure_transport=ON.
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).
For more information about the require_secure_transport parameter, see the MySQL
documentation.
To find out which version you have, run the mysql command with the --version option. In the
following example, the output shows that the client program is from MariaDB.
$ mysql --version
mysql Ver 15.1 Distrib 10.5.15-MariaDB, for osx10.15 (x86_64) using readline 5.1
Most Linux distributions, such as Amazon Linux, CentOS, SUSE, and Debian have replaced MySQL with
MariaDB, and the mysql version in them is from MariaDB.
For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).
2. Use a MySQL command-line client to connect to a DB instance with SSL/TLS encryption. For the -h
parameter, substitute the DNS name (endpoint) for your DB instance. For the --ssl-ca parameter,
substitute the SSL/TLS certificate file name. For the -P parameter, substitute the port for your DB
instance. For the -u parameter, substitute the user name of a valid database user, such as the master
user. Enter the master user password when prompted.
The following example shows how to launch the client using the --ssl-ca parameter using the
MySQL 5.7 client or later:
To require that the SSL/TLS connection verifies the DB instance endpoint against the endpoint in the
SSL/TLS certificate, enter the following command:
The following example shows how to launch the client using the --ssl-ca parameter using the
MariaDB client:
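The examples themselves aren't shown above. Sketches covering the three cases just described; the endpoint, port, user name, and certificate file name are placeholders:

```shell
# MySQL 5.7 client or later
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem -P 3306 -u admin -p

# Also verify the DB instance endpoint against the endpoint in the certificate
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY -P 3306 -u admin -p

# MariaDB client
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-verify-server-cert -P 3306 -u admin -p
```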
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Using new SSL/TLS certificates
This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances. If they do, you can further check whether those applications require certificate verification to
connect.
Note
Some applications are configured to connect to MySQL DB instances only if they can
successfully verify the certificate on the server. For such applications, you must update your
client application trust stores to include the new CA certificates.
You can specify the following SSL modes: disabled, preferred, and required. When
you use the preferred SSL mode and the CA certificate doesn't exist or isn't up to date, the
connection falls back to not using SSL and connects without encryption.
Because these later versions use the OpenSSL protocol, an expired server certificate doesn't
prevent successful connections unless the required SSL mode is specified.
We recommend avoiding preferred mode. In preferred mode, if the connection encounters
an invalid certificate, it stops using encryption and proceeds unencrypted.
After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.
For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with MySQL DB instances, see Using SSL/TLS
with a MySQL DB instance (p. 1639).
Topics
• Determining whether any applications are connecting to your MySQL DB instance using
SSL (p. 1642)
• Determining whether a client requires certificate verification to connect (p. 1643)
• Updating your application trust store (p. 1644)
• Example Java code for establishing SSL connections (p. 1644)
In this sample output, you can see both your own session (admin) and an application logged in as
webapp1 are using SSL.
+----+-----------------+------------------+-----------------+
| id | user | host | connection_type |
+----+-----------------+------------------+-----------------+
| 8 | admin | 10.0.4.249:42590 | SSL/TLS |
| 4 | event_scheduler | localhost | NULL |
| 10 | webapp1 | 159.28.1.1:42189 | SSL/TLS |
+----+-----------------+------------------+-----------------+
3 rows in set (0.00 sec)
JDBC
The following example with MySQL Connector/J 8.0 shows one way to check an application's JDBC
connection properties to determine whether successful connections require a valid certificate. For more
information on all of the JDBC connection options for MySQL, see Configuration properties in the
MySQL documentation.
When using the MySQL Connector/J 8.0, an SSL connection requires verification against the server CA
certificate if your connection properties have sslMode set to VERIFY_CA or VERIFY_IDENTITY, as in
the following example.
Note
If you use MySQL Java Connector v5.1.38 or later, or v8.0.9 or later, to connect to your
databases, then even if you haven't explicitly configured your applications to use SSL/TLS,
these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform
partial certificate verification and fail to connect if the database server certificate has
expired.
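As a sketch (the endpoint, database name, class name, and trust store values are placeholders), launching an application with a connection URL that enforces certificate verification might look like this:

```shell
# Run a Java application with a Connector/J URL that requires server
# certificate verification (sslMode=VERIFY_IDENTITY also checks the hostname).
java -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
     -Djavax.net.ssl.trustStorePassword=my_truststore_password \
     -cp app.jar:mysql-connector-j.jar com.example.App \
     "jdbc:mysql://mydb.123456789012.us-east-1.rds.amazonaws.com:3306/mydb?sslMode=VERIFY_IDENTITY"
```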
MySQL
The following examples with the MySQL Client show two ways to check a script's MySQL connection to
determine whether successful connections require a valid certificate. For more information on all of the
connection options with the MySQL Client, see Client-side configuration for encrypted connections in
the MySQL documentation.
When using the MySQL 5.7 or MySQL 8.0 Client, an SSL connection requires verification against the
server CA certificate if you specify VERIFY_CA or VERIFY_IDENTITY for the --ssl-mode option, as in
the following example.
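For example (a sketch; the endpoint and certificate bundle path are placeholders):

```shell
# MySQL 5.7/8.0 client: verify the server certificate against the CA bundle.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p \
  --ssl-ca=/path/to/global-bundle.pem --ssl-mode=VERIFY_IDENTITY
```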
When using the MySQL 5.6 Client, an SSL connection requires verification against the server CA
certificate if you specify the --ssl-verify-server-cert option, as in the following example.
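The MySQL 5.6 client equivalent might look like this (a sketch; the endpoint and bundle path are placeholders):

```shell
# MySQL 5.6 client: require server certificate verification.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p \
  --ssl-ca=/path/to/global-bundle.pem --ssl-verify-server-cert
```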
For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).
For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.
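One way to add the new certificates while retaining the older ones is with keytool. This is a sketch assuming a JKS trust store; the file names, alias, and password are placeholders.

```shell
# Download the RDS certificate bundle.
wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

# Import a certificate into an existing trust store without removing
# certificates already present.
# Note: keytool imports only the first certificate in a multi-certificate PEM;
# split the bundle first (for example, with csplit) and import each one.
keytool -importcert -alias rds-ca-new \
  -file global-bundle.pem \
  -keystore /path_to_truststore/MyTruststore.jks \
  -storepass my_truststore_password -noprompt
```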
If you are using the MySQL JDBC driver in an application, set the following properties in the application.

System.setProperty("javax.net.ssl.trustStore", certs);
System.setProperty("javax.net.ssl.trustStorePassword", "password");

Alternatively, you can set these properties on the command line when you launch the application.

java -Djavax.net.ssl.trustStore=/path_to_truststore/MyTruststore.jks -Djavax.net.ssl.trustStorePassword=my_truststore_password com.companyName.MyApplication
Note
As a security best practice, specify a password other than the placeholder shown here.
In Java code, you can set the same trust store properties programmatically:

System.setProperty("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
System.setProperty("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);
Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).
Keeping all of your credentials in the same directory can save you time and effort. With this approach,
you have a centralized place for storing and managing credentials for multiple DB instances. Using a
directory can also improve your overall security profile.
Using Kerberos authentication for MySQL
1. Use AWS Managed Microsoft AD to create an AWS Managed Microsoft AD directory. You can use the
AWS Management Console, the AWS CLI, or the AWS Directory Service API to create the directory. For
details about doing so, see Create your AWS Managed Microsoft AD directory in the AWS Directory
Service Administration Guide.
2. Create an AWS Identity and Access Management (IAM) role that uses the managed IAM policy
AmazonRDSDirectoryServiceAccess. The role allows Amazon RDS to make calls to your directory.
For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated
in the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS
Regions, and you can use them without any further actions. For more information, see Activating and
deactivating AWS STS in an AWS Region in the IAM User Guide.
3. Create and configure users in the AWS Managed Microsoft AD directory using the Microsoft Active
Directory tools. For more information about creating users in your Active Directory, see Manage users
and groups in AWS managed Microsoft AD in the AWS Directory Service Administration Guide.
4. Create or modify a MySQL DB instance. If you use either the CLI or RDS API in the create request,
specify a domain identifier with the Domain parameter. Use the d-* identifier that was generated
when you created your directory and the name of the role that you created.
If you modify an existing MySQL DB instance to use Kerberos authentication, set the domain and IAM
role parameters for the DB instance. Locate the DB instance in the same VPC as the domain directory.
5. Use the Amazon RDS master user credentials to connect to the MySQL DB instance. Create the user in
MySQL using the CREATE USER clause IDENTIFIED WITH 'auth_pam'. Users that you create this
way can log in to the MySQL DB instance using Kerberos authentication.
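The create request in step 4 might look like the following. This is a sketch: the instance identifier, instance class, credentials, directory ID, and role name are placeholders.

```shell
# Create a MySQL DB instance joined to the AWS Managed Microsoft AD directory.
aws rds create-db-instance \
  --db-instance-identifier mydbinstance \
  --engine mysql \
  --db-instance-class db.m6g.large \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password mypassword \
  --domain d-1234567890 \
  --domain-iam-role-name rds-directoryservice-kerberos-access-role
```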
When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:
Note
Be sure to save this password. AWS Directory Service doesn't store it. You can reset it, but you
can't retrieve it.
• Creates a security group for the directory controllers.
When you launch an AWS Managed Microsoft AD, AWS creates an Organizational Unit (OU) that contains
all of your directory's objects. This OU has the NetBIOS name that you typed when you created your
directory and is located in the domain root. The domain root is owned and managed by AWS.
The Admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU.
The Admin account also has rights to perform the following domain-wide activities:
• Manage DNS configurations (add, remove, or update records, zones, and forwarders)
• View DNS event logs
• View security event logs
1. Sign in to the AWS Management Console and open the AWS Directory Service console at https://
console.aws.amazon.com/directoryservicev2/.
2. In the navigation pane, choose Directories and choose Set up Directory.
3. Choose AWS Managed Microsoft AD. AWS Managed Microsoft AD is the only option that you can
currently use with Amazon RDS.
4. Enter the following information:
The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.
The directory administrator password can't include the word "admin." The password is case-
sensitive and must be 8–64 characters in length. It must also contain at least one character from
three of the following four categories: lowercase letters, uppercase letters, numbers, and special
characters.
VPC
The VPC for the directory. Create the MySQL DB instance in this same VPC.
Subnets
Subnets for the directory servers. The two subnets must be in different Availability Zones.
7. Review the directory information and make any necessary changes. When the information is correct,
choose Create directory.
It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.
To see information about your directory, choose the directory name in the directory listing. Note the
Directory ID value because you need this value when you create or modify your MySQL DB instance.
When a DB instance is created using the AWS Management Console and the console user has the
iam:CreateRole permission, the console creates this role automatically. In this case, the role name
is rds-directoryservice-kerberos-access-role. Otherwise, you must create the IAM role
manually. When you create this IAM role, choose Directory Service, and attach the AWS managed
policy AmazonRDSDirectoryServiceAccess to it.
For more information about creating IAM roles for a service, see Creating a role to delegate permissions
to an AWS service in the IAM User Guide.
Note
The IAM role used for Windows Authentication for RDS for SQL Server can't be used for RDS for
MySQL.
Optionally, you can create policies with the required permissions instead of using the managed IAM
policy AmazonRDSDirectoryServiceAccess. In this case, the IAM role must have the following IAM
trust policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "directoryservice.rds.amazonaws.com",
          "rds.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The role must also have the following IAM role policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ds:DescribeDirectories",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication",
        "ds:GetAuthorizedApplicationDetails"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
To create users in an AWS Directory Service directory, you must be connected to an Amazon EC2 instance
based on Microsoft Windows. This instance must be a member of the AWS Directory Service directory
and be logged in as a user that has privileges to create users. For more information, see Manage users
and groups in AWS Managed Microsoft AD in the AWS Directory Service Administration Guide.
• Create a new MySQL DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.
Kerberos authentication is only supported for MySQL DB instances in a VPC. The DB instance can be
in the same VPC as the directory, or in a different VPC. The DB instance must use a security group that
allows egress within the directory's VPC so the DB instance can communicate with the directory.
When you use the console to create, modify, or restore a DB instance, choose Password and Kerberos
authentication in the Database authentication section. Choose Browse Directory and then select the
directory, or choose Create a new directory.
When you use the AWS CLI or RDS API, associate a DB instance with a directory. The following
parameters are required for the DB instance to use the domain directory you created:
• For the --domain parameter, use the domain identifier ("d-*" identifier) generated when you created
the directory.
• For the --domain-iam-role-name parameter, use the role you created that uses the managed IAM
policy AmazonRDSDirectoryServiceAccess.
For example, the following CLI command modifies a DB instance to use a directory.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --domain d-ID \
    --domain-iam-role-name role-name

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --domain d-ID ^
    --domain-iam-role-name role-name
Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.
You can allow an Active Directory user to authenticate with MySQL. To do this, first use the Amazon
RDS master user credentials to connect to the MySQL DB instance as with any other DB instance. After
you're logged in, create an externally authenticated user with PAM (Pluggable Authentication Modules)
in MySQL as shown following.
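For example (a sketch; the endpoint is a placeholder):

```shell
# Create an externally authenticated (PAM) user on the DB instance.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
  -e "CREATE USER 'testuser'@'%' IDENTIFIED WITH 'auth_pam';"
```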
Replace testuser with the user name. Users (both humans and applications) from your domain can
now connect to the DB instance from a domain joined client machine using Kerberos authentication.
Important
We strongly recommended that clients use SSL/TLS connections when using PAM
authentication. If they don't use SSL/TLS connections, the password might be sent as clear text
in some cases. To require an SSL/TLS encrypted connection for your AD user, run the following
command:
UPDATE mysql.user SET ssl_type = 'any' WHERE ssl_type = '' AND plugin = 'auth_pam' AND User = 'testuser';
FLUSH PRIVILEGES;
For more information, see Using SSL/TLS with a MySQL DB instance (p. 1639).
For example, using the Amazon RDS API, you can do the following:
• To reattempt enabling Kerberos authentication for a failed membership, use the ModifyDBInstance
API operation and specify the current membership's directory ID.
• To update the IAM role name for membership, use the ModifyDBInstance API operation and specify
the current membership's directory ID and the new IAM role.
• To disable Kerberos authentication on a DB instance, use the ModifyDBInstance API operation and
specify none as the domain parameter.
• To move a DB instance from one domain to another, use the ModifyDBInstance API operation and
specify the domain identifier of the new domain as the domain parameter.
• To list membership for each DB instance, use the DescribeDBInstances API operation.
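From the AWS CLI, disabling Kerberos authentication might look like this (a sketch; the instance identifier is a placeholder):

```shell
# Specify "none" as the domain to disable Kerberos authentication.
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --domain none
```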
A request to enable Kerberos authentication can fail because of a network connectivity issue or an
incorrect IAM role. For example, suppose that you create a DB instance or modify an existing DB instance
and the attempt to enable Kerberos authentication fails. If this happens, re-issue the modify command
or modify the newly created DB instance to join the domain.
To create a database user that you can connect to using Kerberos authentication, use an IDENTIFIED
WITH clause on the CREATE USER statement. For instructions, see Step 5: Create Kerberos
authentication MySQL logins (p. 1653).
To avoid errors, use the MariaDB mysql client. You can download MariaDB software at https://
downloads.mariadb.org/.
At a command prompt, connect to one of the endpoints associated with your MySQL DB instance. Follow
the general procedures in Connecting to a DB instance running the MySQL database engine (p. 1630).
When you're prompted for the password, enter the Kerberos password associated with that user name.
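For example (a sketch; the endpoint and user name are placeholders):

```shell
# Connect as the domain user; enter the Kerberos (Active Directory)
# password at the prompt.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u testuser -p
```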
• Only an AWS Managed Microsoft AD is supported. However, you can join RDS for MySQL DB instances
to shared Managed Microsoft AD domains owned by different accounts in the same AWS Region.
If you use the CLI or RDS API to delete a DB instance with this feature enabled, expect a delay.
• You can't set up a forest trust relationship between your on-premises or self-hosted Microsoft Active
Directory and the AWS Managed Microsoft AD.
Improving query performance with RDS Optimized Reads
Topics
• Overview of RDS Optimized Reads (p. 1656)
• Use cases for RDS Optimized Reads (p. 1656)
• Best practices for RDS Optimized Reads (p. 1657)
• Using RDS Optimized Reads (p. 1657)
• Monitoring DB instances that use RDS Optimized Reads (p. 1658)
• Limitations for RDS Optimized Reads (p. 1658)
RDS Optimized Reads is turned on by default when a DB instance or Multi-AZ DB cluster uses a DB
instance class with an instance store, such as db.m5d or db.m6gd. With RDS Optimized Reads, some
temporary objects are stored on the instance store. These temporary objects include internal temporary
files, internal on-disk temp tables, memory map files, and binary log (binlog) cache files. For more
information about the instance store, see Amazon EC2 instance store in the Amazon Elastic Compute
Cloud User Guide for Linux Instances.
Workloads that generate temporary objects in MySQL during query processing can take advantage
of the instance store for faster queries. This type of workload includes queries involving
sorts, hash aggregations, high-load joins, Common Table Expressions (CTEs), and queries on unindexed
columns. These instance store volumes provide higher IOPS and performance, regardless of the storage
configurations used for persistent Amazon EBS storage. Because RDS Optimized Reads offloads
operations on temporary objects to the instance store, the input/output operations per second (IOPS)
or throughput of the persistent storage (Amazon EBS) can now be used for operations on persistent
objects. These operations include regular data file reads and writes, and background engine operations,
such as flushing and insert buffer merges.
Note
Both manual and automated RDS snapshots only contain engine files for persistent objects. The
temporary objects created in the instance store aren't included in RDS snapshots.
• Applications that run analytical queries with complex common table expressions (CTEs), derived tables,
and grouping operations
• Read replicas that serve heavy read traffic with unoptimized queries
• Applications that run on-demand or dynamic reporting queries that involve complex operations, such
as queries with GROUP BY and ORDER BY clauses
• Workloads that use internal temporary tables for query processing
You can monitor the engine status variable created_tmp_disk_tables to determine the number of
disk-based temporary tables created on your DB instance.
• Applications that create large temporary tables, either directly or in procedures, to store intermediate
results
• Database queries that perform grouping or ordering on non-indexed columns
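The created_tmp_disk_tables status variable mentioned above can be checked from any MySQL client (a sketch; the endpoint is a placeholder):

```shell
# Show how many on-disk temporary tables the engine has created.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';"
```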
• Add retry logic for read-only queries in case they fail because the instance store is full during the
execution.
• Monitor the storage space available on the instance store with the CloudWatch metric
FreeLocalStorage. If the instance store is reaching its limit because of workload on the DB instance,
modify the DB instance to use a larger DB instance class.
• When your DB instance or Multi-AZ DB cluster has sufficient memory but is still reaching the storage
limit on the instance store, increase the binlog_cache_size value to maintain the session-specific
binlog entries in memory. This configuration prevents writing the binlog entries to temporary binlog
cache files on disk.
The binlog_cache_size parameter is session-specific. You can change the value for each new
session. The setting for this parameter can increase the memory utilization on the DB instance during
peak workload. Therefore, consider increasing the parameter value based on the workload pattern of
your application and available memory on the DB instance.
• Use the default value of MIXED for the binlog_format. Depending on the size of the transactions,
setting binlog_format to ROW can result in large binlog cache files on the instance store.
• Set the internal_tmp_mem_storage_engine parameter to TempTable, and set the
temptable_max_mmap parameter to match the size of the available storage on the instance store.
• Avoid performing bulk changes in a single transaction. These types of transactions can generate large
binlog cache files on the instance store and can cause issues when the instance store is full. Consider
splitting writes into multiple small transactions to minimize storage use for binlog cache files.
• Use the default value of ABORT_SERVER for the binlog_error_action parameter. Doing so avoids
issues with the binary logging on DB instances with backups enabled.
• Create an RDS for MySQL DB instance or Multi-AZ DB cluster using one of these DB instance classes.
For more information, see Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for MySQL DB instance or Multi-AZ DB cluster to use one of these DB instance
classes. For more information, see Modifying an Amazon RDS DB instance (p. 401).
RDS Optimized Reads is available in all AWS Regions where one or more of the DB instance classes
with local NVMe SSD storage are supported. For information about DB instance classes, see the section
called “DB instance classes” (p. 11).
DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
If you don't want to use RDS Optimized Reads, modify your DB instance or Multi-AZ DB cluster so that it
doesn't use a DB instance class that supports the feature.
• FreeLocalStorage
• ReadIOPSLocalStorage
• ReadLatencyLocalStorage
• ReadThroughputLocalStorage
• WriteIOPSLocalStorage
• WriteLatencyLocalStorage
• WriteThroughputLocalStorage
These metrics provide data about available instance store storage, IOPS, and throughput. For
more information about these metrics, see Amazon CloudWatch instance-level metrics for Amazon
RDS (p. 806).
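For example, retrieving FreeLocalStorage from CloudWatch might look like this (a sketch; the instance identifier and time range are placeholders):

```shell
# Average free instance store space, sampled every 5 minutes over one hour.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name FreeLocalStorage \
  --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
  --start-time 2024-01-01T00:00:00Z --end-time 2024-01-01T01:00:00Z \
  --period 300 --statistics Average
```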
• RDS Optimized Reads is supported for RDS for MySQL version 8.0.28 and higher. For information
about RDS for MySQL versions, see MySQL on Amazon RDS versions (p. 1627).
• You can't change the location of temporary objects to persistent storage (Amazon EBS) on the DB
instance classes that support RDS Optimized Reads.
• When binary logging is enabled on a DB instance, the maximum transaction size is limited by the
size of the instance store. In MySQL, any session that requires more storage than the value of
binlog_cache_size writes transaction changes to temporary binlog cache files, which are created
on the instance store.
• Transactions can fail when the instance store is full.
Improving write performance with RDS Optimized Writes for MySQL
Topics
• Overview of RDS Optimized Writes (p. 1284)
• Using RDS Optimized Writes (p. 1660)
• Limitations for RDS Optimized Writes (p. 1663)
Relational databases, like MySQL, provide the ACID properties of atomicity, consistency, isolation, and
durability for reliable database transactions. To help provide these properties, MySQL uses a data
storage area called the doublewrite buffer that prevents partial page write errors. These errors occur
when there is a hardware failure while the database is updating a page, such as in the case of a power
outage. A MySQL database can detect partial page writes and recover with a copy of the page in the
doublewrite buffer. While this technique provides protection, it also results in extra write operations.
For more information about the MySQL doublewrite buffer, see Doublewrite Buffer in the MySQL
documentation.
With RDS Optimized Writes turned on, RDS for MySQL databases write only once when flushing data to
durable storage without using the doublewrite buffer. RDS Optimized Writes is useful if you run write-
heavy workloads on your RDS for MySQL databases. Examples of databases with write-heavy workloads
include ones that support digital payments, financial trading, and gaming applications.
These databases run on DB instance classes that use the AWS Nitro System. Because of the hardware
configuration in these systems, the database can write 16-KiB pages directly to data files reliably and
durably in one step. The AWS Nitro System makes RDS Optimized Writes possible.
You can set the new database parameter rds.optimized_writes to control the RDS Optimized Writes
feature for RDS for MySQL databases. Access this parameter in the DB parameter groups of RDS for
MySQL version 8.0. Set the parameter using the following values:
• AUTO – Turn on RDS Optimized Writes if the database supports it. Turn off RDS Optimized Writes if the
database doesn't support it. This setting is the default.
• OFF – Turn off RDS Optimized Writes even if the database supports it.
If you migrate an RDS for MySQL database that is configured to use RDS Optimized Writes to a DB
instance class that doesn't support the feature, RDS automatically turns off RDS Optimized Writes for the
database.
When RDS Optimized Writes is turned off, the database uses the MySQL doublewrite buffer.
To determine whether an RDS for MySQL database is using RDS Optimized Writes, view the current
value of the innodb_doublewrite parameter for the database. If the database is using RDS Optimized
Writes, this parameter is set to FALSE (0).
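For example (a sketch; the endpoint is a placeholder):

```shell
# A value of OFF (FALSE) indicates that RDS Optimized Writes is in use and
# the doublewrite buffer is bypassed.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'innodb_doublewrite';"
```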
• You specify a DB engine version and DB instance class that support RDS Optimized Writes.
• RDS Optimized Writes is supported for RDS for MySQL version 8.0.30 and higher. For information
about RDS for MySQL versions, see MySQL on Amazon RDS versions (p. 1627).
• RDS Optimized Writes is supported for RDS for MySQL databases that use the following DB instance
classes:
• db.m7g
• db.m6g
• db.m6gd
• db.m6i
• db.m5d
• db.r7g
• db.r6g
• db.r6gd
• db.r6i
• db.r5
• db.r5b
• db.r5d
• db.x2iedn
For information about DB instance classes, see the section called “DB instance classes” (p. 11).
DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
• In the parameter group associated with the database, the rds.optimized_writes parameter is set
to AUTO. In default parameter groups, this parameter is always set to AUTO.
If you want to use a DB engine version and DB instance class that support RDS Optimized Writes, but you
don't want to use this feature, then specify a custom parameter group when you create the database. In
this parameter group, set the rds.optimized_writes parameter to OFF. If you want the database to
use RDS Optimized Writes later, you can set the parameter to AUTO to turn it on. For information about
creating custom parameter groups and setting parameters, see Working with parameter groups (p. 347).
For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300).
Console
When you use the RDS console to create an RDS for MySQL database, you can filter for the DB engine
versions and DB instance classes that support RDS Optimized Writes. After you turn on the filters, you
can choose from the available DB engine versions and DB instance classes.
To choose a DB engine version that supports RDS Optimized Writes, filter for the RDS for MySQL DB
engine versions that support it in Engine version, and then choose a version.
In the Instance configuration section, filter for the DB instance classes that support RDS Optimized
Writes, and then choose a DB instance class.
After you make these selections, you can choose other settings that meet your requirements and finish
creating the RDS for MySQL database with the console.
AWS CLI
To create a DB instance by using the AWS CLI, use the create-db-instance command. Make sure the --
engine-version and --db-instance-class values support RDS Optimized Writes. In addition, make
sure the parameter group associated with the DB instance has the rds.optimized_writes parameter
set to AUTO. This example associates the default parameter group with the DB instance.
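The create request might look like the following. This is a sketch: the identifier, engine version, instance class, storage size, and credentials are placeholders; choose values that support RDS Optimized Writes.

```shell
# Create a MySQL DB instance with a version and class that support
# RDS Optimized Writes; the default parameter group leaves
# rds.optimized_writes set to AUTO.
aws rds create-db-instance \
  --db-instance-identifier mydbinstance \
  --engine mysql \
  --engine-version 8.0.33 \
  --db-instance-class db.r6g.large \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password mypassword
```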
RDS API
You can create a DB instance using the CreateDBInstance operation. When you use this operation, make
sure the EngineVersion and DBInstanceClass values support RDS Optimized Writes. In addition,
make sure the parameter group associated with the DB instance has the rds.optimized_writes
parameter set to AUTO.
• You can only modify a database to turn on RDS Optimized Writes if the database was created with a
DB engine version and DB instance class that support the feature. In this case, if RDS Optimized Writes
is turned off for the database, you can turn it on by setting the rds.optimized_writes parameter
to AUTO. For more information, see Using RDS Optimized Writes (p. 1660).
• You can only modify a database to turn on RDS Optimized Writes if the database was created after
the feature was released. The underlying file system format and organization that RDS Optimized
Writes needs is incompatible with the file system format of databases created before the feature was
released. By extension, you can't use any snapshots of previously created instances with this feature
because the snapshots use the older, incompatible file system.
Important
To convert the old format to the new format, you need to perform a full database migration.
If you want to use this feature on DB instances that were created before the feature was
released, create a new empty DB instance and manually migrate your older DB instance to
the newer DB instance. You can migrate your older DB instance using the native mysqldump
tool, replication, or AWS Database Migration Service. For more information, see mysqldump
— A Database Backup Program in the MySQL 8.0 Reference Manual, Working with MySQL
replication in Amazon RDS (p. 1708), and the AWS Database Migration Service User Guide. For
help with migrating using AWS tools, contact support.
• When you are restoring an RDS for MySQL database from a snapshot, you can only turn on RDS
Optimized Writes for the database if all of the following conditions apply:
• The snapshot was created from a database that supports RDS Optimized Writes.
• The snapshot was created from a database that was created after RDS Optimized Writes was
released.
• The snapshot is restored to a database that supports RDS Optimized Writes.
• The restored database is associated with a parameter group that has the rds.optimized_writes
parameter set to AUTO.
Upgrading the MySQL DB engine
Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you follow the instructions in Major version upgrades for
MySQL (p. 1665).
In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance. Or
you can enable the Auto minor version upgrade option when creating or modifying a DB instance.
Doing so means that your DB instance is automatically upgraded after Amazon RDS tests and approves
the new version. For information about performing an upgrade, see Upgrading a DB instance engine
version (p. 429).
If your MySQL DB instance is using read replicas, you must upgrade all of the read replicas before
upgrading the source instance. If your DB instance is in a Multi-AZ deployment, both the primary and
standby replicas are upgraded. Your DB instance will not be available until the upgrade is complete.
Database engine upgrades require downtime. The duration of the downtime varies based on the size of
your DB instance.
Tip
You can minimize the downtime required for DB instance upgrade by using a blue/green
deployment. For more information, see Using Amazon RDS Blue/Green Deployments for
database updates (p. 566).
Topics
• Overview of upgrading (p. 1664)
• Major version upgrades for MySQL (p. 1665)
• Testing an upgrade (p. 1669)
• Upgrading a MySQL DB instance (p. 1669)
• Automatic minor version upgrades for MySQL (p. 1669)
• Using a read replica to reduce downtime when upgrading a MySQL database (p. 1671)
Overview of upgrading
When you use the AWS Management Console to upgrade a DB instance, it shows the valid upgrade
targets for the DB instance. You can also use the AWS CLI describe-db-engine-versions command to
identify the valid upgrade targets for a DB instance.
For example, to identify the valid upgrade targets for a MySQL version 8.0.28 DB instance, run the
following AWS CLI command:
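A sketch of that command (the query expression trims the output to the target version numbers):

```shell
# List the engine versions that a MySQL 8.0.28 instance can upgrade to.
aws rds describe-db-engine-versions \
  --engine mysql \
  --engine-version 8.0.28 \
  --query "DBEngineVersions[*].ValidUpgradeTarget[*].EngineVersion" \
  --output text
```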
Amazon RDS takes two or more DB snapshots during the upgrade process. Amazon RDS takes up to
two snapshots of the DB instance before making any upgrade changes. If the upgrade doesn't work for
your databases, you can restore one of these snapshots to create a DB instance running the old version.
Amazon RDS takes another snapshot of the DB instance when the upgrade completes. Amazon RDS
takes these snapshots regardless of whether AWS Backup manages the backups for the DB instance.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).
After the upgrade is complete, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore the first DB snapshot taken to create a new DB instance.
You control when to upgrade your DB instance to a new version supported by Amazon RDS. This level of
control helps you maintain compatibility with specific database versions and test new versions with your
application before deploying in production. When you are ready, you can perform version upgrades at
the times that best fit your schedule.
If your DB instance is using read replicas, you must upgrade all of the read replicas before upgrading
the source instance.
If your DB instance is in a Multi-AZ deployment, both the primary and standby DB instances are
upgraded. The primary and standby DB instances are upgraded at the same time and you will experience
an outage until the upgrade is complete. The time for the outage varies based on your database engine,
engine version, and the size of your DB instance.
Major version upgrades for MySQL
Note
You can only create MySQL version 5.7 and 8.0 DB instances with latest-generation and current-
generation DB instance classes, in addition to the db.m3 previous-generation DB instance class.
In some cases, you might want to upgrade a MySQL version 5.6 DB instance running on a previous-
generation DB instance class (other than db.m3) to a MySQL version 5.7 DB instance. In
these cases, first modify the DB instance to use a latest-generation or current-generation DB
instance class. After you do this, you can then modify the DB instance to use the MySQL version
5.7 database engine. For information on Amazon RDS DB instance classes, see DB instance
classes (p. 11).
Topics
• Overview of MySQL major version upgrades (p. 1666)
• Upgrades to MySQL version 5.7 might be slow (p. 1666)
• Prechecks for upgrades from MySQL 5.7 to 8.0 (p. 1667)
• Rollback after failure to upgrade from MySQL 5.7 to 8.0 (p. 1668)
To perform a major version upgrade for a MySQL version 5.6 DB instance on Amazon RDS to MySQL
version 5.7 or later, first perform any available OS updates. After OS updates are complete, upgrade to
each major version: 5.6 to 5.7 and then 5.7 to 8.0. MySQL DB instances created before April 24, 2014,
show an available OS update until the update has been applied. For more information on OS updates,
see Applying updates for a DB instance (p. 421).
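The stepwise rule above (one major version at a time, never backward) can be sketched as a small helper. This is an illustrative sketch, not AWS tooling; the function name and version list are assumptions for the example:

```python
# Major versions must be upgraded one step at a time (5.6 -> 5.7 -> 8.0),
# and backward migration is not supported.
MAJOR_ORDER = ["5.6", "5.7", "8.0"]

def upgrade_path(current, target):
    # Return the sequence of major versions to pass through, inclusive.
    i, j = MAJOR_ORDER.index(current), MAJOR_ORDER.index(target)
    if j <= i:
        raise ValueError("backward or no-op migration is not supported")
    return MAJOR_ORDER[i:j + 1]

print(upgrade_path("5.6", "8.0"))  # ['5.6', '5.7', '8.0']
```

For example, a 5.6 instance reaches 8.0 only via an intermediate upgrade to 5.7.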
During a major version upgrade of MySQL, Amazon RDS runs the MySQL binary mysql_upgrade to
upgrade tables, if necessary. Also, Amazon RDS empties the slow_log and general_log tables during
a major version upgrade. To preserve log information, save the log contents before the major version
upgrade.
MySQL major version upgrades typically complete in about 10 minutes. Some upgrades might take
longer because of the DB instance class size or because the instance doesn't follow certain operational
guidelines in Best practices for Amazon RDS (p. 286). If you upgrade a DB instance from the Amazon RDS
console, the status of the DB instance indicates when the upgrade is complete. If you upgrade using the
AWS Command Line Interface (AWS CLI), use the describe-db-instances command and check the Status
value.
MySQL version 5.6.4 introduced a new storage format for the datetime, time, and timestamp column
types, and upgrading to MySQL 5.7 forces existing tables to be converted to the new format. Because
this conversion rebuilds your tables, it might take a considerable amount of time to complete
the DB instance upgrade. The forced conversion occurs for any DB instances that are running a version
before MySQL version 5.6.4. It also occurs for any DB instances that were upgraded from a version before
MySQL version 5.6.4 to a version other than 5.7.
If your DB instance runs a version before MySQL version 5.6.4, or was upgraded from a version before
5.6.4, we recommend an extra step. In these cases, we recommend that you convert the datetime,
time, and timestamp columns in your database before upgrading your DB instance to MySQL version
5.7. This conversion can significantly reduce the amount of time required to upgrade the DB instance to
MySQL version 5.7. To upgrade your date and time columns to the new format, issue the ALTER TABLE
<table_name> FORCE; command for each table that contains date or time columns. Because altering
a table locks the table as read-only, we recommend that you perform this update during a maintenance
window.
To find all tables in your database that have datetime, time, or timestamp columns and to generate an
ALTER TABLE <table_name> FORCE; command for each of them, you can use a query similar to the following.

SELECT DISTINCT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '` FORCE;')
  FROM information_schema.COLUMNS
 WHERE DATA_TYPE IN ('datetime', 'time', 'timestamp')
   AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
For more information, see Keywords and reserved words in the MySQL documentation.
• There must be no tables in the MySQL 5.7 mysql system database that have the same name as a table
used by the MySQL 8.0 data dictionary.
• There must be no obsolete SQL modes defined in your sql_mode system variable setting.
• There must be no tables or stored procedures with individual ENUM or SET column elements that
exceed 255 characters or 1020 bytes in length.
• Before upgrading to MySQL 8.0.13 or higher, there must be no table partitions that reside in shared
InnoDB tablespaces.
• There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or
DESC qualifiers for GROUP BY clauses.
• Your MySQL 5.7 installation must not use features that are not supported in MySQL 8.0.
For more information, see Features removed in MySQL 8.0 in the MySQL documentation.
• There must be no foreign key constraint names longer than 64 characters.
• For improved Unicode support, consider converting objects that use the utf8mb3 charset to use
the utf8mb4 charset. The utf8mb3 character set is deprecated. Also, consider using utf8mb4 for
character set references instead of utf8, because currently utf8 is an alias for the utf8mb3 charset.
For more information, see The utf8mb3 character set (3-byte UTF-8 unicode encoding) in the MySQL
documentation.
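Two of the length-based checks above can be expressed against metadata you pull from information_schema yourself. The following is an illustrative sketch; the function names and input shapes are assumptions for the example, not an RDS API:

```python
# Sketch of two of the prechecks described above, applied to metadata
# extracted from information_schema. Names and shapes are illustrative.

def fk_name_ok(constraint_name):
    # Foreign key constraint names must be no longer than 64 characters.
    return len(constraint_name) <= 64

def enum_elements_ok(elements):
    # Individual ENUM or SET elements must not exceed 255 characters
    # (the separate 1020-byte limit is not sketched here).
    return all(len(element) <= 255 for element in elements)

print(fk_name_ok("fk_orders_customers"))                 # True
print(enum_elements_ok(["small", "medium", "x" * 300]))  # False
```

Running checks like these yourself before starting the upgrade lets you fix violations on your own schedule rather than waiting for the upgrade prechecks to fail.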
When you start an upgrade from MySQL 5.7 to 8.0, Amazon RDS runs prechecks automatically to detect
these incompatibilities. For information about upgrading to MySQL 8.0, see Upgrading MySQL in the
MySQL documentation.
These prechecks are mandatory. You can't choose to skip them.
Some of the prechecks are included with MySQL, and others were created specifically by the
Amazon RDS team. For information about the prechecks provided by MySQL, see Upgrade checker
utility.
The prechecks run before the DB instance is stopped for the upgrade, meaning that they don't cause
any downtime when they run. If the prechecks find an incompatibility, Amazon RDS automatically
cancels the upgrade before the DB instance is stopped. Amazon RDS also generates an event for the
incompatibility. For more information about Amazon RDS events, see Working with Amazon RDS event
notification (p. 855).
Amazon RDS records detailed information about each incompatibility in the log file
PrePatchCompatibility.log. In most cases, the log entry includes a link to the MySQL
documentation for correcting the incompatibility. For more information about viewing log files, see
Viewing and listing database log files (p. 895).
Because the prechecks analyze the objects in your database, they consume resources and increase
the time required for the upgrade to complete.
Note
Amazon RDS runs all of these prechecks only for an upgrade from MySQL 5.7 to MySQL 8.0. For
an upgrade from MySQL 5.6 to MySQL 5.7, prechecks are limited to confirming that there are no
orphan tables and that there is enough storage space to rebuild tables. Prechecks aren't run for
upgrades to releases lower than MySQL 5.7.
Typically, an upgrade fails because there are incompatibilities in the metadata between the databases
in your DB instance and the target MySQL version. When an upgrade fails, you can view the details
about these incompatibilities in the upgradeFailure.log file. Resolve the incompatibilities before
attempting to upgrade again.
During an unsuccessful upgrade attempt and rollback, your DB instance is restarted. Any pending
parameter changes are applied during the restart and persist after the rollback.
For more information about upgrading to MySQL 8.0, see the MySQL documentation.
Note
Currently, automatic rollback after upgrade failure is supported only for MySQL 5.7 to 8.0 major
version upgrades.
Testing an upgrade
Before you perform a major version upgrade on your DB instance, thoroughly test your database for
compatibility with the new version. In addition, thoroughly test all applications that access the database
for compatibility with the new version. We recommend that you use the following procedure.
1. Review the upgrade documentation for the new version of the database engine to see if there are
compatibility issues that might affect your database or applications:
Automatic minor version upgrades for MySQL
In the AWS Management Console, these settings are under Additional configuration. The following
image shows the Auto minor version upgrade setting.
For more information about these settings, see Settings for DB instances (p. 402).
For some RDS for MySQL major versions in some AWS Regions, one minor version is designated
by RDS as the automatic upgrade version. After a minor version has been tested and approved by
Amazon RDS, the minor version upgrade occurs automatically during your maintenance window. RDS
doesn't automatically set newer released minor versions as the automatic upgrade version. Before RDS
designates a newer automatic upgrade version, it considers several criteria.
You can use the following AWS CLI command to determine the current automatic minor upgrade target
version for a specified MySQL minor version in a specific AWS Region.
For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
  --engine mysql \
  --engine-version minor-version \
  --region region \
  --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
  --output table

For Windows:

aws rds describe-db-engine-versions ^
  --engine mysql ^
  --engine-version minor-version ^
  --region region ^
  --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
  --output table
For example, the following AWS CLI command determines the automatic minor upgrade target for
MySQL minor version 8.0.11 in the US East (Ohio) AWS Region (us-east-2).
For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
  --engine mysql \
  --engine-version 8.0.11 \
  --region us-east-2 \
  --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
  --output table

For Windows:

aws rds describe-db-engine-versions ^
  --engine mysql ^
  --engine-version 8.0.11 ^
  --region us-east-2 ^
  --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
  --output table

Your output is similar to the following.
----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| False | 8.0.15 |
| False | 8.0.16 |
| False | 8.0.17 |
| False | 8.0.19 |
| False | 8.0.20 |
| False | 8.0.21 |
| True | 8.0.23 |
| False | 8.0.25 |
+--------------+-----------------+
In this example, the AutoUpgrade value is True for MySQL version 8.0.23. So, the automatic minor
upgrade target is MySQL version 8.0.23, as shown in the output.
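The same selection can be done programmatically. The following sketch filters a hypothetical list of ValidUpgradeTarget entries; the field names match the describe-db-engine-versions response, but the version values are made up for illustration:

```python
# Hypothetical ValidUpgradeTarget entries, shaped like the fields in a
# describe-db-engine-versions response. The values are illustrative.
targets = [
    {"EngineVersion": "8.0.21", "AutoUpgrade": False},
    {"EngineVersion": "8.0.23", "AutoUpgrade": True},
    {"EngineVersion": "8.0.25", "AutoUpgrade": False},
]

# At most one target is flagged as the automatic minor upgrade version.
auto_targets = [t["EngineVersion"] for t in targets if t["AutoUpgrade"]]
print(auto_targets)  # ['8.0.23']
```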
A MySQL DB instance is automatically upgraded during your maintenance window if it meets the
required criteria. For more information, see Automatically upgrading the minor engine version (p. 431).
Using a read replica to reduce downtime when upgrading a MySQL database
If you can't use a blue/green deployment and your MySQL DB instance is currently in use with a
production application, you can use the following procedure to upgrade the database version for your DB
instance. This procedure can reduce the amount of downtime for your application.
By using a read replica, you can perform most of the maintenance steps ahead of time and minimize the
necessary changes during the actual outage. With this technique, you can test and prepare the new DB
instance without making any changes to your existing DB instance.
The following procedure shows an example of upgrading from MySQL version 5.7 to MySQL version 8.0.
You can use the same general steps for upgrades to other major versions.
Note
When you are upgrading from MySQL version 5.7 to MySQL version 8.0, complete the prechecks
before performing the upgrade. For more information, see Prechecks for upgrades from MySQL
5.7 to 8.0 (p. 1667).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create a read replica of your MySQL 5.7 DB instance. This process creates an upgradable copy of
your database. Other read replicas of the DB instance might also exist.
a. In the console, choose Databases, and then choose the DB instance that you want to upgrade.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MySQL 5.7 DB instance.
d. Choose Create read replica.
3. (Optional) When the read replica has been created and Status shows Available, convert the read
replica into a Multi-AZ deployment and enable backups.
By default, a read replica is created as a Single-AZ deployment with backups disabled. Because the
read replica ultimately becomes the production DB instance, it is a best practice to configure a Multi-
AZ deployment and enable backups now.
a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For Multi-AZ deployment, choose Create a standby instance.
d. For Backup Retention Period, choose a nonzero value, such as 3 days, and then choose
Continue.
e. For Scheduling of modifications, choose Apply immediately.
f. Choose Modify DB instance.
4. When the read replica Status shows Available, upgrade the read replica to MySQL 8.0:
a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For DB engine version, choose the MySQL 8.0 version to upgrade to, and then choose Continue.
d. For Scheduling of modifications, choose Apply immediately.
e. Choose Modify DB instance to start the upgrade.
5. When the upgrade is complete and Status shows Available, verify that the upgraded read replica is
up-to-date with the source MySQL 5.7 DB instance. To verify, connect to the read replica and run the
SHOW REPLICA STATUS command. If the Seconds_Behind_Master field is 0, then replication is
up-to-date.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
6. (Optional) Create a read replica of your read replica.
If you want the DB instance to have a read replica after it is promoted to a standalone DB instance,
you can create the read replica now.
a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MySQL 5.7 DB instance.
d. Choose Create read replica.
7. (Optional) Configure a custom DB parameter group for the read replica.
If you want the DB instance to use a custom parameter group after it is promoted to a standalone
DB instance, you can create the DB parameter group now and associate it with the read replica.
a. Create a custom DB parameter group for MySQL 8.0. For instructions, see Creating a DB
parameter group (p. 350).
b. Modify the parameters that you want to change in the DB parameter group you just created. For
instructions, see Modifying parameters in a DB parameter group (p. 352).
c. In the console, choose Databases, and then choose the read replica.
d. Choose Modify.
e. For DB parameter group, choose the MySQL 8.0 DB parameter group you just created, and then
choose Continue.
f. For Scheduling of modifications, choose Apply immediately.
g. Choose Modify DB instance to apply the changes.
8. Make your MySQL 8.0 read replica a standalone DB instance.
Important
When you promote your MySQL 8.0 read replica to a standalone DB instance, it is no longer
a replica of your MySQL 5.7 DB instance. We recommend that you promote your MySQL 8.0
read replica during a maintenance window when your source MySQL 5.7 DB instance is in
read-only mode and all write operations are suspended. When the promotion is completed,
you can direct your write operations to the upgraded MySQL 8.0 DB instance to ensure that
no write operations are lost.
In addition, we recommend that, before promoting your MySQL 8.0 read replica, you
perform all necessary data definition language (DDL) operations on your MySQL 8.0 read
replica. An example is creating indexes. This approach avoids negative effects on the
performance of the MySQL 8.0 read replica after it has been promoted. To promote a read
replica, use the following procedure.
a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Promote.
c. Choose Yes to enable automated backups for the read replica instance. For more information,
see Working with backups (p. 591).
d. Choose Continue.
e. Choose Promote Read Replica.
9. You now have an upgraded version of your MySQL database. At this point, you can direct your
applications to the new MySQL 8.0 DB instance.
Importing data into a MySQL DB instance
Overview
The following techniques are available for importing data into an RDS for MySQL DB instance.

Source: Existing MySQL database on premises or on Amazon EC2
Amount of data: Any
Frequency: One time
Downtime: Some
Technique: Create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file to a new Amazon RDS DB instance running MySQL.
More information: Restoring a backup into a MySQL DB instance (p. 1680)

Source: Any existing database
Amount of data: Any
Frequency: One time or ongoing
Downtime: Minimal
Technique: Use AWS Database Migration Service to migrate the database with minimal downtime and, for many database engines, continue ongoing replication.
More information: What is AWS Database Migration Service and Using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide

Source: Existing MySQL DB instance
Amount of data: Any
Frequency: One time or ongoing
Downtime: Minimal
Technique: Create a read replica for ongoing replication. Promote the read replica for one-time creation of a new DB instance.
More information: Working with DB instance read replicas (p. 438)

Source: Existing MariaDB or MySQL database
Amount of data: Small
Frequency: One time
Downtime: Some
Technique: Copy the data directly to your MySQL DB instance using a command-line utility.
More information: Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance (p. 1688)

Source: Data not stored in an existing database
Amount of data: Medium
Frequency: One time
Downtime: Some
Technique: Create flat files and import them using the mysqlimport utility.
More information: Importing data from any source to a MariaDB or MySQL DB instance (p. 1703)
Note
The 'mysql' system database contains authentication and authorization information required
to log in to your DB instance and access your data. Dropping, altering, renaming, or truncating
tables, data, or other contents of the 'mysql' database in your DB instance can result in errors
and might render the DB instance and your data inaccessible. If this occurs, you can restore the
DB instance from a snapshot using the AWS CLI restore-db-instance-from-db-snapshot
command. Alternatively, you can recover the DB instance using the AWS CLI restore-db-instance-to-
point-in-time command.
Importing data considerations
Binary log
Data loads incur a performance penalty and require additional free disk space (up to four times more)
when binary logging is enabled versus loading the same data with binary logging turned off. The severity
of the performance penalty and the amount of free disk space required is directly proportional to the
size of the transactions used to load the data.
Transaction size
Transaction size plays an important role in MySQL data loads. It has a major influence on resource
consumption, disk space utilization, resume process, time to recover, and input format (flat files or SQL).
This section describes how transaction size affects binary logging and makes the case for disabling
binary logging during large data loads. As noted earlier, binary logging is enabled and disabled by
setting the Amazon RDS automated backup retention period. Non-zero values enable binary logging,
and zero disables it. We also describe the impact of large transactions on InnoDB and why it's important
to keep transaction sizes small.
Small transactions
For small transactions, binary logging doubles the number of disk writes required to load the data. This
effect can severely degrade performance for other database sessions and increase the time required
to load the data. The degradation experienced depends in part upon the upload rate, other database
activity taking place during the load, and the capacity of your Amazon RDS DB instance.
The binary logs also consume disk space roughly equal to the amount of data loaded until they are
backed up and removed. Fortunately, Amazon RDS minimizes this by backing up and removing binary
logs on a frequent basis.
Large transactions
Large transactions incur a 3X penalty for IOPS and disk consumption with binary logging enabled. This
is due to the binary log cache spilling to disk, consuming disk space and incurring additional IO for
each write. The cache cannot be written to the binlog until the transaction commits or rolls back, so it
consumes disk space in proportion to the amount of data loaded. When the transaction commits, the
cache must be copied to the binlog, creating a third copy of the data on disk.
Because of this, there must be at least three times as much free disk space available to load the data
compared to loading with binary logging disabled. For example, 10 GiB of data loaded as a single
transaction consumes at least 30 GiB disk space during the load. It consumes 10 GiB for the table +
10 GiB for the binary log cache + 10 GiB for the binary log itself. The cache file remains on disk until
the session that created it terminates or the session fills its binary log cache again during another
transaction. The binary log must remain on disk until backed up, so it might be some time before the
extra 20 GiB is freed.
If the data was loaded using LOAD DATA LOCAL INFILE, yet another copy of the data is created if the
database has to be recovered from a backup made before the load. During recovery, MySQL extracts
the data from the binary log into a flat file. MySQL then runs LOAD DATA LOCAL INFILE, just as in the
original transaction. However, this time the input file is local to the database server. Continuing with the
example preceding, recovery fails unless there is at least 40 GiB free disk space available.
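The disk-space arithmetic in the preceding paragraphs can be sketched as a small helper. The function and its parameters are illustrative assumptions, not part of any AWS tooling:

```python
def min_free_space_gib(data_gib, binary_logging=True, recovery_replay=False):
    # One copy for the table data itself.
    copies = 1
    if binary_logging:
        # Binary log cache spilled to disk, plus the binary log itself.
        copies += 2
    if recovery_replay:
        # Flat file extracted from the binary log while recovering a
        # LOAD DATA LOCAL INFILE transaction from a backup.
        copies += 1
    return data_gib * copies

print(min_free_space_gib(10))                        # 30
print(min_free_space_gib(10, recovery_replay=True))  # 40
print(min_free_space_gib(10, binary_logging=False))  # 10
```

This reproduces the worked example: a 10 GiB single-transaction load needs at least 30 GiB free during the load, 40 GiB during recovery, and only 10 GiB with binary logging disabled.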
After the load, set the backup retention period back to an appropriate (nonzero) value.
You can't set the backup retention period to zero if the DB instance is a source DB instance for read
replicas.
InnoDB
The information in this section provides a strong argument for keeping transaction sizes small when
using InnoDB.
Undo
InnoDB generates undo to support features such as transaction rollback and MVCC. Undo is stored in the
InnoDB system tablespace (usually ibdata1) and is retained until removed by the purge thread. The purge
thread cannot advance beyond the undo of the oldest active transaction, so it is effectively blocked
until the transaction commits or completes a rollback. If the database is processing other transactions
during the load, their undo also accumulates in the system tablespace and cannot be removed even
if they commit and no other transaction needs the undo for MVCC. In this situation, all transactions
(including read-only transactions) that access any of the rows changed by any transaction (not just the
load transaction) slow down. The slowdown occurs because transactions scan through undo that could
have been purged if not for the long-running load transaction.
Undo is stored in the system tablespace, and the system tablespace never shrinks in size. Thus, large data
load transactions can cause the system tablespace to become quite large, consuming disk space that you
can't reclaim without recreating the database from scratch.
Rollback
InnoDB is optimized for commits. Rolling back a large transaction can take a very, very long time. In
some cases, it might be faster to perform a point-in-time recovery or restore a DB snapshot.
Flat files
Loading flat files with LOAD DATA LOCAL INFILE can be the fastest and least costly method of loading
data as long as transactions are kept relatively small. Compared to loading the same data with SQL,
flat files usually require less network traffic, which lowers transmission costs, and they load much
faster because of the reduced overhead in the database.
• Resume capability – Keeping track of which files have been loaded is easy. If a problem arises
during the load, you can pick up where you left off with little effort. Some data might have to be
retransmitted to Amazon RDS, but with small files, the amount retransmitted is minimal.
• Load data in parallel – If you've got IOPS and network bandwidth to spare with a single file load,
loading in parallel might save time.
• Throttle the load rate – Data load having a negative impact on other processes? Throttle the load by
increasing the interval between files.
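The "small files" idea above can be sketched as follows: split the rows to be loaded into fixed-size chunks so that each chunk becomes one small LOAD DATA LOCAL INFILE transaction, and a failed load can resume at the last unfinished chunk. The chunk size and helper name are arbitrary choices for the example:

```python
import itertools

def chunk_rows(rows, rows_per_chunk):
    # Yield successive lists of at most rows_per_chunk rows; each list
    # would be written to its own flat file and loaded as one small
    # transaction.
    it = iter(rows)
    while True:
        chunk = list(itertools.islice(it, rows_per_chunk))
        if not chunk:
            return
        yield chunk

chunks = list(chunk_rows(range(10), 4))
print([len(c) for c in chunks])  # [4, 4, 2]
```

Tracking which chunk files have completed gives you the resume capability described above: only the unfinished chunks need to be retransmitted.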
Be careful
The advantages of LOAD DATA LOCAL INFILE diminish rapidly as transaction size increases. If breaking up
a large set of data into smaller chunks isn't an option, SQL might be the better choice.
SQL
SQL has one main advantage over flat files: it's easy to keep transaction sizes small. However, SQL can
take significantly longer to load than flat files and it can be difficult to determine where to resume
the load after a failure. For example, mysqldump files are not restartable. If a failure occurs while
loading a mysqldump file, the file requires modification or replacement before the load can resume. The
alternative is to restore to the point in time before the load and replay the file after the cause of the
failure has been corrected.
To create a checkpoint, simply take a DB snapshot. Any previous DB snapshots taken for checkpoints can
be removed without affecting durability or restore time.
Snapshots are fast too, so frequent checkpointing doesn't add significantly to load time.
• Create all secondary indexes before loading. This is counter-intuitive for those familiar with other
databases. Adding or modifying a secondary index causes MySQL to create a new table with the index
changes, copy the data from the existing table to the new table, and drop the original table.
• Load data in PK order. This is particularly helpful for InnoDB tables, where load times can be reduced
by 75–80 percent and data file size cut in half.
• Disable foreign key constraints (foreign_key_checks=0). For flat files loaded with LOAD DATA
LOCAL INFILE, this is required in many cases. For any load, disabling FK checks provides significant
performance gains. Just be sure to enable the constraints and verify the data after the load.
• Load in parallel unless already near a resource limit. Use partitioned tables when appropriate.
• Use multi-value inserts when loading with SQL to minimize overhead when running statements. When
using mysqldump, this is done automatically.
• Reduce InnoDB log IO (innodb_flush_log_at_trx_commit=0).
• If you are loading data into a DB instance that does not have read replicas, set the sync_binlog
parameter to 0 while loading data. When data loading is complete, set the sync_binlog parameter
back to 1.
• Load data before converting the DB instance to a Multi-AZ deployment. However, if the DB instance
already uses a Multi-AZ deployment, switching to a Single-AZ deployment for data loading is not
recommended, because doing so only provides marginal improvements.
Note
Using innodb_flush_log_at_trx_commit=0 causes InnoDB to flush its logs every second instead
of at each commit. This provides a significant speed advantage, but can lead to data loss during
a crash. Use with caution.
Topics
• Restoring a backup into a MySQL DB instance (p. 1680)
• Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance (p. 1688)
• Importing data to an Amazon RDS MariaDB or MySQL database with reduced downtime (p. 1690)
• Importing data from any source to a MariaDB or MySQL DB instance (p. 1703)
Restoring a backup into a MySQL DB instance
The scenario described in this section restores a backup of an on-premises database. You can use this
technique for databases in other locations, such as Amazon EC2 or non-AWS cloud services, as long as
the database is accessible.
Importing backup files from Amazon S3 is supported for MySQL in all AWS Regions.
We recommend that you import your database to Amazon RDS by using backup files if your on-premises
database can be offline while the backup file is created, copied, and restored. If your database can't be
offline, you can use binary log (binlog) replication to update your database after you have migrated
to Amazon RDS through Amazon S3 as explained in this topic. For more information, see Configuring
binary log file position replication with an external source instance (p. 1724). You can also use the AWS
Database Migration Service to migrate your database to Amazon RDS. For more information, see What is
AWS Database Migration Service?
• You can only import your data to a new DB instance, not an existing DB instance.
• You must use Percona XtraBackup to create the backup of your on-premises database.
• You can't import data from a DB snapshot export to Amazon S3.
• You can't migrate from a source database that has tables defined outside of the default MySQL data
directory.
• You must import your data to the default minor version of your MySQL major version in your AWS
Region. For example, if your major version is MySQL 8.0, and the default minor version for your AWS
Region is 8.0.28, then you must import your data into a MySQL version 8.0.28 DB instance. You can
upgrade your DB instance after importing. For information about determining the default minor
version, see MySQL on Amazon RDS versions (p. 1627).
• Backward migration is not supported for both major versions and minor versions. For example, you
can't migrate from version 8.0 to version 5.7, and you can't migrate from version 8.0.32 to version
8.0.31.
• The innodb_data_file_path parameter must be configured with only one data file that uses the
default data file name ibdata1:12M:autoextend. Databases with two data files, or with a data file
that has a different name, can't be migrated using this method.
The following are examples of file names that are not allowed:
"innodb_data_file_path=ibdata1:50M; ibdata2:50M:autoextend" and
"innodb_data_file_path=ibdata01:50M:autoextend".
• The maximum size of the restored database is the maximum database size supported minus the size of
the backup. So, if the maximum database size supported is 64 TiB, and the size of the backup is 30 TiB,
then the maximum size of the restored database is 34 TiB, as in the following example:
For information about the maximum database size supported by Amazon RDS for MySQL, see General
Purpose SSD storage (p. 102) and Provisioned IOPS SSD storage (p. 104).
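The size limit above is simple subtraction, sketched here as an illustrative helper (the function name is an assumption for the example):

```python
def max_restored_size_tib(max_supported_tib, backup_tib):
    # The restored database can grow only into the space that the
    # backup itself does not occupy.
    return max_supported_tib - backup_tib

print(max_restored_size_tib(64, 30))  # 34
```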
If you already have an Amazon S3 bucket, you can use that. If you don't have an Amazon S3 bucket, you
can create a new one. If you want to create a new bucket, see Creating a bucket.
Use the Percona XtraBackup tool to create your backup. For more information, see Creating your
database backup (p. 1682).
If you already have an IAM role, you can use that. If you don't have an IAM role, you can create a new one
manually. Alternatively, you can choose to have a new IAM role created for you in your account by the
wizard when you restore the database by using the AWS Management Console. If you want to create a
new IAM role manually, or attach trust and permissions policies to an existing IAM role, see Creating an
IAM role manually (p. 1684). If you want to have a new IAM role created for you, follow the procedure in
Console (p. 1685).
You can create a full backup of your MySQL database files using Percona XtraBackup. Alternatively, if you
already use Percona XtraBackup to back up your MySQL database files, you can upload your existing full
and incremental backup directories and files.
For more information about backing up your database with Percona XtraBackup, see Percona XtraBackup
- documentation and The xtrabackup binary on the Percona website.
For example, the following command creates a backup of a MySQL database and stores the files in the
/on-premises/s3-restore/backup folder.

xtrabackup --backup --user=<myuser> --password=<password> --target-dir=</on-premises/s3-restore/backup>
If you want to compress your backup into a single file (which can be split later, if needed), you can save
your backup in one of the following formats:
• Gzip (.gz)
• tar (.tar)
• Percona xbstream (.xbstream)
Note
Percona XtraBackup 8.0 only supports Percona xbstream for compression.
The following command creates a backup of your MySQL database split into multiple Gzip files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar.gz

The following command creates a backup of your MySQL database split into multiple tar files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar

The following command creates a backup of your MySQL database split into multiple xbstream files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.xbstream
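The stream-and-split pattern these commands rely on can be exercised locally before running it against a real database. In the following sketch, a plain tar stream of a dummy data directory stands in for the xtrabackup stream, and the chunk size is scaled down to 256 KiB; the gzip and split stages work the same way in a real backup.

```shell
# Create a dummy data directory to stand in for the MySQL data files.
mkdir -p /tmp/dbdata
head -c 1048576 /dev/urandom > /tmp/dbdata/ibdata1

# Stream the archive through gzip and split the result into numbered
# 256 KiB chunks named backup.tar.gz00, backup.tar.gz01, and so on.
# In a real backup, the "tar cf - ..." stage is replaced by
# "xtrabackup --backup --stream=tar ...".
rm -rf /tmp/s3-restore && mkdir -p /tmp/s3-restore
tar cf - -C /tmp dbdata | gzip | split -d -b 262144 - /tmp/s3-restore/backup.tar.gz

ls /tmp/s3-restore
```

Because split -d numbers the pieces, they sort back into the correct order when concatenated for a restore.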
When copying your existing full and incremental backup files to an Amazon S3 bucket, you must
recursively copy the contents of the base directory. Those contents include the full backup and also all
incremental backup directories and files. This copy must preserve the directory structure in the Amazon
S3 bucket. Amazon RDS iterates through all files and directories. Amazon RDS uses the
xtrabackup_checkpoints file that is included with each incremental backup to identify the base directory, and to
order incremental backups by log sequence number (LSN) range.
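The required layout can be sketched with a hypothetical base directory holding one full and one incremental backup. The local cp -R below stands in for the AWS CLI command aws s3 cp local_dir s3://your-bucket/prefix --recursive, which performs the same structure-preserving recursive copy to S3; the bucket name and all paths here are illustrative.

```shell
# Hypothetical backup layout: a base (full) backup plus an incremental one,
# each carrying its xtrabackup_checkpoints file.
mkdir -p /tmp/xtra/base /tmp/xtra/inc1
echo 'backup_type = full-backuped' > /tmp/xtra/base/xtrabackup_checkpoints
echo 'backup_type = incremental'   > /tmp/xtra/inc1/xtrabackup_checkpoints

# Recursive copy that preserves the directory structure. Against S3 the
# equivalent is: aws s3 cp /tmp/xtra s3://your-bucket/backups --recursive
rm -rf /tmp/bucket-mirror
cp -R /tmp/xtra /tmp/bucket-mirror

find /tmp/bucket-mirror -name xtrabackup_checkpoints
```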
Amazon RDS consumes your backup files in alphabetical order and also in natural number order. Use the
split option when you issue the xtrabackup command to ensure that your backup files are written
and named in the proper order.
Amazon RDS doesn't support partial backups created using Percona XtraBackup. You can't use the
following options to create a partial backup when you back up the source files for your database:
--tables, --tables-exclude, --tables-file, --databases, --databases-exclude, or
--databases-file.
Amazon RDS supports incremental backups created using Percona XtraBackup. For more information
about creating incremental backups using Percona XtraBackup, see Incremental backup.
To manually create a new IAM role for importing your database from Amazon S3, create a role to
delegate permissions from Amazon RDS to your Amazon S3 bucket. When you create an IAM role,
you attach trust and permissions policies. To import your backup files from Amazon S3, use trust and
permissions policies similar to the examples following. For more information about creating the role, see
Creating a role to delegate permissions to an AWS service.
Alternatively, you can choose to have a new IAM role created for you by the wizard when you restore the
database by using the AWS Management Console. If you want to have a new IAM role created for you,
follow the procedure in Console (p. 1685).
The trust and permissions policies require that you provide an Amazon Resource Name (ARN). For more
information about ARN formatting, see Amazon Resource Names (ARNs) and AWS service namespaces.
Example Trust policy for importing from Amazon S3

{
    "Version": "2012-10-17",
    "Statement":
    [{
        "Effect": "Allow",
        "Principal": {"Service": "rds.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}
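In practice, you can save the trust policy to a file, check that it parses, and then pass it to the AWS CLI when creating the role. The role name S3Access matches the permissions policy example that follows; the aws iam create-role call is shown as a comment because it requires AWS credentials.

```shell
# Write the trust policy that lets Amazon RDS assume the role.
cat > /tmp/rds-s3-trust.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement":
    [{
        "Effect": "Allow",
        "Principal": {"Service": "rds.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}
EOF

# Sanity-check that the document is valid JSON before using it.
python3 -m json.tool /tmp/rds-s3-trust.json > /dev/null && echo 'policy OK'

# With credentials configured, the role would then be created with:
# aws iam create-role --role-name S3Access \
#     --assume-role-policy-document file:///tmp/rds-s3-trust.json
```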
Example Permissions policy for importing from Amazon S3 — IAM user permissions
{
    "Version": "2012-10-17",
    "Statement":
    [
        {
            "Sid": "AllowS3AccessRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::IAM User ID:role/S3Access"
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement":
    [
        {
            "Effect": "Allow",
            "Action":
            [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucket_name"
        },
        {
            "Effect": "Allow",
            "Action":
            [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket_name/prefix*"
        }
    ]
}
Note
If you include a file name prefix, include the asterisk (*) after the prefix. If you don't want to
specify a prefix, specify only an asterisk.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the top right corner of the Amazon RDS console, choose the AWS Region in which to create your
DB instance. Choose the same AWS Region as the Amazon S3 bucket that contains your database
backup.
3. In the navigation pane, choose Databases.
4. Choose Restore from S3.
5. Under S3 destination:
a. Choose the S3 bucket that contains your backup files.
b. (Optional) For S3 folder path prefix, specify a file path prefix for the files stored in your
Amazon S3 bucket.
If you don't specify a prefix, then RDS creates your DB instance using all of the files and folders
in the root folder of the S3 bucket. If you do specify a prefix, then RDS creates your DB instance
using the files and folders in the S3 bucket where the path for the file begins with the specified
prefix.
For example, suppose that you store your backup files on S3 in a subfolder named backups, and
you have multiple sets of backup files, each in its own directory (gzip_backup1, gzip_backup2,
and so on). In this case, you specify a prefix of backups/gzip_backup1 to restore from the files in
the gzip_backup1 folder.
6. Under Engine options, choose the engine type and version for your DB instance. In the AWS
Management Console, only the default minor version is available. You can upgrade your DB instance
after importing.
7. For IAM role, you can choose an existing IAM role.
8. (Optional) You can also have a new IAM role created for you by choosing Create a new role and
entering the IAM role name.
9. Specify your DB instance information. For information about each setting, see Settings for DB
instances (p. 308).
Note
Be sure to allocate enough memory for your new DB instance so that the restore operation
can succeed.
You can also choose Enable storage autoscaling to allow for future growth automatically.
10. Choose additional settings as needed.
11. Choose Create database.
AWS CLI
To import data from Amazon S3 to a new MySQL DB instance by using the AWS CLI, call the restore-
db-instance-from-s3 command with the following parameters. For information about each setting, see
Settings for DB instances (p. 308).
Note
Be sure to allocate enough memory for your new DB instance so that the restore operation can
succeed.
You can also use the --max-allocated-storage parameter to enable storage autoscaling
and allow for future growth automatically.
• --allocated-storage
• --db-instance-identifier
• --db-instance-class
• --engine
• --master-username
• --manage-master-user-password
• --s3-bucket-name
• --s3-ingestion-role-arn
• --s3-prefix
• --source-engine
• --source-engine-version
Example
aws rds restore-db-instance-from-s3 \
--db-instance-identifier myidentifier \
--db-instance-class db.m5.large \
--engine mysql \
--master-username admin \
--manage-master-user-password \
--s3-bucket-name mybucket \
--s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename \
--s3-prefix bucketprefix \
--source-engine mysql \
--source-engine-version 8.0.32 \
--max-allocated-storage 1000
For Windows:

aws rds restore-db-instance-from-s3 ^
--db-instance-identifier myidentifier ^
--db-instance-class db.m5.large ^
--engine mysql ^
--master-username admin ^
--manage-master-user-password ^
--s3-bucket-name mybucket ^
--s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename ^
--s3-prefix bucketprefix ^
--source-engine mysql ^
--source-engine-version 8.0.32 ^
--max-allocated-storage 1000
RDS API
To import data from Amazon S3 to a new MySQL DB instance by using the Amazon RDS API, call the
RestoreDBInstanceFromS3 operation.
Importing data from an external database

A typical mysqldump command to move data from an external database to an Amazon RDS DB instance
looks similar to the following.
mysqldump -u local_user \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-plocal_password | mysql -u RDS_user \
--port=port_number \
--host=host_name \
-pRDS_password
Important
Make sure not to leave a space between the -p option and the entered password.
Specify credentials other than the prompts shown here as a security best practice.
Make sure that you're aware of the following recommendations and considerations:
• Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
• If you need to migrate users and privileges, consider using a tool that generates the data control
language (DCL) for recreating them, such as the pt-show-grants utility.
• To perform the import, make sure the user doing so has access to the DB instance. For more
information, see Controlling access with security groups (p. 2680).
• -u local_user – Use to specify a user name. In the first usage of this parameter, you specify the
name of a user account on the local MariaDB or MySQL database identified by the --databases
parameter.
• --databases database_name – Use to specify the name of the database on the local MariaDB or
MySQL instance that you want to import into Amazon RDS.
• --single-transaction – Use to ensure that all of the data loaded from the local database is
consistent with a single point in time. If there are other processes changing the data while mysqldump
is reading it, using this parameter helps maintain data integrity.
• --compress – Use to reduce network bandwidth consumption by compressing the data from the local
database before sending it to Amazon RDS.
• --order-by-primary – Use to reduce load time by sorting each table's data by its primary key.
• -plocal_password – Use to specify a password. In the first usage of this parameter, you specify the
password for the user account identified by the first -u parameter.
• -u RDS_user – Use to specify a user name. In the second usage of this parameter, you specify the
name of a user account on the default database for the MariaDB or MySQL DB instance identified by
the --host parameter.
• --port port_number – Use to specify the port for your MariaDB or MySQL DB instance. By default,
this is 3306 unless you changed the value when creating the instance.
• --host host_name – Use to specify the Domain Name System (DNS) name from the Amazon RDS DB
instance endpoint, for example, myinstance.123456789012.us-east-1.rds.amazonaws.com.
You can find the endpoint value in the instance details in the Amazon RDS Management Console.
• -pRDS_password – Use to specify a password. In the second usage of this parameter, you specify the
password for the user account identified by the second -u parameter.
Make sure to create any stored procedures, triggers, functions, or events manually in your Amazon RDS
database. If you have any of these objects in the database that you are copying, then exclude them when
you run mysqldump. To do so, include the following parameters with your mysqldump command:
--routines=0 --triggers=0 --events=0.
The following example copies the world sample database on the local host to a MySQL DB instance.
mysqldump -u localuser \
--databases world \
--single-transaction \
--compress \
--order-by-primary \
--routines=0 \
--triggers=0 \
--events=0 \
-plocalpassword | mysql -u rdsuser \
--port=3306 \
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com \
-prdspassword
For Windows, run the following command in a command prompt that has been opened by right-clicking
Command Prompt on the Windows programs menu and choosing Run as administrator:
mysqldump -u localuser ^
--databases world ^
--single-transaction ^
--compress ^
--order-by-primary ^
--routines=0 ^
--triggers=0 ^
--events=0 ^
-plocalpassword | mysql -u rdsuser ^
--port=3306 ^
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com ^
-prdspassword
Note
Specify credentials other than the prompts shown here as a security best practice.
Importing data with reduced downtime

In this procedure, you transfer a copy of your database data to an Amazon EC2 instance and import the
data into a new Amazon RDS database. You then use replication to bring the Amazon RDS database
up-to-date with your live external instance, before redirecting your application to the Amazon RDS
database. Configure MariaDB replication based on global transaction identifiers (GTIDs) if the external
instance is MariaDB 10.0.24 or higher and the target instance is RDS for MariaDB. Otherwise, configure
replication based on binary log coordinates. We recommend GTID-based replication if your external
database supports it because GTID-based replication is a more reliable method. For more information,
see Global transaction ID in the MariaDB documentation.
Note
If you want to import data into a MySQL DB instance and your scenario supports it, we
recommend moving data in and out of Amazon RDS by using backup files and Amazon S3. For
more information, see Restoring a backup into a MySQL DB instance (p. 1680).
Note
We don't recommend that you use this procedure with source MySQL databases from MySQL
versions earlier than version 5.5 because of potential replication issues. For more information,
see Replication compatibility between MySQL versions in the MySQL documentation.
You can use the mysqldump utility to create a database backup in either SQL or delimited-text format.
We recommend that you do a test run with each format in a non-production environment to see which
method minimizes the amount of time that mysqldump runs.
We also recommend that you weigh mysqldump performance against the benefit offered by using the
delimited-text format for loading. A backup using delimited-text format creates a tab-separated text
file for each table being dumped. To reduce the amount of time required to import your database, you
can load these files in parallel using the LOAD DATA LOCAL INFILE command. For more information
about choosing a mysqldump format and then loading the data, see Using mysqldump for backups in
the MySQL documentation.
Before you start the backup operation, make sure to set the replication options on the MariaDB or
MySQL database that you are copying to Amazon RDS. The replication options include turning on
binary logging and setting a unique server ID. Setting these options causes your server to start logging
database transactions and prepares it to be a source replication instance later in this process.
Note
Use the --single-transaction option with mysqldump because it dumps a consistent
state of the database. To ensure a valid dump file, don't run data definition language (DDL)
statements while mysqldump is running. You can schedule a maintenance window for these
operations.
Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
To migrate users and privileges, consider using a tool that generates the data control language
(DCL) for recreating them, such as the pt-show-grants utility.
sudo vi /etc/my.cnf
Add the log_bin and server_id options to the [mysqld] section. The log_bin option provides
a file name identifier for binary log files. The server_id option provides a unique identifier for the
server in source-replica relationships.
The following example shows the updated [mysqld] section of a my.cnf file.
[mysqld]
log-bin=mysql-bin
server-id=1
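If you prefer to script this change rather than edit the file interactively, appending the options can be sketched as follows. The file path /tmp/my.cnf is illustrative (the real file is /etc/my.cnf), and a production script should first check that the options aren't already set in the existing [mysqld] section.

```shell
# Work on an illustrative copy of the configuration file containing
# an empty [mysqld] section.
cat > /tmp/my.cnf <<'EOF'
[mysqld]
EOF

# Append the binary-logging and server-ID options to the [mysqld] section.
printf 'log-bin=mysql-bin\nserver-id=1\n' >> /tmp/my.cnf

# Confirm both options are now present.
grep -E '^(log-bin|server-id)' /tmp/my.cnf
```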
Specify --master-data=2 to create a backup file that can be used to start replication between
servers. For more information, see the mysqldump documentation.
To improve performance and ensure data integrity, use the --order-by-primary and --single-
transaction options of mysqldump.
To avoid including the MySQL system database in the backup, do not use the --all-databases
option with mysqldump. For more information, see Creating a data snapshot using mysqldump in the
MySQL documentation.
Use chmod if necessary to make sure that the directory where the backup file is being created is
writable.
Important
On Windows, run the command window as an administrator.
• To produce SQL output, use the following command.
sudo mysqldump \
--databases database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-r backup.sql \
-u local_user \
-plocal_password
Note
Specify credentials other than the prompts shown here as a security best practice.
For Windows:
mysqldump ^
--databases database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-r backup.sql ^
-u local_user ^
-plocal_password
Note
Specify credentials other than the prompts shown here as a security best practice.
• To produce delimited-text output, use the following command.
sudo mysqldump \
--tab=target_directory \
--fields-terminated-by ',' \
--fields-enclosed-by '"' \
--lines-terminated-by 0x0d0a \
database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-plocal_password
For Windows:
mysqldump ^
--tab=target_directory ^
--fields-terminated-by "," ^
--fields-enclosed-by """ ^
--lines-terminated-by 0x0d0a ^
database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-plocal_password
Note
Specify credentials other than the prompts shown here as a security best practice.
Make sure to create any stored procedures, triggers, functions, or events manually in
your Amazon RDS database. If you have any of these objects in the database that you
are copying, exclude them when you run mysqldump. To do so, include the following
arguments with your mysqldump command: --routines=0 --triggers=0 --events=0.
When using the delimited-text format, a CHANGE MASTER TO comment is returned when you
run mysqldump. This comment contains the master log file name and position. If the external
instance isn't MariaDB version 10.0.24 or higher, note the values for MASTER_LOG_FILE
and MASTER_LOG_POS. You need these values when setting up replication.
If you are using SQL format, you can get the master log file name and position in the CHANGE
MASTER TO comment in the backup file. If the external instance is MariaDB version 10.0.24 or
higher, you can get the GTID in the next step.
2. If the external instance you are using is MariaDB version 10.0.24 or higher, you use GTID-based
replication. Run SHOW MASTER STATUS on the external MariaDB instance to get the binary log file
name and position, then convert them to a GTID by running BINLOG_GTID_POS on the external
MariaDB instance.
3. Compress the copy of your database. For example, for SQL output, use the following command.

gzip backup.sql
security group and add an inbound rule, choose Security Groups in the EC2 console navigation pane,
choose your security group, and then add an inbound rule for MySQL or Aurora specifying the private
IP address of your EC2 instance. To learn how to add an inbound rule to a VPC security group, see
Adding and removing rules in the Amazon VPC User Guide.
4. Copy your compressed database backup file from your local system to your Amazon EC2 instance.
Use chmod if necessary to make sure that you have write permission for the target directory of the
Amazon EC2 instance. You can use scp or a Secure Shell (SSH) client to copy the file. The following is
an example.
Important
Be sure to copy sensitive data using a secure network transfer protocol.
5. Connect to your Amazon EC2 instance and install the latest updates and the MySQL client tools using
the following commands.
For more information, see Connect to your instance in the Amazon Elastic Compute Cloud User Guide
for Linux.
Important
This example installs the MySQL client on an Amazon Machine Image (AMI) for an Amazon
Linux distribution. To install the MySQL client on a different distribution, such as Ubuntu or
Red Hat Enterprise Linux, this example doesn't work. For information about installing MySQL,
see Installing and Upgrading MySQL in the MySQL documentation.
6. While connected to your Amazon EC2 instance, decompress your database backup file. The following
are examples.
• To decompress SQL output, use the following command.
gzip backup.sql.gz -d
To create a MariaDB or MySQL DB instance, follow the instructions in Creating an Amazon RDS DB
instance (p. 300) and use the following guidelines:
• Specify a DB engine version that is compatible with your source DB instance, as follows:
• If your source instance is MySQL 5.5.x, the Amazon RDS DB instance must be MySQL.
• If your source instance is MySQL 5.6.x or 5.7.x, the Amazon RDS DB instance must be MySQL or
MariaDB.
• If your source instance is MySQL 8.0.x, the Amazon RDS DB instance must be MySQL 8.0.x.
• If your source instance is MariaDB 5.5 or higher, the Amazon RDS DB instance must be MariaDB.
• Specify the same virtual private cloud (VPC) and VPC security group as for your Amazon EC2
instance. This approach ensures that your Amazon EC2 instance and your Amazon RDS instance
are visible to each other over the network. Make sure your DB instance is publicly accessible. To
set up replication with your source database as described later, your DB instance must be publicly
accessible.
• Don't configure multiple Availability Zones, backup retention, or read replicas until after you have
imported the database backup. When that import is completed, you can configure Multi-AZ and
backup retention for the production instance.
3. Review the default configuration options for the Amazon RDS database. If the default parameter
group for the database doesn't have the configuration options that you want, find a different one
that does or create a new parameter group. For more information on creating a parameter group,
see Working with parameter groups (p. 347).
4. Connect to the new Amazon RDS database as the master user. Create the users required to support
the administrators, applications, and services that need to access the instance. The hostname for the
Amazon RDS database is the Endpoint value for this instance without including the port number.
An example is mysampledb.123456789012.us-west-2.rds.amazonaws.com. You can find the
endpoint value in the database details in the Amazon RDS Management Console.
5. Connect to your Amazon EC2 instance. For more information, see Connect to your instance in the
Amazon Elastic Compute Cloud User Guide for Linux.
6. Connect to your Amazon RDS database as a remote host from your Amazon EC2 instance using the
mysql command. The following is an example.
• For delimited-text format, first create the database, if it isn't the default database you created
when setting up the Amazon RDS database.
mysql> LOAD DATA LOCAL INFILE 'table1.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY 0x0d0a;
mysql> LOAD DATA LOCAL INFILE 'table2.txt' INTO TABLE table2 FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY 0x0d0a;
etc...
To improve performance, you can perform these operations in parallel from multiple connections
so that all of your tables are created and then loaded at the same time.
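The parallel pattern can be sketched with shell job control: each background job would run one mysql client invocation issuing LOAD DATA LOCAL INFILE for its file. The loop below uses a stand-in command (writing a log file) so the structure is visible without a running server; the table names and paths are illustrative.

```shell
# Launch one loader per dump file in the background, then wait for all of them.
rm -rf /tmp/parallel_load && mkdir -p /tmp/parallel_load
for t in table1 table2 table3; do
  (
    # Real command (needs a reachable DB instance):
    # mysql --host=host_name --port=3306 -u RDS_user -p \
    #   -e "LOAD DATA LOCAL INFILE '$t.txt' INTO TABLE $t ..."
    echo "loaded $t" > "/tmp/parallel_load/$t.log"
  ) &
done
wait

ls /tmp/parallel_load
```

The wait builtin blocks until every background loader has finished, which is the point at which all tables have been loaded.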
Note
If you used any data-formatting options with mysqldump when you initially dumped
the table, make sure to use the same options with mysqlimport or LOAD DATA LOCAL
INFILE to ensure proper interpretation of the data file contents.
8. Run a simple SELECT query against one or two of the tables in the imported database to verify that
the import was successful.
If you no longer need the Amazon EC2 instance used in this procedure, terminate the EC2 instance
to reduce your AWS resource usage. To terminate an EC2 instance, see Terminating an instance in the
Amazon EC2 User Guide.
The permissions required to start replication on an Amazon RDS database are restricted and not
available to your Amazon RDS master user. Because of this, make sure to use either the Amazon RDS
mysql.rds_set_external_master (p. 1769) command or the mysql.rds_set_external_master_gtid (p. 1345)
command to configure replication, and the mysql.rds_start_replication (p. 1780) command to start
replication between your live database and your Amazon RDS database.
To start replication
Earlier, you turned on binary logging and set a unique server ID for your source database. Now you can
set up your Amazon RDS database as a replica with your live database as the source replication instance.
1. In the Amazon RDS Management Console, add the IP address of the server that hosts the source
database to the VPC security group for the Amazon RDS database. For more information on modifying
a VPC security group, see Security groups for your VPC in the Amazon Virtual Private Cloud User Guide.
You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS database, so that it can communicate with your source instance. To find the IP
address of the Amazon RDS database, use the host command.
host rds_db_endpoint
The hostname is the DNS name from the Amazon RDS database endpoint, for example
myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the endpoint value
in the instance details in the Amazon RDS Management Console.
2. Using the client of your choice, connect to the source instance and create a user to be used for
replication. This account is used solely for replication and must be restricted to your domain to
improve security. The following is an example.
MySQL 8.0

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
Note
Specify credentials other than the prompts shown here as a security best practice.
3. For the source instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.
MySQL 8.0

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
Note
Specify credentials other than the prompts shown here as a security best practice.
4. If you used SQL format to create your backup file and the external instance is not MariaDB 10.0.24 or
higher, look at the contents of that file.
cat backup.sql
The file includes a CHANGE MASTER TO comment that contains the master log file name and
position. This comment is included in the backup file when you use the --master-data option with
mysqldump. Note the values for MASTER_LOG_FILE and MASTER_LOG_POS.
--
-- Position to start replication or point-in-time recovery from
--
If you used delimited text format to create your backup file and the external instance isn't MariaDB
10.0.24 or higher, you should already have binary log coordinates from step 1 of the procedure at "To
create a backup copy of your existing database" in this topic.
If the external instance is MariaDB 10.0.24 or higher, you should already have the GTID from which to
start replication from step 2 of the procedure at "To create a backup copy of your existing database" in
this topic.
5. Make the Amazon RDS database the replica. If the external instance isn't MariaDB 10.0.24 or higher,
connect to the Amazon RDS database as the master user and identify the source database as the
source replication instance by using the mysql.rds_set_external_master (p. 1769) command. Use the
master log file name and master log position that you determined in the previous step if you have a
SQL format backup file. Or use the name and position that you determined when creating the backup
files if you used delimited-text format. The following is an example.
Note
Specify credentials other than the prompts shown here as a security best practice.
If the external instance is MariaDB 10.0.24 or higher, connect to the Amazon RDS database as
the master user and identify the source database as the source replication instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in step 2
of the procedure at "To create a backup copy of your existing database" in this topic. The following is
an example.
6. On the Amazon RDS database, start replication by running the mysql.rds_start_replication (p. 1780)
command.

CALL mysql.rds_start_replication;
7. On the Amazon RDS database, run the SHOW REPLICA STATUS command to determine when the
replica is up-to-date with the source replication instance. The results of the SHOW REPLICA STATUS
command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field
returns 0, then the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
8. After the Amazon RDS database is up-to-date, turn on automated backups so you can restore
that database if needed. You can turn on or modify automated backups for your Amazon RDS
database using the Amazon RDS Management Console. For more information, see Working with
backups (p. 591).
To redirect your live application to your MariaDB or MySQL database and stop
replication
1. Add the IP address of the server that hosts the application to the VPC security group for the Amazon
RDS database. For more information on modifying a VPC security group, see Security groups
for your VPC in the Amazon Virtual Private Cloud User Guide.
2. Verify that the Seconds_Behind_Master field in the SHOW REPLICA STATUS command results is 0,
which indicates that the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
3. Close all connections to the source when their transactions complete.
4. Update your application to use the Amazon RDS database. This update typically involves changing the
connection settings to identify the hostname and port of the Amazon RDS database, the user account
and password to connect with, and the database to use.
5. Connect to the DB instance.
6. On the Amazon RDS database, stop replication by running the following command.

CALL mysql.rds_stop_replication;
7. Run the mysql.rds_reset_external_master (p. 1769) command on your Amazon RDS database to reset
the replication configuration so this instance is no longer identified as a replica.
CALL mysql.rds_reset_external_master;
8. Turn on additional Amazon RDS features such as Multi-AZ support and read replicas. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492) and Working with DB
instance read replicas (p. 438).
Importing data from any source

We also recommend creating DB snapshots of the target Amazon RDS DB instance before and after the
data load. Amazon RDS DB snapshots are complete backups of your DB instance that can be used to
restore your DB instance to a known state. When you initiate a DB snapshot, I/O operations to your DB
instance are momentarily suspended while your database is backed up.
Creating a DB snapshot immediately before the load makes it possible for you to restore the database
to its state before the load, if you need to. A DB snapshot taken immediately after the load protects
you from having to load the data again in case of a mishap and can also be used to seed new database
instances.
The following sections discuss the steps to take in more detail.
Whenever possible, order the data by the primary key of the table being loaded. Doing this drastically
improves load times and minimizes disk storage requirements.
The speed and efficiency of this procedure depend on keeping the size of the files small. If the
uncompressed size of any individual file is larger than 1 GiB, split it into multiple files and load each one
separately.
On Unix-like systems (including Linux), use the split command. For example, the following command
splits the sales.csv file into multiple files of less than 1 GiB, splitting only at line breaks (-C 1024m).
The new files are named sales.part_00, sales.part_01, and so on.

split -C 1024m -d sales.csv sales.part_
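As a runnable illustration with a much smaller chunk size (1024m is for real workloads), the following sketch generates a small CSV file and splits it at line boundaries. It assumes GNU coreutils, which provides the -d (numeric suffix) option:

```shell
# Generate a small sample CSV, then split it into chunks of at most
# 4 KiB each, breaking only at line boundaries (-C). The -d option
# numbers the output files sales.part_00, sales.part_01, and so on.
workdir=$(mktemp -d)
cd "$workdir"

awk 'BEGIN { for (i = 1; i <= 2000; i++) print "row-" i ",some-value" }' > sales.csv
split -C 4k -d sales.csv sales.part_

ls sales.part_*
cat sales.part_* | wc -l   # 2000 -- no rows were split across files
```

Because -C breaks only at line boundaries, concatenating the parts reproduces the original file exactly, so each part can be loaded separately without corrupting rows.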
Stopping applications from accessing the DB instance before the load might not be possible or practical.
If you can't stop applications from accessing the DB instance before the load, take steps to ensure the
availability and integrity of your data. The specific steps required vary greatly depending on your use
case and site requirements.
The following example uses the AWS CLI create-db-snapshot command to create a DB snapshot of
the AcmeRDS instance and give the DB snapshot the identifier preload.

aws rds create-db-snapshot \
    --db-instance-identifier AcmeRDS \
    --db-snapshot-identifier preload

For Windows:

aws rds create-db-snapshot ^
    --db-instance-identifier AcmeRDS ^
    --db-snapshot-identifier preload
You can also use the restore from DB snapshot functionality to create test DB instances for dry runs or to
undo changes made during the load.
Keep in mind that restoring a database from a DB snapshot creates a new DB instance that, like all
DB instances, has a unique identifier and endpoint. To restore the DB instance without changing the
endpoint, first delete the DB instance so that you can reuse the endpoint.
For example, to create a DB instance for dry runs or other testing, you give the DB instance its own
identifier. In the example, AcmeRDS-2 is the identifier. The example connects to the DB instance using
the endpoint associated with AcmeRDS-2.
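A sketch of what the restore might look like with the AWS CLI restore-db-instance-from-db-snapshot command, assuming the preload snapshot from the earlier example:

```shell
# Restore the preload snapshot into a new test DB instance named
# AcmeRDS-2. The new instance gets its own endpoint.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier AcmeRDS-2 \
    --db-snapshot-identifier preload
```

After the restore completes, connect by using the endpoint that Amazon RDS reports for AcmeRDS-2.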
To reuse the existing endpoint, first delete the DB instance and then give the restored database the same
identifier.
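A sketch of that sequence with the AWS CLI, assuming the preload snapshot from the earlier example; AcmeRDS-final-snapshot is a hypothetical name for the final snapshot:

```shell
# Take a final snapshot and delete the original instance, wait for the
# deletion to finish, then restore the preload snapshot under the
# now-free AcmeRDS identifier.
aws rds delete-db-instance \
    --db-instance-identifier AcmeRDS \
    --final-db-snapshot-identifier AcmeRDS-final-snapshot

aws rds wait db-instance-deleted \
    --db-instance-identifier AcmeRDS

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier AcmeRDS \
    --db-snapshot-identifier preload
```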
When you delete a DB instance, you can take a final DB snapshot of it first. Doing so is optional but
recommended.
Turning off automated backups erases all existing backups, so point-in-time recovery isn't possible after
automated backups have been turned off. Disabling automated backups is a performance optimization
and isn't required for data loads. Manual DB snapshots aren't affected by turning off automated backups.
All existing manual DB snapshots are still available for restore.
Turning off automated backups reduces load time by about 25 percent and reduces the amount of
storage space required during the load. If you plan to load data into a new DB instance that contains
no data, turning off backups is an easy way to speed up the load and avoid using the additional storage
needed for backups. However, in some cases you might plan to load into a DB instance that already
contains data. If so, weigh the benefits of turning off backups against the impact of losing the ability to
perform point-in-time-recovery.
DB instances have automated backups turned on by default, with a one-day retention period. To turn off
automated backups, set the backup retention period to zero. After the load, you can turn backups back
on by setting the backup retention period to a nonzero value. When you turn backups on or off, Amazon
RDS shuts down the DB instance and restarts it so that MariaDB or MySQL logging can be turned on or off.
Use the AWS CLI modify-db-instance command to set the backup retention to zero and apply the
change immediately. Setting the retention period to zero requires a DB instance restart, so wait until the
restart has completed before proceeding.

aws rds modify-db-instance \
    --db-instance-identifier AcmeRDS \
    --backup-retention-period 0 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier AcmeRDS ^
    --backup-retention-period 0 ^
    --apply-immediately

You can check the status of your DB instance with the AWS CLI describe-db-instances command.
The following example displays the DB instance status of the AcmeRDS DB instance.

aws rds describe-db-instances --db-instance-identifier AcmeRDS
Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for
CSV files, and the --local option specifies that the incoming data is located on the client. Without the --
local option, the Amazon RDS DB instance looks for the data on the database host, so always specify the
--local option. For the --host option, specify the DB instance endpoint of the RDS for MySQL DB instance.
In the following examples, replace master_user with the master username for your DB instance.
Replace hostname with the endpoint for your DB instance. An example of a DB instance endpoint is my-
db-instance.123456789012.us-west-2.rds.amazonaws.com.
For RDS for MySQL version 8.0.15 and higher, run the following statement before using the mysqlimport
utility.

GRANT SESSION_VARIABLES_ADMIN ON *.* TO master_user;
mysqlimport --local \
--compress \
--user=master_user \
--password \
--host=hostname \
--fields-terminated-by=',' Acme sales.part_*
For Windows:
mysqlimport --local ^
--compress ^
--user=master_user ^
--password ^
--host=hostname ^
--fields-terminated-by="," Acme sales.part_*
For very large data loads, take additional DB snapshots periodically between loading files and note
which files have been loaded. If a problem occurs, you can easily resume from the point of the last DB
snapshot, avoiding lengthy reloads.
The following example uses the AWS CLI modify-db-instance command to turn on automated
backups for the AcmeRDS DB instance and set the retention period to one day.

aws rds modify-db-instance \
    --db-instance-identifier AcmeRDS \
    --backup-retention-period 1 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier AcmeRDS ^
    --backup-retention-period 1 ^
    --apply-immediately
Working with MySQL replication
You can use global transaction identifiers (GTIDs) for replication with RDS for MySQL. For more
information, see Using GTID-based replication for Amazon RDS for MySQL (p. 1719).
You can also set up replication between an RDS for MySQL DB instance and a MariaDB or MySQL instance
that is external to Amazon RDS. For information about configuring replication with an external source,
see Configuring binary log file position replication with an external source instance (p. 1724).
For any of these replication options, you can use row-based, statement-based, or mixed
replication. Row-based replication only replicates the changed rows that result from a SQL statement.
Statement-based replication replicates the entire SQL statement. Mixed replication uses statement-
based replication when possible, but switches to row-based replication when SQL statements that
are unsafe for statement-based replication are run. In most cases, mixed replication is recommended.
The binary log format of the DB instance determines whether replication is row-based, statement-
based, or mixed. For information about setting the binary log format, see Configuring MySQL binary
logging (p. 921).
Note
You can configure replication to import databases from a MariaDB or MySQL instance
that is external to Amazon RDS, or to export databases to such instances. For more
information, see Importing data to an Amazon RDS MariaDB or MySQL database with
reduced downtime (p. 1690) and Exporting data from a MySQL DB instance by using
replication (p. 1728).
Topics
• Working with MySQL read replicas (p. 1708)
• Using GTID-based replication for Amazon RDS for MySQL (p. 1719)
• Configuring binary log file position replication with an external source instance (p. 1724)
Working with MySQL read replicas
Topics
• Configuring read replicas with MySQL (p. 1709)
• Configuring replication filters with MySQL (p. 1709)
• Configuring delayed replication with MySQL (p. 1714)
• Updating read replicas with MySQL (p. 1716)
• Working with Multi-AZ read replica deployments with MySQL (p. 1716)
• Using cascading read replicas with RDS for MySQL (p. 1716)
• Monitoring MySQL read replicas (p. 1717)
• Starting and stopping replication with MySQL read replicas (p. 1718)
• Troubleshooting a MySQL read replica problem (p. 1718)
On RDS for MySQL 5.7.37 and higher 5.7 versions, and on RDS for MySQL 8.0.28 and higher 8.0
versions, you can configure replication using global transaction identifiers (GTIDs). For more
information, see Using GTID-based replication for Amazon RDS for MySQL (p. 1719).
You can create up to 15 read replicas from one DB instance within the same Region. For replication to
operate effectively, each read replica should have the same amount of compute and storage resources as
the source DB instance. If you scale the source DB instance, also scale the read replicas.
RDS for MySQL supports cascading read replicas. To learn how to configure cascading read replicas, see
Using cascading read replicas with RDS for MySQL (p. 1716).
You can run multiple read replica create and delete actions at the same time that reference the same
source DB instance. When you perform these actions, stay within the limit of 15 read replicas for each
source instance.
A read replica of a MySQL DB instance can't use a lower DB engine version than its source DB instance.
If your MySQL DB instance uses a nontransactional engine such as MyISAM, follow these steps to
prepare the source DB instance for a read replica:
1. Stop all data manipulation language (DML) and data definition language (DDL) operations on non-
transactional tables in the source DB instance and wait for them to complete. SELECT statements can
continue running.
2. Flush and lock the tables in the source DB instance.
3. Create the read replica using one of the methods in the following sections.
4. Check the progress of the read replica creation using, for example, the DescribeDBInstances API
operation. Once the read replica is available, unlock the tables of the source DB instance and resume
normal database operations.
You can use replication filters in the following cases:
• To reduce the size of a read replica. With replication filtering, you can exclude the databases and tables
that aren't needed on the read replica.
• To exclude databases and tables from read replicas for security reasons.
• To replicate different databases and tables for specific use cases at different read replicas. For example,
you might use specific read replicas for analytics or sharding.
• For a DB instance that has read replicas in different AWS Regions, to replicate different databases or
tables in different AWS Regions.
Note
You can also use replication filters to specify which databases and tables are replicated
with a primary MySQL DB instance that is configured as a replica in an inbound replication
topology. For more information about this configuration, see Configuring binary log file position
replication with an external source instance (p. 1724).
Topics
• Setting replication filtering parameters for RDS for MySQL (p. 1710)
• Replication filtering limitations for RDS for MySQL (p. 1711)
• Replication filtering examples for RDS for MySQL (p. 1711)
• Viewing the replication filters for a read replica (p. 1713)
To configure replication filters, set the following replication filtering parameters on the read replica:
• replicate-do-db – Replicate changes to the specified databases. When you set this parameter for a
read replica, only the databases specified in the parameter are replicated.
• replicate-ignore-db – Don't replicate changes to the specified databases. When the replicate-
do-db parameter is set for a read replica, this parameter isn't evaluated.
• replicate-do-table – Replicate changes to the specified tables. When you set this parameter for a
read replica, only the tables specified in the parameter are replicated. Also, when the replicate-do-
db or replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-ignore-table – Don't replicate changes to the specified tables. When the replicate-
do-table parameter is set for a read replica, this parameter isn't evaluated.
• replicate-wild-do-table – Replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-db or
replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-wild-ignore-table – Don't replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-table or
replicate-wild-do-table parameter is set for a read replica, this parameter isn't evaluated.
The parameters are evaluated in the order that they are listed. For more information about how these
parameters work, see the MySQL documentation.
By default, each of these parameters has an empty value. On each read replica, you can use these
parameters to set, change, and delete replication filters. When you set one of these parameters, separate
each filter from others with a comma.
You can use the % and _ wildcard characters in the replicate-wild-do-table and replicate-
wild-ignore-table parameters. The % wildcard matches any number of characters, and the _
wildcard matches only one character.
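As an illustration of these wildcard semantics, the following sketch translates a filter pattern into a POSIX extended regular expression and tests table names against it. This is only an analogy for how the matching behaves; MySQL evaluates these patterns internally.

```shell
# Translate a replication filter pattern into an extended regular
# expression: % matches any number of characters, _ matches exactly one.
# Regex metacharacters in the pattern are escaped first.
pattern='mydb.order%'
regex="^$(printf '%s' "$pattern" | sed -e 's/[].[\\*^$]/\\&/g' -e 's/%/.*/g' -e 's/_/./g')\$"

matches() {
    printf '%s\n' "$1" | grep -Eq "$regex"
}

for table in mydb.orders mydb.order_items otherdb.orders; do
    if matches "$table"; then
        echo "$table is replicated"
    else
        echo "$table is not replicated"
    fi
done
# mydb.orders is replicated
# mydb.order_items is replicated
# otherdb.orders is not replicated
```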
The binary logging format of the source DB instance is important for replication because it determines
the record of data changes. The setting of the binlog_format parameter determines whether the
replication is row-based or statement-based. For more information, see Configuring MySQL binary
logging (p. 921).
Note
All data definition language (DDL) statements are replicated as statements, regardless of the
binlog_format setting on the source DB instance.
You can set parameters in a parameter group using the AWS Management Console, AWS CLI, or RDS API.
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).
When you set parameters in a parameter group, all of the DB instances associated with the parameter
group use the parameter settings. If you set the replication filtering parameters in a parameter group,
make sure that the parameter group is associated only with read replicas. Leave the replication filtering
parameters empty for source DB instances.
The following examples show replication filter settings for common cases. To apply a setting with the
AWS CLI, set ApplyMethod to immediate so that the parameter change occurs immediately after the
CLI command completes. If you want a pending change to be applied after the read replica is rebooted,
set ApplyMethod to pending-reboot.
• To include the mydb1 and mydb2 databases in replication, set the replicate-do-db parameter to
mydb1,mydb2.
• To include the table1 and table2 tables in database mydb1 in replication, set the replicate-do-
table parameter to mydb1.table1,mydb1.table2.
• To include tables with names that begin with order and return in database mydb in replication, set
the replicate-wild-do-table parameter to mydb.order%,mydb.return%.
• To exclude the mydb5 and mydb6 databases from replication, set the replicate-ignore-db
parameter to mydb5,mydb6.
• To exclude table1 in database mydb5 and table2 in database mydb6 from replication, set the
replicate-ignore-table parameter to mydb5.table1,mydb6.table2.
• To exclude tables with names that begin with order and return in database mydb7 from replication,
set the replicate-wild-ignore-table parameter to mydb7.order%,mydb7.return%.
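These filter settings can be applied with the AWS CLI modify-db-parameter-group command. A sketch for the first case, assuming a hypothetical DB parameter group named myreplicafilters that is associated with the read replica:

```shell
# Set replicate-do-db so that only mydb1 and mydb2 are replicated.
# ApplyMethod=immediate applies the change without waiting for a reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name myreplicafilters \
    --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"
```

The other filter parameters follow the same pattern; only ParameterName and ParameterValue change.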
To view the replication filters for a read replica, do either of the following:
• Check the settings of the replication filtering parameters in the parameter group associated with the
read replica.
For instructions, see Viewing parameter values for a DB parameter group (p. 359).
• In a MySQL client, connect to the read replica and run the SHOW REPLICA STATUS statement.
In the output, the following fields show the replication filters for the read replica:
• Replicate_Do_DB
• Replicate_Ignore_DB
• Replicate_Do_Table
• Replicate_Ignore_Table
• Replicate_Wild_Do_Table
• Replicate_Wild_Ignore_Table
For more information about these fields, see Checking Replication Status in the MySQL
documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
With delayed replication, in a disaster recovery scenario you can do the following:
• Stop replication to the read replica before the change that caused the disaster is sent to it.
You specify a location just before the disaster using the mysql.rds_start_replication_until (p. 1780)
stored procedure.
• Promote the read replica to be the new source DB instance by using the instructions in Promoting a
read replica to be a standalone DB instance (p. 447).
Note
• On RDS for MySQL 8.0, delayed replication is supported for MySQL 8.0.28 and higher. On RDS
for MySQL 5.7, delayed replication is supported for MySQL 5.7.37 and higher.
• Use stored procedures to configure delayed replication. You can't configure delayed
replication with the AWS Management Console, the AWS CLI, or the Amazon RDS API.
• On RDS for MySQL 5.7.37 and higher MySQL 5.7 versions and RDS for MySQL 8.0.28
and higher 8.0 versions, you can use replication based on global transaction identifiers
(GTIDs) in a delayed replication configuration. If you use GTID-based replication, use
the mysql.rds_start_replication_until_gtid (p. 1781) stored procedure instead of the
mysql.rds_start_replication_until (p. 1780) stored procedure. For more information
about GTID-based replication, see Using GTID-based replication for Amazon RDS for
MySQL (p. 1719).
To configure delayed replication for any new read replica
1. Using a MySQL client, connect to the MySQL DB instance to be the source for read replicas as the
master user.
2. Run the mysql.rds_set_configuration (p. 1758) stored procedure with the target delay
parameter.
For example, run the following stored procedure to specify that replication is delayed by at least one
hour (3,600 seconds) for any read replica created from the current DB instance.

call mysql.rds_set_configuration('target delay', 3600);
Note
After running this stored procedure, any read replica you create using the AWS CLI or
Amazon RDS API is configured with replication delayed by the specified number of seconds.
To change the replication delay for an existing read replica
1. Using a MySQL client, connect to the read replica as the master user.
2. Use the mysql.rds_stop_replication (p. 1782) stored procedure to stop replication.
3. Run the mysql.rds_set_source_delay (p. 1777) stored procedure.
For example, run the following stored procedure to specify that replication to the read replica is
delayed by at least one hour (3600 seconds).
call mysql.rds_set_source_delay(3600);
To start replication to a read replica and stop it at a specific location
1. Using a MySQL client, connect to the read replica as the master user.
2. Run the mysql.rds_start_replication_until (p. 1780) stored procedure.
The following example initiates replication and replicates changes until it reaches location 120 in
the mysql-bin-changelog.000777 binary log file. In a disaster recovery scenario, assume that
location 120 is just before the disaster.
call mysql.rds_start_replication_until(
'mysql-bin-changelog.000777',
120);
Replication stops automatically when the stop point is reached. The following RDS event is generated:
Replication has been stopped since the replica reached the stop point specified
by the rds_start_replication_until stored procedure.
Although you can enable updates by setting the read_only parameter to 0 in the DB parameter group
for the read replica, we recommend that you don't do so because it can cause problems if the read
replica becomes incompatible with the source DB instance. For maintenance operations, we recommend
that you use blue/green deployments. For more information, see Using Blue/Green Deployments for
database updates (p. 566).
If you disable read-only on a read replica, change the value of the read_only parameter back to 1 as
soon as possible.
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in
another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB
instance is independent of whether the source database is a Multi-AZ DB instance.
With cascading read replicas, your RDS for MySQL DB instance sends data to the first read replica in the
chain. That read replica then sends data to the second replica in the chain, and so on. The end result is
that all read replicas in the chain have the changes from the RDS for MySQL DB instance, without the
replication overhead falling solely on the source DB instance.
You can create a series of up to three read replicas in a chain from a source RDS for MySQL DB instance.
For example, suppose that you have an RDS for MySQL DB instance, mysql-main. You can do the
following:
• Starting with mysql-main, create the first read replica in the chain, read-replica-1.
• Next, from read-replica-1, create the next read replica in the chain, read-replica-2.
• Finally, from read-replica-2, create the third read replica in the chain, read-replica-3.
You can't create another read replica beyond this third cascading read replica in the series for mysql-
main. A complete series of instances from an RDS for MySQL source DB instance through to the end of a
series of cascading read replicas can consist of at most four DB instances.
For cascading read replicas to work, each source RDS for MySQL DB instance must have automated
backups turned on. To turn on automatic backups on a read replica, first create the read replica, and
then modify the read replica to turn on automatic backups. For more information, see Creating a read
replica (p. 445).
As with any read replica, you can promote a read replica that's part of a cascade. Promoting a read
replica from within a chain of read replicas removes that replica from the chain. For example, suppose
that you want to move some of the workload from your mysql-main DB instance to a new instance for
use by the accounting department only. Assuming the chain of three read replicas from the example, you
decide to promote read-replica-2, which removes it from the chain.
For more information about promoting read replicas, see Promoting a read replica to be a standalone DB
instance (p. 447).
Common causes for replication lag for MySQL are the following:
• A network outage.
• Writing to tables that have different indexes on a read replica. If the read_only parameter is set to 0
on the read replica, replication can break if the read replica becomes incompatible with the source DB
instance. After you've performed maintenance tasks on the read replica, we recommend that you set
the read_only parameter back to 1.
• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MySQL.
When the ReplicaLag metric reaches 0, the replica has caught up to the source DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent to
Seconds_Behind_Master = NULL.
If replication is stopped for more than 30 consecutive days, either manually or due to a replication
error, Amazon RDS terminates replication between the source DB instance and all read replicas. It does
so to prevent increased storage requirements on the source DB instance and long failover times. The
read replica DB instance is still available. However, replication can't be resumed because the binary logs
required by the read replica are deleted from the source DB instance after replication is terminated. You
can create a new read replica for the source DB instance to reestablish replication.
The replication technologies for MySQL are asynchronous. Because they are asynchronous, occasional
BinLogDiskUsage increases on the source DB instance and ReplicaLag on the read replica are to be
expected. For example, a high volume of write operations to the source DB instance can occur in parallel.
In contrast, write operations to the read replica are serialized using a single I/O thread, which can lead to
a lag between the source instance and read replica. For more information about read replicas, see
Replication implementation details in the MySQL documentation.
You can do several things to reduce the lag between updates to a source DB instance and the subsequent
updates to the read replica, such as the following:
• Sizing a read replica to have a storage size and DB instance class comparable to the source DB
instance.
• Ensuring that parameter settings in the DB parameter groups used by the source DB instance and
the read replica are compatible. For more information and an example, see the discussion of the
max_allowed_packet parameter later in this section.
Amazon RDS monitors the replication status of your read replicas and updates the Replication State
field of the read replica instance to Error if replication stops for any reason. An example might be if
DML queries run on your read replica conflict with the updates made on the source DB instance.
You can review the details of the associated error thrown by the MySQL engine by viewing the
Replication Error field. Events that indicate the status of the read replica are also generated,
including RDS-EVENT-0045 (p. 887), RDS-EVENT-0046 (p. 888), and RDS-EVENT-0047 (p. 883). For
more information about events and subscribing to events, see Working with Amazon RDS event
notification (p. 855). If a MySQL error message is returned, review the error number in the MySQL error
message documentation.
One common issue that can cause replication errors is when the value for the max_allowed_packet
parameter for a read replica is less than the max_allowed_packet parameter for the source DB
instance. The max_allowed_packet parameter is a custom parameter that you can set in a DB
parameter group. You use max_allowed_packet to specify the maximum size of DML code that can
be run on the database. In some cases, the max_allowed_packet value in the DB parameter group
associated with a read replica is smaller than the max_allowed_packet value in the DB parameter
group associated with the source DB instance. In these cases, the replication process can throw the error
Packet bigger than 'max_allowed_packet' bytes and stop replication. To fix the error, have
the source DB instance and read replica use DB parameter groups with the same max_allowed_packet
parameter values.
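Aligning the parameter with the AWS CLI might look like the following sketch, where replica-params is a hypothetical DB parameter group associated with the read replica and 67108864 (64 MiB) is an example value matching the source:

```shell
# Give the replica's parameter group the same max_allowed_packet value
# as the source DB instance's parameter group (here, 64 MiB).
aws rds modify-db-parameter-group \
    --db-parameter-group-name replica-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"
```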
Other common situations that can cause replication errors include the following:
• Writing to tables on a read replica. In some cases, you might create indexes on a read replica that are
different from the indexes on the source DB instance. If you do, set the read_only parameter to 0
to create the indexes. If you write to tables on the read replica, it might break replication if the read
replica becomes incompatible with the source DB instance. After you perform maintenance tasks on
the read replica, we recommend that you set the read_only parameter back to 1.
• Using a non-transactional storage engine such as MyISAM. Read replicas require a transactional
storage engine. Replication is only supported for the InnoDB storage engine on MySQL.
• Using unsafe nondeterministic queries such as SYSDATE(). For more information, see Determination
of safe and unsafe statements in binary logging.
If you decide that you can safely skip an error, you can follow the steps described in the section Skipping
the current replication error (p. 1744). Otherwise, you can first delete the read replica. Then you create
an instance using the same DB instance identifier so that the endpoint remains the same as that of your
old read replica. If a replication error is fixed, the Replication State changes to replicating.
Using GTID-based replication for Amazon RDS for MySQL
If you use binlog replication and aren't familiar with GTID-based replication with MySQL, see Replication
with global transaction identifiers in the MySQL documentation for background.
GTID-based replication is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL version
8.0.26 and higher MySQL 8.0 versions. All MySQL DB instances in a replication configuration must meet
this requirement.
Topics
• Overview of global transaction identifiers (GTIDs) (p. 1720)
• Parameters for GTID-based replication (p. 1720)
• Configuring GTID-based replication for new read replicas (p. 1721)
• Configuring GTID-based replication for existing read replicas (p. 1721)
• Disabling GTID-based replication for a MySQL DB instance with read replicas (p. 1723)
In a replication configuration, GTIDs are unique across all DB instances. GTIDs simplify replication
configuration because when you use them, you don't have to refer to log file positions. GTIDs also make
it easier to track replicated transactions and determine whether the source instance and replicas are
consistent.
You can use GTID-based replication to replicate data with RDS for MySQL read replicas. You can
configure GTID-based replication when you are creating new read replicas, or you can convert existing
read replicas to use GTID-based replication.
You can also use GTID-based replication in a delayed replication configuration with RDS for MySQL. For
more information, see Configuring delayed replication with MySQL (p. 1714).
gtid_mode – Valid values are OFF, OFF_PERMISSIVE, ON_PERMISSIVE, and ON. OFF specifies that new
transactions are anonymous transactions (that is, they don't have GTIDs), and that a transaction must be
anonymous to be replicated.
enforce_gtid_consistency – Valid values are OFF, ON, and WARN. OFF allows transactions to violate
GTID consistency.
Note
In the AWS Management Console, the gtid_mode parameter appears as gtid-mode.
For GTID-based replication, use these settings for the parameter group for your DB instance or read
replica:
• ON and ON_PERMISSIVE apply only to outgoing replication from an RDS DB instance. Both of these
values cause your RDS DB instance to use GTIDs for transactions that are replicated. ON requires that
the target database also use GTID-based replication. ON_PERMISSIVE makes GTID-based replication
optional on the target database.
• OFF_PERMISSIVE, if set, means that your RDS DB instances can accept incoming replication from
a source database. They can do this regardless of whether the source database uses GTID-based
replication.
• OFF, if set, means that your RDS DB instance only accepts incoming replication from source databases
that don't use GTID-based replication.
For more information about parameter groups, see Working with parameter groups (p. 347).
To configure GTID-based replication for new read replicas
1. Make sure that the parameter group associated with the DB instance has the following parameter
settings:
• gtid_mode – ON or ON_PERMISSIVE
• enforce_gtid_consistency – ON
For more information about setting configuration parameters using parameter groups, see Working
with parameter groups (p. 347).
2. If you changed the parameter group of the DB instance, reboot the DB instance. For more
information on how to do so, see Rebooting a DB instance (p. 436).
3. Create one or more read replicas of the DB instance. For more information on how to do so, see
Creating a read replica (p. 445).
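The parameter settings in step 1 might be applied with the AWS CLI as in the following sketch, where gtid-params is a hypothetical DB parameter group associated with the DB instance:

```shell
# Turn on GTID-based replication in the instance's parameter group.
# Both parameters take effect after the reboot in step 2 (pending-reboot).
aws rds modify-db-parameter-group \
    --db-parameter-group-name gtid-params \
    --parameters "ParameterName=gtid_mode,ParameterValue=ON,ApplyMethod=pending-reboot" \
                 "ParameterName=enforce_gtid_consistency,ParameterValue=ON,ApplyMethod=pending-reboot"
```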
Amazon RDS attempts to establish GTID-based replication between the MySQL DB instance and the read
replicas by using MASTER_AUTO_POSITION. If the attempt fails, Amazon RDS uses log file positions for
replication with the read replicas. For more information about MASTER_AUTO_POSITION, see GTID
auto-positioning in the MySQL documentation.
To configure GTID-based replication for existing read replicas
1. If the DB instance or any read replica is using an RDS for MySQL 8.0 version lower than 8.0.26,
upgrade the DB instance or read replica to 8.0.26 or a higher MySQL 8.0 version. All RDS for MySQL 5.7
versions support GTID-based replication.
For more information, see Upgrading the MySQL DB engine (p. 1664).
2. (Optional) Reset the GTID parameters and test the behavior of the DB instance and read replicas:
a. Make sure that the parameter group associated with the DB instance and each read replica has
the enforce_gtid_consistency parameter set to WARN.
For more information about setting configuration parameters using parameter groups, see
Working with parameter groups (p. 347).
b. If you changed the parameter group of the DB instance, reboot the DB instance. If you changed
the parameter group for a read replica, reboot the read replica.
If you see warnings about GTID-incompatible transactions, adjust your application so that it
only uses GTID-compatible features. Make sure that the DB instance is not generating any
warnings about GTID-incompatible transactions before proceeding to the next step.
3. Set the GTID parameters so that GTID-based replication allows anonymous transactions until the
read replicas have processed all of them.
a. Make sure that the parameter group associated with the DB instance and each read replica has
the following parameter settings:
• gtid_mode – ON_PERMISSIVE
• enforce_gtid_consistency – ON
b. If you changed the parameter group of the DB instance, reboot the DB instance. If you changed
the parameter group for a read replica, reboot the read replica.
4. Wait for all of your anonymous transactions to be replicated. To check that these are replicated, do
the following:
For example, if the file name is mysql-bin-changelog.000031 and the position is 107, run
the following statement.
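The statement itself is not shown in this extract. Given that the function waits for a replica to reach a binary log position, it is presumably a MASTER_POS_WAIT call using the file name and position from the source, for example:

```sql
-- Wait until the replica has processed events up to the given
-- binary log file and position (values from the source instance).
SELECT MASTER_POS_WAIT('mysql-bin-changelog.000031', 107);
```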
If the read replica is past the specified position, the query returns immediately. Otherwise, the
function waits. Proceed to the next step when the query returns for all read replicas.
5. Reset the GTID parameters for GTID-based replication only.
a. Make sure that the parameter group associated with the DB instance and each read replica has
the following parameter settings:
• gtid_mode – ON
• enforce_gtid_consistency – ON
b. Reboot the DB instance and each read replica.
CALL mysql.rds_set_master_auto_position(1);
2. Reset the gtid_mode to ON_PERMISSIVE:
a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has gtid_mode set to ON_PERMISSIVE.
For more information about setting configuration parameters using parameter groups, see
Working with parameter groups (p. 347).
b. Reboot the MySQL DB instance and each read replica. For more information about rebooting,
see Rebooting a DB instance (p. 436).
3. Reset the gtid_mode to OFF_PERMISSIVE:
a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has gtid_mode set to OFF_PERMISSIVE.
b. Reboot the MySQL DB instance and each read replica.
4. Wait for all of the GTID transactions to be applied on all of the read replicas. To check that these are
applied, do the following:
File Position
------------------------------------
mysql-bin-changelog.000031 107
------------------------------------
For example, if the file name is mysql-bin-changelog.000031 and the position is 107, run
the following statement.
If the read replica is past the specified position, the query returns immediately. Otherwise, the
function waits. When the query returns for all read replicas, go to the next step.
5. Reset the GTID parameters to disable GTID-based replication:
a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has the following parameter settings:
• gtid_mode – OFF
• enforce_gtid_consistency – OFF
b. Reboot the MySQL DB instance and each read replica.
Topics
• Before you begin (p. 1331)
• Configuring binary log file position replication with an external source instance (p. 1331)
The permissions required to start replication on an Amazon RDS DB instance are restricted and not
available to your Amazon RDS master user. Because of this, make sure that you use the Amazon RDS
mysql.rds_set_external_master (p. 1769) and mysql.rds_start_replication (p. 1780) commands to set up
replication between your live database and your Amazon RDS database.
To set the binary logging format for a MySQL or MariaDB database, update the binlog_format
parameter. If your DB instance uses the default DB parameter group, create a new DB parameter
group to modify binlog_format settings. We recommend that you use the default setting for
binlog_format, which is MIXED. However, you can also set binlog_format to ROW or STATEMENT if
you need a specific binary log (binlog) format. Reboot your DB instance for the change to take effect.
For information about setting the binlog_format parameter, see Configuring MySQL binary
logging (p. 921). For information about the implications of different MySQL replication types,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.
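As a sketch, the binlog_format change described above might be made with the AWS CLI as follows; the parameter group and instance names are placeholders.

```sh
# Set binlog_format to ROW in a custom DB parameter group (name is an example).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-binlog-params \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=immediate"

# Reboot the DB instance for the change to take effect.
aws rds reboot-db-instance --db-instance-identifier mydbinstance
```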
• Monitor failover events for the Amazon RDS DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).
• Maintain the binlogs on your source instance until you have verified that they have been applied to
the replica. This maintenance makes sure that you can restore your source instance in the event of a
failure.
• Turn on automated backups on your Amazon RDS DB instance. Turning on automated backups makes
sure that you can restore your replica to a particular point in time if you need to re-synchronize your
source instance and replica. For information on backups and point-in-time restore, see Backing up and
restoring (p. 590).
2. Run the SHOW MASTER STATUS command on the source MySQL or MariaDB instance to determine
the binlog location. The output is similar to the following.
File Position
------------------------------------
mysql-bin-changelog.000031 107
------------------------------------
3. Copy the database from the external instance to the Amazon RDS DB instance using mysqldump.
For very large databases, you might want to use the procedure in Importing data to an Amazon RDS
MariaDB or MySQL database with reduced downtime (p. 1690).
Note
Make sure that there isn't a space between the -p option and the entered password.
To specify the host name, user name, port, and password to connect to your Amazon RDS DB
instance, use the --host, --user (-u), --port and -p options in the mysql command. The
host name is the Domain Name Service (DNS) name from the Amazon RDS DB instance endpoint,
for example myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the
endpoint value in the instance details in the AWS Management Console.
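The dump-and-load command itself is elided in this extract. It presumably resembles the following sketch, which pipes a dump of the external database directly into the Amazon RDS DB instance; the database, user names, passwords, and endpoint are placeholders.

```sh
# Dump the database from the external (local) instance and load it
# into the Amazon RDS DB instance in one pipeline.
mysqldump -h localhost \
    -u local_user \
    -plocal_password \
    --single-transaction \
    --databases mydatabase \
  | mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com \
    -u rds_master_user \
    -prds_password \
    --port=3306
```

On Windows, replace the backslash (\) line continuations with carets (^).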
4. Make the source MySQL or MariaDB instance writable again.
For more information on making backups for use with replication, see the MySQL documentation.
5. In the AWS Management Console, add the IP address of the server that hosts the external database
to the virtual private cloud (VPC) security group for the Amazon RDS DB instance. For more
information on modifying a VPC security group, see Security groups for your VPC in the Amazon
Virtual Private Cloud User Guide.
The IP address can change when the following conditions are met:
• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.
If these conditions are met, verify the IP address before adding it.
You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS DB instance. You do this so that your local network can communicate with your
external MySQL or MariaDB instance. To find the IP address of the Amazon RDS DB instance, use the
host command.
host db_instance_endpoint
The host name is the DNS name from the Amazon RDS DB instance endpoint.
6. Using the client of your choice, connect to the external instance and create a user to use for
replication. Use this account solely for replication and restrict it to your domain to improve security.
The following is an example.
Note
Specify a password other than the prompt shown here as a security best practice.
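The example itself is elided here; a hypothetical form, with the domain and password as placeholders, would be:

```sql
-- Create a dedicated replication account, restricted to your domain.
CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
```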
7. For the external instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.
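Based on the privileges and user named above, the grant presumably takes the following form; the domain is a placeholder.

```sql
-- Grant the replication privileges on all databases to the replication user.
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
```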
8. Make the Amazon RDS DB instance the replica. To do so, first connect to the Amazon RDS DB
instance as the master user. Then identify the external MySQL or MariaDB database as the source
instance by using the mysql.rds_set_external_master (p. 1769) command. Use the master log file
name and master log position that you determined in step 2. The following is an example.
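The example call is elided in this extract. A sketch follows, assuming the documented argument order of host, port, replication user, password, log file name, log position, and SSL flag; the host name and credentials are placeholders.

```sql
CALL mysql.rds_set_external_master(
    'external-db.mydomain.com',   -- host of the external source instance
    3306,                         -- port
    'repl_user',                  -- replication user created earlier
    'password',                   -- replication user password
    'mysql-bin-changelog.000031', -- log file name from SHOW MASTER STATUS (step 2)
    107,                          -- log position from SHOW MASTER STATUS (step 2)
    0                             -- 0 = no SSL encryption
);
```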
Note
On RDS for MySQL, you can choose to use delayed replication by running the
mysql.rds_set_external_master_with_delay (p. 1774) stored procedure instead.
On RDS for MySQL, one reason to use delayed replication is to turn on disaster
recovery with the mysql.rds_start_replication_until (p. 1780) stored procedure.
Currently, RDS for MariaDB supports delayed replication but doesn't support the
mysql.rds_start_replication_until procedure.
9. On the Amazon RDS DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.
CALL mysql.rds_start_replication;
Exporting data from a MySQL DB instance
The external MySQL database can run either on-premises in your data center, or on an Amazon EC2
instance. The external MySQL database must run the same version as the source MySQL DB instance, or a
later version.
Replication to an external MySQL database is only supported during the time it takes to export a
database from the source MySQL DB instance. The replication should be terminated when the data has
been exported and applications can start accessing the external MySQL instance.
The following list shows the steps to take. Each step is discussed in more detail in later sections.
• If the external MySQL database is running in an Amazon EC2 instance in a virtual private cloud
(VPC) based on the Amazon VPC service, specify the egress rules in a VPC security group. For more
information, see Controlling access with security groups (p. 2680).
• If the external MySQL database is installed on-premises, specify the egress rules in a firewall.
5. If the external MySQL database is running in a VPC, configure VPC access control list (ACL)
rules in addition to the security group egress rule:
• Configure an ACL ingress rule allowing TCP traffic to ports 1024–65535 from the IP address of the
source MySQL DB instance.
• Configure an ACL egress rule allowing outbound TCP traffic to the port and IP address of the
source MySQL DB instance.
For more information about Amazon VPC network ACLs, see Network ACLs in the Amazon VPC User
Guide.
6. (Optional) Set the max_allowed_packet parameter to the maximum size to avoid replication
errors. We recommend this setting.
Prepare the source MySQL DB instance
1. Ensure that your client computer has enough disk space available to save the binary logs while
setting up replication.
2. Connect to the source MySQL DB instance, and create a replication account by following the
directions in Creating a user for replication in the MySQL documentation.
3. Configure ingress rules on the system running the source MySQL DB instance to allow the external
MySQL database to connect during replication. Specify an ingress rule that allows TCP connections
to the port used by the source MySQL DB instance from the IP address of the external MySQL
database.
4. Specify the ingress rules:
• If the source MySQL DB instance is running in a VPC, specify the ingress rules in a VPC security
group. For more information, see Controlling access with security groups (p. 2680).
5. If the source MySQL DB instance is running in a VPC, configure VPC ACL rules in addition to the
security group ingress rule:
• Configure an ACL ingress rule to allow TCP connections to the port used by the Amazon RDS
instance from the IP address of the external MySQL database.
• Configure an ACL egress rule to allow TCP connections from ports 1024–65535 to the IP address
of the external MySQL database.
For more information about Amazon VPC network ACLs, see Network ACLs in the Amazon VPC User
Guide.
6. Ensure that the backup retention period is set long enough that no binary logs are purged during
the export. If any of the logs are purged before the export has completed, you must restart
replication from the beginning. For more information about setting the backup retention period, see
Working with backups (p. 591).
7. Use the mysql.rds_set_configuration stored procedure to set the binary log retention
period long enough that the binary logs aren't purged during the export. For more information, see
Accessing MySQL binary logs (p. 922).
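For example, the retention period might be set like this; the value of 96 hours is an assumption, so pick a period longer than your export is expected to take.

```sql
-- Keep binary logs on the DB instance for 96 hours.
CALL mysql.rds_set_configuration('binlog retention hours', 96);
```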
8. Create an Amazon RDS read replica from the source MySQL DB instance to further ensure that the
binary logs of the source MySQL DB instance are not purged. For more information, see Creating a
read replica (p. 445).
9. After the Amazon RDS read replica has been created, call the mysql.rds_stop_replication
stored procedure to stop the replication process. The source MySQL DB instance no longer purges its
binary log files, so they are available for the replication process.
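The stored procedure takes no arguments; the call is simply:

```sql
-- Stop replication on the RDS read replica so binary logs are retained.
CALL mysql.rds_stop_replication;
```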
10. (Optional) Set both the max_allowed_packet parameter and the slave_max_allowed_packet
parameter to the maximum size to avoid replication errors. The maximum size for both parameters
is 1 GB. We recommend this setting for both parameters. For information about setting parameters,
see Modifying parameters in a DB parameter group (p. 352).
Copy the database
1. Connect to the RDS read replica of the source MySQL DB instance, and run the MySQL SHOW
REPLICA STATUS\G statement. Note the values for the following:
• Master_Host
• Master_Port
• Master_Log_File
• Exec_Master_Log_Pos
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
2. Use the mysqldump utility to create a snapshot, which copies the data from Amazon RDS to your
local client computer. Ensure that your client computer has enough space to hold the mysqldump
files from the databases to be replicated. This process can take several hours for very large
databases. Follow the directions in Creating a data snapshot using mysqldump in the MySQL
documentation.
The following example runs mysqldump on a client and writes the dump to a file.
mysqldump -h source_MySQL_DB_instance_endpoint \
-u user \
-ppassword \
--port=3306 \
--single-transaction \
--routines \
--triggers \
--databases database database2 > path/rds-dump.sql
For Windows:
mysqldump -h source_MySQL_DB_instance_endpoint ^
-u user ^
-ppassword ^
--port=3306 ^
--single-transaction ^
--routines ^
--triggers ^
--databases database database2 > path\rds-dump.sql
You can load the backup file into the external MySQL database. For more information, see
Reloading SQL-Format Backups in the MySQL documentation. You can also use another utility to load
the data into the external MySQL database.
Complete the export
1. Use the MySQL CHANGE MASTER statement to configure the external MySQL database. Specify the
ID and password of the user granted REPLICATION SLAVE permissions. Specify the Master_Host,
Master_Port, Relay_Master_Log_File, and Exec_Master_Log_Pos values that you got from
the MySQL SHOW REPLICA STATUS\G statement that you ran on the RDS read replica. For more
information, see the MySQL documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
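A sketch of the statement, using the values named above; the host, credentials, file name, and position are placeholders taken from the surrounding example.

```sql
CHANGE MASTER TO
    MASTER_HOST = 'myinstance.123456789012.us-east-1.rds.amazonaws.com', -- Master_Host
    MASTER_PORT = 3306,                              -- Master_Port
    MASTER_USER = 'repl_user',                       -- user with REPLICATION SLAVE
    MASTER_PASSWORD = 'password',
    MASTER_LOG_FILE = 'mysql-bin-changelog.000031',  -- Relay_Master_Log_File
    MASTER_LOG_POS = 107;                            -- Exec_Master_Log_Pos
```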
2. Use the MySQL START REPLICA command to initiate replication from the source MySQL DB
instance to the external MySQL database.
Doing this starts replication from the source MySQL DB instance and exports all source changes that
have occurred after you stopped replication from the Amazon RDS read replica.
Note
Previous versions of MySQL used START SLAVE instead of START REPLICA. If you are
using a MySQL version before 8.0.23, then use START SLAVE.
3. Run the MySQL SHOW REPLICA STATUS\G command on the external MySQL database to verify
that it is operating as a read replica. For more information about interpreting the results, see the
MySQL documentation.
4. After replication on the external MySQL database has caught up with the source MySQL DB instance,
use the MySQL STOP REPLICA command to stop replication from the source MySQL DB instance.
Note
Previous versions of MySQL used STOP SLAVE instead of STOP REPLICA. If you are using a
MySQL version before 8.0.23, then use STOP SLAVE.
5. On the Amazon RDS read replica, call the mysql.rds_start_replication stored procedure.
Doing this allows Amazon RDS to start purging the binary log files from the source MySQL DB
instance.
Options for MySQL
MariaDB Audit Plugin
The MariaDB Audit Plugin records database activity, including users logging on to the database and
queries run against the database. The record of database activity is stored in a log file.
Note
Currently, the MariaDB Audit Plugin is only supported for the following RDS for MySQL versions:
SERVER_AUDIT_FILE_PATH
Valid values: /rdsdbdata/log/audit/
Default value: /rdsdbdata/log/audit/
The location of the log file. The log file contains the record of the activity specified in
SERVER_AUDIT_EVENTS. For more information, see Viewing and listing database log files (p. 895)
and MySQL database log files (p. 915).
SERVER_AUDIT_FILE_ROTATE_SIZE
Valid values: 1–1000000000
Default value: 1000000
The size in bytes that, when reached, causes the file to rotate. For more information, see Overview
of RDS for MySQL database logs (p. 915).
SERVER_AUDIT_FILE_ROTATIONS
Valid values: 0–100
Default value: 9
The number of log rotations to save when server_audit_output_type=file. If set
to 0, then the log file never rotates. For more information, see Overview of RDS for MySQL
database logs (p. 915) and Downloading a database log file (p. 896).
SERVER_AUDIT_EVENTS
Valid values: CONNECT, QUERY, QUERY_DDL, QUERY_DML, QUERY_DML_NO_SELECT, QUERY_DCL
Default value: CONNECT, QUERY
The types of activity to record in the log. Installing the MariaDB Audit Plugin is itself logged.
• CONNECT: Log successful and unsuccessful connections to the database, and disconnections
from the database.
• QUERY: Log the text of all queries run against the database.
• QUERY_DDL: Similar to the QUERY event, but returns only data definition language (DDL)
queries (CREATE, ALTER, and so on).
• QUERY_DML: Similar to the QUERY event, but returns only data manipulation language (DML)
queries (INSERT, UPDATE, and so on, and also SELECT).
SERVER_AUDIT_INCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Include only activity from the specified users. By default, activity is recorded for
all users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive.
If you add values to SERVER_AUDIT_INCL_USERS, make sure no values are added to
SERVER_AUDIT_EXCL_USERS.
SERVER_AUDIT_EXCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Exclude activity from the specified users. By default, activity is recorded for all
users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive.
If you add values to SERVER_AUDIT_EXCL_USERS, make sure no values are added to
SERVER_AUDIT_INCL_USERS.
SERVER_AUDIT_LOGGING
Valid values: ON
Default value: ON
Logging is active. The only valid value is ON. Amazon RDS does not support deactivating
logging. If you want to deactivate logging, remove the MariaDB Audit Plugin. For more information,
see Removing the MariaDB Audit Plugin (p. 1736).
SERVER_AUDIT_QUERY_LOG_LIMIT
Valid values: 0–2147483647
Default value: 1024
The limit on the length of the query string in a record.
After you add the MariaDB Audit Plugin, you don't need to restart your DB instance. As soon as the
option group is active, auditing begins immediately.
Important
Adding the MariaDB Audit Plugin to a DB instance might cause an outage. We recommend
adding the MariaDB Audit Plugin during a maintenance window or during a time of low
database workload.
1. Determine the option group you want to use. You can create a new option group or use an existing
option group. If you want to use an existing option group, skip to the next step. Otherwise, create a
custom DB option group. Choose mysql for Engine, and choose 5.7 or 8.0 for Major engine version.
For more information, see Creating an option group (p. 332).
2. Add the MARIADB_AUDIT_PLUGIN option to the option group, and configure the option settings.
For more information about adding options, see Adding an option to an option group (p. 335). For
more information about each setting, see Audit Plugin option settings (p. 1733).
3. Apply the option group to a new or existing DB instance.
• For a new DB instance, you apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, you apply the option group by modifying the instance and attaching
the new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
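Step 2 above can be sketched with the AWS CLI; the option group name is a placeholder.

```sh
# Add the audit plugin option to an existing custom option group.
aws rds add-option-to-option-group \
    --option-group-name my-mysql-options \
    --options "OptionName=MARIADB_AUDIT_PLUGIN" \
    --apply-immediately
```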
The audit log files include the following comma-delimited information in rows, in the specified order:
Field Description
timestamp The timestamp for the logged event, in YYYYMMDD format followed by HH:MI:SS (24-hour clock).
serverhost The name of the instance that the event is logged for.
queryid The query ID number, which can be used for finding the relational table events and
related queries. For TABLE events, multiple lines are added.
operation The recorded action type. Possible values are: CONNECT, QUERY, READ, WRITE,
CREATE, ALTER, RENAME, and DROP.
object For QUERY events, this value indicates the query that the database performed. For
TABLE events, it indicates the table name.
connection_type The security state of the connection to the server. Possible values are:
• 0 – Undefined
• 1 – TCP/IP
• 2 – Socket
• 3 – Named pipe
• 4 – SSL/TLS
• 5 – Shared memory
This field is included only for RDS for MySQL version 5.7.34 and higher 5.7 versions,
and all 8.0 versions.
To remove the MariaDB Audit Plugin from a DB instance, do one of the following:
• Remove the MariaDB Audit Plugin option from the option group it belongs to. This change affects all
DB instances that use the option group. For more information, see Removing an option from an option
group (p. 343)
• Modify the DB instance and specify a different option group that doesn't include the plugin. This
change affects a single DB instance. You can specify the default (empty) option group, or a different
custom option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
memcached
The memcached interface is a simple, key-based cache. Applications use memcached to insert,
manipulate, and retrieve key-value data pairs from the cache. MySQL 5.6 introduced a plugin that
implements a daemon service that exposes data from InnoDB tables through the memcached protocol.
For more information about the MySQL memcached plugin, see InnoDB integration with memcached.
1. Determine the security group to use for controlling access to the memcached interface. If the set
of applications already using the SQL interface are the same set that will access the memcached
interface, you can use the existing VPC security group used by the SQL interface. If a different set of
applications will access the memcached interface, define a new VPC or DB security group. For more
information about managing security groups, see Controlling access with security groups (p. 2680)
2. Create a custom DB option group, selecting MySQL as the engine type and version. For more
information about creating an option group, see Creating an option group (p. 332).
3. Add the MEMCACHED option to the option group. Specify the port that the memcached interface will
use, and the security group to use in controlling access to the interface. For more information about
adding options, see Adding an option to an option group (p. 335).
4. Modify the option settings to configure the memcached parameters, if necessary. For more
information about how to modify option settings, see Modifying an option setting (p. 340).
5. Apply the option group to an instance. Amazon RDS enables memcached support for that instance
when the option group is applied:
• You enable memcached support for a new instance by specifying the custom option group when
you launch the instance. For more information about launching a MySQL instance, see Creating an
Amazon RDS DB instance (p. 300).
• You enable memcached support for an existing instance by specifying the custom option group
when you modify the instance. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
6. Specify which columns in your MySQL tables can be accessed through the memcached interface.
The memcached plug-in creates a catalog table named containers in a dedicated database named
innodb_memcache. You insert a row into the containers table to map an InnoDB table for access
through memcached. You specify a column in the InnoDB table that is used to store the memcached
key values, and one or more columns that are used to store the data values associated with the
key. You also specify a name that a memcached application uses to refer to that set of columns. For
details on inserting rows in the containers table, see InnoDB memcached plugin internals. For an
example of mapping an InnoDB table and accessing it through memcached, see Writing applications
for the InnoDB memcached plugin.
7. If the applications accessing the memcached interface are on different computers or EC2 instances
than the applications using the SQL interface, add the connection information for those computers
to the VPC security group associated with the MySQL instance. For more information about
managing security groups, see Controlling access with security groups (p. 2680).
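The containers mapping described in step 6 might look like the following sketch. It assumes the containers schema documented for the MySQL memcached plugin (name, schema, table, key column, value columns, flags, cas, expire time, unique index); the container, table, and column names are placeholders.

```sql
-- Map an InnoDB table for access through the memcached interface.
INSERT INTO innodb_memcache.containers
    (name, db_schema, db_table, key_columns, value_columns,
     flags, cas_column, expire_time_column, unique_idx_name_on_key)
VALUES
    ('mycontainer', 'mydb', 'mytable', 'c1', 'c2', 'c3', 'c4', 'c5', 'PRIMARY');
```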
You turn off the memcached support for an instance by modifying the instance and specifying the
default option group for your MySQL version. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
You can take the following actions to help increase the security of the memcached interface:
• Specify a different port than the default of 11211 when adding the MEMCACHED option to the option
group.
• Ensure that you associate the memcached interface with a VPC security group that limits access to
known, trusted client addresses and EC2 instances. For more information about managing security
groups, see Controlling access with security groups (p. 2680).
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the top right corner of the AWS Management Console, select the region that contains the DB
instance.
3. In the navigation pane, choose Databases.
4. Choose the MySQL DB instance name to display its details.
5. In the Connect section, note the value of the Endpoint field. The DNS name is the same as the
endpoint. Also, note that the port in the Connect section is not used to access the memcached
interface.
6. In the Details section, note the name listed in the Option Group field.
7. In the navigation pane, choose Option groups.
8. Choose the name of the option group used by the MySQL DB instance to show the option group
details. In the Options section, note the value of the Port setting for the MEMCACHED option.
Amazon RDS configures these MySQL memcached parameters, and they cannot be
modified: DAEMON_MEMCACHED_LIB_NAME, DAEMON_MEMCACHED_LIB_PATH, and
INNODB_API_ENABLE_BINLOG. The parameters that MySQL administrators set by using
daemon_memcached_options are available as individual MEMCACHED option settings in Amazon RDS.
• v – Logs errors and warnings while running the main event loop.
• vv – In addition to the information logged by v, also logs each client command and the response.
• vvv – In addition to the information logged by vv, also logs internal state transitions.
Parameters for MySQL
RDS for MySQL parameters are set to the default values of the storage engine that you have selected.
For more information about MySQL parameters, see the MySQL documentation. For more information
about MySQL storage engines, see Supported storage engines for RDS for MySQL (p. 1624).
You can view the parameters available for a specific RDS for MySQL version using the RDS console or the
AWS CLI. For information about viewing the parameters in a MySQL parameter group in the RDS console,
see Viewing parameter values for a DB parameter group (p. 359).
Using the AWS CLI, you can view the parameters for an RDS for MySQL version by running the
describe-engine-default-parameters command. Specify one of the following values for the --
db-parameter-group-family option:
• mysql8.0
• mysql5.7
For example, to view the parameters for RDS for MySQL version 8.0, run the following command.
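The command itself is not shown in this extract; based on the command and option named above, it is presumably:

```sh
aws rds describe-engine-default-parameters --db-parameter-group-family mysql8.0
```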
{
"EngineDefaults": {
"Parameters": [
{
"ParameterName": "activate_all_roles_on_login",
"ParameterValue": "0",
"Description": "Automatically set all granted roles as active after the
user has authenticated successfully.",
"Source": "engine-default",
"ApplyType": "dynamic",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": true
},
{
"ParameterName": "allow-suspicious-udfs",
"Description": "Controls whether user-defined functions that have only an
xxx symbol for the main function can be loaded",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": false
},
{
"ParameterName": "auto_generate_certs",
"Description": "Controls whether the server autogenerates SSL key and
certificate files in the data directory, if they do not already exist.",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": false
},
...
To list only the modifiable parameters for RDS for MySQL version 8.0, run the following command.
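The command is elided in this extract; it presumably filters the previous output on IsModifiable with a JMESPath query, for example:

```sh
aws rds describe-engine-default-parameters \
    --db-parameter-group-family mysql8.0 \
    --query 'EngineDefaults.Parameters[?IsModifiable==`true`]'
```

On Windows, replace the backslash (\) line continuations with carets (^) and adjust the quoting for your shell.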
Common DBA tasks for MySQL
For information about working with MySQL log files on Amazon RDS, see MySQL database log
files (p. 915).
Topics
• Ending a session or query (p. 1744)
• Skipping the current replication error (p. 1744)
• Working with InnoDB tablespaces to improve crash recovery times (p. 1745)
• Managing the Global Status History (p. 1747)
Ending a session or query
To end a session or a query on a DB instance, call one of the following stored procedures:
CALL mysql.rds_kill(thread-ID)
CALL mysql.rds_kill_query(thread-ID)
For example, to end the session that is running on thread 99, you would type the following:
CALL mysql.rds_kill(99);
To end the query that is running on thread 99, you would type the following:
CALL mysql.rds_kill_query(99);
For information about the values returned, see the MySQL documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
Skipping the current replication error
You can skip an error on your read replica in the following ways.
Calling the mysql.rds_skip_repl_error procedure
You can skip an error on a read replica by calling the following stored procedure:
CALL mysql.rds_skip_repl_error;
This command has no effect if you run it on the source DB instance, or on a read replica that hasn't
encountered a replication error.
For more information, such as the versions of MySQL that support mysql.rds_skip_repl_error, see
mysql.rds_skip_repl_error (p. 1779).
Important
If you attempt to call mysql.rds_skip_repl_error and encounter the following error:
ERROR 1305 (42000): PROCEDURE mysql.rds_skip_repl_error does not exist,
then upgrade your MySQL DB instance to the latest minor version or one of the minimum minor
versions listed in mysql.rds_skip_repl_error (p. 1779).
Setting the slave_skip_errors parameter
You can also skip errors by setting the slave_skip_errors parameter. We recommend setting this
parameter in a separate DB parameter group. You can associate this DB parameter group only with the
read replicas that need to skip errors. Following this best practice reduces the potential impact on other
DB instances and read replicas.
Important
Setting a nondefault value for this parameter can lead to replication inconsistency. Only set this
parameter to a nondefault value if you have exhausted other options to resolve the problem
and you are sure of the potential impact on your read replica's data.
Working with InnoDB tablespaces to improve crash recovery times
Amazon RDS sets the default value of the innodb_file_per_table parameter to 1. This setting allows
you to drop individual InnoDB tables and reclaim the storage used by those tables for the DB instance. In
most use cases, setting the innodb_file_per_table parameter to 1 is the recommended setting.
You should set the innodb_file_per_table parameter to 0 when you have a large number of tables,
such as over 1000 tables when you use standard (magnetic) or general purpose SSD storage or over
10,000 tables when you use Provisioned IOPS storage. When you set this parameter to 0, individual
tablespaces are not created and this can improve the time it takes for database crash recovery.
MySQL processes each metadata file, which includes tablespaces, during the crash recovery cycle.
The time it takes MySQL to process the metadata information in the shared tablespace is negligible
compared to the time it takes to process thousands of tablespace files when there are multiple
tablespaces. Because the tablespace number is stored within the header of each file, the aggregate time
to read all the tablespace files can take up to several hours. For example, a million InnoDB tablespaces
on standard storage can take from five to eight hours to process during a crash recovery cycle. In some
cases, InnoDB can determine that it needs additional cleanup after a crash recovery cycle so it will begin
another crash recovery cycle, which will extend the recovery time. Keep in mind that a crash recovery
cycle also entails rolling-back transactions, fixing broken pages, and other operations in addition to the
processing of tablespace information.
Since the innodb_file_per_table parameter resides in a parameter group, you can change the
parameter value by editing the parameter group used by your DB instance without having to reboot the
DB instance. After the setting is changed, for example, from 1 (create individual tables) to 0 (use shared
tablespace), new InnoDB tables will be added to the shared tablespace while existing tables continue to
have individual tablespaces. To move an InnoDB table to the shared tablespace, you must use the ALTER
TABLE command.
For example, the following query returns an ALTER TABLE statement for every InnoDB table that is not
in the shared tablespace.
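The query itself is not included in this excerpt. One way to sketch it is with INFORMATION_SCHEMA.TABLES, as shown following; the exclusion list of system schemas is an assumption, and the generated statements rebuild every user InnoDB table, which moves a table into the shared tablespace only when innodb_file_per_table is set to 0.

SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` ENGINE=InnoDB, ALGORITHM=COPY;') AS alter_statement
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'InnoDB'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');

You can then run the generated ALTER TABLE statements individually.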
Rebuilding a MySQL table to move the table's metadata to the shared tablespace requires additional
storage space temporarily to rebuild the table, so the DB instance must have storage space available.
During rebuilding, the table is locked and inaccessible to queries. For small tables or tables not
frequently accessed, this might not be an issue. For large tables or tables frequently accessed in a heavily
concurrent environment, you can rebuild tables on a read replica.
You can create a read replica and migrate table metadata to the shared tablespace on the read replica.
While the ALTER TABLE statement blocks access on the read replica, the source DB instance is not
affected. The source DB instance will continue to generate its binary logs while the read replica lags
during the table rebuilding process. Because the rebuilding requires additional storage space and the
relay log file can become large, you should create a read replica with storage allocated that is larger
than the source DB instance.
To create a read replica and rebuild InnoDB tables to use the shared tablespace, take the following steps:
1. Make sure that backup retention is enabled on the source DB instance so that binary logging is
enabled.
2. Use the AWS Management Console or AWS CLI to create a read replica for the source DB instance.
Because the creation of a read replica involves many of the same processes as crash recovery, the
creation process can take some time if there is a large number of InnoDB tablespaces. Allocate more
storage space on the read replica than is currently used on the source DB instance.
3. When the read replica has been created, create a parameter group with the parameter settings
read_only = 0 and innodb_file_per_table = 0. Then associate the parameter group with the
read replica.
4. Issue the following SQL statement for all tables that you want migrated on the replica:
5. When all of your ALTER TABLE statements have completed on the read replica, verify that the read
replica is connected to the source DB instance and that the two instances are in sync.
6. Use the console or CLI to promote the read replica to be the instance. Make sure that the parameter
group used for the new standalone DB instance has the innodb_file_per_table parameter set
to 0. Change the name of the new standalone DB instance, and point any applications to the new
standalone DB instance.
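The SQL statement referenced in step 4 is not shown in this excerpt. Because the replica's parameter group sets innodb_file_per_table = 0, rebuilding a table moves it into the shared tablespace; a sketch with placeholder schema and table names follows.

ALTER TABLE my_schema.my_table ENGINE = InnoDB, ALGORITHM = COPY;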
Managing the Global Status History
MySQL maintains many status variables that provide information about its operation. Their values can
help you detect locking or memory issues on a DB instance. The values of these status variables are
cumulative since the last time the DB instance was started. You can reset most status variables to 0 by
using the FLUSH STATUS command.
To allow for monitoring of these values over time, Amazon RDS provides a set of procedures that
will snapshot the values of these status variables over time and write them to a table, along with any
changes since the last snapshot. This infrastructure, called Global Status History (GoSH), is installed on
all MySQL DB instances starting with version 5.5.23. GoSH is disabled by default.
To enable GoSH, you first enable the event scheduler from a DB parameter group by setting the
parameter event_scheduler to ON. For MySQL DB instances running MySQL 5.7, also set the
parameter show_compatibility_56 to 1. For information about creating and modifying a DB
parameter group, see Working with parameter groups (p. 347). For information about the side effects of
enabling this parameter, see show_compatibility_56 in the MySQL 5.7 Reference Manual.
You can then use the procedures in the following table to enable and configure GoSH. First connect
to your MySQL DB instance, then issue the appropriate commands as shown following. For more
information, see Connecting to a DB instance running the MySQL database engine (p. 1630). For each
procedure, type the following:
CALL procedure-name;
When GoSH is running, you can query the tables that it writes to. For example, to query the hit ratio of
the Innodb buffer pool, you would issue the following query:
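The query itself is not included in this excerpt. As a sketch only: GoSH records snapshots and per-interval deltas of status variables, so a hit-ratio calculation can join two variables from the same snapshot. The table and column names below (mysql.rds_global_status_history, variable_name, variable_delta, collection_end) are assumptions, not confirmed by this excerpt.

-- Hit ratio = 1 - (physical reads / read requests) per GoSH snapshot interval
SELECT req.collection_end,
       1 - (phys.variable_delta / req.variable_delta) AS buffer_pool_hit_ratio
FROM mysql.rds_global_status_history AS req
JOIN mysql.rds_global_status_history AS phys
  ON req.collection_end = phys.collection_end
WHERE req.variable_name  = 'Innodb_buffer_pool_read_requests'
  AND phys.variable_name = 'Innodb_buffer_pool_reads';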
Local time zone
To set the local time zone for a DB instance, set the time_zone parameter in the parameter group for
your DB instance to one of the supported values listed later in this section. When you set the time_zone
parameter for a parameter group, all DB instances and read replicas that are using that parameter group
change to use the new local time zone. For information on setting parameters in a parameter group, see
Working with parameter groups (p. 347).
After you set the local time zone, all new connections to the database reflect the change. If you have any
open connections to your database when you change the local time zone, you won't see the local time
zone update until after you close the connection and open a new connection.
You can set a different local time zone for a DB instance and one or more of its read replicas. To do this,
use a different parameter group for the DB instance and the replica or replicas and set the time_zone
parameter in each parameter group to a different local time zone.
If you are replicating across AWS Regions, then the source DB instance and the read replica use different
parameter groups (parameter groups are unique to an AWS Region). To use the same local time zone
for each instance, you must set the time_zone parameter in the instance's and read replica's parameter
groups.
When you restore a DB instance from a DB snapshot, the local time zone is set to UTC. You can update
the time zone to your local time zone after the restore is complete. If you restore a DB instance to a
point in time, then the local time zone for the restored DB instance is the time zone setting from the
parameter group of the restored DB instance.
The Internet Assigned Numbers Authority (IANA) publishes new time zones at https://www.iana.org/
time-zones several times a year. Every time RDS releases a new minor maintenance release of MySQL, it
ships with the latest time zone data at the time of the release. When you use the latest RDS for MySQL
versions, you have recent time zone data from RDS. To ensure that your DB instance has recent time
zone data, we recommend upgrading to a higher DB engine version. Alternatively, you can modify the
time zone tables in your DB instances manually. To do so, you can use SQL commands or run the
mysql_tzinfo_to_sql tool in a SQL client. After updating the time zone data manually, reboot your DB
instance so that the changes take effect. RDS doesn't modify or reset the time zone data of running DB
instances. New time zone data is installed only when you perform a database engine version upgrade.
You can set your local time zone to one of the following values.
Africa/Cairo
Africa/Casablanca
Africa/Harare
Africa/Monrovia
Africa/Nairobi
Africa/Tripoli
Africa/Windhoek
America/Araguaina
America/Asuncion
America/Bogota
America/Buenos_Aires
America/Caracas
America/Chihuahua
America/Cuiaba
America/Denver
America/Fortaleza
America/Guatemala
America/Halifax
America/Manaus
America/Matamoros
America/Monterrey
America/Montevideo
America/Phoenix
America/Santiago
America/Tijuana
Asia/Amman
Asia/Ashgabat
Asia/Baghdad
Asia/Baku
Asia/Bangkok
Asia/Beirut
Asia/Calcutta
Asia/Damascus
Asia/Dhaka
Asia/Irkutsk
Asia/Jerusalem
Asia/Kabul
Asia/Karachi
Asia/Kathmandu
Asia/Krasnoyarsk
Asia/Magadan
Asia/Muscat
Asia/Novosibirsk
Asia/Riyadh
Asia/Seoul
Asia/Shanghai
Asia/Singapore
Asia/Taipei
Asia/Tehran
Asia/Tokyo
Asia/Ulaanbaatar
Asia/Vladivostok
Asia/Yakutsk
Asia/Yerevan
Atlantic/Azores
Australia/Adelaide
Australia/Brisbane
Australia/Darwin
Australia/Hobart
Australia/Perth
Australia/Sydney
Brazil/East
Canada/Newfoundland
Canada/Saskatchewan
Canada/Yukon
Europe/Amsterdam
Europe/Athens
Europe/Dublin
Europe/Helsinki
Europe/Istanbul
Europe/Kaliningrad
Europe/Moscow
Europe/Paris
Europe/Prague
Europe/Sarajevo
Pacific/Auckland
Pacific/Fiji
Pacific/Guam
Pacific/Honolulu
Pacific/Samoa
US/Alaska
US/Central
US/Eastern
US/East-Indiana
US/Pacific
UTC
Known issues and limitations
Topics
• InnoDB reserved word (p. 1752)
• Storage-full behavior for Amazon RDS for MySQL (p. 1752)
• Inconsistent InnoDB buffer pool size (p. 1753)
• Index merge optimization returns incorrect results (p. 1753)
• Log file size (p. 1754)
• MySQL parameter exceptions for Amazon RDS DB instances (p. 1754)
• MySQL file size limits in Amazon RDS (p. 1754)
• MySQL Keyring Plugin not supported (p. 1756)
• Custom ports (p. 1756)
• MySQL stored procedure limitations (p. 1756)
• GTID-based replication with an external source instance (p. 1756)
Storage-full behavior for Amazon RDS for MySQL
Amazon RDS stops a DB instance automatically when it reaches the storage-full state in the following
cases:
• The DB instance has less than 20,000 MiB of storage, and available storage reaches 200 MiB or less.
• The DB instance has more than 102,400 MiB of storage, and available storage reaches 1024 MiB or
less.
• The DB instance has between 20,000 MiB and 102,400 MiB of storage, and has less than 1% of storage
available.
After Amazon RDS stops a DB instance automatically because it reached the storage-full state, you
can still modify it. To restart the DB instance, complete at least one of the following:
• Turn on storage autoscaling for the DB instance.
For more information about storage autoscaling, see Managing capacity automatically with Amazon
RDS storage autoscaling (p. 480).
• Modify the DB instance to increase its storage capacity.
For more information about increasing storage capacity, see Increasing DB instance storage
capacity (p. 478).
After you make one of these changes, the DB instance is restarted automatically. For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
Inconsistent InnoDB buffer pool size
innodb_buffer_pool_chunk_size = 536870912
innodb_buffer_pool_instances = 4
innodb_buffer_pool_size = (536870912 * 4) * 8 = 17179869184
For details on this MySQL 5.7 bug, see https://bugs.mysql.com/bug.php?id=79379 in the MySQL
documentation.
For example, consider a query on a table with two indexes where the search arguments reference the
indexed columns.
In this case, the MySQL server searches both indexes. However, due to the bug, the merged results are
incorrect. To work around the issue, do one of the following:
• Set the optimizer_switch parameter to index_merge=off in the DB parameter group for your
MySQL DB instance. For information on setting DB parameter group parameters, see Working with
parameter groups (p. 347).
• Upgrade your MySQL DB instance to MySQL version 5.7 or 8.0. For more information, see Upgrading
the MySQL DB engine (p. 1664).
• If you cannot upgrade your instance or change the optimizer_switch parameter, you can work
around the bug by explicitly identifying an index for the query, for example:
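As an illustration of that workaround, the table, column, and index names below are hypothetical; the hint restricts the optimizer to a single index so that no index merge occurs.

SELECT *
FROM table1 FORCE INDEX (index_on_col1)
WHERE col1 = 'value1' AND col2 = 'value2';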
For more information, see Index merge optimization in the MySQL documentation.
MySQL parameter exceptions for Amazon RDS DB instances
lower_case_table_names
Because Amazon RDS uses a case-sensitive file system, setting the value of the
lower_case_table_names server parameter to 2 (names stored as given but compared in lowercase) is
not supported. The following are the supported values for Amazon RDS for MySQL DB instances:
• 0 (names stored as given and comparisons are case-sensitive) is supported for all RDS for MySQL
versions.
• 1 (names stored in lowercase and comparisons are not case-sensitive) is supported for RDS for MySQL
version 5.7 and version 8.0.28 and higher 8.0 versions.
When a parameter group is associated with a MySQL DB instance with a version lower than 8.0, we
recommend that you avoid changing the lower_case_table_names parameter in the parameter
group. Changing it could cause inconsistencies with point-in-time recovery backups and read replica DB
instances.
When a parameter group is associated with a version 8.0 MySQL DB instance, you can't modify the
lower_case_table_names parameter in the parameter group.
Read replicas should always use the same lower_case_table_names parameter value as the source
DB instance.
long_query_time
You can set the long_query_time parameter to a floating point value so that you can log slow queries
to the MySQL slow query log with microsecond resolution. You can set a value such as 0.1 seconds, which
would be 100 milliseconds, to help when debugging slow transactions that take less than one second.
MySQL file size limits in Amazon RDS
The InnoDB system tablespace is limited to a maximum size of 16 TB. InnoDB file-per-table tablespaces
(with each table in its own tablespace) are enabled by default for MySQL DB instances.
Note
Some existing DB instances have a lower limit. For example, MySQL DB instances created before
April 2014 have a file and table size limit of 2 TB. This 2 TB file size limit also applies to DB
instances or read replicas created from DB snapshots taken before April 2014, regardless of
when the DB instance was created.
There are advantages and disadvantages to using InnoDB file-per-table tablespaces, depending on your
application. To determine the best approach for your application, see File-per-table tablespaces in the
MySQL documentation.
We don't recommend allowing tables to grow to the maximum file size. In general, a better practice is to
partition data into smaller tables, which can improve performance and recovery times.
One option that you can use for breaking up a large table into smaller tables is partitioning. Partitioning
distributes portions of your large table into separate files based on rules that you specify. For example,
if you store transactions by date, you can create partitioning rules that distribute older transactions into
separate files using partitioning. Then periodically, you can archive the historical transaction data that
doesn't need to be readily available to your application. For more information, see Partitioning in the
MySQL documentation.
Because there is no single system table or view that provides the size of all the tables and the InnoDB
system tablespace, you must query multiple tables to determine the size of the tablespaces.
To determine the size of the InnoDB system tablespace and the data dictionary tablespace
• Use the following SQL command to determine if any of your tablespaces are too large and are
candidates for partitioning.
Note
The data dictionary tablespace is specific to MySQL 8.0.
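The command referenced above is not included in this excerpt. One way to sketch it is through the INFORMATION_SCHEMA.FILES table; the file-name patterns below (ibdata for the system tablespace, mysql.ibd for the MySQL 8.0 data dictionary) are assumptions.

SELECT FILE_NAME, TABLESPACE_NAME,
       ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024 / 1024, 2) AS "File Size (GB)"
FROM INFORMATION_SCHEMA.FILES
WHERE FILE_NAME LIKE '%ibdata%' OR FILE_NAME LIKE '%mysql.ibd%';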
To determine the size of InnoDB user tables outside of the InnoDB system tablespace (for
MySQL 5.7 versions)
• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.
SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_SYS_TABLESPACES ORDER BY 3 DESC;
To determine the size of InnoDB user tables outside of the InnoDB system tablespace (for
MySQL 8.0 versions)
• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.
SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_TABLESPACES ORDER BY 3 DESC;
• Use the following SQL command to determine if any of your non-InnoDB user tables are too large.
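That command is not shown in this excerpt; a sketch using the standard INFORMATION_SCHEMA.TABLES view follows.

SELECT TABLE_SCHEMA, TABLE_NAME,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS "Table Size (GB)"
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE IS NOT NULL AND ENGINE <> 'InnoDB'
ORDER BY 3 DESC;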
• To enable InnoDB file-per-table tablespaces, set the innodb_file_per_table parameter to 1 in the
parameter group for the DB instance.
• To disable InnoDB file-per-table tablespaces, set the innodb_file_per_table parameter to 0 in the
parameter group for the DB instance.
For information on updating a parameter group, see Working with parameter groups (p. 347).
When you have enabled or disabled InnoDB file-per-table tablespaces, you can issue an ALTER TABLE
command to move a table from the global tablespace to its own tablespace, or from its own tablespace
to the global tablespace as shown in the following example:
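The example itself is missing from this excerpt; a sketch with placeholder schema and table names follows. Rebuilding the table moves it in the direction implied by the current innodb_file_per_table setting.

ALTER TABLE my_schema.my_table ENGINE = InnoDB, ALGORITHM = COPY;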
Custom ports
Amazon RDS blocks connections to custom port 33060 for the MySQL engine. Choose a different port
for your MySQL engine.
RDS for MySQL stored procedures
Topics
• Configuring (p. 1758)
• Ending a session or query (p. 1761)
• Logging (p. 1763)
• Managing the Global Status History (p. 1764)
• Replicating (p. 1767)
• Warming the InnoDB cache (p. 1784)
Configuring
The following stored procedures set and show configuration parameters, such as for binary log file
retention.
Topics
• mysql.rds_set_configuration (p. 1758)
• mysql.rds_show_configuration (p. 1760)
mysql.rds_set_configuration
Specifies the number of hours to retain binary logs or the number of seconds to delay replication.
Syntax
CALL mysql.rds_set_configuration(name,value);
Parameters
name
The name of the configuration parameter to set.
value
The value for the configuration parameter.
Usage notes
The mysql.rds_set_configuration procedure supports the following configuration parameters:
• binlog retention hours
• source delay
• target delay
The configuration parameters are stored permanently and survive any DB instance reboot or failover.
The binlog retention hours parameter is used to specify the number of hours to retain binary log
files. Amazon RDS normally purges a binary log as soon as possible, but the binary log might still be
required for replication with a MySQL database external to RDS.
The default value of binlog retention hours is NULL. For RDS for MySQL, NULL means binary logs
aren't retained (0 hours).
To specify the number of hours to retain binary logs on a DB instance, use the
mysql.rds_set_configuration stored procedure and specify a period with enough time for
replication to occur, as shown in the following example.
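The example is not included in this excerpt. Based on the parameter name given above, it can be reconstructed as follows; 24 hours is chosen for illustration.

call mysql.rds_set_configuration('binlog retention hours', 24);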
For MySQL DB instances, the maximum binlog retention hours value is 168 (7 days).
After you set the retention period, monitor storage usage for the DB instance to make sure that the
retained binary logs don't take up too much storage.
source delay
Use the source delay parameter in a read replica to specify the number of seconds to delay
replication from the read replica to its source DB instance. Amazon RDS normally replicates changes
as soon as possible, but you might want some environments to delay replication. For example, when
replication is delayed, you can roll forward a delayed read replica to the time just before a disaster. If a
table is dropped accidentally, you can use delayed replication to quickly recover it. The default value of
source delay is 0 (don't delay replication).
When you use this parameter, it runs mysql.rds_set_source_delay (p. 1777) and applies CHANGE MASTER
TO MASTER_DELAY = input value. If successful, the procedure saves the source delay parameter to
the mysql.rds_configuration table.
To specify the number of seconds for Amazon RDS to delay replication to a source DB instance, use
the mysql.rds_set_configuration stored procedure and specify the number of seconds to delay
replication. In the following example, the replication is delayed by at least one hour (3,600 seconds).
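The example is not shown in this excerpt; based on the parameter name given above, it can be reconstructed as follows.

call mysql.rds_set_configuration('source delay', 3600);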
The limit for the source delay parameter is one day (86400 seconds).
Note
The source delay parameter isn't supported for RDS for MySQL version 8.0 or MariaDB
versions below 10.2.
target delay
Use the target delay parameter to specify the number of seconds to delay replication between
a DB instance and any future RDS-managed read replicas created from this instance. This parameter
is ignored for non-RDS-managed read replicas. Amazon RDS normally replicates changes as soon as
possible, but you might want some environments to delay replication. For example, when replication
is delayed, you can roll forward a delayed read replica to the time just before a disaster. If a table is
dropped accidentally, you can use delayed replication to recover it quickly. The default value of target
delay is 0 (don't delay replication).
For disaster recovery, you can use this configuration parameter with
the mysql.rds_start_replication_until (p. 1780) stored procedure or the
mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward changes to a delayed
read replica to the time just before a disaster, you can run the mysql.rds_set_configuration
procedure with this parameter set. After the mysql.rds_start_replication_until or
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).
To specify the number of seconds for Amazon RDS to delay replication to a read replica, use the
mysql.rds_set_configuration stored procedure and specify the number of seconds to delay
replication. The following example specifies that replication is delayed by at least one hour (3,600
seconds).
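The example is not shown in this excerpt; based on the parameter name given above, it can be reconstructed as follows.

call mysql.rds_set_configuration('target delay', 3600);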
The limit for the target delay parameter is one day (86400 seconds).
Note
The target delay parameter isn't supported for RDS for MySQL version 8.0 or MariaDB
versions earlier than 10.2.
mysql.rds_show_configuration
Shows current configuration settings, such as the number of hours that binary logs are retained.
Syntax
CALL mysql.rds_show_configuration;
Usage notes
To verify the number of hours that Amazon RDS retains binary logs, use the
mysql.rds_show_configuration stored procedure.
Examples
The following example displays the retention period:
call mysql.rds_show_configuration;

name                     value   description
binlog retention hours   24      binlog retention hours specifies the duration in hours before binary logs are automatically deleted.
Ending a session or query
Topics
• mysql.rds_kill (p. 1761)
• mysql.rds_kill_query (p. 1761)
mysql.rds_kill
Ends a connection to the MySQL server.
Syntax
CALL mysql.rds_kill(processID);
Parameters
processID
The identity of the connection thread to be ended.
Usage notes
Each connection to the MySQL server runs in a separate thread. To end a connection, use the
mysql.rds_kill procedure and pass in the thread ID of that connection. To obtain the thread ID, use
the MySQL SHOW PROCESSLIST command.
For information about limitations, see MySQL stored procedure limitations (p. 1756).
Examples
The following example ends a connection with a thread ID of 4243:
CALL mysql.rds_kill(4243);
mysql.rds_kill_query
Ends a query running against the MySQL server.
Syntax
CALL mysql.rds_kill_query(processID);
Parameters
processID
The identity of the process or thread that is running the query to be ended.
Usage notes
To stop a query running against the MySQL server, use the mysql.rds_kill_query procedure and
pass in the connection ID of the thread that is running the query. The procedure then terminates the
connection.
To obtain the ID, query the MySQL INFORMATION_SCHEMA PROCESSLIST table or use the MySQL SHOW
PROCESSLIST command. The value in the ID column from SHOW PROCESSLIST or SELECT * FROM
INFORMATION_SCHEMA.PROCESSLIST is the processID.
For information about limitations, see MySQL stored procedure limitations (p. 1756).
Examples
The following example stops a query with a query thread ID of 230040:
CALL mysql.rds_kill_query(230040);
Logging
The following stored procedures rotate MySQL logs to backup tables. For more information, see MySQL
database log files (p. 915).
Topics
• mysql.rds_rotate_general_log (p. 1763)
• mysql.rds_rotate_slow_log (p. 1763)
mysql.rds_rotate_general_log
Rotates the mysql.general_log table to a backup table.
Syntax
CALL mysql.rds_rotate_general_log;
Usage notes
You can rotate the mysql.general_log table to a backup table by calling the
mysql.rds_rotate_general_log procedure. When log tables are rotated, the current log table is
copied to a backup log table and the entries in the current log table are removed. If a backup log table
already exists, then it is deleted before the current log table is copied to the backup. You can query
the backup log table if needed. The backup log table for the mysql.general_log table is named
mysql.general_log_backup.
You can run this procedure only when the log_output parameter is set to TABLE.
mysql.rds_rotate_slow_log
Rotates the mysql.slow_log table to a backup table.
Syntax
CALL mysql.rds_rotate_slow_log;
Usage notes
You can rotate the mysql.slow_log table to a backup table by calling the
mysql.rds_rotate_slow_log procedure. When log tables are rotated, the current log table is copied
to a backup log table and the entries in the current log table are removed. If a backup log table already
exists, then it is deleted before the current log table is copied to the backup.
You can query the backup log table if needed. The backup log table for the mysql.slow_log table is
named mysql.slow_log_backup.
Managing the Global Status History
The following stored procedures manage how the Global Status History is collected and maintained.
Topics
• mysql.rds_collect_global_status_history (p. 1764)
• mysql.rds_disable_gsh_collector (p. 1764)
• mysql.rds_disable_gsh_rotation (p. 1764)
• mysql.rds_enable_gsh_collector (p. 1764)
• mysql.rds_enable_gsh_rotation (p. 1765)
• mysql.rds_rotate_global_status_history (p. 1765)
• mysql.rds_set_gsh_collector (p. 1765)
• mysql.rds_set_gsh_rotation (p. 1765)
mysql.rds_collect_global_status_history
Takes a snapshot on demand for the Global Status History.
Syntax
CALL mysql.rds_collect_global_status_history;
mysql.rds_disable_gsh_collector
Turns off snapshots taken by the Global Status History.
Syntax
CALL mysql.rds_disable_gsh_collector;
mysql.rds_disable_gsh_rotation
Turns off rotation of the mysql.global_status_history table.
Syntax
CALL mysql.rds_disable_gsh_rotation;
mysql.rds_enable_gsh_collector
Turns on the Global Status History to take default snapshots at intervals specified by
rds_set_gsh_collector.
Syntax
CALL mysql.rds_enable_gsh_collector;
mysql.rds_enable_gsh_rotation
Turns on rotation of the contents of the mysql.global_status_history table to
mysql.global_status_history_old at intervals specified by rds_set_gsh_rotation.
Syntax
CALL mysql.rds_enable_gsh_rotation;
mysql.rds_rotate_global_status_history
Rotates the contents of the mysql.global_status_history table to
mysql.global_status_history_old on demand.
Syntax
CALL mysql.rds_rotate_global_status_history;
mysql.rds_set_gsh_collector
Specifies the interval, in minutes, between snapshots taken by the Global Status History.
Syntax
CALL mysql.rds_set_gsh_collector(intervalPeriod);
Parameters
intervalPeriod
The interval, in minutes, between snapshots.
mysql.rds_set_gsh_rotation
Specifies the interval, in days, between rotations of the mysql.global_status_history table.
Syntax
CALL mysql.rds_set_gsh_rotation(intervalPeriod);
Parameters
intervalPeriod
The interval, in days, between rotations of the mysql.global_status_history table.
Replicating
The following stored procedures control how transactions are replicated from an external database
into RDS for MySQL, or from RDS for MySQL to an external database. To learn how to use replication
based on global transaction identifiers (GTIDs) with RDS for MySQL, see Using GTID-based replication for
Amazon RDS for MySQL (p. 1719).
Topics
• mysql.rds_next_master_log (p. 1767)
• mysql.rds_reset_external_master (p. 1769)
• mysql.rds_set_external_master (p. 1769)
• mysql.rds_set_external_master_with_auto_position (p. 1772)
• mysql.rds_set_external_master_with_delay (p. 1774)
• mysql.rds_set_master_auto_position (p. 1777)
• mysql.rds_set_source_delay (p. 1777)
• mysql.rds_skip_transaction_with_gtid (p. 1778)
• mysql.rds_skip_repl_error (p. 1779)
• mysql.rds_start_replication (p. 1780)
• mysql.rds_start_replication_until (p. 1780)
• mysql.rds_start_replication_until_gtid (p. 1781)
• mysql.rds_stop_replication (p. 1782)
mysql.rds_next_master_log
Changes the source database instance log position to the start of the next binary log on the source
database instance. Use this procedure only if you are receiving replication I/O error 1236 on a read
replica.
Syntax
CALL mysql.rds_next_master_log(
curr_master_log
);
Parameters
curr_master_log
The index of the current master log file. For example, if the current file is named mysql-bin-
changelog.012345, then the index is 12345. To determine the current master log file name, run
the SHOW REPLICA STATUS command and view the Master_Log_File field.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
Usage notes
The master user must run the mysql.rds_next_master_log procedure.
Warning
Call mysql.rds_next_master_log only if replication fails after a failover of a Multi-AZ
DB instance that is the replication source, and the Last_IO_Errno field of SHOW REPLICA
STATUS reports I/O error 1236.
Calling mysql.rds_next_master_log can result in data loss in the read replica if transactions
in the source instance were not written to the binary log on disk before the failover event
occurred.
You can reduce the chance of this happening by setting the source instance parameters
sync_binlog and innodb_support_xa to 1, although this might reduce performance. For
more information, see Troubleshooting a MySQL read replica problem (p. 1718).
Examples
Assume replication fails on an RDS for MySQL read replica. Running SHOW REPLICA STATUS\G on the
read replica returns a result in which the Last_IO_Errno field shows that the instance is receiving I/O
error 1236, and the Master_Log_File field shows that the file name is mysql-bin-changelog.012345,
which means that the log file index is 12345. To resolve the error, you can call
mysql.rds_next_master_log with the following parameter:
CALL mysql.rds_next_master_log(12345);
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
mysql.rds_reset_external_master
Reconfigures an RDS for MySQL DB instance to no longer be a read replica of an instance of MySQL
running external to Amazon RDS.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).
Syntax
CALL mysql.rds_reset_external_master;
Usage notes
The master user must run the mysql.rds_reset_external_master procedure. This procedure must
be run on the MySQL DB instance to be removed as a read replica of a MySQL instance running external
to Amazon RDS.
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).
For more information about using replication to import data from an instance of MySQL running
external to Amazon RDS, see Configuring binary log file position replication with an external source
instance (p. 1724).
mysql.rds_set_external_master
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).
Note
You can use the mysql.rds_set_external_master_with_delay (p. 1774) stored procedure to
configure an external source database instance and delayed replication.
Syntax
CALL mysql.rds_set_external_master (
host_name
, host_port
, replication_user_name
, replication_user_password
, mysql_binary_log_file_name
, mysql_binary_log_file_location
, ssl_encryption
);
Parameters
host_name
The host name or IP address of the MySQL instance running external to Amazon RDS to become the
source database instance.
host_port
The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes Secure Shell (SSH) port replication
that converts the port number, specify the port number that is exposed by SSH.
replication_user_name
The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password
The password of the user specified in replication_user_name.
mysql_binary_log_file_name
The name of the binary log on the source database instance that contains the replication
information.
mysql_binary_log_file_location
The location in the mysql_binary_log_file_name binary log at which replication starts reading
the replication information.
You can determine the binlog file name and location by running SHOW MASTER STATUS on the
source database instance.
ssl_encryption
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
Usage notes
The master user must run the mysql.rds_set_external_master procedure. This procedure must be
run on the MySQL DB instance to be configured as the read replica of a MySQL instance running external
to Amazon RDS.
Before you run mysql.rds_set_external_master, you must configure the instance of MySQL
running external to Amazon RDS to be a source database instance. To connect to the MySQL
instance running external to Amazon RDS, you must specify replication_user_name and
replication_user_password values that indicate a replication user that has REPLICATION CLIENT
and REPLICATION SLAVE permissions on the external instance of MySQL.
1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.
Note
Specify a password other than the prompt shown here as a security best practice.
2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.
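The version-specific statements for these two steps can be sketched as follows; the domain and password shown are placeholders, and creating the user and granting the privileges separately works on both MySQL 5.7 and MySQL 8.0:

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

On MySQL 8.0, a GRANT statement can no longer create a user or set a password, so the separate CREATE USER statement is required.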
To use encrypted replication, configure the source database instance to use SSL connections.
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).
When mysql.rds_set_external_master is called, Amazon RDS records the time, user, and an action
of set master in the mysql.rds_history and mysql.rds_replication_status tables.
Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS.
call mysql.rds_set_external_master(
'Externaldb.some.com',
3306,
'repl_user',
'password',
'mysql-bin-changelog.000777',
120,
0);
mysql.rds_set_external_master_with_auto_position
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS. This procedure also configures delayed replication and replication based on global
transaction identifiers (GTIDs).
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).
Syntax
CALL mysql.rds_set_external_master_with_auto_position (
host_name
, host_port
, replication_user_name
, replication_user_password
, ssl_encryption
, delay
);
Parameters
host_name
The host name or IP address of the MySQL instance running external to Amazon RDS to become the
source database instance.
host_port
The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes Secure Shell (SSH) port replication
that converts the port number, specify the port number that is exposed by SSH.
replication_user_name
The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password
The password of the user specified in replication_user_name.
ssl_encryption
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
delay
The minimum number of seconds to delay replication from the source database instance.
Usage notes
The master user must run the mysql.rds_set_external_master_with_auto_position procedure.
This procedure must be run on the MySQL DB instance to be configured as the read replica of a MySQL
instance running external to Amazon RDS.
This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.
1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.
2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.
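As a sketch, these two steps can be combined as follows; the domain and password are placeholders:

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';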
For more information, see Configuring binary log file position replication with an external source
instance (p. 1724).
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).
After calling mysql.rds_set_external_master_with_auto_position, call mysql.rds_start_replication (p. 1780)
to start the replication process. You can call mysql.rds_reset_external_master (p. 1769) to remove the
read replica configuration.
For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
or mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward
changes to a delayed read replica to the time just before a disaster, you can run the
mysql.rds_set_external_master_with_auto_position procedure. After the
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).
Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS. It sets the minimum replication delay to one
hour (3,600 seconds) on the MySQL DB instance. A change from the MySQL source database instance
running external to Amazon RDS isn't applied on the MySQL DB instance read replica for at least one
hour.
call mysql.rds_set_external_master_with_auto_position(
'Externaldb.some.com',
3306,
'repl_user',
'SomePassW0rd',
0,
3600);
mysql.rds_set_external_master_with_delay
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS and configures delayed replication.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).
Syntax
CALL mysql.rds_set_external_master_with_delay (
host_name
, host_port
, replication_user_name
, replication_user_password
, mysql_binary_log_file_name
, mysql_binary_log_file_location
, ssl_encryption
, delay
);
Parameters
host_name
The host name or IP address of the MySQL instance running external to Amazon RDS that will
become the source database instance.
host_port
The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes SSH port replication that converts
the port number, specify the port number that is exposed by SSH.
replication_user_name
The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password
The password of the user specified in replication_user_name.
mysql_binary_log_file_name
The name of the binary log on the source database instance that contains the replication information.
mysql_binary_log_file_location
The location in the mysql_binary_log_file_name binary log at which replication will start
reading the replication information.
You can determine the binlog file name and location by running SHOW MASTER STATUS on the
source database instance.
ssl_encryption
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
delay
The minimum number of seconds to delay replication from the source database instance.
Usage notes
The master user must run the mysql.rds_set_external_master_with_delay procedure. This
procedure must be run on the MySQL DB instance to be configured as the read replica of a MySQL
instance running external to Amazon RDS.
Before you run mysql.rds_set_external_master_with_delay, you must configure the instance of MySQL
running external to Amazon RDS to be a source database instance. To connect to the MySQL instance
running external to Amazon RDS, you must specify values for replication_user_name and
replication_user_password. These values must indicate a replication user that has REPLICATION
CLIENT and REPLICATION SLAVE permissions on the external instance of MySQL.
1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.
2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.
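As a sketch, these two steps can be combined as follows; the domain and password are placeholders:

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';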
For more information, see Configuring binary log file position replication with an external source
instance (p. 1724).
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).
For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
or mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward
changes to a delayed read replica to the time just before a disaster, you can run
the mysql.rds_set_external_master_with_delay procedure. After the
mysql.rds_start_replication_until procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).
Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS. It sets the minimum replication delay to one
hour (3,600 seconds) on the MySQL DB instance. A change from the MySQL source database instance
running external to Amazon RDS isn't applied on the MySQL DB instance read replica for at least one
hour.
call mysql.rds_set_external_master_with_delay(
'Externaldb.some.com',
3306,
'repl_user',
'SomePassW0rd',
'mysql-bin-changelog.000777',
120,
0,
3600);
mysql.rds_set_master_auto_position
Sets the replication mode to be based on either binary log file positions or on global transaction
identifiers (GTIDs).
Syntax
CALL mysql.rds_set_master_auto_position (
auto_position_mode
);
Parameters
auto_position_mode
A value that indicates whether to use log file position replication or GTID-based replication:
• 0 – Use the replication method based on binary log file position. The default is 0.
• 1 – Use the GTID-based replication method.
Usage notes
The master user must run the mysql.rds_set_master_auto_position procedure.
This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.
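For example, to switch a read replica to GTID-based replication, you can call:

CALL mysql.rds_set_master_auto_position(1);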
mysql.rds_set_source_delay
Sets the minimum number of seconds to delay replication from the source database instance to the current
read replica. Use this procedure when you are connected to a read replica to delay replication from its
source database instance.
Syntax
CALL mysql.rds_set_source_delay(
delay
);
Parameters
delay
The minimum number of seconds to delay replication from the source database instance.
Usage notes
The master user must run the mysql.rds_set_source_delay procedure.
For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
stored procedure or the mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To
roll forward changes to a delayed read replica to the time just before a disaster, you can run the
mysql.rds_set_source_delay procedure. After the mysql.rds_start_replication_until or
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).
Examples
To delay replication from the source database instance to the current read replica for at least one hour
(3,600 seconds), you can call mysql.rds_set_source_delay with the following parameter:
CALL mysql.rds_set_source_delay(3600);
mysql.rds_skip_transaction_with_gtid
Skips replication of a transaction with the specified global transaction identifier (GTID) on a MySQL DB
instance.
You can use this procedure for disaster recovery when a specific GTID transaction is known to cause
a problem. Use this stored procedure to skip the problematic transaction. Examples of problematic
transactions include transactions that disable replication, delete important data, or cause the DB
instance to become unavailable.
Syntax
CALL mysql.rds_skip_transaction_with_gtid (
gtid_to_skip
);
Parameters
gtid_to_skip
The GTID of the replicated transaction to skip.
Usage notes
The master user must run the mysql.rds_skip_transaction_with_gtid procedure.
This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.
Examples
The following example skips replication of the transaction with the GTID 3E11FA47-71CA-11E1-9E33-
C80AA9429562:23.
call mysql.rds_skip_transaction_with_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');
mysql.rds_skip_repl_error
Skips and deletes a replication error on a MySQL DB read replica.
Syntax
CALL mysql.rds_skip_repl_error;
Usage notes
The master user must run the mysql.rds_skip_repl_error procedure on a read replica. For more
information about this procedure, see Calling the mysql.rds_skip_repl_error procedure (p. 1745).
To determine if there are errors, run the MySQL SHOW REPLICA STATUS\G command. If a replication
error isn't critical, you can run mysql.rds_skip_repl_error to skip the error. If there are multiple
errors, mysql.rds_skip_repl_error deletes the first error, then warns that others are present.
You can then use SHOW REPLICA STATUS\G to determine the correct course of action for the next
error. For information about the values returned, see SHOW REPLICA STATUS statement in the MySQL
documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
For more information about addressing replication errors with Amazon RDS, see Troubleshooting a
MySQL read replica problem (p. 1718).
An error message appears if you run the procedure on the primary instance instead of the read replica.
You must run this procedure on the read replica for the procedure to work. An error message might also
appear if you run the procedure on the read replica, but replication can't be restarted successfully.
If you need to skip a large number of errors, the replication lag can increase beyond the default retention
period for binary log (binlog) files. In this case, you might encounter a fatal error due to binlog files
being purged before they have been replayed on the read replica. This purge causes replication to stop,
and you can no longer call the mysql.rds_skip_repl_error command to skip replication errors.
You can mitigate this issue by increasing the number of hours that binlog files are retained on your
source database instance. After you have increased the binlog retention time, you can restart replication
and call the mysql.rds_skip_repl_error command as needed.
To set the binlog retention time, use the mysql.rds_set_configuration (p. 1758) procedure and specify
a configuration parameter of 'binlog retention hours' along with the number of hours to retain
binlog files on the DB instance. The following example sets the retention period for binlog files to 48
hours.
call mysql.rds_set_configuration('binlog retention hours', 48);
mysql.rds_start_replication
Initiates replication from an RDS for MySQL DB instance.
Note
You can use the mysql.rds_start_replication_until (p. 1780) or
mysql.rds_start_replication_until_gtid (p. 1781) stored procedure to initiate replication from an
RDS for MySQL DB instance and stop replication at the specified binary log file location.
Syntax
CALL mysql.rds_start_replication;
Usage notes
The master user must run the mysql.rds_start_replication procedure.
You can also call mysql.rds_start_replication on the read replica to restart any replication
process that you previously stopped by calling mysql.rds_stop_replication. For more information,
see Working with DB instance read replicas (p. 438).
mysql.rds_start_replication_until
Initiates replication from an RDS for MySQL DB instance and stops replication at the specified binary log
file location.
Syntax
CALL mysql.rds_start_replication_until (
replication_log_file
, replication_stop_point
);
Parameters
replication_log_file
The name of the binary log on the source database instance that contains the replication information.
replication_stop_point
The location in the replication_log_file binary log at which replication will stop.
Usage notes
The master user must run the mysql.rds_start_replication_until procedure.
You can use this procedure with delayed replication for disaster recovery. If you have delayed replication
configured, you can use this procedure to roll forward changes to a delayed read replica to the time
just before a disaster. After this procedure stops replication, you can promote the read replica to be the
new primary DB instance by using the instructions in Promoting a read replica to be a standalone DB
instance (p. 447).
You can configure delayed replication using the following stored procedures:
• mysql.rds_set_source_delay (p. 1777)
• mysql.rds_set_external_master_with_delay (p. 1774)
The file name specified for the replication_log_file parameter must match the source database
instance binlog file name.
When the replication_stop_point parameter specifies a stop location that is in the past, replication
is stopped immediately.
Examples
The following example initiates replication and replicates changes until it reaches location 120 in the
mysql-bin-changelog.000777 binary log file.
call mysql.rds_start_replication_until(
'mysql-bin-changelog.000777',
120);
mysql.rds_start_replication_until_gtid
Initiates replication from an RDS for MySQL DB instance and stops replication immediately after the
specified global transaction identifier (GTID).
Syntax
CALL mysql.rds_start_replication_until_gtid(gtid);
Parameters
gtid
The GTID after which replication is stopped.
Usage notes
The master user must run the mysql.rds_start_replication_until_gtid procedure.
This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.
You can use this procedure with delayed replication for disaster recovery. If you have delayed replication
configured, you can use this procedure to roll forward changes to a delayed read replica to the time
just before a disaster. After this procedure stops replication, you can promote the read replica to be the
new primary DB instance by using the instructions in Promoting a read replica to be a standalone DB
instance (p. 447).
You can configure delayed replication using the following stored procedures:
• mysql.rds_set_source_delay (p. 1777)
• mysql.rds_set_external_master_with_delay (p. 1774)
When the gtid parameter specifies a transaction that has already been run by the replica, replication is
stopped immediately.
Examples
The following example initiates replication and replicates changes until it reaches GTID
3E11FA47-71CA-11E1-9E33-C80AA9429562:23.
call mysql.rds_start_replication_until_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');
mysql.rds_stop_replication
Stops replication from a MySQL DB instance.
Syntax
CALL mysql.rds_stop_replication;
Usage notes
The master user must run the mysql.rds_stop_replication procedure.
If you are configuring replication to import data from an instance of MySQL running external to Amazon
RDS, you call mysql.rds_stop_replication on the read replica to stop the replication process
after the import has completed. For more information, see Restoring a backup into a MySQL DB
instance (p. 1680).
If you are configuring replication to export data to an instance of MySQL external to Amazon RDS, you
call mysql.rds_start_replication and mysql.rds_stop_replication on the read replica to
control some replication actions, such as purging binary logs. For more information, see Exporting data
from a MySQL DB instance by using replication (p. 1728).
You can also use mysql.rds_stop_replication to stop replication between two Amazon RDS DB
instances. You typically stop replication to perform a long running operation on the read replica, such
as creating a large index on the read replica. You can restart any replication process that you stopped by
calling mysql.rds_start_replication (p. 1780) on the read replica. For more information, see Working with
DB instance read replicas (p. 438).
Warming the InnoDB cache
Topics
• mysql.rds_innodb_buffer_pool_dump_now (p. 1784)
• mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
• mysql.rds_innodb_buffer_pool_load_now (p. 1784)
mysql.rds_innodb_buffer_pool_dump_now
Dumps the current state of the buffer pool to disk.
Syntax
CALL mysql.rds_innodb_buffer_pool_dump_now();
Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_dump_now procedure.
mysql.rds_innodb_buffer_pool_load_abort
Cancels a load of the saved buffer pool state while in progress.
Syntax
CALL mysql.rds_innodb_buffer_pool_load_abort();
Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_load_abort procedure.
mysql.rds_innodb_buffer_pool_load_now
Loads the saved state of the buffer pool from disk.
Syntax
CALL mysql.rds_innodb_buffer_pool_load_now();
Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_load_now procedure.
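For example, to preserve the warmed cache across a planned disruption, you might dump the buffer pool state before the disruption and load it afterward:

CALL mysql.rds_innodb_buffer_pool_dump_now();
-- ... planned disruption, such as a reboot ...
CALL mysql.rds_innodb_buffer_pool_load_now();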
Amazon RDS for Oracle
Note
Oracle Database 11g, Oracle Database 12c, and Oracle Database 18c are legacy versions that are
no longer supported in Amazon RDS.
Before creating a DB instance, complete the steps in the Setting up for Amazon RDS (p. 174) section of
this guide. When you create a DB instance using your master account, the account gets DBA privileges,
with some limitations. Use this account for administrative tasks such as creating additional database
accounts. You can't use SYS, SYSTEM, or other Oracle-supplied administrative accounts.
Amazon RDS for Oracle supports the following:
• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups
You can use DB instances running Oracle inside a VPC. You can also add features to your Oracle DB
instance by enabling various options. Amazon RDS supports Multi-AZ deployments for Oracle as a high-
availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced
privileges. You can access your database using standard SQL clients such as Oracle SQL*Plus.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).
Topics
• Overview of Oracle on Amazon RDS (p. 1786)
• Connecting to your RDS for Oracle DB instance (p. 1806)
• Securing Oracle DB instance connections (p. 1816)
• Working with CDBs in RDS for Oracle (p. 1840)
• Administering your Oracle DB instance (p. 1847)
• Configuring advanced RDS for Oracle features (p. 1936)
• Importing data into Oracle on Amazon RDS (p. 1947)
• Working with read replicas for Amazon RDS for Oracle (p. 1973)
• Adding options to Oracle DB instances (p. 1990)
• Upgrading the RDS for Oracle DB engine (p. 2103)
• Using third-party software with your RDS for Oracle DB instance (p. 2114)
Overview of Oracle on Amazon RDS
Topics
• RDS for Oracle features (p. 1786)
• RDS for Oracle releases (p. 1789)
• RDS for Oracle licensing options (p. 1793)
• RDS for Oracle users and privileges (p. 1796)
• RDS for Oracle instance classes (p. 1796)
• RDS for Oracle database architecture (p. 1800)
• RDS for Oracle parameters (p. 1801)
• RDS for Oracle character sets (p. 1801)
• RDS for Oracle limitations (p. 1804)
You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as Oracle 2022.
Note
The following lists are not exhaustive.
Topics
• New features in RDS for Oracle (p. 1786)
• Supported features in RDS for Oracle (p. 1786)
• Unsupported features in RDS for Oracle (p. 1788)
• Advanced Compression
• Application Express (APEX)

For more information, see Oracle Application Express (APEX) (p. 2009).
• Automatic Memory Management
• Automatic Undo Management
• Automatic Workload Repository (AWR)
For more information, see Generating performance reports with Automatic Workload Repository
(AWR) (p. 1875).
• Active Data Guard with Maximum Performance in the same AWS Region or across AWS Regions
For more information, see Working with read replicas for Amazon RDS for Oracle (p. 1973).
• Blockchain tables (Oracle Database 21c and higher)
For more information, see Managing Blockchain Tables in the Oracle Database documentation.
• Continuous Query Notification (version 12.1.0.2.v7 and higher)
For more information, see Using Continuous Query Notification (CQN) in the Oracle documentation.
• Data Redaction
• Database Change Notification
For more information, see Database Change Notification in the Oracle documentation.
Note
This feature changes to Continuous Query Notification in Oracle Database 12c Release 1
(12.1) and higher.
• Database In-Memory (Oracle Database 12c and higher)
• Distributed Queries and Transactions
• Edition-Based Redefinition
For more information, see Setting the default edition for a DB instance (p. 1879).
• EM Express (12c and higher)
• Gradual database password rollover for applications (Oracle Database 19c and higher)

For more information, see Managing Gradual Database Password Rollover for Applications in the
Oracle Database documentation.
• HugePages
For more information, see Turning on HugePages for an RDS for Oracle instance (p. 1942).
• Import/export (legacy and Data Pump) and SQL*Loader
For more information, see Importing data into Oracle on Amazon RDS (p. 1947).
• Java Virtual Machine (JVM)
For more information, see Oracle Java virtual machine (p. 2031).
• JavaScript (Oracle Database 21c and higher)
• Locator
• Multitenant (single-tenant architecture)

The multitenant architecture is supported for all Oracle Database 19c and higher releases. For more
information, see Overview of RDS for Oracle CDBs (p. 1840) and Limitations of a single-tenant
CDB (p. 1805).
• Network encryption
For more information, see Oracle native network encryption (p. 2057) and Oracle Secure Sockets
Layer (p. 2068).
• Partitioning
• Application-level sharding (but not the Oracle Sharding feature)
• Spatial and Graph
• Transparent Data Encryption (TDE)

For more information, see Oracle Transparent Data Encryption (p. 2097).
• Unified Auditing, Mixed Mode
For more information, see Mixed mode auditing in the Oracle documentation.
• XML DB (without the XML DB Protocol Server)
Note
The preceding list is not exhaustive.
Warning
In general, Amazon RDS doesn't prevent you from creating schemas for unsupported features.
However, if you create schemas for Oracle features and components that require SYSDBA
privileges, you can damage the data dictionary and affect the availability of your DB instance.
Use only supported features and schemas that are available in Adding options to Oracle DB
instances (p. 1990).
Topics
• Oracle Database 21c with Amazon RDS (p. 1789)
• Oracle Database 19c with Amazon RDS (p. 1791)
• Oracle Database 12c with Amazon RDS (p. 1792)
In this section, you can find the features and changes important to using Oracle Database 21c (21.0.0.0)
on Amazon RDS. For a complete list of the changes, see the Oracle database 21c documentation. For
a complete list of features supported by each Oracle Database 21c edition, see Permitted features,
options, and management packs by Oracle database offering in the Oracle documentation.
Topics
• New parameters (p. 1790)
• Changes for the compatible parameter (p. 1791)
• Removed parameters (p. 1791)
New parameters
The following are the new Amazon RDS parameters for Oracle Database 21c (21.0.0.0).

optimizer_capture_sql_quarantine
    Range of values: true | false
    Default value: false
    Modifiable: Y
    Description: Enables or disables the automatic creation of SQL Quarantine configurations.

result_cache_execution_threshold
    Range of values: 0 to 68719476736
    Default value: 2
    Modifiable: Y
    Description: Specifies the maximum number of times a PL/SQL function can be executed before
    its result is stored in the result cache.

result_cache_max_temp_result
    Range of values: 0 to 100
    Default value: 5
    Modifiable: Y
    Description: Specifies the percentage of RESULT_CACHE_MAX_TEMP_SIZE that any single cached
    query result can consume.

result_cache_max_temp_size
    Range of values: 0 to 2199023255552
    Default value: RESULT_CACHE_SIZE * 10
    Modifiable: Y
    Description: Specifies the maximum amount of temporary tablespace (in bytes) that can be
    consumed by the result cache.

tablespace_encryption_default_algorithm
    Range of values: GOST256 | SEED128 | ARIA256 | ARIA192 | ARIA128 | 3DES168 | AES256 |
    AES192 | AES128
    Default value: AES128
    Modifiable: Y
    Description: Specifies the default algorithm the database uses when encrypting a tablespace.
Changes for the compatible parameter

The compatible parameter has a new maximum value for Oracle Database 21c (21.0.0.0) on Amazon
RDS:

compatible 21.0.0
Removed parameters
The following parameters were removed in Oracle Database 21c (21.0.0.0):
• remote_os_authent
• sec_case_sensitive_logon
• unified_audit_sga_queue_size
Oracle Database 19c (19.0.0.0) includes many new features and updates from the previous version. In
this section, you can find the features and changes important to using Oracle Database 19c (19.0.0.0)
on Amazon RDS. For a complete list of the changes, see the Oracle database 19c documentation. For
a complete list of features supported by each Oracle Database 19c edition, see Permitted features,
options, and management packs by Oracle database offering in the Oracle documentation.
Topics
• New parameters (p. 1792)
• Changes to the compatible parameter (p. 1792)
• Removed parameters (p. 1792)
New parameters
The following table shows the new Amazon RDS parameters for Oracle Database 19c (19.0.0.0).
Changes to the compatible parameter

The compatible parameter has a new maximum value for Oracle Database 19c (19.0.0.0) on Amazon
RDS. The following shows the new maximum value.

compatible 19.0.0
Removed parameters

The following parameters were removed in Oracle Database 19c (19.0.0.0):
• exafusion_enabled
• max_connections
• o7_dictionary_access
Topics
• Oracle Database 12c Release 2 (12.2.0.1) with Amazon RDS (p. 1792)
• Oracle Database 12c Release 1 (12.1.0.2) with Amazon RDS (p. 1793)
This release has reached the end of its support period. For more information, see the end of support
timeline on AWS re:Post.
Date Action
April 1, 2022 Amazon RDS began automatic upgrades of your Oracle Database 12c Release 2
(12.2.0.1) instances to Oracle Database 19c.
April 1, 2022 Amazon RDS began automatic upgrades to Oracle Database 19c for any Oracle
Database 12c Release 2 (12.2.0.1) DB instances restored from snapshots.
The automatic upgrade occurs during maintenance windows. If maintenance
windows aren't available when the upgrade needs to occur, Amazon RDS
upgrades the engine immediately.
Date Action
August 1, 2022 Amazon RDS began automatic upgrades of your Oracle Database 12c Release 1
(12.1.0.2) instances to the latest Release Update (RU) for Oracle Database 19c.
The automatic upgrade occurs during maintenance windows. If maintenance
windows aren't available when the upgrade needs to occur, Amazon RDS
upgrades the engine immediately.
August 1, 2022 Amazon RDS began automatic upgrades to Oracle Database 19c for any Oracle
Database 12c Release 1 (12.1.0.2) DB instances restored from snapshots.
License Included
In the License Included model, you don't need to purchase Oracle Database licenses separately. AWS
holds the license for the Oracle database software. In this model, if you have an AWS Support account
with case support, contact AWS Support for both Amazon RDS and Oracle Database service requests.
The License Included model is only supported on Amazon RDS for Oracle Database Standard Edition Two
(SE2).
In this model, you continue to use your active Oracle support account, and you contact Oracle directly
for Oracle Database service requests. If you have an AWS Support account with case support, you can
contact AWS Support for Amazon RDS issues. Amazon Web Services and Oracle have a multi-vendor
support process for cases that require assistance from both organizations.
Amazon RDS supports the BYOL model only for Oracle Database Enterprise Edition (EE) and Oracle
Database Standard Edition Two (SE2).
The following table shows the product information filters for RDS for Oracle.
License Pack data guard See Working with read replicas for Amazon RDS for
Oracle (p. 1973) (Oracle Active Data Guard)
To track license usage of your Oracle DB instances, you can create a license configuration. In this case,
RDS for Oracle resources that match the product information filter are automatically associated with the
license configuration. Discovery of Oracle DB instances can take up to 24 hours.
Console
To create a license configuration to track the license usage of your Oracle DB instances
1. Go to https://console.aws.amazon.com/license-manager/.
2. Create a license configuration.
For instructions, see Create a license configuration in the AWS License Manager User Guide.
Add a rule for an RDS Product Information Filter in the Product Information panel.
For more information, see ProductInformation in the AWS License Manager API Reference.
AWS CLI
To create a license configuration by using the AWS CLI, call the create-license-configuration command.
Use the --cli-input-json or --cli-input-yaml parameters to pass the parameters to the
command.
Example
The following code creates a license configuration for Oracle Enterprise Edition.
{
"Name": "rds-oracle-ee",
"Description": "RDS Oracle Enterprise Edition",
"LicenseCountingType": "vCPU",
"LicenseCountHardLimit": false,
"ProductInformationList": [
{
"ResourceType": "RDS",
"ProductInformationFilterList": [
{
"ProductInformationFilterName": "Engine Edition",
"ProductInformationFilterValue": ["oracle-ee"],
"ProductInformationFilterComparator": "EQUALS"
}
]
}
]
}
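Assuming the JSON above is saved to a file (the file name rds-oracle-ee.json used here is just a convention), the CLI call can be sketched as follows. The create-license-configuration call itself is shown as a comment because it requires AWS credentials; the snippet only writes and validates the JSON locally.

```shell
# Write the license configuration from this section to a file.
cat > rds-oracle-ee.json <<'EOF'
{
  "Name": "rds-oracle-ee",
  "Description": "RDS Oracle Enterprise Edition",
  "LicenseCountingType": "vCPU",
  "LicenseCountHardLimit": false,
  "ProductInformationList": [
    {
      "ResourceType": "RDS",
      "ProductInformationFilterList": [
        {
          "ProductInformationFilterName": "Engine Edition",
          "ProductInformationFilterValue": ["oracle-ee"],
          "ProductInformationFilterComparator": "EQUALS"
        }
      ]
    }
  ]
}
EOF

# Check that the file is valid JSON before handing it to the CLI.
python3 -m json.tool rds-oracle-ee.json > /dev/null && echo "JSON OK"

# With credentials configured, you would then run:
#   aws license-manager create-license-configuration --cli-input-json file://rds-oracle-ee.json
```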
For more information about product information, see Automated discovery of resource inventory in the
AWS License Manager User Guide.
For more information about the --cli-input parameter, see Generating AWS CLI skeleton and input
parameters from a JSON or YAML input file in the AWS CLI User Guide.
Oracle users and privileges
If you use the Bring Your Own License model, you must have a license for both the primary DB instance
and the standby DB instance in a Multi-AZ deployment.
Topics
• Limitations for Oracle DBA privileges (p. 1796)
• How to manage privileges on SYS objects (p. 1796)
The predefined role DBA normally allows all administrative privileges on an Oracle database. When you
create a DB instance, your master user account gets DBA privileges (with some limitations). To deliver a
managed experience, an RDS for Oracle database doesn't provide the following privileges for the DBA
role:
• ALTER DATABASE
• ALTER SYSTEM
• CREATE ANY DIRECTORY
• DROP ANY DIRECTORY
• GRANT ANY PRIVILEGE
• GRANT ANY ROLE
For more RDS for Oracle system privilege and role information, see Master user account
privileges (p. 2682).
Oracle instance classes
RDS for Oracle also offers instance classes that are optimized for workloads that require additional
memory, storage, and I/O per vCPU. These instance classes use the following naming convention:
db.r5b.instance_size.tpcthreads_per_core.memratio
db.r5.instance_size.tpcthreads_per_core.memratio

For example, db.r5b.4xlarge.tpc2.mem2x names a db.r5b.4xlarge instance class with 2 threads per core
and twice the standard memory-to-vCPU ratio.
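The naming convention can be decoded mechanically when you script against instance class names. The class name in this sketch is one example value; the tpc and mem suffixes carry the threads-per-core count and memory ratio:

```shell
# Split a db.<family>.<size>.tpc<n>.mem<m>x class name on dots (POSIX shell).
CLASS=db.r5b.4xlarge.tpc2.mem2x
OLDIFS=$IFS; IFS=.
set -- $CLASS          # $1=db $2=r5b $3=4xlarge $4=tpc2 $5=mem2x
IFS=$OLDIFS
DECODED="family=$2 size=$3 threads_per_core=${4#tpc} memory_ratio=${5#mem}"
echo "$DECODED"
# prints: family=r5b size=4xlarge threads_per_core=2 memory_ratio=2x
```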
The following table lists all instance classes supported for Oracle Database. Oracle Database 12c Release
1 (12.1.0.2) and Oracle Database 12c Release 2 (12.2.0.1) are listed in the table, but support for these
releases is deprecated. For information about the memory attributes of each type, see RDS for Oracle
instance types.
Oracle edition Oracle Database 19c and higher, Oracle Oracle Database 12c Release 1
Database 12c Release 2 (12.2.0.1) (12.1.0.2) (deprecated)
(deprecated)
db.m5.large–db.m5.24xlarge
db.r5d.large–db.r5d.24xlarge db.r5b.large–db.r5b.24xlarge
db.r5b.8xlarge.tpc2.mem3x db.r5.8xlarge.tpc2.mem3x
db.r5b.6xlarge.tpc2.mem4x db.r5.6xlarge.tpc2.mem4x
db.r5b.4xlarge.tpc2.mem4x db.r5.4xlarge.tpc2.mem4x
db.r5b.4xlarge.tpc2.mem3x db.r5.4xlarge.tpc2.mem3x
db.r5b.4xlarge.tpc2.mem2x db.r5.4xlarge.tpc2.mem2x
db.r5b.2xlarge.tpc2.mem8x db.r5.2xlarge.tpc2.mem8x
Oracle edition Oracle Database 19c and higher, Oracle Oracle Database 12c Release 1
Database 12c Release 2 (12.2.0.1) (12.1.0.2) (deprecated)
(deprecated)
db.r5b.2xlarge.tpc2.mem4x db.r5.2xlarge.tpc2.mem4x
db.r5b.2xlarge.tpc1.mem2x db.r5.2xlarge.tpc1.mem2x
db.r5b.xlarge.tpc2.mem4x db.r5.xlarge.tpc2.mem4x
db.r5b.xlarge.tpc2.mem2x db.r5.xlarge.tpc2.mem2x
db.r5b.large.tpc1.mem2x db.r5.large.tpc1.mem2x
db.r5b.large–db.r5b.24xlarge db.r5.large–db.r5.24xlarge
db.r5.12xlarge.tpc2.mem2x db.x1e.xlarge–db.x1e.32xlarge
db.r5.8xlarge.tpc2.mem3x db.x1.16xlarge–db.x1.32xlarge
db.r5.6xlarge.tpc2.mem4x db.z1d.large–db.z1d.12xlarge
db.r5.4xlarge.tpc2.mem4x
db.r5.4xlarge.tpc2.mem3x
db.r5.4xlarge.tpc2.mem2x
db.r5.2xlarge.tpc2.mem8x
db.r5.2xlarge.tpc2.mem4x
db.r5.2xlarge.tpc1.mem2x
db.r5.xlarge.tpc2.mem4x
db.r5.xlarge.tpc2.mem2x
db.r5.large.tpc1.mem2x
db.r5.large–db.r5.24xlarge
db.x2iedn.xlarge–db.x2iedn.32xlarge
db.x2iezn.2xlarge–db.x2iezn.12xlarge
db.x2idn.16xlarge–db.x2idn.32xlarge
db.x1e.xlarge–db.x1e.32xlarge
db.x1.16xlarge–db.x1.32xlarge
db.z1d.large–db.z1d.12xlarge
db.t3.small–db.t3.2xlarge db.t3.micro–db.t3.2xlarge
Oracle edition Oracle Database 19c and higher, Oracle Oracle Database 12c Release 1
Database 12c Release 2 (12.2.0.1) (12.1.0.2) (deprecated)
(deprecated)
db.m5.large–db.m5.4xlarge
db.r5d.large–db.r5d.4xlarge db.r5.4xlarge.tpc2.mem3x
db.r5.4xlarge.tpc2.mem4x db.r5.4xlarge.tpc2.mem2x
db.r5.4xlarge.tpc2.mem3x db.r5.2xlarge.tpc2.mem8x
db.r5.4xlarge.tpc2.mem2x db.r5.2xlarge.tpc2.mem4x
db.r5.2xlarge.tpc2.mem8x db.r5.2xlarge.tpc1.mem2x
db.r5.2xlarge.tpc2.mem4x db.r5.xlarge.tpc2.mem4x
db.r5.2xlarge.tpc1.mem2x db.r5.xlarge.tpc2.mem2x
db.r5.xlarge.tpc2.mem4x db.r5.large.tpc1.mem2x
db.r5.xlarge.tpc2.mem2x db.r5.large–db.r5.4xlarge
db.r5.large.tpc1.mem2x db.r5b.large–db.r5b.4xlarge
db.r5.large–db.r5.4xlarge db.z1d.large–db.z1d.3xlarge
db.r5b.large–db.r5b.4xlarge
db.x2iedn.xlarge–db.x2iedn.4xlarge
db.x2iezn.2xlarge–db.x2iezn.4xlarge
db.z1d.large–db.z1d.3xlarge
db.t3.small–db.t3.2xlarge db.t3.micro–db.t3.2xlarge
db.r5.large–db.r5.4xlarge db.r5.large–db.r5.4xlarge
db.t3.small–db.t3.2xlarge db.t3.micro–db.t3.2xlarge
Oracle database architecture
Note
We encourage all BYOL customers to consult their licensing agreement to assess the impact
of Amazon RDS for Oracle deprecations. For more information on the compute capacity of DB
instance classes supported by RDS for Oracle, see DB instance classes (p. 11) and Configuring
the processor for a DB instance class in RDS for Oracle (p. 71).
Note
If you have DB snapshots of DB instances that were using deprecated DB instance classes, you
can choose a DB instance class that is not deprecated when you restore the DB snapshots. For
more information, see Restoring from a DB snapshot (p. 615).
The preceding DB instance classes have been replaced by better performing DB instance classes that are
generally available at a lower cost. If you have DB instances that use deprecated DB instance classes, you
have the following options:
• Allow Amazon RDS to modify each DB instance automatically to use a comparable non-deprecated DB
instance class. For deprecation timelines, see DB instance class types (p. 11).
• Change the DB instance class yourself by modifying the DB instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
For Oracle Database 19c and higher, RDS for Oracle supports the single-tenant configuration of the
multitenant architecture. In this case, your CDB contains only one PDB. The single-tenant configuration
of the multitenant architecture uses the same RDS APIs as the non-CDB architecture. Thus, your
experience with a PDB is mostly identical to your experience with a non-CDB.
Note
You can't access the CDB itself.
In Oracle Database 21c and higher, all databases are CDBs. In contrast, you can create an Oracle
Database 19c DB instance as either a CDB or non-CDB. You can't upgrade a non-CDB to a CDB, but you
can convert an Oracle Database 19c non-CDB to a CDB, and then upgrade it. You can't convert a CDB to a
non-CDB.
Oracle parameters
For example, to view the supported parameters for the Enterprise Edition of Oracle Database 19c, run
the following command.

aws rds describe-engine-default-parameters --db-parameter-group-family oracle-ee-19
DB character set
The Oracle database character set is used in the CHAR, VARCHAR2, and CLOB data types. The database
also uses this character set for metadata such as table names, column names, and SQL statements. The
Oracle database character set is typically referred to as the DB character set.
You set the character set when you create a DB instance. You can't change the DB character set after you
create the database.
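Because the DB character set is fixed at creation time, choose it up front with the --character-set-name flag of create-db-instance. The sketch below only builds and prints the command; the instance identifier, class, storage, and password are placeholder values, not recommendations:

```shell
# Build and display a hypothetical create-db-instance call; nothing is executed
# against AWS here. --character-set-name sets the DB character set permanently.
CREATE_CMD="aws rds create-db-instance \
  --db-instance-identifier my-oracle-db \
  --engine oracle-ee \
  --db-instance-class db.r5.large \
  --allocated-storage 100 \
  --character-set-name AL32UTF8 \
  --master-username admin \
  --master-user-password change-me-please"
echo "$CREATE_CMD"
```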
You can set the following NLS initialization parameters at the instance level in a parameter group:

• NLS_DATE_FORMAT
• NLS_LENGTH_SEMANTICS
• NLS_NCHAR_CONV_EXCP
• NLS_TIME_FORMAT
• NLS_TIME_TZ_FORMAT
• NLS_TIMESTAMP_FORMAT
• NLS_TIMESTAMP_TZ_FORMAT
For information about modifying instance parameters, see Working with parameter groups (p. 347).
You can set other NLS initialization parameters in your SQL client. For example, the following statement
sets the NLS_LANGUAGE initialization parameter to GERMAN in a SQL client that is connected to an
Oracle DB instance:

ALTER SESSION SET NLS_LANGUAGE = GERMAN;
For information about connecting to an Oracle DB instance with a SQL client, see Connecting to your
RDS for Oracle DB instance (p. 1806).
RDS for Oracle supports two national character sets, used by the NCHAR, NVARCHAR2, and NCLOB data
types:

• AL16UTF16 (default)
• UTF8

You can specify either value with the --nchar-character-set-name parameter of the
create-db-instance command (AWS CLI version 2 only). If you use the Amazon RDS API, specify the
NcharCharacterSetName parameter of the CreateDBInstance operation. You can't change the national
character set after you create the database.
Oracle limitations
For more information about Unicode in Oracle databases, see Supporting multilingual databases with
Unicode in the Oracle documentation.
Topics
• Oracle file size limits in Amazon RDS (p. 1804)
• Public synonyms for Oracle-supplied schemas (p. 1804)
• Schemas for unsupported features (p. 1804)
• Limitations for Oracle DBA privileges (p. 1796)
• Limitations of a single-tenant CDB (p. 1805)
• Deprecation of TLS 1.0 and 1.1 Transport Layer Security (p. 1805)
You can't create public synonyms for objects owned by Oracle-supplied schemas. However, you can
create public synonyms referencing objects in your own schemas.
The predefined role DBA normally allows all administrative privileges on an Oracle database. When you
create a DB instance, your master user account gets DBA privileges (with some limitations). To deliver a
managed experience, an RDS for Oracle database doesn't provide the following privileges for the DBA
role:
• ALTER DATABASE
• ALTER SYSTEM
• CREATE ANY DIRECTORY
• DROP ANY DIRECTORY
• GRANT ANY PRIVILEGE
• GRANT ANY ROLE
Use the master user account for administrative tasks such as creating additional user accounts in the
database. You can't use SYS, SYSTEM, or other Oracle-supplied administrative accounts.
The following operations work in a single-tenant CDB, but no customer-visible mechanism can detect
the current status of the operations:
Note
Auditing information isn't available from within the PDB.
Connecting to your Oracle DB instance
In this topic, you learn how to use Oracle SQL Developer or SQL*Plus to connect to an RDS for Oracle DB
instance. For an example that walks you through the process of creating and connecting to a sample DB
instance, see Creating and connecting to an Oracle DB instance (p. 222).
Topics
• Finding the endpoint of your RDS for Oracle DB instance (p. 1806)
• Connecting to your DB instance using Oracle SQL developer (p. 1808)
• Connecting to your DB instance using SQL*Plus (p. 1810)
• Considerations for security groups (p. 1811)
• Considerations for process architecture (p. 1811)
• Troubleshooting connections to your Oracle DB instance (p. 1811)
• Modifying connection properties using sqlnet.ora parameters (p. 1812)
You can find the endpoint for a DB instance using the Amazon RDS console or the AWS CLI.
Note
If you are using Kerberos authentication, see Connecting to Oracle with Kerberos
authentication (p. 1831).
Console
To find the endpoint using the console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the console, choose the AWS Region of your DB instance.
3. Find the DNS name and port number for your DB instance.
AWS CLI
To find the endpoint of an Oracle DB instance by using the AWS CLI, call the describe-db-instances
command.
Search for Endpoint in the output to find the DNS name and port number for your DB instance. The
Address line in the output contains the DNS name. The following is an example of the JSON endpoint
output.
"Endpoint": {
"HostedZoneId": "Z1PVIF0B656C1W",
"Port": 3306,
"Address": "myinstance.123456789012.us-west-2.rds.amazonaws.com"
},
Note
The output might contain information for multiple DB instances.
To connect to your DB instance, you need its DNS name and port number. For information about finding
the DNS name and port number for a DB instance, see Finding the endpoint of your RDS for Oracle DB
instance (p. 1806).
3. In the New/Select Database Connection dialog box, provide the information for your DB instance:
• For Connection Name, enter a name that describes the connection, such as Oracle-RDS.
• For Username, enter the name of the database administrator for the DB instance.
• For Password, enter the password for the database administrator.
• For Hostname, enter the DNS name of the DB instance.
• For Port, enter the port number.
• For SID, enter the DB name. You can find the DB name on the Configuration tab of your database
details page.
4. Choose Connect.
5. You can now start creating your own databases and running queries against your DB instance and
databases as usual. To run a test query against your DB instance, do the following:
a. In the Worksheet tab for your connection, enter the following SQL query.
SQL*Plus
To connect to your DB instance, you need its DNS name and port number. For information about finding
the DNS name and port number for a DB instance, see Finding the endpoint of your RDS for Oracle DB
instance (p. 1806).
sqlplus 'user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dns_name)(PORT=port))
(CONNECT_DATA=(SID=database_name)))'
For Windows:
sqlplus user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dns_name)(PORT=port))
(CONNECT_DATA=(SID=database_name)))
After you enter the password for the user, the SQL prompt appears.
SQL>
Note
The shorter format connection string (EZ Connect), such as sqlplus USER/
PASSWORD@longer-than-63-chars-rds-endpoint-here:1521/database-identifier,
might exceed a maximum character limit, so we recommend that you don't use it to
connect.
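The long-format descriptor shown earlier can be assembled from its parts in a shell script, which sidesteps the EZ Connect length limit. The endpoint, port, SID, and user name below are placeholder values; substitute your own:

```shell
# Hypothetical endpoint values -- replace with your instance's DNS name, port, and DB name.
DNS_NAME=myinstance.123456789012.us-west-2.rds.amazonaws.com
PORT=1521
DB_NAME=ORCL

# Full connect descriptor; not subject to the EZ Connect length limit.
CONNECT_STRING="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=${DNS_NAME})(PORT=${PORT}))(CONNECT_DATA=(SID=${DB_NAME})))"
echo "sqlplus 'admin@${CONNECT_STRING}'"
```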
After you create the new security group, you modify your DB instance to associate it with the security
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
You can enhance security by using SSL to encrypt connections to your DB instance. For more information,
see Oracle Secure Sockets Layer (p. 2068).
You might consider using shared server processes when a high number of user sessions are using too
much memory on the server. You might also consider shared server processes when sessions connect
and disconnect very often, resulting in performance issues. There are also disadvantages to using
shared server processes. For example, they can strain CPU resources, and they are more complicated to
configure and administer.
For more information about dedicated and shared server processes, see About dedicated and shared
server processes in the Oracle documentation. For more information about configuring shared server
processes on an RDS for Oracle DB instance, see How do I configure Amazon RDS for Oracle database to
work with shared servers? in the Knowledge Center.
Unable to connect to your For a newly created DB instance, the DB instance has a status of
DB instance. creating until it is ready to use. When the state changes to available,
you can connect to the DB instance. Depending on the DB instance
class and the amount of storage, it can take up to 20 minutes before
the new DB instance is available.
Unable to connect to your If you can't send or receive communications over the port that you
DB instance. specified when you created the DB instance, you can't connect to the
DB instance. Check with your network administrator to verify that the
port is open for communication on your network.
Unable to connect to your The access rules enforced by your local firewall and the IP addresses
DB instance. you authorized to access your DB instance in the security group for the
DB instance might not match. The problem is most likely the inbound
or outbound rules on your firewall.
You can add or edit an inbound rule in the security group. For Source,
choose My IP. This allows access to the DB instance from the IP address
detected in your browser. For more information, see Amazon VPC VPCs
and Amazon RDS (p. 2688).
Connect failed because Make sure that you specified the server name and port number
target host or object does correctly. For Server name, enter the DNS name from the console.
not exist – Oracle, Error:
ORA-12545 For information about finding the DNS name and port number for
a DB instance, see Finding the endpoint of your RDS for Oracle DB
instance (p. 1806).
Invalid username/password; You were able to reach the DB instance, but the connection was
logon denied – Oracle, refused. This is usually caused by providing an incorrect user name or
Error: ORA-01017 password. Verify the user name and password, and then retry.
TNS:listener does not Ensure the correct SID is entered. The SID is the same as your DB name.
currently know of SID given Find the DB name in the Configuration tab of the Databases page for
in connect descriptor - your instance. You can also find the DB name using the AWS CLI:
Oracle, ERROR: ORA-12505
aws rds describe-db-instances --query 'DBInstances[*].
[DBInstanceIdentifier,DBName]' --output text
For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).
For more information about why you might set sqlnet.ora parameters, see Configuring profile
parameters in the Oracle documentation.
In Oracle parameter groups, the sqlnetora. prefix identifies which parameters are sqlnet.ora
parameters. For example, in an Oracle parameter group in Amazon RDS, the default_sdu_size
sqlnet.ora parameter is sqlnetora.default_sdu_size.
For information about managing parameter groups and setting parameter values, see Working with
parameter groups (p. 347).
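The sqlnetora. prefix mapping is easy to apply when you script parameter changes. The parameter group name and value below are hypothetical, and the modify-db-parameter-group call is printed rather than executed because it requires AWS credentials:

```shell
# Map a sqlnet.ora name to its RDS parameter name by prepending the prefix.
SQLNET_PARAM=default_sdu_size
RDS_PARAM="sqlnetora.${SQLNET_PARAM}"

# Show the CLI call you would run (echoed only; needs credentials to execute).
echo "aws rds modify-db-parameter-group \
  --db-parameter-group-name my-oracle-params \
  --parameters ParameterName=${RDS_PARAM},ParameterValue=65535,ApplyMethod=immediate"
```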
sqlnetora.sqlnet.allowed_logon_version_client
    Values: 8, 10, 11, 12
    Apply type: Dynamic
    Description: Minimum authentication protocol version allowed for clients, and servers acting as
    clients, to establish a connection to Oracle DB instances.

sqlnetora.sqlnet.allowed_logon_version_server
    Values: 8, 9, 10, 11, 12, 12a
    Apply type: Dynamic
    Description: Minimum authentication protocol version allowed to establish a connection to Oracle
    DB instances.
The default value for each supported sqlnet.ora parameter is the Oracle default for the release. For
information about default values for Oracle Database 12c, see Parameters for the sqlnet.ora file in the
Oracle Database 12c documentation.
In Oracle parameter groups, the sqlnetora. prefix identifies which parameters are sqlnet.ora
parameters.
To view all of the sqlnet.ora parameters for an Oracle DB instance, call the AWS CLI
download-db-log-file-portion command. Specify the DB instance identifier, the log file name, and the
type of output.
Example

The following code lists all of the sqlnet.ora parameters for mydbinstance.

For Linux, macOS, or Unix:

aws rds download-db-log-file-portion --db-instance-identifier mydbinstance \
    --log-file-name trace/alert_mydbinstance.log --output text | grep sqlnetora

For Windows:

aws rds download-db-log-file-portion --db-instance-identifier mydbinstance ^
    --log-file-name trace/alert_mydbinstance.log --output text | findstr sqlnetora
For information about connecting to an Oracle DB instance in a SQL client, see Connecting to your RDS
for Oracle DB instance (p. 1806).
Securing Oracle connections
Topics
• Using SSL with an RDS for Oracle DB instance (p. 1816)
• Updating applications to connect to Oracle DB instances using new SSL/TLS certificates (p. 1816)
• Using native network encryption with an RDS for Oracle DB instance (p. 1819)
• Configuring Kerberos authentication for Amazon RDS for Oracle (p. 1819)
• Configuring UTL_HTTP access using certificates and an Oracle wallet (p. 1832)
To enable SSL encryption for an Oracle DB instance, add the Oracle SSL option to the option group
associated with the DB instance. Amazon RDS uses a second port, as required by Oracle, for SSL
connections. Doing this allows both clear text and SSL-encrypted communication to occur at the same
time between a DB instance and an Oracle client. For example, you can use the port with clear text
communication to communicate with other resources inside a VPC while using the port with SSL-
encrypted communication to communicate with resources outside the VPC.
For more information, see Oracle Secure Sockets Layer (p. 2068).
Note
You can't use both SSL and Oracle native network encryption (NNE) on the same DB instance.
Before you can use SSL encryption, you must disable any other connection encryption.
This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances.
Important
When you change the certificate for an Amazon RDS for Oracle DB instance, only the database
listener is restarted. The DB instance isn't restarted. Existing database connections are
unaffected, but new connections will encounter errors for a brief period while the listener is
restarted.
Note
For client applications that use SSL/TLS to connect to your DB instances, you must update your
client application trust stores to include the new CA certificates.
Using new SSL/TLS certificates
After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.
For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with Oracle DB instances, see Oracle Secure
Sockets Layer (p. 2068).
Topics
• Finding out whether applications connect using SSL (p. 1817)
• Updating your application trust store (p. 1817)
• Example Java code for establishing SSL connections (p. 1818)
Check the listener log to determine whether there are SSL connections. The following is sample output
in a listener log.
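An entry for an SSL connection looks similar to the following (the values here are illustrative; the exact format varies by Oracle Database version):

```text
18-MAR-2022 17:54:33 * (CONNECT_DATA=(CID=(PROGRAM=sqlplus)(HOST=client-host)(USER=appuser))(SID=ORCL)) * (ADDRESS=(PROTOCOL=tcps)(HOST=192.0.2.15)(PORT=50712)) * establish * ORCL * 0
```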
When PROTOCOL has the value tcps for an entry, it indicates an SSL connection. However, when HOST is 127.0.0.1, you can ignore the entry. Connections from 127.0.0.1 come from a local management agent on the DB instance. These connections aren't external SSL connections. Therefore, you have applications connecting using SSL if you see listener log entries where PROTOCOL is tcps and HOST isn't 127.0.0.1.
To check the listener log, you can publish the log to Amazon CloudWatch Logs. For more information,
see Publishing Oracle logs to Amazon CloudWatch Logs (p. 927).
1. Download the new root certificate that works for all AWS Regions and put the file in the
ssl_wallet directory.
For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection
to a DB instance (p. 2591).
2. Run the following command to update the Oracle wallet.
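A sketch using Oracle's orapki utility, assuming the client wallet resides in the $ORACLE_HOME/ssl_wallet directory and that the wallet uses automatic login; the certificate file name is a placeholder.

```shell
orapki wallet add -wallet $ORACLE_HOME/ssl_wallet \
    -trusted_cert -cert root-certificate-file.pem -auto_login_only
```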
Replace the file name with the one that you downloaded.
3. Run the following command to confirm that the wallet was updated successfully.
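A sketch of the confirmation command, assuming the same wallet location as the previous step; its output should include the new CA in the trusted certificates listing.

```shell
orapki wallet display -wallet $ORACLE_HOME/ssl_wallet
```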
Trusted Certificates:
Subject: CN=Amazon RDS Root 2019 CA,OU=Amazon RDS,O=Amazon Web Services\, Inc.,L=Seattle,ST=Washington,C=US
For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).
For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

// Sketch reconstructed around the original fragment; the host, port, SID, and
// credentials below are placeholders that you replace with your own values.
public class OracleSslConnectionTest {
    public static void main(String[] args) throws SQLException {
        final String connectionString =
            "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)"
            + "(HOST=dns-name-of-db-instance)(PORT=ssl-option-port))"
            + "(CONNECT_DATA=(SID=your-sid)))";
        final Properties properties = new Properties();
        properties.put("user", "master-user-name");
        properties.put("password", "password");
        try (Connection connection = DriverManager.getConnection(connectionString, properties)) {
            // If no exception, that means handshake has passed, and an SSL connection can be opened
            System.out.println("SSL connection succeeded");
        }
    }
}
Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).
Encrypting with NNE
• You can control NNE on the client and server using settings in the NNE option:
• SQLNET.ALLOW_WEAK_CRYPTO_CLIENTS and SQLNET.ALLOW_WEAK_CRYPTO
• SQLNET.CRYPTO_CHECKSUM_CLIENT and SQLNET.CRYPTO_CHECKSUM_SERVER
• SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT and SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER
• SQLNET.ENCRYPTION_CLIENT and SQLNET.ENCRYPTION_SERVER
• SQLNET.ENCRYPTION_TYPES_CLIENT and SQLNET.ENCRYPTION_TYPES_SERVER
• In most cases, you don't need to configure your client or server. In contrast, TLS requires you to
configure both client and server.
• No certificates are required. In TLS, the server requires a certificate (which eventually expires), and the
client requires a trusted root certificate for the certificate authority that issued the server’s certificate.
To enable NNE encryption for an Oracle DB instance, add the Oracle NNE option to the option group
associated with the DB instance. For more information, see Oracle native network encryption (p. 2057).
Note
You can't use both NNE and TLS on the same DB instance.
Keeping all of your credentials in the same directory can save you time and effort. You have a centralized
place for storing and managing credentials for multiple database instances. A directory can also improve
your overall security profile.
Configuring Kerberos authentication
Topics
• Setting up Kerberos authentication for Oracle DB instances (p. 1820)
• Managing a DB instance in a domain (p. 1829)
• Connecting to Oracle with Kerberos authentication (p. 1831)
• Step 1: Create a directory using the AWS Managed Microsoft AD (p. 1820)
• Step 2: Create a trust (p. 1824)
• Step 3: Configure IAM permissions for Amazon RDS (p. 1824)
• Step 4: Create and configure users (p. 1826)
• Step 5: Enable cross-VPC traffic between the directory and the DB instance (p. 1826)
• Step 6: Create or modify an Oracle DB instance (p. 1826)
• Step 7: Create Kerberos authentication Oracle logins (p. 1828)
• Step 8: Configure an Oracle client (p. 1829)
Note
During the setup, RDS creates an Oracle database user named
managed_service_user@example.com with the CREATE SESSION privilege, where
example.com is your domain name. This user corresponds to the user that Directory Service
creates inside your Managed Active Directory. Periodically, RDS uses the credentials provided by
the Directory Service to log in to your Oracle database. Afterwards, RDS immediately destroys
the ticket cache.
When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:
Note
Be sure to save this password. AWS Directory Service doesn't store it. You can reset it, but you
can't retrieve it.
• Creates a security group for the directory controllers.
When you launch an AWS Managed Microsoft AD, AWS creates an Organizational Unit (OU) that contains
all of your directory's objects. This OU has the NetBIOS name that you typed when you created your
directory and is located in the domain root. The domain root is owned and managed by AWS.
The Admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU:
The Admin account also has rights to perform the following domain-wide activities:
• Manage DNS configurations (add, remove, or update records, zones, and forwarders)
• View DNS event logs
• View security event logs
To create the directory, use the AWS Management Console, the AWS CLI, or the AWS Directory Service
API. Make sure to open the relevant outbound ports on the directory security group so that the directory
can communicate with the Oracle DB instance.
1. Sign in to the AWS Management Console and open the AWS Directory Service console at https://
console.aws.amazon.com/directoryservicev2/.
2. In the navigation pane, choose Directories and choose Set up Directory.
3. Choose AWS Managed Microsoft AD. AWS Managed Microsoft AD is the only option that you can
currently use with Amazon RDS.
4. Enter the following information:
The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.
The directory administrator password can't include the word "admin." The password is case-
sensitive and must be 8–64 characters in length. It must also contain at least one character from
three of the following four categories:
• Lowercase letters (a–z)
• Uppercase letters (A–Z)
• Numbers (0–9)
• Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
Confirm password
VPC
The VPC for the directory. Create the Oracle DB instance in this same VPC.
Subnets
Subnets for the directory servers. The two subnets must be in different Availability Zones.
7. Review the directory information and make any necessary changes. When the information is correct,
choose Create directory.
It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.
To see information about your directory, choose the directory name in the directory listing. Note the
Directory ID value because you need this value when you create or modify your Oracle DB instance.
To use Kerberos authentication with an on-premises or self-hosted Microsoft Active Directory, create a
forest trust or external trust. The trust can be one-way or two-way. For more information about setting
up forest trusts using AWS Directory Service, see When to create a trust relationship in the AWS Directory
Service Administration Guide.
If you use the AWS Management Console to create your Oracle DB instance, Amazon RDS can create
the required IAM role automatically. Otherwise, you must create the IAM role manually. When you
create an IAM role manually, choose Directory Service, and attach the AWS managed policy
AmazonRDSDirectoryServiceAccess to it.
For more information about creating IAM roles for a service, see Creating a role to delegate permissions
to an AWS service in the IAM User Guide.
Note
The IAM role used for Windows Authentication for RDS for Microsoft SQL Server can't be used
for RDS for Oracle.
To limit the permissions that Amazon RDS gives another service for a specific resource, we recommend
using the aws:SourceArn and aws:SourceAccount global condition context keys in resource policies.
The most effective way to protect against the confused deputy problem is to use the aws:SourceArn
global condition context key with the full ARN of an Amazon RDS resource. For more information, see
Preventing cross-service confused deputy problems (p. 2640).
The following example shows how you can use the aws:SourceArn and aws:SourceAccount global
condition context keys in Amazon RDS to prevent the confused deputy problem.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "directoryservice.rds.amazonaws.com",
          "rds.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
The following permissions policy grants access to the AWS Directory Service actions that Amazon RDS
requires.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ds:DescribeDirectories",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication",
        "ds:GetAuthorizedApplicationDetails"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
To create users in an AWS Directory Service directory, you must be connected to a Windows-based
Amazon EC2 instance that is a member of the AWS Directory Service directory. At the same time, you
must be logged in as a user that has privileges to create users. For more information about creating users
in your Microsoft Active Directory, see Manage users and groups in AWS Managed Microsoft AD in the
AWS Directory Service Administration Guide.
Step 5: Enable cross-VPC traffic between the directory and the DB instance
If you plan to locate the directory and the DB instance in the same VPC, skip this step and move on to
Step 6: Create or modify an Oracle DB instance (p. 1826).
If you plan to locate the directory and the DB instance in different AWS accounts or VPCs, configure
cross-VPC traffic using VPC peering or AWS Transit Gateway. The following procedure enables traffic
between VPCs using VPC peering. Follow the instructions in What is VPC peering? in the Amazon Virtual
Private Cloud Peering Guide.
1. Set up appropriate VPC routing rules to ensure that network traffic can flow both ways.
2. Ensure that the DB instance's security group can receive inbound traffic from the directory's security
group. For more information, see Best practices for AWS Managed Microsoft AD in the AWS
Directory Service Administration Guide.
3. Ensure that there is no network access control list (ACL) rule to block traffic.
If a different AWS account owns the directory, you must share the directory.
1. Start sharing the directory with the AWS account that the DB instance will be created in by following
the instructions in Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2
Domain-join in the AWS Directory Service Administration Guide.
2. Sign in to the AWS Directory Service console using the account for the DB instance, and ensure that
the domain has the SHARED status before proceeding.
3. While signed into the AWS Directory Service console using the account for the DB instance, note the
Directory ID value. You use this directory ID to join the DB instance to the domain.
• Create a new Oracle DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.
• Modify an existing Oracle DB instance using the console, the modify-db-instance CLI command, or the
ModifyDBInstance RDS API operation.
Kerberos authentication is only supported for Oracle DB instances in a VPC. The DB instance can be in
the same VPC as the directory, or in a different VPC. When you create or modify the DB instance, do the
following:
• Provide the domain identifier (d-* identifier) that was generated when you created your directory.
• Provide the name of the IAM role that you created.
• Ensure that the DB instance security group can receive inbound traffic from the directory security
group and send outbound traffic to the directory.
When you use the console to create a DB instance, choose Password and Kerberos authentication in
the Database authentication section. Choose Browse Directory and then select the directory, or choose
Create a new directory.
When you use the console to modify or restore a DB instance, choose the directory in the Kerberos
authentication section, or choose Create a new directory.
When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the directory that you created:
• For the --domain parameter, use the domain identifier ("d-*" identifier) generated when you created
the directory.
• For the --domain-iam-role-name parameter, use the role you created that uses the managed IAM
policy AmazonRDSDirectoryServiceAccess.
For example, the following CLI command modifies a DB instance to use a directory.
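A sketch of the command; the directory ID and role name are placeholders for your own values. Line continuations are shown Unix-style; on Windows, use carets (^) instead of backslashes.

```shell
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --domain d-1234567890ab \
    --domain-iam-role-name rds-directory-role \
    --apply-immediately
```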
Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.
Note
MANAGED_SERVICE_USER is a service account whose name is randomly generated by Directory
Service for RDS. During the Kerberos authentication setup, RDS for Oracle creates a user with
the same name and assigns it the CREATE SESSION privilege. The Oracle DB user is identified
externally as MANAGED_SERVICE_USER@EXAMPLE.COM, where EXAMPLE.COM is the name of
your domain. Periodically, RDS uses the credentials provided by the Directory Service to log in to
your Oracle database. Afterward, RDS immediately destroys the ticket cache.
1. Connect to the Oracle DB instance using your Amazon RDS master user credentials.
2. Create an externally authenticated user in Oracle database.
In the following example, replace KRBUSER@EXAMPLE.COM with the user name and domain
name.
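A sketch of the statements, assuming a hypothetical domain user KRBUSER@EXAMPLE.COM:

```sql
-- Create the externally authenticated user and allow it to connect.
CREATE USER "KRBUSER@EXAMPLE.COM" IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO "KRBUSER@EXAMPLE.COM";
```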
Users (both humans and applications) from your domain can now connect to the Oracle DB instance
from a domain joined client machine using Kerberos authentication.
• Create a configuration file named krb5.conf (Linux) or krb5.ini (Windows) to point to the domain.
Configure the Oracle client to use this configuration file.
• Verify that traffic can flow between the client host and AWS Directory Service over DNS port 53 over
TCP/UDP, Kerberos ports (88 and 464 for managed AWS Directory Service) over TCP, and LDAP port
389 over TCP.
• Verify that traffic can flow between the client host and the DB instance over the database port.
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = example.com
admin_server = example.com
}
[domain_realm]
.example.com = CORP.EXAMPLE.COM
example.com = CORP.EXAMPLE.COM
The following is sample content for an on-premises Microsoft AD. In your krb5.conf or krb5.ini file, replace
on-prem-ad-server-name with the name of your on-premises AD server.
[libdefaults]
default_realm = ONPREM.COM
[realms]
AWSAD.COM = {
kdc = awsad.com
admin_server = awsad.com
}
ONPREM.COM = {
kdc = on-prem-ad-server-name
admin_server = on-prem-ad-server-name
}
[domain_realm]
.awsad.com = AWSAD.COM
awsad.com = AWSAD.COM
.onprem.com = ONPREM.COM
onprem.com = ONPREM.COM
Note
After you configure your krb5.ini or krb5.conf file, we recommend that you reboot the server.
Set the following values in the sqlnet.ora file of your Oracle client.
SQLNET.AUTHENTICATION_SERVICES=(KERBEROS5PRE,KERBEROS5)
SQLNET.KERBEROS5_CONF=path_to_krb5.conf_file
For an example of a SQL Developer configuration, see Document 1609359.1 from Oracle Support.
You can use the AWS CLI to manage the domain membership of a DB instance. For example, you can
re-enable Kerberos authentication for a failed membership, disable Kerberos authentication, or move a
DB instance from being externally authenticated by one Microsoft Active Directory to another.
• To reattempt enabling Kerberos authentication for a failed membership, use the modify-db-instance
CLI command and specify the current membership's directory ID for the --domain option.
• To disable Kerberos authentication on a DB instance, use the modify-db-instance CLI command and
specify none for the --domain option.
• To move a DB instance from one domain to another, use the modify-db-instance CLI command and
specify the domain identifier of the new domain for the --domain option.
A request to enable Kerberos authentication can fail because of a network connectivity issue or an
incorrect IAM role. If the attempt to enable Kerberos authentication fails when you create or modify a
DB instance, make sure that you're using the correct IAM role. Then modify the DB instance to join the
domain.
Note
Only Kerberos authentication with Amazon RDS for Oracle sends traffic to the domain's DNS
servers. All other DNS requests are treated as outbound network access on your DB instances
running Oracle. For more information about outbound network access with Amazon RDS for
Oracle, see Setting up a custom DNS server (p. 1865).
Note
In a read replica configuration, this procedure is available only on the source DB instance and
not on the read replica.
The SELECT statement returns the ID of the task in a VARCHAR2 data type. You can view the status of
an ongoing task in a bdump file. The bdump files are located in the /rdsdbdata/log/trace directory.
Each bdump file name is in the following format.
dbtask-task-id.log
You can view the result by displaying the task's output file.
kinit username
Replace username with the user name and, at the prompt, enter the password stored in the Microsoft
Active Directory for the user.
2. Open SQL*Plus and connect using the DNS name and port number for the Oracle DB instance.
For more information about connecting to an Oracle DB instance in SQL*Plus, see Connecting to your
DB instance using SQL*Plus (p. 1810).
Configuring UTL_HTTP access
UTL_HTTP
This package makes HTTP calls from SQL and PL/SQL. You can use it to access data on the Internet
over HTTP. For more information, see UTL_HTTP in the Oracle documentation.
UTL_TCP
This package provides TCP/IP client-side access functionality in PL/SQL. This package is useful to
PL/SQL applications that use Internet protocols and email. For more information, see UTL_TCP in
the Oracle documentation.
UTL_SMTP
This package provides interfaces to the SMTP commands that enable a client to dispatch emails to
an SMTP server. For more information, see UTL_SMTP in the Oracle documentation.
Before configuring your instance for network access, review the following requirements and
considerations:
• To use SMTP with the UTL_MAIL option, see Oracle UTL_MAIL (p. 2099).
• The Domain Name System (DNS) name of the remote host can be any of the following:
• Publicly resolvable.
• The endpoint of an Amazon RDS DB instance.
• Resolvable through a custom DNS server. For more information, see Setting up a custom DNS
server (p. 1865).
• The private DNS name of an Amazon EC2 instance in the same VPC or a peered VPC. In this case,
make sure that the name is resolvable through a custom DNS server. Alternatively, to use the DNS
provided by Amazon, you can enable the enableDnsSupport attribute in the VPC settings and
enable DNS resolution support for the VPC peering connection. For more information, see DNS
support in your VPC and Modifying your VPC peering connection.
• To connect securely to remote SSL/TLS resources, we recommend that you create and upload
customized Oracle wallets. By using the Amazon S3 integration with Amazon RDS for Oracle feature,
you can download a wallet from Amazon S3 into Oracle DB instances. For information about
Amazon S3 integration for Oracle, see Amazon S3 integration (p. 1992).
• You can establish database links between Oracle DB instances over an SSL/TLS endpoint if the Oracle
SSL option is configured for each instance. No further configuration is required. For more information,
see Oracle Secure Sockets Layer (p. 2068).
By completing the following tasks, you can configure UTL_HTTP.REQUEST to work with websites that
require client authentication certificates during the SSL handshake. You can also configure password
authentication for UTL_HTTP access to websites by modifying the Oracle wallet generation commands
and the DBMS_NETWORK_ACL_ADMIN.APPEND_WALLET_ACE procedure. For more information, see
DBMS_NETWORK_ACL_ADMIN in the Oracle Database documentation.
Note
You can adapt the following tasks for UTL_SMTP, which enables you to send emails over SSL/
TLS (including Amazon Simple Email Service).
Topics
You can get the root certificate in various ways. For example, you can do the following:
For AWS services, root certificates typically reside in the Amazon trust services repository.
You might want to configure secure connections without using client certificates for authentication. In
this case, you can skip the Java keystore steps in the following procedure.
1. Place the root and client certificates in a single directory, and then change into this directory.
2. Convert the .p12 client certificate to the Java keystore.
Note
If you're not using client certificates for authentication, you can skip this step.
The following example converts the client certificate named client_certificate.p12 to the
Java keystore named client_keystore.jks. The keystore is then included in the Oracle wallet.
The keystore password is P12PASSWORD.
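A sketch of the conversion using the JDK keytool utility, assuming the sample file names and the password P12PASSWORD from the surrounding text:

```shell
keytool -importkeystore \
    -srckeystore client_certificate.p12 -srcstoretype PKCS12 -srcstorepass P12PASSWORD \
    -destkeystore client_keystore.jks -deststoretype JKS -deststorepass P12PASSWORD
```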
3. Create a directory for your Oracle wallet that is different from the certificate directory.
mkdir -p /tmp/wallet
The following example sets the Oracle wallet password to P12PASSWORD, which is the same
password used by the Java keystore in a previous step. Using the same password is convenient, but
not necessary. The -auto_login parameter turns on the automatic login feature, so that you don’t
need to specify a password every time you want to access it.
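A sketch of the wallet creation command, assuming the /tmp/wallet directory created earlier and the sample password:

```shell
orapki wallet create -wallet /tmp/wallet -pwd P12PASSWORD -auto_login
```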
Note
As a security best practice, specify a password other than the sample password shown here.
The following example adds the keystore client_keystore.jks to the Oracle wallet named /
tmp/wallet. In this example, you specify the same password for the Java keystore and the Oracle
wallet.
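A sketch of the command that imports the Java keystore into the wallet, assuming the sample names and passwords above:

```shell
orapki wallet jks_to_pkcs12 -wallet /tmp/wallet -pwd P12PASSWORD \
    -keystore client_keystore.jks -jkspwd P12PASSWORD
```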
6. Add the root certificate for your target website to the Oracle wallet.
The following example adds a certificate named Intermediate.cer. Repeat this step as many
times as needed to load all intermediate certificates.
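Sketches of the add commands; the root certificate file name Root.cer is a hypothetical placeholder, while Intermediate.cer matches the example above:

```shell
# Add the root certificate for the target website.
orapki wallet add -wallet /tmp/wallet -trusted_cert -cert Root.cer -pwd P12PASSWORD

# Add each intermediate certificate in turn.
orapki wallet add -wallet /tmp/wallet -trusted_cert -cert Intermediate.cer -pwd P12PASSWORD
```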
8. Confirm that your newly created Oracle wallet has the required certificates.
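A sketch of the confirmation command; its output lists the trusted certificates in the wallet:

```shell
orapki wallet display -wallet /tmp/wallet
```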
1. Complete the prerequisites for Amazon S3 integration with Oracle, and add the S3_INTEGRATION
option to your Oracle DB instance. Ensure that the IAM role for the option has access to the Amazon
S3 bucket you are using.
EXEC rdsadmin.rdsadmin_util.create_directory('WALLET_DIR');
For more information, see Creating and dropping directories in the main data storage
space (p. 1926).
3. Upload the Oracle wallet to your Amazon S3 bucket.
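One way to upload the wallet from your client machine, assuming the AWS CLI and the bucket name my_s3_bucket used in later steps:

```shell
aws s3 cp /tmp/wallet/cwallet.sso s3://my_s3_bucket/
```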
The following example removes the existing wallet, which is named cwallet.sso.
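A sketch of the removal, assuming the directory object WALLET_DIR created earlier:

```sql
-- Remove a previously downloaded wallet from the DB instance directory.
EXEC UTL_FILE.FREMOVE('WALLET_DIR','cwallet.sso');
```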
5. Download the Oracle wallet from your Amazon S3 bucket to the Oracle DB instance.
The following example downloads the wallet named cwallet.sso from the Amazon S3 bucket
named my_s3_bucket to the DB instance directory named WALLET_DIR.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'my_s3_bucket',
p_s3_prefix => 'cwallet.sso',
p_directory_name => 'WALLET_DIR')
AS TASK_ID FROM DUAL;
Download this wallet only if you want to require a password for every use of the wallet. The
following example downloads password-protected wallet ewallet.p12.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'my_s3_bucket',
p_s3_prefix => 'ewallet.p12',
p_directory_name => 'WALLET_DIR')
AS TASK_ID FROM DUAL;
Substitute the task ID returned from the preceding steps for dbtask-1234567890123-4567.log
in the following example.
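A sketch of the query that reads the task output file, using the rdsadmin.rds_file_util.read_text_file procedure:

```sql
SELECT text
FROM   TABLE(rdsadmin.rds_file_util.read_text_file(
         'BDUMP', 'dbtask-1234567890123-4567.log'));
```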
8. Check the contents of the directory that you're using to store the Oracle wallet.
For more information, see Listing files in a DB instance directory (p. 1927).
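A sketch of the listing query, assuming the directory object WALLET_DIR:

```sql
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'WALLET_DIR'));
```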
2. If you don't want to configure an existing database user, create a new user. Otherwise, skip to the
next step.
3. Grant permission to your database user on the directory containing your Oracle wallet.
The following example grants read access to user my-user on directory WALLET_DIR.
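A sketch of the grant, where "MY-USER" corresponds to the my-user placeholder used in the examples that follow:

```sql
GRANT READ ON DIRECTORY WALLET_DIR TO "MY-USER";
```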
BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_HTTP', UPPER('my-user'));
END;
/
BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_FILE', UPPER('my-user'));
END;
/
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'secret.encrypted-website.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'my-user',
principal_type => xs_acl.ptype_db));
END;
/
For more information, see Configuring Access Control for External Network Services in the Oracle
Database documentation.
3. (Optional) Create an ACE for your user and target website on the standard port.
You might need to use the standard port if some web pages are served from the standard web
server port (80) instead of the secure port (443).
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'secret.encrypted-website.com',
lower_port => 80,
upper_port => 80,
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'my-user',
principal_type => xs_acl.ptype_db));
END;
/
BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_HTTP', UPPER('my-user'));
END;
/
7. Grant permission to your database user to use certificates for client authentication and your Oracle
wallet for connections.
Note
If you're not using client certificates for authentication, you can skip this step.
DECLARE
  l_wallet_path all_directories.directory_path%type;
BEGIN
  SELECT DIRECTORY_PATH
  INTO l_wallet_path
  FROM ALL_DIRECTORIES
  WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
  DBMS_NETWORK_ACL_ADMIN.APPEND_WALLET_ACE(
    wallet_path => 'file:/' || l_wallet_path,
    ace         => xs$ace_type(privilege_list => xs$name_list('use_client_certificates'),
                               principal_name => 'my-user',
                               principal_type => xs_acl.ptype_db));
END;
/
1. Log in to your RDS for Oracle DB instance as a database user with UTL_HTTP permissions.
2. Confirm that a connection to your target website can resolve the host address.
The following query fails because UTL_HTTP requires the location of the Oracle wallet with the
certificates.
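For example, a request like the following fails until you set the wallet location, typically with an error about certificate validation or the wallet:

```sql
SELECT UTL_HTTP.REQUEST('https://secret.encrypted-website.com') FROM DUAL;
```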
DECLARE
l_wallet_path all_directories.directory_path%type;
BEGIN
SELECT DIRECTORY_PATH
INTO l_wallet_path
FROM ALL_DIRECTORIES
WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
UTL_HTTP.SET_WALLET('file:/' || l_wallet_path);
END;
/
5. (Optional) Test website access by storing your query in a variable and using EXECUTE IMMEDIATE.
DECLARE
  l_wallet_path all_directories.directory_path%type;
  v_webpage_sql VARCHAR2(1000);
  v_results     VARCHAR2(32767);
BEGIN
  SELECT DIRECTORY_PATH
  INTO l_wallet_path
  FROM ALL_DIRECTORIES
  WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
  v_webpage_sql := 'SELECT UTL_HTTP.REQUEST(''secret.encrypted-website.com'', '''', ''file:/'
                   || l_wallet_path || ''') FROM DUAL';
  DBMS_OUTPUT.PUT_LINE(v_webpage_sql);
  EXECUTE IMMEDIATE v_webpage_sql INTO v_results;
  DBMS_OUTPUT.PUT_LINE(v_results);
END;
/
6. (Optional) Find the file system location of your Oracle wallet directory.
Use the output from the previous command to make an HTTP request. For example, if the directory
is rdsdbdata/userdirs/01, run the following query.
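Sketches of both queries, assuming the directory object WALLET_DIR and the sample path from the text:

```sql
-- Find the file system location of the wallet directory.
SELECT DIRECTORY_PATH FROM ALL_DIRECTORIES WHERE DIRECTORY_NAME = 'WALLET_DIR';

-- Use the returned path in the request, for example rdsdbdata/userdirs/01.
SELECT UTL_HTTP.REQUEST('https://secret.encrypted-website.com', '',
                        'file:/rdsdbdata/userdirs/01') FROM DUAL;
```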
Working with CDBs
Topics
• Overview of RDS for Oracle CDBs (p. 1840)
• Configuring an RDS for Oracle CDB (p. 1841)
• Backing up and restoring a CDB (p. 1844)
• Converting an RDS for Oracle non-CDB to a CDB (p. 1844)
• Upgrading your CDB (p. 1846)
Starting with Oracle Database 21c, all databases are CDBs. If your DB instance runs Oracle Database 19c,
you can create either a CDB or a non-CDB. A non-CDB uses the traditional Oracle database architecture
and can't contain PDBs. You can convert an Oracle Database 19c non-CDB to a CDB, but you can't
convert a CDB to a non-CDB. You can only upgrade a CDB to a CDB.
Topics
• Single-tenant configuration (p. 1840)
• Creation and conversion options in a CDB (p. 1840)
• User accounts and privileges in a CDB (p. 1841)
• Parameter group families in a CDB (p. 1841)
• PDB portability in a CDB (p. 1841)
Single-tenant configuration
RDS for Oracle supports the single-tenant configuration of the Oracle multitenant architecture. This
means that an RDS for Oracle DB instance can contain only one PDB. You name the PDB when you create
your DB instance. The CDB name defaults to RDSCDB and can't be changed.
In RDS for Oracle, your client application interacts with the PDB rather than the CDB. Your experience
with a PDB is mostly identical to your experience with a non-CDB. You use the same Amazon RDS APIs in
the single-tenant configuration as you do in the non-CDB architecture. You can't access the CDB itself.
Release               Architecture options   Conversion options                          Upgrade target
Oracle Database 19c   CDB or non-CDB         Non-CDB to CDB (April 2021 RU or higher)    21c CDB (from 19c CDB only)
As shown in the preceding table, you can't directly upgrade a non-CDB to a CDB in a new major version.
But you can convert an Oracle Database 19c non-CDB to an Oracle Database 19c CDB, and then upgrade
the Oracle Database 19c CDB to an Oracle Database 21c CDB. For more information, see Converting an
RDS for Oracle non-CDB to a CDB (p. 1844).
User accounts and privileges in a CDB

The RDS master user is a local user account in the PDB, which you name when you create your DB
instance. Any new user accounts that you create are also local users residing in the PDB. You
can't use any user account to create new PDBs or to modify the state of the existing PDB.
The rdsadmin user is a common user account. You can run RDS for Oracle packages that exist in this
account, but you can't log in as rdsadmin. For more information, see About Common Users and Local
Users in the Oracle documentation.
Parameter group families in a CDB

In the multitenant architecture, the supported parameter group families are the following:

• oracle-ee-cdb-21
• oracle-se2-cdb-21
• oracle-ee-cdb-19
• oracle-se2-cdb-19
You specify parameters at the CDB level rather than the PDB level. The PDB inherits parameter
settings from the CDB. For more information about setting parameters, see Working with parameter
groups (p. 347). For best practices, see Working with DB parameter groups (p. 297).
Topics
• Creating an RDS for Oracle CDB instance (p. 1842)
• Connecting to a PDB in your RDS for Oracle CDB (p. 1841)
Creating an RDS for Oracle CDB instance

Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the CDB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. In Choose a database creation method, select Standard Create.
6. In Engine options, choose Oracle.
7. For Database management type, choose Amazon RDS.
8. For Architecture settings, choose Multitenant architecture.
9. Choose the settings that you want based on the options listed in Settings for DB instances (p. 308).
Note the following:
• For Master username, enter the name for a local user in your PDB. You can't use the master
username to log in to the CDB root.
• For Initial database name, enter the name of your PDB. You can't name the CDB, which has the
default name RDSCDB.
10. Choose Create database.
AWS CLI
To create a DB instance by using the AWS CLI, call the create-db-instance command with the following
parameters:
• --db-instance-identifier
• --db-instance-class
• --engine { oracle-ee-cdb | oracle-se2-cdb }
• --master-username
• --master-user-password
• --allocated-storage
• --backup-retention-period
For information about each setting, see Settings for DB instances (p. 308).
The following example creates an RDS for Oracle DB instance named my-cdb-inst. The PDB is named
mypdb.
Example
For Linux, macOS, or Unix:

aws rds create-db-instance \
    --engine oracle-ee-cdb \
    --db-instance-identifier my-cdb-inst \
    --db-name mypdb \
    --allocated-storage 250 \
    --db-instance-class db.t3.large \
    --master-username pdb_admin \
    --master-user-password masteruserpassword \
    --backup-retention-period 3
For Windows:

aws rds create-db-instance ^
    --engine oracle-ee-cdb ^
    --db-instance-identifier my-cdb-inst ^
    --db-name mypdb ^
    --allocated-storage 250 ^
    --db-instance-class db.t3.large ^
    --master-username pdb_admin ^
    --master-user-password masteruserpassword ^
    --backup-retention-period 3
Note
Specify a password other than the prompt shown here as a security best practice.
{
"DBInstance": {
"DBInstanceIdentifier": "my-cdb-inst",
"DBInstanceClass": "db.t3.large",
"Engine": "oracle-ee-cdb",
"DBInstanceStatus": "creating",
"MasterUsername": "pdb_user",
"DBName": "MYPDB",
"AllocatedStorage": 250,
"PreferredBackupWindow": "04:59-05:29",
"BackupRetentionPeriod": 3,
"DBSecurityGroups": [],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-0a2bcd3e",
"Status": "active"
}
],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.oracle-ee-cdb-19",
"ParameterApplyStatus": "in-sync"
}
],
"DBSubnetGroup": {
"DBSubnetGroupName": "default",
"DBSubnetGroupDescription": "default",
"VpcId": "vpc-1234567a",
"SubnetGroupStatus": "Complete",
...
RDS API
To create a DB instance by using the Amazon RDS API, call the CreateDBInstance operation.
For information about each setting, see Settings for DB instances (p. 308).
Connecting to a PDB in your RDS for Oracle CDB
For information about finding the endpoint and port of your DB instance, see Finding the endpoint
of your RDS for Oracle DB instance (p. 1806).
sqlplus 'master_user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))
(CONNECT_DATA=(SID=pdb_name)))'
For Windows:
sqlplus master_user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))
(CONNECT_DATA=(SID=pdb_name)))
After you enter the password for the user, the SQL prompt appears.
SQL>
Note
The shorter format connection string (Easy connect or EZCONNECT), such as sqlplus
username/password@LONGER-THAN-63-CHARS-RDS-ENDPOINT-HERE:1521/database-
identifier, might encounter a maximum character limit and should not be used to connect.
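To avoid the Easy Connect length limit, you can build the long-format descriptor programmatically. The following Python sketch is illustrative only (the helper name and the sample endpoint are invented for this example; they are not part of any RDS API):

```python
def long_format_descriptor(user: str, endpoint: str, port: int, pdb_name: str) -> str:
    """Build the long-format Oracle connect string shown above.

    Easy Connect (user/password@host:port/service) can fail when the RDS
    endpoint exceeds the hostname length limit, so the descriptor form
    is safer for long RDS endpoints.
    """
    return (
        f"{user}@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)"
        f"(HOST={endpoint})(PORT={port}))"
        f"(CONNECT_DATA=(SID={pdb_name})))"
    )

# Pass the result to sqlplus as a single quoted argument.
conn = long_format_descriptor(
    "pdb_admin", "my-cdb-inst.123456789012.us-east-1.rds.amazonaws.com", 1521, "mypdb"
)
```

The returned string matches the sqlplus connect string format shown earlier in this section.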
Converting a non-CDB to a CDB
You can't upgrade the DB engine version and convert the database architecture in the same
operation. Therefore, to upgrade an Oracle Database 19c non-CDB to an Oracle Database 21c CDB, you
first need to convert the non-CDB to a CDB, and then upgrade the 19c CDB to a 21c CDB. When you
convert a non-CDB to a CDB, note the following requirements and limitations:
• Make sure that you specify oracle-ee-cdb or oracle-se2-cdb for the engine type. These are the
only supported values.
• Make sure that your DB engine runs Oracle Database 19c with an April 2021 or later RU.
• You can't convert a CDB to a non-CDB. You can only convert a non-CDB to a CDB.
• You can't convert a primary or replica database that has Oracle Data Guard enabled.
• You can't upgrade the DB engine version and convert a non-CDB to a CDB in the same operation.
• The considerations for option and parameter groups are the same as for upgrading the DB engine. For
more information, see Considerations for Oracle DB upgrades (p. 2108).
Console
To convert a non-CDB to a CDB
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where your DB
instance resides.
3. In the navigation pane, choose Databases, and then choose the non-CDB instance that you want to
convert to a CDB instance.
4. Choose Modify.
5. For Architecture settings, select Multitenant architecture.
6. (Optional) For DB parameter group, choose a new parameter group for your CDB instance. The
same parameter group considerations apply when converting a DB instance as when upgrading a DB
instance. For more information, see Parameter group considerations (p. 2109).
7. (Optional) For Option group, choose a new option group for your CDB instance. The same option
group considerations apply when converting a DB instance as when upgrading a DB instance. For
more information, see Option group considerations (p. 2109).
8. When all the changes are as you want them, choose Continue and check the summary of
modifications.
9. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option
can cause downtime in some cases. For more information, see Using the Apply Immediately
setting (p. 402).
10. On the confirmation page, review your changes. If they are correct, choose Modify DB instance.
AWS CLI
To convert the non-CDB on your DB instance to a CDB, set --engine to oracle-ee-cdb or
oracle-se2-cdb in the AWS CLI command modify-db-instance.
The following example converts the DB instance named my-non-cdb and specifies a custom option
group and parameter group.
Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-non-cdb \
    --engine oracle-ee-cdb \
    --option-group-name my-custom-option-group \
    --db-parameter-group-name my-custom-parameter-group

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-non-cdb ^
    --engine oracle-ee-cdb ^
    --option-group-name my-custom-option-group ^
    --db-parameter-group-name my-custom-parameter-group

RDS API

To convert a non-CDB to a CDB, specify Engine in the RDS API operation ModifyDBInstance.

Upgrading your CDB
The procedure for upgrading a CDB to a CDB is the same as for upgrading a non-CDB to a non-CDB. For
more information, see Upgrading the RDS for Oracle DB engine (p. 2103).
Administering your Oracle DB
The following tasks are common to all RDS databases, but Oracle has special considerations. For
example, you connect to an Oracle database by using Oracle clients such as SQL*Plus and SQL Developer.
Following, you can find a description of Amazon RDS–specific implementations of common DBA tasks
for RDS for Oracle. To deliver a managed service experience, Amazon RDS doesn't provide shell access to
DB instances. Also, RDS restricts access to certain system procedures and tables that require advanced
privileges. In many of the tasks, you run the rdsadmin package, which is an Amazon RDS–specific tool
that enables you to administer your database.
The following are common DBA tasks for DB instances running Oracle:
• Setting up a custom DNS server (p. 1865)
• Resizing and autoextending data files. Amazon RDS method: the rdsadmin.rdsadmin_util.resize_datafile or rdsadmin.rdsadmin_util.autoextend_datafile procedure. Oracle method: none.
• Downloading archived redo logs from Amazon S3 (p. 1895). Amazon RDS method: rdsadmin.rdsadmin_archive_log_downl
• Accessing online and archived redo logs (p. 1894). Amazon RDS method: rdsadmin.rdsadmin_master_util.creat
• Enabling and disabling block change tracking (p. 1903). Amazon RDS method: rdsadmin_rman_util.procedure
• Setting the time zone for Oracle Scheduler jobs (p. 1916). Amazon RDS method: dbms_scheduler.set_scheduler_attrib. Oracle method: dbms_scheduler.set_scheduler_attrib
• Turning off Oracle Scheduler jobs owned by SYS (p. 1917). Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.di. Oracle method: dbms_scheduler.disable
• Turning on Oracle Scheduler jobs owned by SYS (p. 1917). Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.en. Oracle method: dbms_scheduler.enable
• Modifying the Oracle Scheduler repeat interval for jobs of CALENDAR type (p. 1918). Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.se. Oracle method: dbms_scheduler.set_attribute
• Modifying the Oracle Scheduler repeat interval for jobs of NAMED type (p. 1918). Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.se. Oracle method: dbms_scheduler.set_attribute
• Creating and dropping directories in the main data storage space (p. 1926). Amazon RDS method: rdsadmin.rdsadmin_util.create_direc. Oracle method: none.
• Setting parameters for advisor tasks (p. 1930). Amazon RDS method: rdsadmin.rdsadmin_util.advisor_task. Oracle method: none.

You can also use Amazon RDS procedures for Amazon S3 integration with Oracle and for running OEM
Management Agent database tasks. For more information, see Amazon S3 integration (p. 1992) and
Performing database tasks with the Management Agent (p. 2045).

System tasks
Topics
• Disconnecting a session (p. 1855)
• Terminating a session (p. 1856)
• Canceling a SQL statement in a session (p. 1857)
• Enabling and disabling restricted sessions (p. 1858)
• Flushing the shared pool (p. 1858)
• Flushing the buffer cache (p. 1859)
• Flushing the database smart flash cache (p. 1859)
• Granting SELECT or EXECUTE privileges to SYS objects (p. 1859)
• Revoking SELECT or EXECUTE privileges on SYS objects (p. 1861)
• Granting privileges to non-master users (p. 1861)
• Creating custom functions to verify passwords (p. 1862)
• Setting up a custom DNS server (p. 1865)
• Setting and unsetting system diagnostic events (p. 1866)
Disconnecting a session
To disconnect the current session by ending the dedicated server process, use the Amazon RDS
procedure rdsadmin.rdsadmin_util.disconnect. The disconnect procedure takes two parameters:
sid (the session identifier) and serial (the session serial number).
begin
rdsadmin.rdsadmin_util.disconnect(
sid => sid,
serial => serial_number);
end;
/
To get the session identifier and the session serial number, query the V$SESSION view. The following
example gets all sessions for the user AWSUSER.

SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE USERNAME = 'AWSUSER';
The database must be open to use this method. For more information about disconnecting a session, see
ALTER SYSTEM in the Oracle documentation.
Terminating a session
To terminate a session, use the Amazon RDS procedure rdsadmin.rdsadmin_util.kill. The kill
procedure takes the parameters sid (the session identifier), serial (the session serial number),
and method, which accepts IMMEDIATE or PROCESS.
To get the session identifier and the session serial number, query the V$SESSION view. The following
example gets all sessions for the user AWSUSER.

SELECT SID, SERIAL# FROM V$SESSION WHERE USERNAME = 'AWSUSER';

The following example terminates a session using the IMMEDIATE method.
BEGIN
rdsadmin.rdsadmin_util.kill(
sid => sid,
serial => serial_number,
method => 'IMMEDIATE');
END;
/
The following example uses the PROCESS method to terminate the server process for a session.
BEGIN
rdsadmin.rdsadmin_util.kill(
sid => sid,
serial => serial_number,
method => 'PROCESS');
END;
/
Canceling a SQL statement in a session

To cancel a SQL statement in a session, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.cancel. The cancel procedure takes the parameters sid, serial, and sql_id.
begin
rdsadmin.rdsadmin_util.cancel(
sid => sid,
serial => serial_number,
sql_id => sql_id);
end;
/
To get the session identifier, the session serial number, and the SQL identifier of a SQL statement, query
the V$SESSION view. The following example gets all sessions and SQL identifiers for the user AWSUSER.
select SID, SERIAL#, SQL_ID, STATUS from V$SESSION where USERNAME = 'AWSUSER';
Enabling and disabling restricted sessions

To enable and disable restricted sessions, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.restricted_session. The following example shows how to enable and disable
restricted sessions.

/* Verify that the database is currently unrestricted. */
SELECT LOGINS FROM V$INSTANCE;

LOGINS
-------
ALLOWED

/* Enable restricted sessions. */
EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => true);

/* Verify that the database is now restricted. */
SELECT LOGINS FROM V$INSTANCE;

LOGINS
----------
RESTRICTED

/* Disable restricted sessions. */
EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => false);

/* Verify that the database is unrestricted again. */
SELECT LOGINS FROM V$INSTANCE;

LOGINS
-------
ALLOWED
Flushing the shared pool

To flush the shared pool, use the Amazon RDS procedure rdsadmin.rdsadmin_util.flush_shared_pool.
The flush_shared_pool procedure has no parameters.

EXEC rdsadmin.rdsadmin_util.flush_shared_pool;
Flushing the buffer cache

To flush the buffer cache, use the Amazon RDS procedure rdsadmin.rdsadmin_util.flush_buffer_cache.
The flush_buffer_cache procedure has no parameters.

EXEC rdsadmin.rdsadmin_util.flush_buffer_cache;
Flushing the database smart flash cache

To flush the database smart flash cache, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.flush_flash_cache. The flush_flash_cache procedure has no parameters.

EXEC rdsadmin.rdsadmin_util.flush_flash_cache;
For more information about using the database smart flash cache with RDS for Oracle, see Storing
temporary data in an RDS for Oracle instance store (p. 1936).
Granting SELECT or EXECUTE privileges to SYS objects

To grant privileges on a SYS object, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.grant_sys_object. The following example grants SELECT privileges on an
object named V_$SESSION to a user named USER1.
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$SESSION',
p_grantee => 'USER1',
p_privilege => 'SELECT');
end;
/
The following example grants select privileges on an object named V_$SESSION to a user named USER1
with the grant option.
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$SESSION',
p_grantee => 'USER1',
p_privilege => 'SELECT',
p_grant_option => true);
end;
/
To be able to grant privileges on an object, your account must have those privileges granted to it directly
with the grant option, or via a role granted using with admin option. In the most common case, you
may want to grant SELECT on a DBA view that has been granted to the SELECT_CATALOG_ROLE role. If
that role isn't already directly granted to your user using with admin option, then you can't transfer
the privilege. If you have the DBA privilege, then you can grant the role directly to another user.
Objects already granted to PUBLIC do not need to be re-granted. If you use the grant_sys_object
procedure to re-grant access, the procedure call succeeds.
Revoking SELECT or EXECUTE privileges on SYS objects

To revoke privileges on a SYS object, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.revoke_sys_object. The following example revokes SELECT privileges on an
object named V_$SESSION from a user named USER1.
begin
rdsadmin.rdsadmin_util.revoke_sys_object(
p_obj_name => 'V_$SESSION',
p_revokee => 'USER1',
p_privilege => 'SELECT');
end;
/
You can grant EXECUTE privileges for many objects in the SYS schema by using the
EXECUTE_CATALOG_ROLE role. The EXECUTE_CATALOG_ROLE role gives users EXECUTE privileges
for packages and procedures in the data dictionary. The following example grants the role
EXECUTE_CATALOG_ROLE to a user named user1.

GRANT EXECUTE_CATALOG_ROLE TO user1;
The following example gets the permissions that the roles SELECT_CATALOG_ROLE and
EXECUTE_CATALOG_ROLE allow.
SELECT *
FROM ROLE_TAB_PRIVS
WHERE ROLE IN ('SELECT_CATALOG_ROLE','EXECUTE_CATALOG_ROLE')
ORDER BY ROLE, TABLE_NAME ASC;
Granting privileges to non-master users

The following example creates a non-master user named user1, grants the CREATE SESSION privilege,
and grants the SELECT privilege on a table named sh.sales.

CREATE USER user1 IDENTIFIED BY password;  -- replace password with a secure password
GRANT CREATE SESSION TO user1;
GRANT SELECT ON sh.sales TO user1;
Creating custom functions to verify passwords

You can create a custom password verification function in the following ways:

• To use standard verification logic, and to store your function in the SYS schema, use the
  create_verify_function procedure.
• To use custom verification logic, or to avoid storing your function in the SYS schema, use the
  create_passthrough_verify_fcn procedure.
The create_verify_function procedure accepts optional parameters such as
p_disallow_simple_strings (boolean, default true): set this parameter to true to disallow simple
strings as the password.
There are restrictions on the name of your custom function. Your custom function can't have the same
name as an existing system object. The name can be no more than 30 characters long. Also, the name
must include one of the following strings: PASSWORD, VERIFY, COMPLEXITY, ENFORCE, or STRENGTH.
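As a quick illustration of these naming rules, the following Python function applies the same checks. This is a sketch only, not part of the RDS API; the uniqueness check against existing system objects can only be done inside the database, so it is omitted:

```python
REQUIRED_SUBSTRINGS = ("PASSWORD", "VERIFY", "COMPLEXITY", "ENFORCE", "STRENGTH")

def is_valid_verify_function_name(name: str) -> bool:
    """Check the documented naming rules for a custom verification function:
    at most 30 characters, containing at least one required substring."""
    upper = name.upper()
    return len(name) <= 30 and any(s in upper for s in REQUIRED_SUBSTRINGS)
```

For example, CUSTOM_PASSWORD_FUNCTION passes both checks, while a name without one of the required strings fails.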
The following example creates a function named CUSTOM_PASSWORD_FUNCTION. The function requires
that a password has at least 12 characters, 2 uppercase characters, 1 digit, and 1 special character, and
that the password disallows the @ character.
begin
rdsadmin.rdsadmin_password_verify.create_verify_function(
p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION',
p_min_length => 12,
p_min_uppercase => 2,
p_min_digits => 1,
p_min_special => 1,
p_disallow_at_sign => true);
end;
/
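The policy configured above can be sketched outside the database as follows. This is illustrative Python, not the code that Oracle runs (the actual checks are performed by the function that create_verify_function generates, and Oracle's definition of a special character may differ):

```python
import string

def satisfies_example_policy(password: str) -> bool:
    """Mirror the checks configured above: at least 12 characters,
    2 uppercase letters, 1 digit, 1 special character, and no @ sign."""
    # Assume "special" means ASCII punctuation; @ is disallowed outright.
    specials = set(string.punctuation) - {"@"}
    return (
        len(password) >= 12
        and sum(c.isupper() for c in password) >= 2
        and sum(c.isdigit() for c in password) >= 1
        and sum(c in specials for c in password) >= 1
        and "@" not in password
    )
```

A password such as SuperSecret#2024AB satisfies every rule, while any password containing @ is rejected regardless of its other properties.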
To see the text of your verification function, query DBA_SOURCE. The following example gets the text of
a custom password function named CUSTOM_PASSWORD_FUNCTION.
SELECT TEXT
FROM DBA_SOURCE
WHERE OWNER = 'SYS'
AND NAME = 'CUSTOM_PASSWORD_FUNCTION'
ORDER BY LINE;
To associate your verification function with a user profile, use ALTER PROFILE. The following
example associates a verification function with the DEFAULT user profile.

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_FUNCTION;
To see which user profiles are associated with which verification functions, query DBA_PROFILES.
The following example gets the profiles that are associated with the custom verification function
named CUSTOM_PASSWORD_FUNCTION.

SELECT * FROM DBA_PROFILES
WHERE RESOURCE_NAME = 'PASSWORD_VERIFY_FUNCTION'
AND LIMIT = 'CUSTOM_PASSWORD_FUNCTION';
The following example gets all profiles and the password verification functions that they are
associated with.

SELECT PROFILE, LIMIT FROM DBA_PROFILES
WHERE RESOURCE_NAME = 'PASSWORD_VERIFY_FUNCTION';
You can create a custom function to verify passwords by using the Amazon RDS procedure
rdsadmin.rdsadmin_password_verify.create_passthrough_verify_fcn. The
create_passthrough_verify_fcn procedure has the following parameters.
The following example creates a password verification function that uses the logic from the function
named PASSWORD_LOGIC_EXTRA_STRONG.
begin
rdsadmin.rdsadmin_password_verify.create_passthrough_verify_fcn(
p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION',
p_target_owner => 'TEST_USER',
p_target_function_name => 'PASSWORD_LOGIC_EXTRA_STRONG');
end;
/
To associate the verification function with a user profile, use ALTER PROFILE. The following
example associates the verification function with the DEFAULT user profile.

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_FUNCTION;
Setting up a custom DNS server

Amazon RDS for Oracle supports Domain Name System (DNS) resolution from a custom DNS server that
you own. You can resolve only fully qualified domain names from your Amazon RDS DB instance
through your custom DNS server.
After you set up your custom DNS name server, it takes up to 30 minutes to propagate the changes to
your DB instance. After the changes are propagated to your DB instance, all outbound network traffic
requiring a DNS lookup queries your DNS server over port 53.
To set up a custom DNS server for your Amazon RDS for Oracle DB instance, do the following:
• From the DHCP options set attached to your virtual private cloud (VPC), set the domain-name-
servers option to the IP address of your DNS name server. For more information, see DHCP options
sets.
Note
The domain-name-servers option accepts up to four values, but your Amazon RDS DB
instance uses only the first value.
• Ensure that your DNS server can resolve all lookup queries, including public DNS names, Amazon EC2
private DNS names, and customer-specific DNS names. If the outbound network traffic contains any
DNS lookups that your DNS server can't handle, your DNS server must have appropriate upstream DNS
providers configured.
• Configure your DNS server to produce User Datagram Protocol (UDP) responses of 512 bytes or less.
• Configure your DNS server to produce Transmission Control Protocol (TCP) responses of 1024 bytes or
less.
• Configure your DNS server to allow inbound traffic from your Amazon RDS DB instances over port 53.
If your DNS server is in an Amazon VPC, the VPC must have a security group that contains inbound
rules that permit UDP and TCP traffic on port 53. If your DNS server is not in an Amazon VPC, it must
have appropriate firewall allow-listing to permit UDP and TCP inbound traffic on port 53.
For more information, see Security groups for your VPC and Adding and removing rules.
• Configure the VPC of your Amazon RDS DB instance to allow outbound traffic over port 53. Your VPC
must have a security group that contains outbound rules that allow UDP and TCP traffic on port 53.
For more information, see Security groups for your VPC and Adding and removing rules.
• The routing path between the Amazon RDS DB instance and the DNS server has to be configured
correctly to allow DNS traffic.
• If the Amazon RDS DB instance and the DNS server are not in the same VPC, a peering connection
has to be set up between them. For more information, see What is VPC peering?
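The response-size and port requirements above can be checked with a small helper. This is an illustrative Python sketch, not an AWS tool; the parameter names are invented for the example:

```python
def check_dns_config(udp_response_bytes: int, tcp_response_bytes: int, port: int) -> list[str]:
    """Flag violations of the documented custom DNS server requirements."""
    problems = []
    if udp_response_bytes > 512:
        problems.append("UDP responses must be 512 bytes or less")
    if tcp_response_bytes > 1024:
        problems.append("TCP responses must be 1024 bytes or less")
    if port != 53:
        # Inbound and outbound DNS traffic must be allowed on port 53.
        problems.append("DNS traffic must use port 53")
    return problems
```

An empty result means the configuration satisfies the documented limits.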
Setting and unsetting system diagnostic events

You can set and unset system diagnostic events in the following Amazon RDS for Oracle DB engine
versions:

• 19.0.0.0.ru-2020-10.rur-2020-10.r1 and higher Oracle Database 19c versions

For more information, see Version 19.0.0.0.ru-2020-10.rur-2020-10.r1 in the Amazon RDS for Oracle
Release Notes.

• 12.2.0.1.ru-2020-10.rur-2020-10.r1 and higher Oracle Database 12c Release 2 (12.2.0.1) versions

For more information, see Version 12.2.0.1.ru-2020-10.rur-2020-10.r1 in the Amazon RDS for Oracle
Release Notes.

• 12.1.0.2.V22 and higher Oracle Database 12c Release 1 (12.1.0.2) versions

For more information, see Version 12.1.0.2.v22 in the Amazon RDS for Oracle Release Notes.
Important
Internally, the rdsadmin.rdsadmin_util package sets events by using the ALTER SYSTEM
SET EVENTS statement. This ALTER SYSTEM statement isn't documented in the Oracle
Database documentation. Some system diagnostic events can generate large amounts of
tracing information, cause contention, or affect database availability. We recommend that you
test specific diagnostic events in your nonproduction database, and only set events in your
production database under guidance of Oracle Support.
The following example lists all system events that you can set.
SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.list_allowed_system_events;
The following sample output lists event numbers and their descriptions. Use the Amazon RDS procedures
set_system_event to set these events and unset_system_event to unset them.
Note
The list of the allowed system events can change over time. To
make sure that you have the most recent list of eligible events, use
rdsadmin.rdsadmin_util.list_allowed_system_events.
The procedure set_system_event constructs and runs the required ALTER SYSTEM SET EVENTS
statements.
The following example sets event 942 at level 3, and event 10442 at level 10.
SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.list_set_system_events;
The output shows the list of events, the event type, the level at which each event is currently
set, and the time when the event was set.
The following example unsets events 942 and 10442.
Database tasks

Topics
• Changing the global name of a database (p. 1870)
• Creating and sizing tablespaces (p. 1870)
• Setting the default tablespace (p. 1871)
• Setting the default temporary tablespace (p. 1871)
• Creating a temporary tablespace on the instance store (p. 1871)
• Adding a tempfile to the instance store on a read replica (p. 1872)
• Dropping tempfiles on a read replica (p. 1872)
• Checkpointing a database (p. 1873)
• Setting distributed recovery (p. 1873)
• Setting the database time zone (p. 1873)
• Working with Oracle external tables (p. 1874)
• Generating performance reports with Automatic Workload Repository (AWR) (p. 1875)
• Adjusting database links for use with DB instances in a VPC (p. 1879)
• Setting the default edition for a DB instance (p. 1879)
• Enabling auditing for the SYS.AUD$ table (p. 1880)
• Disabling auditing for the SYS.AUD$ table (p. 1880)
• Cleaning up interrupted online index builds (p. 1881)
Changing the global name of a database

To change the global name of a database, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.rename_global_name. The database must be open for the name change to
occur. For more information about changing the global name of a database, see ALTER DATABASE in
the Oracle documentation.
Creating and sizing tablespaces

By default, if you don't specify a data file size, tablespaces are created with the default of
AUTOEXTEND ON, and no maximum size. In the following example, the tablespace users1 is
autoextensible.

CREATE TABLESPACE users1;
Because of these default settings, tablespaces can grow to consume all allocated storage. We
recommend that you specify an appropriate maximum size on permanent and temporary tablespaces,
and that you carefully monitor space usage.
The following example creates a tablespace named users2 with a starting size of 1 gigabyte. Because a
data file size is specified, but AUTOEXTEND ON isn't specified, the tablespace isn't autoextensible.

CREATE TABLESPACE users2 DATAFILE SIZE 1G;
The following example creates a tablespace named users3 with a starting size of 1 gigabyte,
autoextend turned on, and a maximum size of 10 gigabytes.

CREATE TABLESPACE users3 DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 10G;
We recommend that you don't use smallfile tablespaces because you can't resize smallfile tablespaces
with RDS for Oracle. However, you can add a data file to a smallfile tablespace. To determine whether a
tablespace is bigfile or smallfile, query DBA_TABLESPACES as follows.

SELECT TABLESPACE_NAME, BIGFILE FROM DBA_TABLESPACES;
You can resize a bigfile tablespace by using ALTER TABLESPACE. You can specify the size in kilobytes
(K), megabytes (M), gigabytes (G), or terabytes (T). The following example resizes a bigfile tablespace
named users_bf to 200 MB.

ALTER TABLESPACE users_bf RESIZE 200M;
The following example adds an additional data file to a smallfile tablespace named users_sf.
ALTER TABLESPACE users_sf ADD DATAFILE SIZE 100000M AUTOEXTEND ON NEXT 250m
MAXSIZE UNLIMITED;
Creating a temporary tablespace on the instance store

To create a temporary tablespace on the instance store, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace. The following example creates the
temporary tablespace temp01 in the instance store.

EXEC rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace(p_tablespace_name => 'temp01');
Important
When you run rdsadmin_util.create_inst_store_tmp_tblspace, the newly created
temporary tablespace is not automatically set as the default temporary tablespace. To set it as
the default, see Setting the default temporary tablespace (p. 1871).
For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).
Adding a tempfile to the instance store on a read replica

A temporary tablespace on your read replica might be empty in the following cases:

• You dropped a tempfile from the tablespace on your read replica. For more information, see Dropping
  tempfiles on a read replica (p. 1872).
• You created a new temporary tablespace on the primary DB instance. In this case, RDS for Oracle
synchronizes the metadata to the read replica.
You can add a tempfile to the empty temporary tablespace, and store the tempfile in the
instance store. To create a tempfile in the instance store, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.add_inst_store_tempfile. You can use this procedure only on a read
replica. The procedure has the following parameters.
In the following example, the empty temporary tablespace temp01 exists on your read replica. Run the
following command to create a tempfile for this tablespace, and store it in the instance store.

EXEC rdsadmin.rdsadmin_util.add_inst_store_tempfile(p_tablespace_name => 'temp01');
For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).
Dropping tempfiles on a read replica

To change the storage for a temporary tablespace on a read replica, do the following:

1. Drop the current tempfiles in the temporary tablespace on the read replica.
2. Create new tempfiles on different storage.
Assume that a temporary tablespace named temp01 resides in the instance store on your read replica.
Drop all tempfiles in this tablespace by running the following command.
For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).
Checkpointing a database
To checkpoint the database, use the Amazon RDS procedure rdsadmin.rdsadmin_util.checkpoint.
The checkpoint procedure has no parameters.
EXEC rdsadmin.rdsadmin_util.checkpoint;
Setting distributed recovery

To enable distributed recovery, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.enable_distr_recovery.

EXEC rdsadmin.rdsadmin_util.enable_distr_recovery;

To disable distributed recovery, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.disable_distr_recovery.

EXEC rdsadmin.rdsadmin_util.disable_distr_recovery;
Setting the database time zone

You can set the time zone of your database in the following ways:

• The Timezone option

The Timezone option changes the time zone at the host level and affects all date columns and values
such as SYSDATE. For more information, see Oracle time zone (p. 2087).
• The Amazon RDS procedure rdsadmin.rdsadmin_util.alter_db_time_zone
The alter_db_time_zone procedure changes the time zone for only certain data types, and doesn't
change SYSDATE. There are additional restrictions on setting the time zone listed in the Oracle
documentation.
Note
You can also set the default time zone for Oracle Scheduler. For more information, see Setting
the time zone for Oracle Scheduler jobs (p. 1916).
The following example changes the time zone to UTC plus three hours.

EXEC rdsadmin.rdsadmin_util.alter_db_time_zone(p_new_tz => '+3:00');
The following example changes the time zone to the Africa/Algiers time zone.

EXEC rdsadmin.rdsadmin_util.alter_db_time_zone(p_new_tz => 'Africa/Algiers');
After you alter the time zone by using the alter_db_time_zone procedure, reboot your DB instance
for the change to take effect. For more information, see Rebooting a DB instance (p. 436). For
information about upgrading time zones, see Time zone considerations (p. 2110).
Working with Oracle external tables

With Amazon RDS, you can store external table files in directory objects. You can create a directory
object, or you can use one that is predefined in the Oracle database, such as the DATA_PUMP_DIR
directory. For information about creating directory objects, see Creating and dropping directories in the
main data storage space (p. 1926). You can query the ALL_DIRECTORIES view to list the directory objects
for your Amazon RDS Oracle DB instance.
Note
Directory objects point to the main data storage space (Amazon EBS volume) used by your
instance. The space used—along with data files, redo logs, audit, trace, and other files—counts
against allocated storage.
You can move an external data file from one Oracle database to another by using the
DBMS_FILE_TRANSFER package or the UTL_FILE package. The external data file is moved from a
directory on the source database to the specified directory on the destination database. For information
about using DBMS_FILE_TRANSFER, see Importing using Oracle Data Pump (p. 1948).
After you move the external data file, you can create an external table with it. The following example
creates an external table that uses the emp_xt_file1.txt file in the USER_DIR1 directory.
CREATE TABLE emp_xt (
  emp_id     NUMBER,
  first_name VARCHAR2(50),
  last_name  VARCHAR2(50),
  user_name  VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY USER_DIR1
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(emp_id,first_name,last_name,user_name)
)
LOCATION ('emp_xt_file1.txt')
)
PARALLEL
REJECT LIMIT UNLIMITED;
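To see what the access parameters above expect of emp_xt_file1.txt, here is a short sketch that parses one record the way ORACLE_LOADER is configured to: records delimited by newlines, fields terminated by commas, and missing fields treated as NULL. The sample data is hypothetical, and the helper is illustrative Python, not part of Oracle or RDS:

```python
FIELDS = ("emp_id", "first_name", "last_name", "user_name")

def parse_record(line: str) -> dict:
    """Split one newline-delimited record on commas, mapping missing
    trailing fields to None (MISSING FIELD VALUES ARE NULL)."""
    values = line.rstrip("\n").split(",")
    # Pad with None so absent fields behave like NULL columns.
    values += [None] * (len(FIELDS) - len(values))
    return dict(zip(FIELDS, values))

record = parse_record("100,John,Smith,jsmith\n")
```

A record with fewer than four fields simply yields NULLs for the trailing columns, matching the MISSING FIELD VALUES ARE NULL clause.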
Suppose that you want to move data that is in an Amazon RDS Oracle DB instance into an external data
file. In this case, you can populate the external data file by creating an external table and selecting the
data from the table in the database. For example, the following SQL statement creates the orders_xt
external table by querying the orders table in the database.

CREATE TABLE orders_xt
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY DATA_PUMP_DIR
    LOCATION ('orders_xt.dmp')
  )
AS SELECT * FROM orders;
In this example, the data is populated in the orders_xt.dmp file in the DATA_PUMP_DIR directory.
Generating performance reports with Automatic Workload Repository (AWR)

Using the Amazon RDS package rdsadmin.rdsadmin_diagnostic_util, you can generate the following
performance reports and files:

• AWR reports
• Active Session History (ASH) reports
• Automatic Database Diagnostic Monitor (ADDM) reports
• Oracle Data Pump Export dump files of AWR data
You can use the rdsadmin_diagnostic_util procedures in supported Amazon RDS for Oracle DB
engine versions.
For a blog that explains how to work with diagnostic reports in a replication scenario, see Generate AWR
reports for Amazon RDS for Oracle read replicas.
dump_directory (VARCHAR2, optional, default BDUMP): The directory to write the report or export file
to. If you specify a nondefault directory, the user that runs the rdsadmin_diagnostic_util procedures
must have write permissions for the directory.
p_tag (VARCHAR2, optional): A string that can be used to distinguish between backups to indicate the
purpose or usage of backups, such as incremental or daily.
report_type (VARCHAR2, optional, default HTML): The format of the report. Valid values are TEXT and
HTML.
You typically use the following parameters when managing ASH with the rdsadmin_diagnostic_util
package.
slot_width (NUMBER, optional, default 0): The duration of the slots (in seconds) used in the "Top
Activity" section of the ASH report. If this parameter isn't specified, the time interval between
begin_time and end_time uses no more than 10 slots.
The following example generates an AWR report for the snapshot range 101–106. The output text file is
named awrrpt_101_106.txt. You can access this report from the AWS Management Console.
EXEC rdsadmin.rdsadmin_diagnostic_util.awr_report(101,106,'TEXT');
The following example generates an HTML report for the snapshot range 63–65. The output HTML file
is named awrrpt_63_65.html. The procedure writes the report to the nondefault database directory
named AWR_RPT_DUMP.
EXEC rdsadmin.rdsadmin_diagnostic_util.awr_report(63,65,'HTML','AWR_RPT_DUMP');
The following example extracts the snapshot range 101–106. The output dump file is named
awrextract_101_106.dmp. You can access this file through the console.
EXEC rdsadmin.rdsadmin_diagnostic_util.awr_extract(101,106);
The following example extracts the snapshot range 63–65. The output dump file is named
awrextract_63_65.dmp. The file is stored in the nondefault database directory named
AWR_RPT_DUMP.
EXEC rdsadmin.rdsadmin_diagnostic_util.awr_extract(63,65,'AWR_RPT_DUMP');
The following example generates an ADDM report for the snapshot range 101–106. The output text file
is named addmrpt_101_106.txt. You can access the report through the console.
EXEC rdsadmin.rdsadmin_diagnostic_util.addm_report(101,106);
The following example generates an ADDM report for the snapshot range 63–65. The output text
file is named addmrpt_63_65.txt. The file is stored in the nondefault database directory named
ADDM_RPT_DUMP.
EXEC rdsadmin.rdsadmin_diagnostic_util.addm_report(63,65,'ADDM_RPT_DUMP');
The following example generates an ASH report that includes the data from 14 minutes ago until the
current time. The name of the output file uses the format ashrpt_begin_time_end_time.txt, where
begin_time and end_time use the format YYYYMMDDHH24MISS. You can access the file through the
console.
BEGIN
rdsadmin.rdsadmin_diagnostic_util.ash_report(
begin_time => SYSDATE-14/1440,
end_time => SYSDATE,
report_type => 'TEXT');
END;
/
The following example generates an ASH report that includes the data from September 18, 2019,
at 6:07 PM through September 18, 2019, at 6:15 PM. The name of the output HTML report is
ashrpt_20190918180700_20190918181500.html. The report is stored in the nondefault database
directory named AWR_RPT_DUMP.
BEGIN
rdsadmin.rdsadmin_diagnostic_util.ash_report(
begin_time => TO_DATE('2019-09-18 18:07:00', 'YYYY-MM-DD HH24:MI:SS'),
end_time => TO_DATE('2019-09-18 18:15:00', 'YYYY-MM-DD HH24:MI:SS'),
report_type => 'html',
dump_directory => 'AWR_RPT_DUMP');
END;
/
The security group of each DB instance must allow ingress to and egress from the other DB instance. The
inbound and outbound rules can refer to security groups from the same VPC or a peered VPC. For more
information, see Updating your security groups to reference peered VPC security groups.
If you have configured a custom DNS server using the DHCP Option Sets in your VPC, your custom DNS
server must be able to resolve the name of the database link target. For more information, see Setting
up a custom DNS server (p. 1865).
For more information about using database links with Oracle Data Pump, see Importing using Oracle
Data Pump (p. 1948).
You can set the default edition of an Amazon RDS Oracle DB instance using the Amazon RDS procedure
rdsadmin.rdsadmin_util.alter_default_edition.
The following example sets the default edition for the Amazon RDS Oracle DB instance to RELEASE_V1.
EXEC rdsadmin.rdsadmin_util.alter_default_edition('RELEASE_V1');
The following example sets the default edition for the Amazon RDS Oracle DB instance back to the
Oracle default.
EXEC rdsadmin.rdsadmin_util.alter_default_edition('ORA$BASE');
For more information about Oracle edition-based redefinition, see About editions and edition-based
redefinition in the Oracle documentation.
Enabling auditing is supported for Oracle DB instances running the following versions:
Note
In a single-tenant CDB, the following operations work, but no customer-visible mechanism can
detect the current status of the operations. Auditing information isn't available from within the
PDB. For more information, see Limitations of a single-tenant CDB (p. 1805).
The following statement enables auditing of the SYS.AUD$ table.
EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table;
For more information, see AUDIT (traditional auditing) in the Oracle documentation.
The following statement disables auditing of the SYS.AUD$ table:
EXEC rdsadmin.rdsadmin_master_util.noaudit_all_sys_aud_table;
For more information, see NOAUDIT (traditional auditing) in the Oracle documentation.
Specify rdsadmin.rdsadmin_dbms_repair.lock_nowait to try to get a lock on the underlying object but
not retry if the lock fails.
declare
is_clean boolean;
begin
is_clean := rdsadmin.rdsadmin_dbms_repair.online_index_clean(
object_id => 1234567890,
wait_for_lock => rdsadmin.rdsadmin_dbms_repair.lock_nowait
);
end;
/
• rdsadmin.rdsadmin_dbms_repair.create_repair_table
• rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table
• rdsadmin.rdsadmin_dbms_repair.drop_repair_table
• rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table
• rdsadmin.rdsadmin_dbms_repair.purge_repair_table
• rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table
The following procedures take the same parameters as their counterparts in the DBMS_REPAIR package
for Oracle databases:
• rdsadmin.rdsadmin_dbms_repair.check_object
• rdsadmin.rdsadmin_dbms_repair.dump_orphan_keys
• rdsadmin.rdsadmin_dbms_repair.fix_corrupt_blocks
• rdsadmin.rdsadmin_dbms_repair.rebuild_freelists
• rdsadmin.rdsadmin_dbms_repair.segment_fix_status
• rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks
For more information about handling database corruption, see DBMS_REPAIR in the Oracle
documentation.
This example shows the basic workflow for responding to corrupt blocks. Your steps will depend on the
location and nature of your block corruption.
Important
Before attempting to repair corrupt blocks, review the DBMS_REPAIR documentation carefully.
1. Run the following procedures to create repair tables if they don't already exist.
EXEC rdsadmin.rdsadmin_dbms_repair.create_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table;
2. Run the following procedures to check for existing records and purge them if appropriate.
EXEC rdsadmin.rdsadmin_dbms_repair.purge_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table;
3. Run the check_object procedure to report corruption, using substitution variables for the owner and
table of the corrupt object.

SET SERVEROUTPUT ON
DECLARE v_num_corrupt INT;
BEGIN
  v_num_corrupt := 0;
  rdsadmin.rdsadmin_dbms_repair.check_object (
    schema_name   => '&corruptionOwner',
    object_name   => '&corruptionTable',
    corrupt_count => v_num_corrupt
  );
  dbms_output.put_line('number corrupt: ' || to_char(v_num_corrupt));
END;
/
The following query shows whether corruption skipping is currently enabled for the table.

SELECT SKIP_CORRUPT
FROM DBA_TABLES
WHERE OWNER = '&corruptionOwner'
AND TABLE_NAME = '&corruptionTable';
4. Use the skip_corrupt_blocks procedure to enable or disable corruption skipping for affected
tables. Depending on the situation, you may also need to extract data to a new table, and then drop
the table containing the corrupt block.
Run the following procedure to enable corruption skipping for affected tables.
begin
rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
schema_name => '&corruptionOwner',
object_name => '&corruptionTable',
object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
flags => rdsadmin.rdsadmin_dbms_repair.skip_flag);
end;
/
Run the following query to confirm that corruption skipping is enabled.

SELECT SKIP_CORRUPT
FROM DBA_TABLES
WHERE OWNER = '&corruptionOwner'
AND TABLE_NAME = '&corruptionTable';
Run the following procedure to disable corruption skipping for the table.

begin
rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
schema_name => '&corruptionOwner',
object_name => '&corruptionTable',
object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
flags => rdsadmin.rdsadmin_dbms_repair.noskip_flag);
end;
/
5. When you have completed all repair work, run the following procedures to drop the repair tables.
EXEC rdsadmin.rdsadmin_dbms_repair.drop_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table;
We recommend that you set an appropriate maximum size on permanent and temporary tablespaces, and
that you carefully monitor space usage.
• rdsadmin.rdsadmin_util.resize_datafile
• rdsadmin.rdsadmin_util.autoextend_datafile
The following example resizes data file 4 to 500 MB.

EXEC rdsadmin.rdsadmin_util.resize_datafile(4,'500M');
The following example turns off autoextension for data file 4. It also turns on autoextension for data file
5, with an increment of 128 MB and no maximum size.
EXEC rdsadmin.rdsadmin_util.autoextend_datafile(4,'OFF');
EXEC rdsadmin.rdsadmin_util.autoextend_datafile(5,'ON','128M','UNLIMITED');
• rdsadmin.rdsadmin_util.resize_temp_tablespace
• rdsadmin.rdsadmin_util.resize_tempfile
• rdsadmin.rdsadmin_util.autoextend_tempfile
The following examples resize a temporary tablespace named TEMP to the size of 4 GB.
EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','4G');
EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','4096000000');
The following example resizes a temporary tablespace based on the temp file with the file identifier 1 to
the size of 2 MB.
EXEC rdsadmin.rdsadmin_util.resize_tempfile(1,'2M');
The following example turns off autoextension for temp file 1. It also sets the maximum autoextension
size of temp file 2 to 10 GB, with an increment of 100 MB.
EXEC rdsadmin.rdsadmin_util.autoextend_tempfile(1,'OFF');
EXEC rdsadmin.rdsadmin_util.autoextend_tempfile(2,'ON','100M','10G');
For more information about read replicas for Oracle DB instances, see Working with read replicas for
Amazon RDS for Oracle (p. 1973).
To purge the entire recycle bin, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.purge_dba_recyclebin. However, this procedure can't purge the
recycle bin of SYS and RDSADMIN objects. If you need to purge these objects, contact AWS Support.
EXEC rdsadmin.rdsadmin_util.purge_dba_recyclebin;
The following example changes the default redacted value to * for the CHAR data type:

BEGIN
 rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val(
 p_char_val=>'*');
END;
/

The following example changes the default redacted values for the NUMBER, DATE, and VARCHAR data
types:

BEGIN
 rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val(
 p_number_val=>1,
 p_date_val=>to_date('1900-01-01','YYYY-MM-DD'),
 p_varchar_val=>'X');
END;
/
After you alter the default values for full redaction with the dbms_redact_upd_full_rdct_val
procedure, reboot your DB instance for the change to take effect. For more information, see Rebooting a
DB instance (p. 436).
For more information, see Oracle database log files (p. 924).
Topics
• Setting force logging (p. 1889)
• Setting supplemental logging (p. 1889)
• Switching online log files (p. 1890)
• Adding online redo logs (p. 1890)
• Dropping online redo logs (p. 1891)
• Resizing online redo logs (p. 1891)
• Retaining archived redo logs (p. 1893)
• Accessing online and archived redo logs (p. 1894)
• Downloading archived redo logs from Amazon S3 (p. 1895)
The following example adds supplemental logging.

begin
 rdsadmin.rdsadmin_util.alter_supplemental_logging(
 p_action => 'ADD');
end;
/
The following example enables supplemental logging for all fixed-length maximum size columns.
begin
rdsadmin.rdsadmin_util.alter_supplemental_logging(
p_action => 'ADD',
p_type => 'ALL');
end;
/
The following example enables supplemental logging for primary key columns.
begin
rdsadmin.rdsadmin_util.alter_supplemental_logging(
p_action => 'ADD',
p_type => 'PRIMARY KEY');
end;
/
EXEC rdsadmin.rdsadmin_util.switch_logfile;
You can drop only logs that have a status of UNUSED or INACTIVE. The following query gets the statuses
of the logs.

SELECT GROUP#, STATUS FROM V$LOG ORDER BY GROUP#;

The output resembles the following.

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE
4 UNUSED
EXEC rdsadmin.rdsadmin_util.switch_logfile;
EXEC rdsadmin.rdsadmin_util.checkpoint;
begin
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => '24');
end;
/
commit;
Note
The commit is required for the change to take effect.
To view how long archived redo logs are kept for your DB instance, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.show_configuration.
set serveroutput on
EXEC rdsadmin.rdsadmin_util.show_configuration;
The output shows the current setting for archivelog retention hours. The following output shows
that archived redo logs are kept for 48 hours.
Because the archived redo logs are retained on your DB instance, ensure that your DB instance has
enough allocated storage for the retained logs. To determine how much space your DB instance has used
in the last X hours, you can run the following query, replacing X with the number of hours.
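The query itself is not reproduced in this extract. A sketch that sums archived redo log volume from the standard V$ARCHIVED_LOG view, assuming X is 14 hours:

```sql
-- Estimate bytes of archived redo generated in the last 14 hours.
SELECT SUM(blocks * block_size) AS bytes
FROM   V$ARCHIVED_LOG
WHERE  first_time >= SYSDATE - 14/24;
```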
RDS for Oracle generates archived redo logs only when the backup retention period of your DB instance
is greater than zero. By default, the backup retention period is greater than zero.
When the archived log retention period expires, RDS for Oracle removes the archived redo logs from your
DB instance. To support restoring your DB instance to a point in time, Amazon RDS retains the archived
redo logs outside of your DB instance based on the backup retention period. To modify the backup
retention period, see Modifying an Amazon RDS DB instance (p. 401).
Note
In some cases, you might be using JDBC on Linux to download archived redo logs and
experience long latency times and connection resets. In such cases, the issues might be caused
by the default random number generator setting on your Java client. We recommend setting
your JDBC drivers to use a nonblocking random number generator.
1. Create directory objects that provide read-only access to the physical file paths.
You can read the files by using PL/SQL. For more information about reading files from directory
objects, see Listing files in a DB instance directory (p. 1927) and Reading files in a DB instance
directory (p. 1927).
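For example, listing the files exposed through a directory object can be sketched with the rdsadmin.rds_file_util package; the directory name ONLINELOG_DIR is an assumed example.

```sql
-- List files available through the ONLINELOG_DIR directory object.
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('ONLINELOG_DIR'));
```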
The following code creates directories that provide read-only access to your online and archived redo log
files:
Important
This code also revokes the DROP ANY DIRECTORY privilege.
EXEC rdsadmin.rdsadmin_master_util.create_archivelog_dir;
EXEC rdsadmin.rdsadmin_master_util.create_onlinelog_dir;
The following code drops the directories for your online and archived redo log files.
EXEC rdsadmin.rdsadmin_master_util.drop_archivelog_dir;
EXEC rdsadmin.rdsadmin_master_util.drop_onlinelog_dir;
The following code grants and revokes the DROP ANY DIRECTORY privilege.
EXEC rdsadmin.rdsadmin_master_util.revoke_drop_any_directory;
EXEC rdsadmin.rdsadmin_master_util.grant_drop_any_directory;
• Backup retention policy – Logs inside of this policy are available in Amazon S3. Logs outside of this
policy are removed.
• Archived log retention policy – Logs inside of this policy are available on your DB instance. Logs
outside of this policy are removed.
If logs aren't on your instance but are protected by your backup retention period, use
rdsadmin.rdsadmin_archive_log_download to download them again. RDS for Oracle saves the
logs to the /rdsdbdata/log/arch directory on your DB instance.
1. Configure your retention period to ensure your downloaded archived redo logs are retained for the
duration you need them. Make sure to COMMIT your change.
RDS retains your downloaded logs according to the archived log retention policy, starting from the
time the logs were downloaded. To learn how to set the retention policy, see Retaining archived redo
logs (p. 1893).
2. Wait up to 5 minutes for the archived log retention policy change to take effect.
For more information, see Downloading a single archived redo log (p. 1896) and Downloading a
series of archived redo logs (p. 1896).
Note
RDS automatically checks the available storage before downloading. If the requested logs
consume a high percentage of space, you receive an alert.
4. Confirm that the logs were downloaded from Amazon S3 successfully.
You can view the status of your download task in a bdump file. The bdump files have the path name
/rdsdbdata/log/trace/dbtask-task-id.log. In the preceding download step, you run a
SELECT statement that returns the task ID in a VARCHAR2 data type. For more information, see
similar examples in Monitoring the status of a file transfer (p. 2007).
The following example downloads the log with sequence number 20.
FROM DUAL;
Use the Amazon RDS package rdsadmin.rdsadmin_rman_util to perform RMAN backups of your
Amazon RDS for Oracle database to disk. The rdsadmin.rdsadmin_rman_util package supports full
and incremental database file backups, tablespace backups, and archived redo log backups.
After an RMAN backup has finished, you can copy the backup files off the Amazon RDS for Oracle DB
instance host. You might do this for the purpose of restoring to a non-RDS host or for long-term storage
of backups. For example, you can copy the backup files to an Amazon S3 bucket. For more information,
see Amazon S3 integration (p. 1992).
The backup files for RMAN backups remain on the Amazon RDS DB instance host until you remove them
manually. You can use the UTL_FILE.FREMOVE Oracle procedure to remove files from a directory. For
more information, see FREMOVE procedure in the Oracle Database documentation.
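A minimal cleanup sketch, assuming a directory object named MYDIRECTORY and a backup piece name (both hypothetical):

```sql
-- Remove a single RMAN backup piece from the backup directory.
EXEC UTL_FILE.FREMOVE('MYDIRECTORY', 'BACKUP-PIECE-1.bak');
```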
You can't use RMAN to restore RDS for Oracle DB instances. However, you can use RMAN to restore a
backup to an on-premises or Amazon EC2 instance. For more information, see the blog article Restore an
Amazon RDS for Oracle instance to a self-managed instance.
Note
For backing up and restoring to another Amazon RDS for Oracle DB instance, you can continue
to use the Amazon RDS backup and restore features. For more information, see Backing up and
restoring (p. 590).
Topics
• Prerequisites for RMAN backups (p. 1897)
• Common parameters for RMAN procedures (p. 1898)
• Validating DB instance files (p. 1900)
• Enabling and disabling block change tracking (p. 1903)
• Crosschecking archived redo logs (p. 1904)
• Backing up archived redo logs (p. 1905)
• Performing a full database backup (p. 1910)
• Performing an incremental database backup (p. 1911)
• Backing up a tablespace (p. 1912)
• Backing up a control file (p. 1913)
• Make sure that your RDS for Oracle database is in ARCHIVELOG mode. To enable this mode, set the
backup retention period to a non-zero value.
• When backing up archived redo logs, or performing a full or incremental backup that includes archived
redo logs, make sure that archived redo log retention is set to a nonzero value. Archived redo logs are
required to make database files consistent during recovery. For more information, see Retaining archived
redo logs (p. 1893).
• Make sure that your DB instance has sufficient free space to hold the backups. When you back up your
database, you specify an Oracle directory object as a parameter in the procedure call. RMAN places the
files in the specified directory. You can use default directories, such as DATA_PUMP_DIR, or create a
new directory. For more information, see Creating and dropping directories in the main data storage
space (p. 1926).
You can monitor the current free space in an RDS for Oracle instance using the CloudWatch metric
FreeStorageSpace. We recommend that your free space exceeds the current size of the database,
though RMAN backs up only formatted blocks and supports compression.
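Creating a dedicated backup directory can be sketched with the rdsadmin helper; the directory name MYDIRECTORY is an assumed example.

```sql
-- Create a directory object to receive RMAN backup files.
EXEC rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'MYDIRECTORY');
```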
p_label (varchar2, optional): Valid characters are a-z, A-Z, 0-9, '_', '-', and '.'. A unique string that is
included in the backup file names.
Note
The limit is 30 characters.

p_owner (varchar2, required): A valid owner of the directory specified in p_directory_name. The owner
of the directory to contain the backup files.

p_tag (varchar2, optional, default NULL): Valid characters are a-z, A-Z, 0-9, '_', '-', and '.'. A string
that can be used to distinguish between backups to indicate the purpose or usage of backups, such as
daily, weekly, or incremental-level backups.
p_compress (boolean, optional, default FALSE): Specify TRUE to enable BASIC backup compression.
p_optimize (boolean, optional, default TRUE): Specify TRUE to enable backup optimization, if archived
redo logs are included, to reduce backup size.

p_section_size_mb (number, optional, default NULL): A valid integer. The section size in megabytes
(MB).

p_validation_type (varchar2, optional, default 'PHYSICAL'): Valid values are 'PHYSICAL' and
'PHYSICAL+LOGICAL'. The level of corruption detection. Specify 'PHYSICAL' to check for physical
corruption. An example of physical corruption is a block with a mismatch in the header and footer.
For more information about RMAN validation, see Validating database files and backups and VALIDATE
in the Oracle documentation.
Topics
Validating a DB instance
To validate all of the relevant files used by an Amazon RDS Oracle DB instance, use the Amazon RDS
procedure rdsadmin.rdsadmin_rman_util.validate_database.
This procedure uses the following common parameters for RMAN tasks:
• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output
For more information, see Common parameters for RMAN procedures (p. 1898).
The following example validates the DB instance using the default values for the parameters.
EXEC rdsadmin.rdsadmin_rman_util.validate_database;
The following example validates the DB instance using the specified values for the parameters.
BEGIN
rdsadmin.rdsadmin_rman_util.validate_database(
p_validation_type => 'PHYSICAL+LOGICAL',
p_parallel => 4,
p_section_size_mb => 10,
p_rman_to_dbms_output => FALSE);
END;
/
When the p_rman_to_dbms_output parameter is set to FALSE, the RMAN output is written to a file in
the BDUMP directory.
To view the files in the BDUMP directory, run the following SELECT statement.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP')) ORDER BY mtime;

To view the contents of a file in the BDUMP directory, run the following SELECT statement, replacing
the file name with the name of the file that you want to view.

SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP','your_file_name.txt'));
Validating a tablespace
To validate the files associated with a tablespace, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.validate_tablespace.
This procedure uses the following common parameters for RMAN tasks:
• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure uses the following common parameters for RMAN tasks:
• p_validation_type
• p_rman_to_dbms_output
For more information, see Common parameters for RMAN procedures (p. 1898).
Validating an SPFILE
To validate only the server parameter file (SPFILE) used by an Amazon RDS Oracle DB instance, use the
Amazon RDS procedure rdsadmin.rdsadmin_rman_util.validate_spfile.
This procedure uses the following common parameters for RMAN tasks:
• p_validation_type
• p_rman_to_dbms_output
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure uses the following common parameters for RMAN tasks:
• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output
For more information, see Common parameters for RMAN procedures (p. 1898).
RMAN features aren't supported on a read replica. However, as part of your high availability
strategy, you might choose to enable block change tracking on a read replica using the procedure
rdsadmin.rdsadmin_rman_util.enable_block_change_tracking. If you promote this read replica to
a source DB instance, block change tracking is enabled for the new source instance. Thus, your instance
can benefit from fast incremental backups.
Block change tracking procedures are supported in Enterprise Edition only for the following DB engine
versions:
Note
In a single-tenant CDB, the following operations work, but no customer-visible mechanism
can detect the current status of the operations. See also Limitations of a single-tenant
CDB (p. 1805).
To enable block change tracking for a DB instance, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.enable_block_change_tracking. To disable block change
tracking, use disable_block_change_tracking. These procedures take no parameters.

The following example enables block change tracking for a DB instance.

EXEC rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;

The following example disables block change tracking for a DB instance.

EXEC rdsadmin.rdsadmin_rman_util.disable_block_change_tracking;

To determine whether block change tracking is enabled for your DB instance, run the following query.

SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;
You can use the rdsadmin.rdsadmin_rman_util.crosscheck_archivelog procedure to crosscheck the
archived redo logs registered in the control file and optionally delete the expired log records. When
RMAN makes a backup, it creates a record in the control file. Over time, these records increase the size of
the control file. We recommend that you remove expired records periodically.
Note
Standard Amazon RDS backups don't use RMAN and therefore don't create records in the
control file.
This procedure uses the common parameter p_rman_to_dbms_output for RMAN tasks.
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure is supported for the following Amazon RDS for Oracle DB engine versions:
The following example marks archived redo log records in the control file as expired, but does not delete
the records.
BEGIN
rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(
p_delete_expired => FALSE,
p_rman_to_dbms_output => FALSE);
END;
/
The following example deletes expired archived redo log records from the control file.
BEGIN
rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(
p_delete_expired => TRUE,
p_rman_to_dbms_output => FALSE);
END;
/
The procedures for backing up archived redo logs are supported for the following Amazon RDS for
Oracle DB engine versions:
Topics
• Backing up all archived redo logs (p. 1905)
• Backing up an archived redo log from a date range (p. 1906)
• Backing up an archived redo log from an SCN range (p. 1907)
• Backing up an archived redo log from a sequence number range (p. 1909)
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
The following example backs up all archived redo logs for the DB instance.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_all(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
The following example backs up archived redo logs in the date range for the DB instance.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_date(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_date => '03/01/2019 00:00:00',
p_to_date => '03/02/2019 00:00:00',
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
The following example backs up archived redo logs in the SCN range for the DB instance.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_scn(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_scn => 1533835,
p_to_scn => 1892447,
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
The following example backs up archived redo logs in the sequence number range for the DB instance.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_sequence(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_sequence => 11160,
p_to_sequence => 11160,
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure is supported for the following Amazon RDS for Oracle DB engine versions:
The following example performs a full backup of the DB instance using the specified values for the
parameters.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_database_full(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'FULL_DB_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
For more information about incremental backups, see Incremental backups in the Oracle documentation.
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_include_controlfile
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure is supported for the following Amazon RDS for Oracle DB engine versions:
The following example performs an incremental backup of the DB instance using the specified values for
the parameters.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_database_incremental(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_level => 1,
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'MY_INCREMENTAL_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
Backing up a tablespace
You can back up a tablespace using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_tablespace.
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_include_controlfile
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure is supported for the following Amazon RDS for Oracle DB engine versions:
The following example performs a tablespace backup using the specified values for the parameters.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_tablespace(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_tablespace_name => 'MYTABLESPACE',
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'MYTABLESPACE_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
Backing up a control file
You can back up a control file using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_current_controlfile.
This procedure uses the following common parameters for RMAN tasks:
• p_owner
• p_directory_name
• p_label
• p_compress
• p_rman_to_dbms_output
• p_tag
For more information, see Common parameters for RMAN procedures (p. 1898).
This procedure is supported for the following Amazon RDS for Oracle DB engine versions:
The following example backs up a control file using the specified values for the parameters.
BEGIN
rdsadmin.rdsadmin_rman_util.backup_current_controlfile(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_tag => 'CONTROL_FILE_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/
Oracle Scheduler tasks
The rdsadmin.rdsadmin_dbms_scheduler procedures are supported for the following Amazon RDS
for Oracle DB engine versions:
To modify the repeat interval for the job, specify 'REPEAT_INTERVAL'. To modify the schedule
name for the job, specify 'SCHEDULE_NAME'.
When working with Amazon RDS DB instances, prepend the schema name SYS to the object name. The
following example sets the resource plan attribute for the Monday window object.
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => 'SYS.MONDAY_WINDOW',
attribute => 'RESOURCE_PLAN',
value => 'resource_plan_1');
END;
/
To modify the window, use the DBMS_SCHEDULER package. You might need to modify the maintenance
window settings for the following reasons:
• You want maintenance jobs to run at a different time, with different settings, or not at all. For
example, you might want to modify the window duration, or change the repeat time and interval.
• You want to avoid the performance impact of enabling Resource Manager during maintenance. For
example, if the default maintenance plan is specified, and if the maintenance window opens while the
database is under load, you might see wait events such as resmgr:cpu quantum. This wait event is
related to Database Resource Manager. You have the following options:
• Ensure that maintenance windows are active during off-peak times for your DB instance.
• Disable the default maintenance plan by setting the resource_plan attribute to an empty string.
• Set the resource_manager_plan parameter to FORCE: in your parameter group. If your instance
uses Enterprise Edition, this setting prevents Database Resource Manager plans from activating.
The following output shows that the window is using the default values.
The following example sets the resource plan to null so that the Resource Manager won't run during
the maintenance window.
BEGIN
-- disable the window to make changes
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"', attribute=>'RESOURCE_PLAN', value=>'');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/
The following example sets the maximum duration of the window to 2 hours.
BEGIN
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"', attribute=>'DURATION',
value=>'0 2:00:00');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/
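The DURATION value above uses Oracle's day-to-second interval format ('0 2:00:00' means zero days, two hours). As a sketch, a hypothetical helper (not an Oracle or RDS API) can format an hour count in that form:

```python
def scheduler_duration(hours: int, minutes: int = 0) -> str:
    """Format a maintenance-window DURATION value in Oracle's
    day-to-second interval form, e.g. '0 2:00:00' for two hours.
    Hypothetical helper for building DBMS_SCHEDULER attribute values."""
    # Carry whole days out of the hour count.
    days, hours = divmod(hours, 24)
    return f"{days} {hours}:{minutes:02d}:00"
```

For example, scheduler_duration(2) produces the '0 2:00:00' value shown in the preceding block.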
The following example sets the repeat interval to every Monday at 10 AM.
BEGIN
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"',
attribute=>'REPEAT_INTERVAL',
value=>'freq=daily;byday=MON;byhour=10;byminute=0;bysecond=0');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/
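The REPEAT_INTERVAL value is an Oracle Scheduler calendaring string made of semicolon-separated clauses. The following Python sketch (a hypothetical inspection helper, not an Oracle API) splits such a string into its clauses:

```python
def parse_repeat_interval(spec: str) -> dict:
    """Split an Oracle Scheduler calendaring string such as
    'freq=daily;byday=MON;byhour=10;byminute=0;bysecond=0'
    into a clause-name -> value dictionary."""
    parts = {}
    for clause in spec.split(';'):
        # Each clause has the form name=value.
        key, _, value = clause.partition('=')
        parts[key.strip().lower()] = value.strip()
    return parts
```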
1. Connect to the database using a client such as SQL Developer. For more information, see Connecting
to your DB instance using Oracle SQL Developer (p. 1808).
2. Set the default time zone as follows, substituting your time zone for time_zone_name.
BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
attribute => 'default_timezone',
value => 'time_zone_name'
);
END;
/
VALUE
-------
Etc/UTC
BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
attribute => 'default_timezone',
value => 'Asia/Shanghai'
);
END;
/
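Because an invalid time zone name causes the SET_SCHEDULER_ATTRIBUTE call to fail, you might check the candidate name locally first. The following Python sketch (a hypothetical client-side check; Oracle maintains its own time zone tables, which can differ from the local IANA database) does that:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_timezone(name: str) -> bool:
    """Return True if the name resolves in the local IANA time zone
    database, as a sanity check before passing it to
    DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE."""
    try:
        ZoneInfo(name)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False
```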
For more information about changing the system time zone, see Oracle time zone (p. 2087).
This procedure uses the name common parameter for Oracle Scheduler tasks. For more information, see
Common parameters for Oracle Scheduler procedures (p. 1914).
BEGIN
rdsadmin.rdsadmin_dbms_scheduler.disable('SYS.CLEANUP_ONLINE_IND_BUILD');
END;
/
This procedure uses the name common parameter for Oracle Scheduler tasks. For more information, see
Common parameters for Oracle Scheduler procedures (p. 1914).
BEGIN
rdsadmin.rdsadmin_dbms_scheduler.enable('SYS.CLEANUP_ONLINE_IND_BUILD');
END;
/
This procedure uses the following common parameters for Oracle Scheduler tasks:
• name
• attribute
• value
For more information, see Common parameters for Oracle Scheduler procedures (p. 1914).
The following example modifies the repeat interval of the SYS.CLEANUP_ONLINE_IND_BUILD Oracle
Scheduler job.
BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_attribute(
name => 'SYS.CLEANUP_ONLINE_IND_BUILD',
attribute => 'repeat_interval',
value => 'freq=daily;byday=FRI,SAT;byhour=20;byminute=0;bysecond=0');
END;
/
This procedure uses the following common parameters for Oracle Scheduler tasks:
• name
• attribute
• value
For more information, see Common parameters for Oracle Scheduler procedures (p. 1914).
The following example creates a new schedule and then assigns it to the
SYS.BSLN_MAINTAIN_STATS_JOB Oracle Scheduler job.
BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE (
schedule_name => 'rds_master_user.new_schedule',
start_date => SYSTIMESTAMP,
repeat_interval =>
'freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=0;byminute=0;bysecond=0',
end_date => NULL,
comments => 'Repeats daily forever');
END;
/
BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_attribute (
name => 'SYS.BSLN_MAINTAIN_STATS_JOB',
attribute => 'schedule_name',
value => 'rds_master_user.new_schedule');
END;
/
• Roll back the Oracle Scheduler job when the user transaction is rolled back.
• Create the Oracle Scheduler job when the main user transaction is committed.
The following example turns off autocommit for Oracle Scheduler, creates an Oracle Scheduler job,
and then rolls back the transaction. Because autocommit is turned off, the database also rolls back the
creation of the Oracle Scheduler job.
BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_no_commit_flag;
DBMS_SCHEDULER.CREATE_JOB(job_name => 'EMPTY_JOB',
job_type => 'PLSQL_BLOCK',
job_action => 'begin null; end;',
auto_drop => false);
ROLLBACK;
END;
/
no rows selected
Diagnostic tasks
shows three incidents of this problem. For more information, see Diagnosing and resolving problems in
the Oracle Database documentation.
The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility is an Oracle command-line
tool that you use to manage diagnostic data. For example, you can use this tool to investigate problems
and package diagnostic data. An incident package includes diagnostic data for an incident or all incidents
that reference a specific problem. You can upload an incident package, which is implemented as a .zip
file, to Oracle Support.
To deliver a managed service experience, Amazon RDS doesn't provide shell access to ADRCI.
To perform diagnostic tasks for your Oracle instance, instead use the Amazon RDS package
rdsadmin.rdsadmin_adrci_util.
By using the functions in rdsadmin_adrci_util, you can list and package problems and incidents,
and also show trace files. All functions return a task ID. This ID forms part of the name of the log
file that contains the ADRCI output, as in dbtask-task_id.log. The log file resides in the BDUMP directory.
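The dbtask-task_id.log naming convention can be sketched in one line (a hypothetical helper for building the file name to pass to rdsadmin.rds_file_util.read_text_file):

```python
def adrci_log_name(task_id: str) -> str:
    """Build the BDUMP log file name for an rdsadmin_adrci_util task ID,
    following the dbtask-task_id.log convention."""
    return f"dbtask-{task_id}.log"
```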
Listing incidents
To list diagnostic incidents for Oracle, use the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.list_adrci_incidents. You can list incidents in either basic or
detailed mode. By default, the function lists the 50 most recent incidents.
This function uses the following common parameters for diagnostic tasks:
• incident_id
• problem_id
• last
If you specify incident_id and problem_id, then incident_id overrides problem_id. For more
information, see Common parameters for diagnostic procedures (p. 1920).
TASK_ID
------------------
1590786706158-3126
To read the log file, call the Amazon RDS procedure rdsadmin.rds_file_util.read_text_file.
Supply the task ID as part of the file name. The following output shows three incidents: 53523, 53522,
and 53521.
TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:11:46.193 UTC [INFO ] Listing ADRCI incidents.
2020-05-29 21:11:46.256 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
----------- -----------------------------------------------------------
----------------------------------------
53523 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 2020-05-29
20:15:20.928000 +00:00
53522 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 2020-05-29
20:15:15.247000 +00:00
53521 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_001 2020-05-29
20:15:06.047000 +00:00
3 rows fetched
2020-05-29 21:11:46.256 UTC [INFO ] The ADRCI incidents were successfully listed.
2020-05-29 21:11:46.256 UTC [INFO ] The task finished successfully.
14 rows selected.
To list a particular incident, specify its ID using the incident_id parameter. In the following example,
you query the log file for incident 53523 only.
TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:15:25.358 UTC [INFO ] Listing ADRCI incidents.
2020-05-29 21:15:25.426 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
INCIDENT_ID PROBLEM_KEY
CREATE_TIME
-------------------- -----------------------------------------------------------
---------------------------------
53523 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 2020-05-29
20:15:20.928000 +00:00
1 rows fetched
2020-05-29 21:15:25.427 UTC [INFO ] The ADRCI incidents were successfully listed.
2020-05-29 21:15:25.427 UTC [INFO ] The task finished successfully.
12 rows selected.
Listing problems
To list diagnostic problems for Oracle, use the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.list_adrci_problems.
This function uses the common parameters problem_id and last. For more information, see Common
parameters for diagnostic procedures (p. 1920).
To read the log file, call the rdsadmin.rds_file_util.read_text_file function, supplying the
task ID as part of the file name. In the following output, the log file shows three problems: 1, 2, and 3.
TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:18:50.764 UTC [INFO ] Listing ADRCI problems.
2020-05-29 21:18:50.829 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT
LASTINC_TIME
---------- ----------------------------------------------------------- -------------
---------------------------------
2 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 53523
2020-05-29 20:15:20.928000 +00:00
3 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 53522
2020-05-29 20:15:15.247000 +00:00
1 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_001 53521
2020-05-29 20:15:06.047000 +00:00
3 rows fetched
2020-05-29 21:18:50.829 UTC [INFO ] The ADRCI problems were successfully listed.
2020-05-29 21:18:50.829 UTC [INFO ] The task finished successfully.
14 rows selected.
To read the log file for problem 3, call rdsadmin.rds_file_util.read_text_file. Supply the task
ID as part of the file name.
TEXT
-------------------------------------------------------------------------
2020-05-29 21:19:42.533 UTC [INFO ] Listing ADRCI problems.
2020-05-29 21:19:42.599 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT
LASTINC_TIME
---------- ----------------------------------------------------------- -------------
---------------------------------
3 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 53522
2020-05-29 20:15:15.247000 +00:00
1 rows fetched
2020-05-29 21:19:42.599 UTC [INFO ] The ADRCI problems were successfully listed.
2020-05-29 21:19:42.599 UTC [INFO ] The task finished successfully.
12 rows selected.
This function uses the following common parameters for diagnostic tasks:
• problem_id
• incident_id
Make sure to specify one of the preceding parameters. If you specify both parameters, incident_id
overrides problem_id. For more information, see Common parameters for diagnostic
procedures (p. 1920).
To create a package for a specific incident, call the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.create_adrci_package with the incident_id parameter. The
following example creates a package for incident 53523.
To read the log file, call the rdsadmin.rds_file_util.read_text_file function, supplying
the task ID as part of the file name. The output shows that you generated incident package
ORA700EVE_20200529212043_COM_1.zip.
TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:20:43.031 UTC [INFO ] The ADRCI package is being created.
2020-05-29 21:20:47.641 UTC [INFO ] Generated package 1 in file /rdsdbdata/log/trace/
ORA700EVE_20200529212043_COM_1.zip, mode complete
2020-05-29 21:20:47.642 UTC [INFO ] The ADRCI package was successfully created.
2020-05-29 21:20:47.642 UTC [INFO ] The task finished successfully.
To package diagnostic data for a particular problem, specify its ID using the problem_id parameter. In
the following example, you package data for problem 3 only.
TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:21:11.050 UTC [INFO ] The ADRCI package is being created.
2020-05-29 21:21:15.646 UTC [INFO ] Generated package 2 in file /rdsdbdata/log/trace/
ORA700EVE_20200529212111_COM_1.zip, mode complete
2020-05-29 21:21:15.646 UTC [INFO ] The ADRCI package was successfully created.
2020-05-29 21:21:15.646 UTC [INFO ] The task finished successfully.
To list the trace file names, call the Amazon RDS procedure
rdsadmin.rds_file_util.read_text_file, supplying the task ID as part of the file name.
TEXT
---------------------------------------------------------------
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-28
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-27
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-26
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-25
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-24
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-23
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-22
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-21
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log
9 rows selected.
To read the log file, call rdsadmin.rds_file_util.read_text_file. Supply the task ID as part of
the file name. The output shows the first 10 lines of alert_ORCL.log.
TEXT
-----------------------------------------------------------------------------------------
2020-05-29 21:24:02.083 UTC [INFO ] The trace files are being displayed.
2020-05-29 21:24:02.128 UTC [INFO ] Thu May 28 23:59:10 2020
Thread 1 advanced to log sequence 2048 (LGWR switch)
Current log# 3 seq# 2048 mem# 0: /rdsdbdata/db/ORCL_A/onlinelog/o1_mf_3_hbl2p8xs_.log
Thu May 28 23:59:10 2020
Archived Log entry 2037 added for thread 1 sequence 2047 ID 0x5d62ce43 dest 1:
Fri May 29 00:04:10 2020
Thread 1 advanced to log sequence 2049 (LGWR switch)
Current log# 4 seq# 2049 mem# 0: /rdsdbdata/db/ORCL_A/onlinelog/o1_mf_4_hbl2qgmh_.log
Fri May 29 00:04:10 2020
10 rows selected.
Topics
• Creating and dropping directories in the main data storage space (p. 1926)
• Listing files in a DB instance directory (p. 1927)
• Reading files in a DB instance directory (p. 1927)
• Accessing Opatch files (p. 1928)
• Managing advisor tasks (p. 1930)
• Transporting tablespaces (p. 1932)
The create_directory and drop_directory procedures have the following required parameter.
The data dictionary stores the directory name in uppercase. You can list the directories by querying
DBA_DIRECTORIES. The system chooses the actual host pathname automatically. The following
example gets the directory path for the directory named PRODUCT_DESCRIPTIONS:
SELECT DIRECTORY_PATH
FROM DBA_DIRECTORIES
WHERE DIRECTORY_NAME='PRODUCT_DESCRIPTIONS';
DIRECTORY_PATH
----------------------------------------
/rdsdbdata/userdirs/01
The master user name for the DB instance has read and write privileges in the new directory, and can
grant access to other users. EXECUTE privileges are not available for directories on a DB instance.
Directories are created in your main data storage space and will consume space and I/O bandwidth.
Note
You can also drop a directory by using the Oracle SQL command DROP DIRECTORY.
The following example grants read/write privileges on the directory PRODUCT_DESCRIPTIONS to user
rdsadmin, and then lists the files in this directory.
The following example creates the file rice.txt in the directory PRODUCT_DESCRIPTIONS.
declare
fh sys.utl_file.file_type;
begin
fh := utl_file.fopen(location=>'PRODUCT_DESCRIPTIONS', filename=>'rice.txt',
open_mode=>'w');
utl_file.put(file=>fh, buffer=>'AnyCompany brown rice, 15 lbs');
utl_file.fclose(file=>fh);
end;
/
The following example reads the file rice.txt from the directory PRODUCT_DESCRIPTIONS.
To deliver a managed service experience, Amazon RDS doesn't provide shell access to Opatch. Instead,
the lsinventory-dbv.txt file in the BDUMP directory contains the patch information related to
your current engine version. When you perform a minor or major upgrade, Amazon RDS updates
lsinventory-dbv.txt within an hour of applying the patch. To verify the applied patches, read
lsinventory-dbv.txt. This action is similar to running the opatch lsinventory command.
Note
The examples in this section assume that the BDUMP directory is named BDUMP. On a read
replica, the BDUMP directory name is different. To learn how to get the BDUMP name by
querying V$DATABASE.DB_UNIQUE_NAME on a read replica, see Listing files (p. 925).
The inventory files use the Amazon RDS naming convention lsinventory-dbv.txt
and lsinventory_detail-dbv.txt, where dbv is the full name of your DB version.
The lsinventory-dbv.txt file is available on all DB versions. The corresponding
lsinventory_detail-dbv.txt is available on the following DB versions:
For example, if your DB version is 19.0.0.0.ru-2021-07.rur-2021-07.r1, then your inventory files have the
following names.
lsinventory-19.0.0.0.ru-2021-07.rur-2021-07.r1.txt
lsinventory_detail-19.0.0.0.ru-2021-07.rur-2021-07.r1.txt
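The naming convention above can be sketched as a small helper (hypothetical, for building the file names to download or read):

```python
def inventory_file_names(db_version: str) -> tuple:
    """Return the (summary, detail) Opatch inventory file names for a
    DB version, per the lsinventory-dbv.txt naming convention."""
    return (f"lsinventory-{db_version}.txt",
            f"lsinventory_detail-{db_version}.txt")
```

For example, passing '19.0.0.0.ru-2021-07.rur-2021-07.r1' yields the two file names shown above.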
Ensure that you download the files that match the current version of your DB engine.
Console
SQL
In the following sample query, replace dbv with your Oracle DB version. For example, your DB version
might be 19.0.0.0.ru-2020-04.rur-2020-04.r1.
SELECT text
FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'lsinventory-dbv.txt'));
PL/SQL
To read the lsinventory-dbv.txt file in a SQL client, you can write a PL/SQL program. This program
uses utl_file to read the file and dbms_output to print it. These are Oracle-supplied packages.
In the following sample program, replace dbv with your Oracle DB version. For example, your DB version
might be 19.0.0.0.ru-2020-04.rur-2020-04.r1.
SET SERVEROUTPUT ON
DECLARE
v_file SYS.UTL_FILE.FILE_TYPE;
v_line VARCHAR2(1000);
v_oracle_home_type VARCHAR2(1000);
c_directory VARCHAR2(30) := 'BDUMP';
c_output_file VARCHAR2(30) := 'lsinventory-dbv.txt';
BEGIN
v_file := SYS.UTL_FILE.FOPEN(c_directory, c_output_file, 'r');
LOOP
BEGIN
SYS.UTL_FILE.GET_LINE(v_file, v_line,1000);
DBMS_OUTPUT.PUT_LINE(v_line);
EXCEPTION
WHEN no_data_found THEN
EXIT;
END;
END LOOP;
END;
/
Or query rdsadmin.tracefile_listing, and spool the output to a file. The following example
spools the output to /tmp/tracefile.txt.
SPOOL /tmp/tracefile.txt
SELECT *
FROM rdsadmin.tracefile_listing
WHERE FILENAME LIKE 'lsinventory%';
SPOOL OFF;
The advisor task procedures are available in the following engine versions:
• Version 19.0.0.0.ru-2021-01.rur-2021-01.r1 and higher Oracle Database 19c versions
For more information, see Version 19.0.0.0.ru-2021-01.rur-2021-01.r1 in the Amazon RDS for Oracle
Release Notes.
• Version 12.2.0.1.ru-2021-01.rur-2021-01.r1 and higher Oracle Database 12c (Release 2) 12.2.0.1
versions
For more information, see Version 12.2.0.1.ru-2021-01.rur-2021-01.r1 in the Amazon RDS for Oracle
Release Notes.
Topics
• Setting parameters for advisor tasks (p. 1930)
• Disabling AUTO_STATS_ADVISOR_TASK (p. 1931)
• Re-enabling AUTO_STATS_ADVISOR_TASK (p. 1932)
p_task_name (varchar2, required): The name of the advisor task whose parameters you want to
change. The following values are valid:
• AUTO_STATS_ADVISOR_TASK
• INDIVIDUAL_STATS_ADVISOR_TASK
• SYS_AUTO_SPM_EVOLVE_TASK
• SYS_AUTO_SQL_TUNING_TASK
p_parameter (varchar2, required): The name of the task parameter. To find valid parameters for an
advisor task, run the following query, substituting a valid value for p_task_name:
p_value (varchar2, required): The value for a task parameter. To find valid values for task
parameters, run the following query, substituting a valid value for p_task_name:
The following PL/SQL program sets ACCEPT_PLANS to FALSE for SYS_AUTO_SPM_EVOLVE_TASK. The
SQL Plan Management automated task verifies the plans and generates a report of its findings, but does
not evolve the plans automatically. You can use a report to identify new SQL plan baselines and accept
them manually.
BEGIN
rdsadmin.rdsadmin_util.advisor_task_set_parameter(
p_task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
p_parameter => 'ACCEPT_PLANS',
p_value => 'FALSE');
END;
/

The following PL/SQL program sets EXECUTION_DAYS_TO_EXPIRE to 10 for AUTO_STATS_ADVISOR_TASK.

BEGIN
rdsadmin.rdsadmin_util.advisor_task_set_parameter(
p_task_name => 'AUTO_STATS_ADVISOR_TASK',
p_parameter => 'EXECUTION_DAYS_TO_EXPIRE',
p_value => '10');
END;
/
Disabling AUTO_STATS_ADVISOR_TASK
To disable AUTO_STATS_ADVISOR_TASK, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.advisor_task_drop. The advisor_task_drop procedure accepts the
following parameter.
Note
This procedure is available in Oracle Database 12c Release 2 (12.2.0.1) and later.
p_task_name (varchar2, required): The name of the advisor task to be disabled. The only valid value
is AUTO_STATS_ADVISOR_TASK.
EXEC rdsadmin.rdsadmin_util.advisor_task_drop('AUTO_STATS_ADVISOR_TASK')
Re-enabling AUTO_STATS_ADVISOR_TASK
To re-enable AUTO_STATS_ADVISOR_TASK, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.dbms_stats_init. The dbms_stats_init procedure takes no
parameters.
EXEC rdsadmin.rdsadmin_util.dbms_stats_init()
Transporting tablespaces
Use the Amazon RDS package rdsadmin.rdsadmin_transport_util to copy a set of tablespaces
from an on-premises Oracle database to an RDS for Oracle DB instance. At the physical level, the
transportable tablespace feature incrementally copies source data files and metadata files to your target
instance. You can transfer the files using either Amazon EFS or Amazon S3. For more information, see
Migrating using Oracle transportable tablespaces (p. 1962).
Topics
• Importing transported tablespaces to your DB instance (p. 1932)
• Importing transportable tablespace metadata into your DB instance (p. 1933)
• Listing orphaned files after a tablespace import (p. 1934)
• Deleting orphaned data files after a tablespace import (p. 1935)
Syntax
FUNCTION import_xtts_tablespaces(
p_tablespace_list IN CLOB,
p_directory_name IN VARCHAR2,
p_platform_id IN NUMBER DEFAULT 13,
p_parallel IN INTEGER DEFAULT 0) RETURN VARCHAR2;
Parameters
Examples
The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.
VARIABLE task_id VARCHAR2(80)
BEGIN
:task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/
PRINT task_id
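The p_tablespace_list argument is a comma-separated CLOB such as 'TBS1,TBS2,TBS3'. A short Python sketch (hypothetical client-side validation, not part of RDS) can check the list before you build the call:

```python
def split_tablespace_list(p_tablespace_list: str) -> list:
    """Split the comma-separated tablespace list passed to
    import_xtts_tablespaces and reject empty entries."""
    names = [name.strip() for name in p_tablespace_list.split(',')]
    if any(not name for name in names):
        raise ValueError("empty tablespace name in list")
    return names
```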
Syntax
PROCEDURE import_xtts_metadata(
p_datapump_metadata_file IN SYS.DBA_DATA_FILES.FILE_NAME%TYPE,
p_directory_name IN VARCHAR2,
p_exclude_stats IN BOOLEAN DEFAULT FALSE,
p_remap_tablespace_list IN CLOB DEFAULT NULL,
p_remap_user_list IN CLOB DEFAULT NULL);
Parameters
p_datapump_metadata_file (SYS.DBA_DATA_FILES.FILE_NAME%TYPE, required): The name of the Oracle
Data Pump file that contains the metadata for your transportable tablespaces.
p_remap_tablespace_list (CLOB, optional, default NULL): A list of tablespaces to be remapped during
the metadata import. Use the format from_tbs:to_tbs. For example, specify users:user_data.
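The from_tbs:to_tbs format can be parsed with a short sketch (a hypothetical helper illustrating the documented format, not an RDS API):

```python
def parse_remap_list(remap: str) -> dict:
    """Parse a p_remap_tablespace_list value such as
    'users:user_data,tbs_a:tbs_b' into a from-name -> to-name map."""
    pairs = {}
    for entry in remap.split(','):
        from_tbs, sep, to_tbs = entry.partition(':')
        if not sep or not from_tbs or not to_tbs:
            raise ValueError(f"expected from_tbs:to_tbs, got {entry!r}")
        pairs[from_tbs.strip()] = to_tbs.strip()
    return pairs
```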
Examples
The example imports the tablespace metadata from the file xttdump.dmp, which is located in directory
DATA_PUMP_DIR.
BEGIN
rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
END;
/
Syntax
Examples
FILENAME FILESIZE
-------------- ---------
datafile_7.dbf 104865792
datafile_8.dbf 104865792
Syntax
PROCEDURE cleanup_incomplete_xtts_import(
p_directory_name IN VARCHAR2);
Parameters
Examples
BEGIN
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import('DATA_PUMP_DIR');
END;
/
The following example reads the log file generated by the previous command.
SELECT *
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
p_directory => 'BDUMP',
p_filename => 'rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log'));
TEXT
--------------------------------------------------------------------------------
orphan transported datafile datafile_7.dbf deleted.
orphan transported datafile datafile_8.dbf deleted.
Configuring advanced RDS for Oracle features
Topics
• Storing temporary data in an RDS for Oracle instance store (p. 1936)
• Turning on HugePages for an RDS for Oracle instance (p. 1942)
• Turning on extended data types in RDS for Oracle (p. 1945)
Topics
• Overview of the RDS for Oracle instance store (p. 1936)
• Turning on an RDS for Oracle instance store (p. 1938)
• Configuring an RDS for Oracle instance store (p. 1938)
• Considerations when changing the DB instance type (p. 1940)
• Working with an instance store on an Oracle read replica (p. 1941)
• Configuring a temporary tablespace group on an instance store and Amazon EBS (p. 1941)
• Removing an RDS for Oracle instance store (p. 1942)
An instance store is based on Non-Volatile Memory Express (NVMe) devices that are physically attached
to the host computer. The storage is optimized for low latency, random I/O performance, and sequential
read throughput.
The size of the instance store varies by DB instance type. For more information about the instance store,
see Amazon EC2 instance store in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
Topics
• Types of data in the RDS for Oracle instance store (p. 1936)
• Benefits of the RDS for Oracle instance store (p. 1937)
• Supported instance classes for the RDS for Oracle instance store (p. 1937)
• Supported engine versions for the RDS for Oracle instance store (p. 1938)
• Supported AWS Regions for the RDS for Oracle instance store (p. 1938)
• Cost of the RDS for Oracle instance store (p. 1938)
Configuring the instance store
A temporary tablespace
Oracle Database uses temporary tablespaces to store intermediate query results that don't fit in
memory. Larger queries can generate large amounts of intermediate data that needs to be cached
temporarily, but doesn't need to persist. In particular, a temporary tablespace is useful for sorts,
hash aggregations, and joins. If your RDS for Oracle DB instance uses the Enterprise Edition or
Standard Edition 2, you can place a temporary tablespace in an instance store.
The flash cache
The flash cache improves the performance of single-block random reads in the conventional path. A
best practice is to size the cache to accommodate most of your active data set. If your RDS for Oracle
DB instance uses the Enterprise Edition, you can place the flash cache in an instance store.
By default, an instance store is configured for a temporary tablespace but not for the flash cache. You
can't place Oracle data files and database log files in an instance store.
By placing your temporary tablespace and flash cache on an instance store, you get the following
benefits:
By placing your temporary tablespace on the instance store, you deliver an immediate performance
boost to queries that use temporary space. When you place the flash cache on the instance store, cached
block reads typically have much lower latency than Amazon EBS reads. The flash cache needs to be
"warmed up" before it delivers performance benefits. The cache warms up by itself because the database
writes blocks to the flash cache as they age out of the database buffer cache.
Note
In some cases, the flash cache causes performance overhead because of cache management.
Before you turn on the flash cache in a production environment, we recommend that you
analyze your workload and test the cache in a test environment.
Supported instance classes for the RDS for Oracle instance store
Amazon RDS supports the instance store for the following DB instance classes:
• db.m5d
• db.r5d
• db.x2idn
• db.x2iedn
RDS for Oracle supports the preceding DB instance classes for the BYOL licensing model only. For more
information, see Supported RDS for Oracle instance classes (p. 1797) and Bring Your Own License
(BYOL) (p. 1793).
To see the total instance storage for the supported DB instance types, run the following command in the
AWS CLI.
Example
The preceding command returns the raw device size for the instance store. RDS for Oracle uses a small
portion of this space for configuration. The space in the instance store that is available for temporary
tablespaces or the flash cache is slightly smaller.
Supported engine versions for the RDS for Oracle instance store
The instance store is supported for the following RDS for Oracle engine versions:
Supported AWS Regions for the RDS for Oracle instance store
The instance store is available in all AWS Regions where one or more of these instance types are
supported. For more information on the db.m5d and db.r5d instance classes, see DB instance
classes (p. 11). For more information on the instance classes supported by Amazon RDS for Oracle, see
RDS for Oracle instance classes (p. 1796).
To use an instance store, do either of the following:
• Create an RDS for Oracle DB instance using a supported instance class. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for Oracle DB instance to use a supported instance class. For more information,
see Modifying an Amazon RDS DB instance (p. 401).
db_flash_cache_size={DBInstanceStore*{0,2,4,6,8,10}/10}
This parameter specifies the amount of storage space allocated for the flash cache. This parameter is
valid only for Oracle Database Enterprise Edition. The default value is {DBInstanceStore*0/10}.
If you set a nonzero value for db_flash_cache_size, your RDS for Oracle instance enables the
flash cache after you restart the instance.
rds.instance_store_temp_size={DBInstanceStore*{0,2,4,6,8,10}/10}
This parameter specifies the amount of storage space allocated for the temporary tablespace.
The default value is {DBInstanceStore*10/10}. This parameter is modifiable for Oracle
Database Enterprise Edition and read-only for Standard Edition 2. If you set a nonzero value for
rds.instance_store_temp_size, Amazon RDS allocates space in the instance store for the
temporary tablespace.
The combined value of the preceding parameters must not exceed 10/10, or 100%. The following table
illustrates valid and invalid parameter settings.
db_flash_cache_size={DBInstanceStore*0/10}
rds.instance_store_temp_size={DBInstanceStore*10/10}

This is a valid configuration for all editions of Oracle Database. Amazon RDS allocates 100% of instance store space to the temporary tablespace. This is the default.

db_flash_cache_size={DBInstanceStore*10/10}
rds.instance_store_temp_size={DBInstanceStore*0/10}

This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates 100% of instance store space to the flash cache.

db_flash_cache_size={DBInstanceStore*2/10}
rds.instance_store_temp_size={DBInstanceStore*8/10}

This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates 20% of instance store space to the flash cache, and 80% of instance store space to the temporary tablespace.

db_flash_cache_size={DBInstanceStore*6/10}
rds.instance_store_temp_size={DBInstanceStore*4/10}

This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates 60% of instance store space to the flash cache, and 40% of instance store space to the temporary tablespace.

db_flash_cache_size={DBInstanceStore*2/10}
rds.instance_store_temp_size={DBInstanceStore*4/10}

This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates 20% of instance store space to the flash cache, and 40% of instance store space to the temporary tablespace.

db_flash_cache_size={DBInstanceStore*8/10}
rds.instance_store_temp_size={DBInstanceStore*8/10}

This is an invalid configuration because the combined percentage of instance store space exceeds 100%. In such cases, Amazon RDS fails the attempt.
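As a sketch, you could apply one of the valid combinations above through the AWS CLI; the parameter group name my-oracle-ee-pg is an assumed example:

```shell
# Assumed parameter group name; ApplyMethod is pending-reboot because
# these parameters take effect only after a restart.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-oracle-ee-pg \
    --parameters "ParameterName=db_flash_cache_size,ParameterValue={DBInstanceStore*2/10},ApplyMethod=pending-reboot" \
                 "ParameterName=rds.instance_store_temp_size,ParameterValue={DBInstanceStore*8/10},ApplyMethod=pending-reboot"
```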
You scale up or scale down the DB instance that supports the instance store.
The following values increase or decrease proportionally to the new size of the instance store:
• The new size of the flash cache.
• The space allocated to the temporary tablespaces that reside in the instance store.
You modify a DB instance that uses an instance store to an instance class that doesn't support an instance store.
In this case, RDS for Oracle removes the flash cache. RDS re-creates the tempfile that is currently located on the instance store on an Amazon EBS volume. The maximum size of the new tempfile is the former value of the rds.instance_store_temp_size parameter.
• You can't create a temporary tablespace on a read replica. If you create a new temporary tablespace on
the primary instance, RDS for Oracle replicates the tablespace information without tempfiles. To add a
new tempfile, use either of the following techniques:
• Use the Amazon RDS procedure rdsadmin.rdsadmin_util.add_inst_store_tempfile. RDS
for Oracle creates a tempfile in the instance store on your read replica, and adds it to the specified
temporary tablespace.
• Run the ALTER TABLESPACE … ADD TEMPFILE command. RDS for Oracle places the tempfile on
Amazon EBS storage.
Note
The tempfile sizes and storage types can be different on the primary DB instance and the read
replica.
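For example, the first technique is a single procedure call; the tablespace name TEMP and the tempfile size are assumed examples:

```sql
-- Creates a tempfile in the instance store on the read replica and
-- adds it to the named temporary tablespace.
EXEC rdsadmin.rdsadmin_util.add_inst_store_tempfile(tablespace_name => 'TEMP');

-- Alternatively, place the tempfile on Amazon EBS storage instead.
ALTER TABLESPACE temp ADD TEMPFILE SIZE 10G;
```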
• You can manage the default temporary tablespace setting only on the primary DB instance. RDS for
Oracle replicates the setting to all read replicas.
• You can configure the temporary tablespace groups only on the primary DB instance. RDS for Oracle
replicates the setting to all read replicas.
When you configure a temporary tablespace group on both an instance store and Amazon EBS, the
two tablespaces have significantly different performance characteristics. Oracle Database chooses
the tablespace to serve queries based on an internal algorithm. Therefore, similar queries can vary in
performance.
If the tablespace size in the instance store is insufficient, you can create additional temporary storage as
follows:
1. Assign the temporary tablespace in the instance store to a temporary tablespace group.
2. Create a new temporary tablespace in Amazon EBS if one doesn't exist.
3. Assign the temporary tablespace in Amazon EBS to the same tablespace group that includes the
instance store tablespace.
4. Set the tablespace group as the default temporary tablespace.
The following example assumes that the size of the temporary tablespace in the instance store
doesn't meet your application requirements. The example creates the temporary tablespace
temp_in_inst_store in the instance store, assigns it to tablespace group temp_group, adds the
existing Amazon EBS tablespace named temp_in_ebs to this group, and sets this group as the default
temporary tablespace.
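The statements themselves were not reproduced here; a sketch consistent with the output that follows, using the tablespace and group names from the example (and assuming temp_in_inst_store was created separately), is:

```sql
-- Assign the instance store tablespace and the Amazon EBS tablespace
-- to the same tablespace group.
ALTER TABLESPACE temp_in_inst_store TABLESPACE GROUP temp_group;
ALTER TABLESPACE temp_in_ebs TABLESPACE GROUP temp_group;

-- Make the group the database default temporary tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_group;

-- Verify the group membership and the default setting.
SELECT * FROM DBA_TABLESPACE_GROUPS;
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
 WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';
```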
Tablespace altered.
Tablespace altered.
GROUP_NAME TABLESPACE_NAME
------------------------------ ------------------------------
TEMP_GROUP TEMP_IN_EBS
TEMP_GROUP TEMP_IN_INST_STORE
PROPERTY_VALUE
--------------
TEMP_GROUP
Turning on HugePages
You can use HugePages with all supported versions and editions of RDS for Oracle.
The use_large_pages parameter controls whether HugePages are turned on for a DB instance. The
possible settings for this parameter are ONLY, FALSE, and {DBInstanceClassHugePagesDefault}.
The use_large_pages parameter is set to {DBInstanceClassHugePagesDefault} in the default
DB parameter group for Oracle.
To control whether HugePages are turned on for a DB instance automatically, you can use the
DBInstanceClassHugePagesDefault formula variable in parameter groups. The value is determined
as follows:
HugePages are not turned on by default for the following DB instance classes.
DB instance class family DB instance classes with HugePages not turned on by default
db.m5 db.m5.large
For more information about DB instance classes, see Hardware specifications for DB instance
classes (p. 87).
To turn on HugePages for new or existing DB instances manually, set the use_large_pages parameter
to ONLY. You can't use HugePages with Oracle Automatic Memory Management (AMM). If you set
the parameter use_large_pages to ONLY, then you must also set both memory_target and
memory_max_target to 0. For more information about setting DB parameters for your DB instance, see
Working with parameter groups (p. 347).
You can also set the sga_target, sga_max_size, and pga_aggregate_target parameters. When
you set system global area (SGA) and program global area (PGA) memory parameters, add the values
together. Subtract this total from your available instance memory (DBInstanceClassMemory) to
determine the free memory beyond the HugePages allocation. You must leave free memory of at least 2
GiB, or 10 percent of the total available instance memory, whichever is smaller.
After you configure your parameters, you must reboot your DB instance for the changes to take effect.
For more information, see Rebooting a DB instance (p. 436).
Note
The Oracle DB instance defers changes to SGA-related initialization parameters until you reboot
the instance without failover. In the Amazon RDS console, choose Reboot but do not choose Reboot with failover. In the AWS CLI, call the reboot-db-instance command with the --no-force-failover parameter. The DB instance does not process the SGA-related parameters during failover or during other maintenance operations that cause the instance to restart.
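For example, a reboot without failover might look like the following; the DB instance identifier mydbinstance is an assumed example:

```shell
# Reboot without failover so that SGA-related parameter changes are applied.
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance \
    --no-force-failover
```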
The following is a sample parameter configuration for HugePages that enables HugePages manually. You
should set the values to meet your needs.
memory_target = 0
memory_max_target = 0
pga_aggregate_target = {DBInstanceClassMemory*1/8}
sga_target = {DBInstanceClassMemory*3/4}
sga_max_size = {DBInstanceClassMemory*3/4}
use_large_pages = ONLY
The following sample configuration uses the DBInstanceClassHugePagesDefault formula variable so that HugePages are turned on automatically when the DB instance class supports them by default.
memory_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
memory_max_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
pga_aggregate_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*1/8}, 0)
sga_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
sga_max_size = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
use_large_pages = {DBInstanceClassHugePagesDefault}
Suppose that this parameter group is used by a db.r4 DB instance class with less than 100 GiB of memory. With
these parameter settings and use_large_pages set to {DBInstanceClassHugePagesDefault},
HugePages are turned on for the db.r4 instance.
Consider another example with the following parameter values set in a parameter group.
memory_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
memory_max_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
pga_aggregate_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*1/8}, 0)
sga_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
sga_max_size = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
use_large_pages = FALSE
The parameter group is used by a db.r4 DB instance class and a db.r5 DB instance class, both with less
than 100 GiB of memory. With these parameter settings, HugePages are turned off on the db.r4 and db.r5 instances.
Note
If this parameter group is used by a db.r4 DB instance class or db.r5 DB instance class with at
least 100 GiB of memory, the FALSE setting for use_large_pages is overridden and set to
ONLY. In this case, a customer notification regarding the override is sent.
After HugePages are active on your DB instance, you can view HugePages information by
enabling enhanced monitoring. For more information, see Monitoring OS metrics with Enhanced
Monitoring (p. 797).
Turning on extended data types
If you don't want to use extended data types, keep the MAX_STRING_SIZE parameter set to STANDARD
(the default). In this case, the size limits are 4,000 bytes for the VARCHAR2 and NVARCHAR2 data types,
and 2,000 bytes for the RAW data type.
You can turn on extended data types on a new or existing DB instance. For new DB instances, DB instance
creation time is typically longer when you turn on extended data types. For existing DB instances, the DB
instance is unavailable during the conversion process.
• When you turn on extended data types, you can't change the DB instance back to use the standard
size for data types. After a DB instance is converted to use extended data types, if you set the
MAX_STRING_SIZE parameter back to STANDARD it results in the incompatible-parameters
status.
• When you restore a DB instance that uses extended data types, you must specify a parameter group
with the MAX_STRING_SIZE parameter set to EXTENDED. During restore, if you specify the default
parameter group or any other parameter group with MAX_STRING_SIZE set to STANDARD it results in
the incompatible-parameters status.
• When the DB instance status is incompatible-parameters because of the MAX_STRING_SIZE
setting, the DB instance remains unavailable until you set the MAX_STRING_SIZE parameter to
EXTENDED and reboot the DB instance.
• We recommend that you don't turn on extended data types for Oracle DB instances running on the
t2.micro DB instance class.
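As a sketch, setting the parameter from the AWS CLI could look like the following; the parameter group name my-oracle-pg is an assumed example:

```shell
# MAX_STRING_SIZE is a static parameter, so the change takes effect
# only after the DB instance is rebooted.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-oracle-pg \
    --parameters "ParameterName=MAX_STRING_SIZE,ParameterValue=EXTENDED,ApplyMethod=pending-reboot"
```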
1. Set the MAX_STRING_SIZE parameter to EXTENDED in a parameter group. To set the parameter, you can either create a new parameter group or modify an existing parameter group.
For more information, see Working with parameter groups (p. 347).
2. Create a new RDS for Oracle DB instance.
For more information, see Creating an Amazon RDS DB instance (p. 300).
3. Associate the parameter group with MAX_STRING_SIZE set to EXTENDED with the DB instance.
For more information, see Creating an Amazon RDS DB instance (p. 300).
The amount of time it takes to convert the data depends on the DB instance class, the database size, and
the time of the last DB snapshot. To reduce downtime, consider taking a snapshot immediately before
rebooting. This shortens the time of the backup that occurs during the conversion workflow.
Note
After you turn on extended data types, you can't perform a point-in-time restore to a time
during the conversion. You can restore to the time immediately before the conversion or after
the conversion.
If there are invalid objects in the database, Amazon RDS tries to recompile them. The conversion
to extended data types can fail if Amazon RDS can't recompile an invalid object. The snapshot
enables you to restore the database if there is a problem with the conversion. Always check for
invalid objects before conversion and fix or drop those invalid objects. For production databases, we
recommend testing the conversion process on a copy of your DB instance first.
2. Set the MAX_STRING_SIZE parameter to EXTENDED in a parameter group. To set the parameter, you can either create a new parameter group or modify an existing parameter group.
For more information, see Working with parameter groups (p. 347).
3. Modify the DB instance to associate it with the parameter group with MAX_STRING_SIZE set to
EXTENDED.
For more information, see Modifying an Amazon RDS DB instance (p. 401).
4. Reboot the DB instance for the parameter change to take effect.
Importing data into Oracle
For example, you can use the following tools, depending on your requirements:
Important
Before you use the preceding migration techniques, we recommend that you back up your
database. After you import the data, you can back up your RDS for Oracle DB instances by
creating snapshots. Later, you can restore the snapshots. For more information, see Backing up
and restoring (p. 590).
For many database engines, ongoing replication can continue until you are ready to switch over to the
target database. You can use AWS DMS to migrate to RDS for Oracle from either the same database
engine or a different engine. If you migrate from a different database engine, you can use the AWS
Schema Conversion Tool to migrate schema objects that AWS DMS doesn't migrate.
Topics
• Importing using Oracle SQL Developer (p. 1947)
• Importing using Oracle Data Pump (p. 1948)
• Importing using Oracle Export/Import (p. 1959)
• Importing using Oracle SQL*Loader (p. 1959)
• Migrating with Oracle materialized views (p. 1960)
• Migrating using Oracle transportable tablespaces (p. 1962)
After you install SQL Developer, you can use it to connect to your source and target databases. Use the
Database Copy command on the Tools menu to copy your data to your Amazon RDS instance.
Oracle also has documentation on how to migrate from other databases, including MySQL and SQL
Server. For more information, see http://www.oracle.com/technetwork/database/migration in the
Oracle documentation.
Importing using Oracle Data Pump
The examples in this section show one way to import data into an Oracle database, but Oracle Data Pump supports other techniques. For more information, see the Oracle Database documentation.
The examples in this section use the DBMS_DATAPUMP package. You can accomplish the same tasks
using the Oracle Data Pump command line utilities impdp and expdp. You can install these utilities
on a remote host as part of an Oracle Client installation, including Oracle Instant Client. For more
information, see How do I use Oracle Instant Client to run Data Pump Import or Export for my Amazon
RDS for Oracle DB instance?
Topics
• Overview of Oracle Data Pump (p. 1948)
• Importing data with Oracle Data Pump and an Amazon S3 bucket (p. 1950)
• Importing data with Oracle Data Pump and a database link (p. 1954)
You can use Oracle Data Pump for the following scenarios:
• Import data from an Oracle database, either on-premises or on an Amazon EC2 instance, to an RDS for
Oracle DB instance.
• Import data from an RDS for Oracle DB instance to an Oracle database, either on-premises or on an
Amazon EC2 instance.
• Import data between RDS for Oracle DB instances, for example, to migrate data from EC2-Classic to
VPC.
To download Oracle Data Pump utilities, see Oracle database software downloads on the Oracle
Technology Network website. For compatibility considerations when migrating between versions of
Oracle Database, see the Oracle Database documentation.
2. Upload your dump file to your destination RDS for Oracle DB instance. You can transfer using an
Amazon S3 bucket or by using a database link between the two databases.
3. Import the data from your dump file into your RDS for Oracle DB instance.
• Perform imports in schema or table mode to import specific schemas and objects.
• Limit the schemas you import to those required by your application.
• Don't import in full mode or import schemas for system-maintained components.
Because RDS for Oracle doesn't allow access to SYS or SYSDBA administrative users, these actions
might damage the Oracle data dictionary and affect the stability of your database.
• When loading large amounts of data, do the following:
1. Transfer the dump file to the target RDS for Oracle DB instance.
2. Take a DB snapshot of your instance.
3. Test the import to verify that it succeeds.
If database components are invalidated, you can delete the DB instance and re-create it from the DB
snapshot. The restored DB instance includes any dump files staged on the DB instance when you took
the DB snapshot.
• Don't import dump files that were created using the Oracle Data Pump export parameters
TRANSPORT_TABLESPACES, TRANSPORTABLE, or TRANSPORT_FULL_CHECK. RDS for Oracle DB
instances don't support importing these dump files.
• Don't import dump files that contain Oracle Scheduler objects in SYS, SYSTEM, RDSADMIN, RDSSEC,
and RDS_DATAGUARD, and belong to the following categories:
• Jobs
• Programs
• Schedules
• Chains
• Rules
• Evaluation contexts
• Rule sets
RDS for Oracle DB instances don't support importing these dump files.
• To exclude unsupported Oracle Scheduler objects, use additional directives during the Data Pump
export. If you use DBMS_DATAPUMP, you can add an additional METADATA_FILTER before calling DBMS_DATAPUMP.START_JOB:
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
If you use expdp, create a parameter file that contains the exclude directive shown in the following
example. Then use PARFILE=parameter_file with your expdp command.
exclude=procobj:"IN
(SELECT NAME FROM sys.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)"
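For example, an expdp invocation using such a parameter file might look like the following; the connection string, file names, and schema name are assumed examples:

```shell
# exclude.par contains the exclude=procobj directive shown above.
expdp admin@my-oracle-db SCHEMAS=SCHEMA_1 DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=sample.dmp LOGFILE=sample_exp.log PARFILE=exclude.par
```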
1. Export data on the source database using the Oracle DBMS_DATAPUMP package.
2. Place the dump file in an Amazon S3 bucket.
3. Download the dump file from the Amazon S3 bucket to the DATA_PUMP_DIR directory on the target
RDS for Oracle DB instance.
4. Import the data from the copied dump file into the RDS for Oracle DB instance using the package
DBMS_DATAPUMP.
Topics
• Requirements for Importing data with Oracle Data Pump and an Amazon S3 bucket (p. 1950)
• Step 1: Grant privileges to the database user on the RDS for Oracle target DB instance (p. 1951)
• Step 2: Export data into a dump file using DBMS_DATAPUMP (p. 1951)
• Step 3: Upload the dump file to your Amazon S3 bucket (p. 1952)
• Step 4: Download the dump file from your Amazon S3 bucket to your target DB instance (p. 1953)
• Step 5: Import your dump file into your target DB instance using DBMS_DATAPUMP (p. 1953)
• Step 6: Clean up (p. 1954)
Requirements for Importing data with Oracle Data Pump and an Amazon S3
bucket
The process has the following requirements:
• Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket is in
the same AWS Region as the DB instance. For instructions, see Create a bucket in the Amazon Simple
Storage Service Getting Started Guide.
• The object that you upload into the Amazon S3 bucket must be 5 TB or less. For more information
about working with objects in Amazon S3, see the Amazon Simple Storage Service User Guide.
Note
If your dump file exceeds 5 TB, you can run the Oracle Data Pump export with the parallel
option. This operation spreads the data into multiple dump files so that you do not exceed the
5 TB limit for individual files.
• You must prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in
Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992).
• You must ensure that you have enough storage space to store the dump file on the source instance
and the target DB instance.
Note
This process imports a dump file into the DATA_PUMP_DIR directory, a preconfigured directory
on all Oracle DB instances. This directory is located on the same storage volume as your data
files. When you import the dump file, the existing Oracle data files use more space. Thus, you
should make sure that your DB instance can accommodate that additional use of space. The
imported dump file is not automatically deleted or purged from the DATA_PUMP_DIR directory.
To remove the imported dump file, use UTL_FILE.FREMOVE, found on the Oracle website.
Step 1: Grant privileges to the database user on the RDS for Oracle target DB
instance
In this step, you create the schemas into which you plan to import data and grant the users necessary
privileges.
To create users and grant necessary privileges on the RDS for Oracle target instance
1. Use SQL*Plus or Oracle SQL Developer to log in as the master user to the RDS for Oracle DB instance
into which the data will be imported. For information about connecting to a DB instance, see
Connecting to your RDS for Oracle DB instance (p. 1806).
2. Create the required tablespaces before you import the data. For more information, see Creating and
sizing tablespaces (p. 1870).
3. Create the user account and grant the necessary permissions and roles if the user account into which
the data is imported doesn't exist. If you plan to import data into multiple user schemas, create each
user account and grant the necessary privileges and roles to it.
For example, the following SQL statements create a new user and grant the necessary permissions
and roles to import the data into the schema owned by this user. Replace schema_1 with the name
of your schema in this step and in the following steps.
Note
Specify a password other than the prompt shown here as a security best practice.
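The statements were not reproduced here; a minimal sketch, with schema_1, the password, and the quota as assumed examples, is:

```sql
-- Create the user that will own the imported schema
-- (use your own password, not this placeholder).
CREATE USER schema_1 IDENTIFIED BY my_password;

-- Grant the privileges and roles needed for the import.
GRANT CREATE SESSION, RESOURCE TO schema_1;
ALTER USER schema_1 QUOTA 100M ON users;
```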
The preceding statements grant the new user the CREATE SESSION privilege and the RESOURCE
role. You might need additional privileges and roles depending on the database objects that you
import.
1. Use SQL*Plus or Oracle SQL Developer to connect to the source RDS for Oracle DB instance with
an administrative user. If the source database is an RDS for Oracle DB instance, connect with the
Amazon RDS master user.
2. Export the data by calling DBMS_DATAPUMP procedures.
The following script exports the SCHEMA_1 schema into a dump file named sample.dmp in the
DATA_PUMP_DIR directory. Replace SCHEMA_1 with the name of the schema that you want to
export.
DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
job_name => null
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl ,
filename => 'sample.dmp' ,
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_exp.log',
directory => 'DATA_PUMP_DIR' ,
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
Note
Data Pump starts jobs asynchronously. For information about monitoring a Data Pump job,
see Monitoring job status in the Oracle documentation.
3. (Optional) View the contents of the export log by calling the
rdsadmin.rds_file_util.read_text_file procedure. For more information, see Reading files
in a DB instance directory (p. 1927).
The following example uploads all of the files in the DATA_PUMP_DIR directory to an Amazon S3 bucket named myS3bucket.
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => 'myS3bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information, see
Uploading files from your RDS for Oracle DB instance to an Amazon S3 bucket (p. 2002).
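You can monitor the transfer by reading the task's log file; the task ID shown as task-id below is a placeholder for the value returned by the SELECT statement:

```sql
-- View the output of the S3 transfer task (replace task-id with the
-- returned task ID).
SELECT text FROM TABLE(
    rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-task-id.log'));
```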
Step 4: Download the dump file from your Amazon S3 bucket to your target DB
instance
Perform this step using the Amazon RDS procedure
rdsadmin.rdsadmin_s3_tasks.download_from_s3. When you download a file to a directory, the
procedure download_from_s3 skips the download if an identically named file already exists in the
directory. To remove a file from the download directory, use UTL_FILE.FREMOVE, found on the Oracle
website.
1. Start SQL*Plus or Oracle SQL Developer and log in as the master user on your target RDS for Oracle DB instance.
2. Download the dump file using the Amazon RDS procedure
rdsadmin.rdsadmin_s3_tasks.download_from_s3.
The following example downloads all files from an Amazon S3 bucket named myS3bucket to the
directory DATA_PUMP_DIR.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'myS3bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information,
see Downloading files from an Amazon S3 bucket to an Oracle DB instance (p. 2004).
Step 5: Import your dump file into your target DB instance using
DBMS_DATAPUMP
Use DBMS_DATAPUMP to import the schema into your RDS for Oracle DB instance. Additional options
such as METADATA_REMAP might be required.
1. Start SQL*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.
2. Import the data by calling DBMS_DATAPUMP procedures.
The following example imports the SCHEMA_1 data from sample_copied.dmp into your target DB
instance.
DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => null);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_copied.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_imp.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of
the import log by using the rdsadmin.rds_file_util.read_text_file procedure. For
more information, see Reading files in a DB instance directory (p. 1927).
3. Verify the data import by listing the schema tables on your target DB instance.
For example, the following query returns the number of tables for SCHEMA_1.
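The query was not reproduced here; one form that returns the table count is:

```sql
-- Count the tables imported into the schema.
SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER = 'SCHEMA_1';
```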
Step 6: Clean up
After the data has been imported, you can delete the files that you don't want to keep.
1. Start SQL*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.
2. List the files in DATA_PUMP_DIR using the following command.
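The listing command was not reproduced here; a form that works through the rdsadmin file utilities is:

```sql
-- List the files in the DATA_PUMP_DIR directory, newest last.
SELECT * FROM TABLE(
    rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
 ORDER BY mtime;
```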
3. To delete files in DATA_PUMP_DIR that you no longer require, use the following command.
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');
For example, the following command deletes the file named sample_copied.dmp.
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp');
1. Connect to a source Oracle database, which can be an on-premises database, an Amazon EC2 instance, or an RDS for Oracle DB instance.
2. Export data using the DBMS_DATAPUMP package.
3. Use DBMS_FILE_TRANSFER.PUT_FILE to copy the dump file from the Oracle database to the
DATA_PUMP_DIR directory on the target RDS for Oracle DB instance that is connected using a
database link.
4. Import the data from the copied dump file into the RDS for Oracle DB instance using the
DBMS_DATAPUMP package.
The preceding steps describe the import process using Oracle Data Pump and the DBMS_FILE_TRANSFER package.
Topics
• Requirements for importing data with Oracle Data Pump and a database link (p. 1955)
• Step 1: Grant privileges to the user on the RDS for Oracle target DB instance (p. 1955)
• Step 2: Grant privileges to the user on the source database (p. 1956)
• Step 3: Create a dump file using DBMS_DATAPUMP (p. 1956)
• Step 4: Create a database link to the target DB instance (p. 1957)
• Step 5: Copy the exported dump file to the target DB instance using
DBMS_FILE_TRANSFER (p. 1957)
• Step 6: Import the data file to the target DB instance using DBMS_DATAPUMP (p. 1958)
• Step 7: Clean up (p. 1958)
Requirements for importing data with Oracle Data Pump and a database link
The process has the following requirements:
• You must have execute privileges on the DBMS_FILE_TRANSFER and DBMS_DATAPUMP packages.
• You must have write privileges to the DATA_PUMP_DIR directory on the source DB instance.
• You must ensure that you have enough storage space to store the dump file on the source instance
and the target DB instance.
Note
This process imports a dump file into the DATA_PUMP_DIR directory, a preconfigured directory
on all Oracle DB instances. This directory is located on the same storage volume as your data
files. When you import the dump file, the existing Oracle data files use more space. Thus, you
should make sure that your DB instance can accommodate that additional use of space. The
imported dump file is not automatically deleted or purged from the DATA_PUMP_DIR directory.
To remove the imported dump file, use the UTL_FILE.FREMOVE procedure, which is documented on the Oracle website.
Step 1: Grant privileges to the user on the RDS for Oracle target DB instance
To grant privileges to the user on the RDS for Oracle target DB instance, take the following steps:
1. Use SQL*Plus or Oracle SQL Developer to connect to the RDS for Oracle DB instance into which you
intend to import the data. Connect as the Amazon RDS master user. For information about connecting
to the DB instance, see Connecting to your RDS for Oracle DB instance (p. 1806).
2. Create the required tablespaces before you import the data. For more information, see Creating and
sizing tablespaces (p. 1870).
3. If the user account into which the data is imported doesn't exist, create the user account and grant the
necessary permissions and roles. If you plan to import data into multiple user schemas, create each
user account and grant the necessary privileges and roles to it.
For example, the following commands create a new user named schema_1 and grant the necessary
permissions and roles to import the data into the schema for this user.
Note
Specify a password other than the prompt shown here as a security best practice.
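The commands themselves were not reproduced here. Based on the description that follows (CREATE SESSION privilege and the RESOURCE role), they were likely similar to this sketch, in which the password and the tablespace quota are placeholders:

```sql
CREATE USER schema_1 IDENTIFIED BY <your_password>;
GRANT CREATE SESSION, RESOURCE TO schema_1;
ALTER USER schema_1 QUOTA 100M ON users;
```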
The preceding example grants the new user the CREATE SESSION privilege and the RESOURCE role.
Additional privileges and roles might be required depending on the database objects that you import.
Note
Replace schema_1 with the name of your schema in this step and in the following steps.
Step 2: Grant privileges to the user on the source database
The following commands create a new user on the source database and grant the necessary permissions.
Note
Specify a password other than the prompt shown here as a security best practice.
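The commands were not reproduced here. A minimal sketch, assuming a hypothetical user named export_user and the requirements stated earlier (execute privileges on DBMS_DATAPUMP and DBMS_FILE_TRANSFER, and write access to DATA_PUMP_DIR), might look like this:

```sql
CREATE USER export_user IDENTIFIED BY <your_password>;
GRANT CREATE SESSION TO export_user;
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO export_user;
GRANT EXECUTE ON DBMS_DATAPUMP TO export_user;
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO export_user;
```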
Step 3: Create a dump file using DBMS_DATAPUMP
1. Use SQL*Plus or Oracle SQL Developer to connect to the source Oracle instance with an administrative
user or with the user you created in step 2. If the source database is an Amazon RDS for Oracle DB
instance, connect with the Amazon RDS master user.
2. Create a dump file using the Oracle Data Pump utility.
The following script creates a dump file named sample.dmp in the DATA_PUMP_DIR directory.
DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => null
  );
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'sample.dmp',
    directory => 'DATA_PUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file
  );
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'sample_exp.log',
    directory => 'DATA_PUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_log_file
  );
  DBMS_DATAPUMP.METADATA_FILTER(
    v_hdnl,
    'SCHEMA_EXPR',
    'IN (''SCHEMA_1'')'
  );
  DBMS_DATAPUMP.METADATA_FILTER(
    v_hdnl,
    'EXCLUDE_NAME_EXPR',
    q'[IN (SELECT NAME FROM sys.OBJ$
           WHERE TYPE# IN (66,67,74,79,59,62,46)
           AND OWNER# IN
             (SELECT USER# FROM SYS.USER$
              WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
             )
          )]',
    'PROCOBJ'
  );
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of the
export log by using the rdsadmin.rds_file_util.read_text_file procedure. For more
information, see Reading files in a DB instance directory (p. 1927).
Step 4: Create a database link to the target DB instance
Perform this step connected with the same user account as the previous step.
If you are creating a database link between two DB instances inside the same VPC or peered VPCs, the
two DB instances should have a valid route between them. The security group of each DB instance must
allow ingress to and egress from the other DB instance. The security group inbound and outbound rules
can refer to security groups from the same VPC or a peered VPC. For more information, see Adjusting
database links for use with DB instances in a VPC (p. 1879).
The following command creates a database link named to_rds that connects to the Amazon RDS
master user at the target DB instance.
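The command itself was not reproduced here. A sketch of such a database link follows; the master user name, password, endpoint, and SID are placeholders for your target DB instance's values:

```sql
CREATE DATABASE LINK to_rds
  CONNECT TO <master_user> IDENTIFIED BY <your_password>
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<target-endpoint>)(PORT=1521))
          (CONNECT_DATA=(SID=<target_sid>)))';
```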
Step 5: Copy the exported dump file to the target DB instance using
DBMS_FILE_TRANSFER
Use DBMS_FILE_TRANSFER to copy the dump file from the source database instance to the target DB
instance. The following script copies a dump file named sample.dmp from the source instance to a target
database link named to_rds (created in the previous step).
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'sample.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'sample_copied.dmp',
    destination_database         => 'to_rds');
END;
/
Step 6: Import the data file to the target DB instance using DBMS_DATAPUMP
Use Oracle Data Pump to import the schema in the DB instance. Additional options such as
METADATA_REMAP might be required.
Connect to the DB instance with the Amazon RDS master user account to perform the import.
DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => null);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_copied.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file );
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_imp.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of the
import log by using the rdsadmin.rds_file_util.read_text_file procedure. For more
information, see Reading files in a DB instance directory (p. 1927).
You can verify the data import by viewing the user's tables on the DB instance. For example, the
following query returns the number of tables for schema_1.
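Such a query might count the schema's tables in the data dictionary, as in this sketch:

```sql
SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER = 'SCHEMA_1';
```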
Step 7: Clean up
After the data has been imported, you can delete the files that you don't want to keep. You can list the
files in DATA_PUMP_DIR using the following command.
To delete files in DATA_PUMP_DIR that you no longer require, use the following command.
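The listing command was not reproduced here. One way to list the directory is the Amazon RDS rdsadmin.rds_file_util.listdir table function, as in this sketch:

```sql
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
ORDER BY mtime;
```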
For example, the following command deletes the file named "sample_copied.dmp".
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp');
The import process creates the necessary schema objects. Thus, you don't need to run a script to create
the objects beforehand.
Importing using Oracle Export/Import
The easiest way to install the Oracle Export and Import utilities is to install the Oracle Instant Client.
To download the software, go to https://www.oracle.com/database/technologies/instant-client.html.
For documentation, see Instant Client for SQL*Loader, Export, and Import in the Oracle Database Utilities
manual.
1. Export the tables from the source database using the exp command.
The following command exports the tables named tab1, tab2, and tab3. The dump file is
exp_file.dmp.
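The command was not reproduced here. A sketch of such an exp invocation, with a placeholder user name and connect string, might be:

```shell
exp <username>@<source_db> FILE=exp_file.dmp TABLES=(tab1,tab2,tab3) LOG=exp_file.log
```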
The export creates a binary dump file that contains both the schema and data for the specified
tables.
2. Import the schema and data into a target database using the imp command.
The following command imports the tables tab1, tab2, and tab3 from dump file exp_file.dmp.
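The command was not reproduced here. A sketch of such an imp invocation, with placeholder credentials and schema names, might be:

```shell
imp <username>@<target_db> FROMUSER=<schema> TOUSER=<schema> TABLES=(tab1,tab2,tab3) FILE=exp_file.dmp LOG=imp_file.log
```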
Export and Import have other variations that might be better suited to your requirements. See the
Oracle Database documentation for full details.
Importing using Oracle SQL*Loader
The easiest way to install Oracle SQL*Loader is to install the Oracle Instant Client. To download the
software, go to https://www.oracle.com/database/technologies/instant-client.html. For documentation,
see Instant Client for SQL*Loader, Export, and Import in the Oracle Database Utilities manual.
2. On the target RDS for Oracle DB instance, create a destination table for loading the data. The clause
WHERE 1=2 ensures that you copy the structure of ALL_OBJECTS, but don't copy any rows.
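A sketch of such a statement follows; the table name customer_0 is taken from the surrounding steps:

```sql
CREATE TABLE customer_0 AS (SELECT * FROM ALL_OBJECTS WHERE 1=2);
```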
3. Export the data from the source database to a text file. The following example uses SQL*Plus. For
your data, you will likely need to generate a script that does the export for all the objects in the
database.
SET LINESIZE 800 HEADING OFF FEEDBACK OFF ARRAY 5000 PAGESIZE 0
SPOOL customer_0.out
SET MARKUP HTML PREFORMAT ON
SET COLSEP ','
SELECT * FROM customer_0;  -- the query whose output is spooled; select the columns you need
SPOOL OFF
4. Create a control file to describe the data. You might need to write a script to perform this step.
If needed, copy the files generated by the preceding code to a staging area, such as an Amazon EC2
instance.
5. Import the data using SQL*Loader with the appropriate user name and password for the target
database.
Migrating with Oracle materialized views
Before you can migrate using materialized views, make sure that you meet the following requirements:
• Configure access from the target database to the source database. In the following example, access
rules were enabled on the source database to allow the RDS for Oracle target database to connect to
the source over SQL*Net.
• Create a database link from the RDS for Oracle DB instance to the source database.
1. Create a user account on both source and RDS for Oracle target instances that can authenticate with
the same password. The following example creates a user named dblink_user.
Note
Specify a password other than the prompt shown here as a security best practice.
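The commands were not reproduced here. A sketch, to be run on both instances with the same password, might look like the following; the SELECT ANY TABLE grant is an assumption about what the replication user needs on the source:

```sql
CREATE USER dblink_user IDENTIFIED BY <your_password>;
GRANT CREATE SESSION TO dblink_user;
GRANT SELECT ANY TABLE TO dblink_user;
```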
2. Create a database link from the RDS for Oracle target instance to the source instance using your
newly created user.
Note
Specify a password other than the prompt shown here as a security best practice.
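The command was not reproduced here. A sketch follows; the link name remote_site, host, port, and SID are placeholders for your source instance's values:

```sql
CREATE DATABASE LINK remote_site
  CONNECT TO dblink_user IDENTIFIED BY <your_password>
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<source-host>)(PORT=1521))
          (CONNECT_DATA=(SID=<source_sid>)))';
```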
3. Test the link:
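One way to test the link is to query the source instance through it, as in this sketch; the link name is a placeholder:

```sql
SELECT * FROM V$INSTANCE@<db_link_name>;
```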
4. Create a sample table with primary key and materialized view log on the source instance.
ALTER TABLE customer_0 ADD CONSTRAINT pk_customer_0 PRIMARY KEY (id) USING INDEX;
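The ALTER TABLE statement above assumes that the table customer_0 already exists on the source. The materialized view log that this step calls for might then be created as follows:

```sql
CREATE MATERIALIZED VIEW LOG ON customer_0;
```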
6. On the target RDS for Oracle DB instance, refresh the materialized view.
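Before the refresh can run, a fast-refresh materialized view over the database link must exist on the target. The following sketch creates one and then refreshes it; the link name is a placeholder:

```sql
CREATE MATERIALIZED VIEW customer_0
  BUILD IMMEDIATE REFRESH FAST
  AS SELECT * FROM customer_0@<db_link_name>;

EXEC DBMS_MVIEW.REFRESH('CUSTOMER_0', 'f');
```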
7. Drop the materialized view and include the PRESERVE TABLE clause to retain the materialized view
container table and its contents.
The retained table has the same name as the dropped materialized view.
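A sketch of such a DROP statement:

```sql
DROP MATERIALIZED VIEW customer_0 PRESERVE TABLE;
```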
Migrating using Oracle transportable tablespaces
Topics
• Overview of Oracle transportable tablespaces (p. 1962)
• Phase 1: Set up your source host (p. 1964)
• Phase 2: Prepare the full tablespace backup (p. 1965)
• Phase 3: Make and transfer incremental backups (p. 1967)
• Phase 4: Transport the tablespaces (p. 1967)
• Phase 5: Validate the transported tablespaces (p. 1970)
• Phase 6: Clean up leftover files (p. 1970)
Topics
• Advantages and disadvantages of transportable tablespaces (p. 1962)
• Limitations for transportable tablespaces (p. 1963)
• Prerequisites for transportable tablespaces (p. 1963)
Note
Linux is fully tested and supported. Not all UNIX variations have been tested.
If you use transportable tablespaces, you can transport data using either Amazon S3 or Amazon EFS:
• When you use S3, you download RMAN backups to EBS storage attached to your DB instance. The
files remain in your EBS storage during the import. After the import, you can free up this space, which
remains allocated to your DB instance.
• When you use EFS, your backups remain in the EFS file system for the duration of the import. You
can remove the files afterward. In this technique, you don't need to provision EBS storage for your DB
instance. For this reason, we recommend using Amazon EFS instead of S3. For more information, see
Amazon EFS integration (p. 2020).
The primary disadvantage of transportable tablespaces is that you need relatively advanced knowledge
of Oracle Database. For more information, see Transporting Tablespaces Between Databases in the
Oracle Database Administrator’s Guide.
• Neither the source nor the target database can use Standard Edition 2 (SE2); only Enterprise Edition is
supported.
• You can't migrate data from an RDS for Oracle DB instance using transportable tablespaces. You can
only use transportable tablespaces to migrate data to an RDS for Oracle DB instance.
• The Windows operating system isn't supported.
• You can't transport tablespaces into a database at a lower release level. The target database must be
at the same or later release level as the source database. For example, you can’t transport tablespaces
from Oracle Database 21c into Oracle Database 19c.
• You can't transport administrative tablespaces such as SYSTEM and SYSAUX.
• You can't transport tablespaces that are encrypted or use encrypted columns.
• If you transfer files using Amazon S3, the maximum supported file size is 5 TiB.
• If the source database uses Oracle options such as Spatial, you can't transport tablespaces unless the
same options are configured on the target database.
• You can't transport tablespaces into an RDS for Oracle DB instance in an Oracle replica configuration.
As a workaround, you can delete all replicas, transport the tablespaces, and then recreate the replicas.
• Review the requirements for transportable tablespaces described in the following documents in My
Oracle Support:
• Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID
2471245.1)
• Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where
Applicable (Doc ID 1454872.1)
• Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Doc ID
1166564.1)
• Make sure that the transportable tablespace feature is enabled on your target DB instance. The feature
is enabled only if you don't get an ORA-20304 error when you run the following query:
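The query was not reproduced here. One lightweight check, offered as an assumption, is to call one of the RDS transport table functions, which raises ORA-20304 when the feature is disabled:

```sql
SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
```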
If the transportable tablespace feature isn't enabled, reboot your DB instance. For more information,
see Rebooting a DB instance (p. 436).
• If you plan to transfer files using Amazon S3, do the following:
• Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket
is in the same AWS Region as your DB instance. For instructions, see Create a bucket in the Amazon
Simple Storage Service Getting Started Guide.
• Prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in
Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992).
• If you plan to transfer files using Amazon EFS, make sure that you have configured EFS according to
the instructions in Amazon EFS integration (p. 2020).
• We strongly recommend that you turn on automatic backups in your target DB instance. Because
the metadata import step (p. 1969) can potentially fail, it's important to be able to restore your DB
instance to its state before the import, thereby avoiding the necessity to back up, transfer, and import
your tablespaces again.
4. Set up the transportable tablespace utility as described in Oracle Support note 2471245.1.
The setup includes editing the xtt.properties file on your source host. The following sample
xtt.properties file specifies backups of three tablespaces in the /dsk1/backups directory.
These are the tablespaces that you intend to transport to your target DB instance.
#linux system
platformid=13
#list of tablespaces to transport
tablespaces=TBS1,TBS2,TBS3
#location where backup will be generated
src_scratch_location=/dsk1/backups
#RMAN command for performing backup
usermantransport=1
Topics
• Step 1: Back up the tablespaces on your source host (p. 1965)
• Step 2: Transfer the backup files to your target DB instance (p. 1965)
• Step 3: Import the tablespaces on your target DB instance (p. 1966)
1. If your tablespaces are in read-only mode, log in to your source database as a user with the ALTER
TABLESPACE privilege, and place your tablespaces in read/write mode. Otherwise, skip to the next
step.
The following example places tbs1, tbs2, and tbs3 in read/write mode.
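The statements were not reproduced here; they would be of this form:

```sql
ALTER TABLESPACE tbs1 READ WRITE;
ALTER TABLESPACE tbs2 READ WRITE;
ALTER TABLESPACE tbs3 READ WRITE;
```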
2. Back up your tablespaces using the xttdriver.pl script. Optionally, you can specify --debug to
run the script in debug mode.
export TMPDIR=location_of_log_files
cd location_of_xttdriver.pl
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
• If the source and target hosts share an Amazon EFS file system, use an operating system utility such
as cp to copy your backup files and the res.txt file from your scratch location to a shared directory.
Then skip to Step 3: Import the tablespaces on your target DB instance (p. 1966).
• If you need to stage your backups to an Amazon S3 bucket, complete the following steps.
Step 2.3: Download the backups from your Amazon S3 bucket to your target DB instance
In this step, you use the procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3 to download
your backups to your RDS for Oracle DB instance.
1. Start SQL*Plus or Oracle SQL Developer and log in to your RDS for Oracle DB instance.
2. Download the backups from the Amazon S3 bucket to your target DB instance by using the
Amazon RDS procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3. The
following example downloads all of the files from an Amazon S3 bucket named mys3bucket to the
DATA_PUMP_DIR directory.
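The example statement was not reproduced here; a sketch of the call, returning the task ID, might be:

```sql
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'mys3bucket',
         p_directory_name => 'DATA_PUMP_DIR')
       AS task_id
FROM   DUAL;
```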
The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information,
see Downloading files from an Amazon S3 bucket to an Oracle DB instance (p. 2004).
1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.
2. Run the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces,
specifying the tablespaces to import and the directory containing the backups.
The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.
VARIABLE task_id VARCHAR2(80)
BEGIN
  :task_id := rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/
PRINT task_id
Note
For long-running operations, you can also query V$SESSION_LONGOPS, V$RMAN_STATUS,
and V$RMAN_OUTPUT.
4. View the log of the completed import by using the task ID from the previous step.
Make sure that the import succeeded before continuing to the next step.
The steps are the same as in Phase 2: Prepare the full tablespace backup (p. 1965), except that the
import step is optional.
Topics
• Step 1: Back up your read-only tablespaces (p. 1967)
• Step 2: Export tablespace metadata on your source host (p. 1968)
• Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB instance (p. 1968)
• Step 4: Import the tablespaces on your target DB instance (p. 1968)
• Step 5: Import tablespace metadata on your target DB instance (p. 1969)
The following example places tbs1, tbs2, and tbs3 in read-only mode.
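The statements were not reproduced here; they would be of this form:

```sql
ALTER TABLESPACE tbs1 READ ONLY;
ALTER TABLESPACE tbs2 READ ONLY;
ALTER TABLESPACE tbs3 READ ONLY;
```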
expdp username/pwd \
dumpfile=xttdump.dmp \
directory=DATA_PUMP_DIR \
statistics=NONE \
transport_tablespaces=TBS1,TBS2,TBS3 \
transport_full_check=y \
logfile=tts_export.log
If DATA_PUMP_DIR is a shared directory in Amazon EFS, skip to Step 4: Import the tablespaces on your
target DB instance (p. 1968).
Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB
instance
If you are using Amazon S3 to stage your tablespace backups and Data Pump export file, complete the
following steps.
Step 3.1: Upload the backups and dump file from your source host to your Amazon S3 bucket
Upload your backup and dump files from your source host to your Amazon S3 bucket. For more
information, see Uploading objects in the Amazon Simple Storage Service User Guide.
Step 3.2: Download the backups and dump file from your Amazon S3 bucket to your target DB
instance
In this step, you use the procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3 to download
your backups and dump file to your RDS for Oracle DB instance. Follow the steps in Step 2.3: Download
the backups from your Amazon S3 bucket to your target DB instance (p. 1966).
1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.
The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.
VARIABLE task_id VARCHAR2(80)
BEGIN
  :task_id := rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/
PRINT task_id
Note
For long-running operations, you can also query V$SESSION_LONGOPS, V$RMAN_STATUS,
and V$RMAN_OUTPUT.
4. View the log of the completed import by using the task ID from the previous step.
Make sure that the import succeeded before continuing to the next step.
5. Take a manual DB snapshot by following the instructions in Creating a DB snapshot (p. 613).
Import the Data Pump metadata into your RDS for Oracle DB instance
1. Start your Oracle SQL client and log in to your target DB instance as the master user.
2. Create the users that own schemas in your transported tablespaces, if these users don't already exist.
3. Import the metadata, specifying the name of the dump file and its directory location.
BEGIN
rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
END;
/
4. (Optional) Query the transportable tablespace history table to see the status of the metadata
import.
The following example lists the contents of the BDUMP directory and then queries the import log.
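The statements were not reproduced here. A sketch using the Amazon RDS file utilities follows; the log file name is a placeholder:

```sql
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP'))
ORDER BY mtime;

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file(
  p_directory => 'BDUMP',
  p_filename  => '<import_log_name>.log'));
```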
1. Start SQL*Plus or SQL Developer and log in to your target DB instance as the master user.
2. Validate the tablespaces using the procedure
rdsadmin.rdsadmin_rman_util.validate_tablespace.
SET SERVEROUTPUT ON
BEGIN
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS1',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS2',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS3',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
END;
/
1. Use the rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files procedure
to list data files that were orphaned after a tablespace import, and then use the
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import procedure to delete them. For the
syntax and semantics of these procedures, see Listing orphaned files after a tablespace import (p. 1934)
and Deleting orphaned data files after a tablespace import (p. 1935).
2. If you imported tablespaces but didn't import metadata for these tablespaces, you can delete the
orphaned data files as follows:
a. List the orphaned data files that you need to delete. The following example runs the procedure
rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files.
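The invocation itself was not reproduced here. Because it is a table function, it can be queried as in this sketch:

```sql
SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
```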
FILENAME FILESIZE
-------------- ---------
datafile_7.dbf 104865792
datafile_8.dbf 104865792
BEGIN
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import('DATA_PUMP_DIR');
END;
/
The cleanup operation generates a log file that uses the name format
rds-xtts-delete_xtts_orphaned_files-YYYY-MM-DD.HH24-MI-SS.FF.log in the BDUMP directory.
c. Read the log file generated in the previous step. The following example reads log
rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log.
SELECT *
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
  p_directory => 'BDUMP',
  p_filename  => 'rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log'));
TEXT
--------------------------------------------------------------------------------
orphan transported datafile datafile_7.dbf deleted.
orphan transported datafile datafile_8.dbf deleted.
3. If you imported tablespaces and imported metadata for these tablespaces, but you encountered
compatibility errors or other Oracle Data Pump issues, clean up the partially transported data files
as follows:
a. List the tablespaces that contain partially transported data files by querying
DBA_TABLESPACES.
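The query was not reproduced here. One plausible form, offered as an assumption, uses the PLUGGED_IN column of DBA_TABLESPACES:

```sql
SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE PLUGGED_IN = 'YES';
```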
TABLESPACE_NAME
--------------------------------------------------------------------------------
TBS_3
Working with Oracle replicas
Topics
• Overview of RDS for Oracle replicas (p. 1973)
• Requirements and considerations for RDS for Oracle replicas (p. 1974)
• Preparing to create an Oracle replica (p. 1977)
• Creating an RDS for Oracle replica in mounted mode (p. 1978)
• Modifying the RDS for Oracle replica mode (p. 1979)
• Working with RDS for Oracle replica backups (p. 1980)
• Performing an Oracle Data Guard switchover (p. 1982)
• Troubleshooting RDS for Oracle replicas (p. 1988)
The following video provides a helpful overview of RDS for Oracle disaster recovery.
For more information, see the blog post Managed disaster recovery with Amazon RDS for Oracle cross-
Region automated backups - Part 1 and Managed disaster recovery with Amazon RDS for Oracle cross-
Region automated backups - Part 2.
Topics
• Read-only and mounted replicas (p. 1973)
• Multitenant read replicas (p. 1974)
• Archived redo log retention (p. 1974)
• Outages during replication (p. 1974)
Read-only
This is the default. Active Data Guard transmits and applies changes from the source database to all
read replica databases.
You can create up to five read replicas from one source DB instance. For general information about
read replicas that applies to all DB engines, see Working with DB instance read replicas (p. 438). For
information about Oracle Data Guard, see Oracle Data Guard concepts and administration in the
Oracle documentation.
Mounted
In this case, replication uses Oracle Data Guard, but the replica database doesn't accept user
connections. The primary use for mounted replicas is cross-Region disaster recovery.
A mounted replica can't serve a read-only workload. The mounted replica deletes archived redo log
files after it applies them, regardless of the archived log retention policy.
You can create a combination of mounted and read-only DB replicas for the same source DB instance.
You can change a read-only replica to mounted mode, or change a mounted replica to read-only mode.
In either case, the Oracle database preserves the archived log retention setting.
• Managed disaster recovery, high availability, and read-only access to your replicas
• The ability to create read replicas in a different AWS Region.
• Integration with the existing RDS read replica APIs: CreateDBInstanceReadReplica,
PromoteReadReplica, and SwitchoverReadReplica
To use this feature, you need an Active Data Guard license and an Oracle Database Enterprise Edition
license for both the replica and primary DB instances. There are no additional costs related to using CDB
architecture. You pay only for your DB instances.
RDS purges logs from the source DB instance after two hours or after the archive log retention hours
setting has passed, whichever is longer. RDS purges logs from the read replica after the archive log
retention hours setting has passed only if they have been successfully applied to the database.
In some cases, a primary DB instance might have one or more cross-Region read replicas. If
so, Amazon RDS for Oracle keeps the transaction logs on the source DB instance until they
have been transmitted and applied to all cross-Region read replicas. For information about
rdsadmin.rdsadmin_util.set_configuration, see Retaining archived redo logs (p. 1893).
Requirements and considerations for RDS for Oracle replicas
Topics
• Version and licensing requirements for RDS for Oracle replicas (p. 1975)
• Option group considerations for RDS for Oracle replicas (p. 1975)
• Backup and restore considerations for RDS for Oracle replicas (p. 1976)
• Oracle Data Guard requirements and limitations for RDS for Oracle replicas (p. 1976)
• Miscellaneous considerations for RDS for Oracle replicas (p. 1976)
• If the replica is in read-only mode, make sure that you have an Active Data Guard license. If you place
the replica in mounted mode, you don't need an Active Data Guard license. Only the Oracle DB engine
supports mounted replicas.
• Oracle replicas are supported for the Oracle Enterprise Edition (EE) engine only.
• Oracle replicas of non-CDBs are supported only for DB instances created using version Oracle Database
12c Release 1 (12.1.0.2.v10) and higher 12c releases, and for non-CDB instances of Oracle Database
19c.
• Oracle replicas of CDBs are supported only for CDB instances created using version Oracle Database
19c and higher.
• Oracle replicas are available for DB instances running only on DB instance classes with two or more
vCPUs. A source DB instance can't use the db.t3.micro or db.t3.small instance classes.
• The Oracle DB engine version of the source DB instance and all of its replicas must be the same.
Amazon RDS upgrades the replicas immediately after upgrading the source DB instance, regardless
of a replica's maintenance window. For major version upgrades of cross-Region replicas, Amazon RDS
automatically does the following:
• Generates an option group for the target version.
• Copies all options and option settings from the original option group to the new option group.
• Associates the upgraded cross-Region replica with the new option group.
For more information about upgrading the DB engine version, see Upgrading the RDS for Oracle DB
engine (p. 2103).
• If your Oracle replica is in the same AWS Region as its source DB instance, make sure that it belongs
to the same option group as the source DB instance. Modifications to the source option group or
source option group membership propagate to replicas. These changes are applied to the replicas
immediately after they are applied to the source DB instance, regardless of the replica's maintenance
window.
For more information about option groups, see Working with option groups (p. 331).
• When you create an RDS for Oracle cross-Region replica, Amazon RDS creates a dedicated option
group for it.
You can't remove an RDS for Oracle cross-Region replica from its dedicated option group. No other DB
instances can use the dedicated option group for an RDS for Oracle cross-Region replica.
You can only add or remove the following nonreplicated options from a dedicated option group:
• NATIVE_NETWORK_ENCRYPTION
• OEM
• OEM_AGENT
• SSL
To add other options to an RDS for Oracle cross-Region replica, add them to the source DB instance's
option group. The option is also installed on all of the source DB instance's replicas. For licensed
options, make sure that there are sufficient licenses for the replicas.
When you promote an RDS for Oracle cross-Region replica, the promoted replica behaves the same
as other Oracle DB instances, including the management of its options. You can promote a replica
explicitly or implicitly by deleting its source DB instance.
For more information about option groups, see Working with option groups (p. 331).
• To create snapshots of RDS for Oracle replicas or turn on automatic backups, make sure to set the
backup retention period manually. Automatic backups aren't turned on by default.
• When you restore a replica backup, you restore to the database time, not the time that the backup was
taken. The database time refers to the latest applied transaction time of the data in the backup. The
difference is significant because a replica can lag behind the primary for minutes or hours.
• If your primary DB instance uses the single-tenant configuration of the multitenant architecture,
consider the following:
• You must use Oracle Database 19c or higher with the Enterprise Edition.
• Your primary CDB instance must be in an ACTIVE lifecycle.
• You can't convert a non-CDB primary instance to a CDB instance and convert its replicas in the same
operation. Instead, delete the non-CDB replicas, convert the primary DB instance to a CDB, and then
create new replicas.
• Make sure that a logon trigger on a primary DB instance permits access to the RDS_DATAGUARD user
and to any user whose AUTHENTICATED_IDENTITY value is RDS_DATAGUARD or rdsdb. Also, the
trigger must not set the current schema for the RDS_DATAGUARD user.
• To avoid blocking connections from the Data Guard broker process, don't enable restricted
sessions. For more information about restricted sessions, see Enabling and disabling restricted
sessions (p. 1858).
• If your DB instance is a source for one or more cross-Region replicas, the source DB retains its archived
redo logs until they are applied on all cross-Region replicas. The archived redo logs might result in
increased storage consumption.
• To avoid disrupting RDS automation, system triggers must permit specific users to log on to the
primary and replica database. System triggers include DDL, logon, and database role triggers. We
recommend that you add code to your triggers to exclude the users listed in the following sample
code:
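As a sketch, a logon trigger that excludes the RDS automation users might look like the following. The trigger name and the placeholder body are assumptions; substitute your own trigger logic.

```sql
-- Hypothetical logon trigger: skip RDS automation users so that
-- Data Guard and RDS management connections are never blocked.
CREATE OR REPLACE TRIGGER my_logon_trigger
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'AUTHENTICATED_IDENTITY')
       NOT IN ('RDS_DATAGUARD', 'rdsdb') THEN
    NULL;  -- place your real trigger logic here
  END IF;
END;
/
```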
• Block change tracking is supported for read-only replicas, but not for mounted replicas. You can
change a mounted replica to a read-only replica, and then enable block change tracking. For more
information, see Enabling and disabling block change tracking (p. 1903).
Topics
• Enabling automatic backups (p. 1977)
• Enabling force logging mode (p. 1977)
• Changing your logging configuration (p. 1977)
• Setting the MAX_STRING_SIZE parameter (p. 1978)
• Planning compute and storage resources (p. 1978)
1. Log in to your Oracle database using a client tool such as SQL Developer.
2. Enable force logging mode by running the following procedure.
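As a sketch, the procedure call looks like the following; see the linked section for parameter details.

```sql
-- Enable force logging so that Oracle logs all changes
-- except those in temporary tablespaces and temporary segments.
EXEC rdsadmin.rdsadmin_util.force_logging(p_enable => true);
```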
For more information about this procedure, see Setting force logging (p. 1889).
We recommend that you don't modify the logging configuration after you create the replicas. Modifications
can cause the online redo logging configuration to get out of sync with the standby logging configuration.
Modify the logging configuration for a DB instance by using the Amazon RDS procedures
rdsadmin.rdsadmin_util.add_logfile and rdsadmin.rdsadmin_util.drop_logfile. For
more information, see Adding online redo logs (p. 1890) and Dropping online redo logs (p. 1891).
Creating a mounted Oracle replica
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Oracle DB instance that you want to use as the source for a mounted replica.
4. For Actions, choose Create replica.
5. For Replica mode, choose Mounted.
6. Choose the settings that you want to use. For DB instance identifier, enter a name for the read
replica. Adjust other settings as needed.
7. For Regions, choose the Region where the mounted replica will be launched.
8. Choose your instance size and storage type. We recommend that you use the same DB instance class
and storage type as the source DB instance for the read replica.
9. For Multi-AZ deployment, choose Create a standby instance to create a standby of your replica
in another Availability Zone for failover support for the mounted replica. Creating your mounted
replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB
instance.
10. Choose the other settings that you want to use.
11. Choose Create replica.
On the Databases page, the mounted replica has the role Replica.
AWS CLI
To create an Oracle replica in mounted mode, set --replica-mode to mounted in the AWS CLI
command create-db-instance-read-replica.
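For example, a call might look like the following (Linux-style line continuations; the source and replica identifiers are assumptions):

```shell
aws rds create-db-instance-read-replica \
    --db-instance-identifier my-mounted-replica \
    --source-db-instance-identifier my-oracle-source \
    --replica-mode mounted
```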
To change a read-only replica to a mounted state, set --replica-mode to mounted in the AWS CLI
command modify-db-instance. To place a mounted replica in read-only mode, set --replica-mode to
open-read-only.
RDS API
To create an Oracle replica in mounted mode, specify ReplicaMode=mounted in the RDS API operation
CreateDBInstanceReadReplica.
The change operation can take a few minutes. During the operation, the DB instance status changes
to modifying. For more information about status changes, see Viewing Amazon RDS DB instance
status (p. 684).
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the mounted replica database.
4. Choose Modify.
5. For Replica mode, choose Read-only.
6. Choose the other settings that you want to change.
7. Choose Continue.
AWS CLI
To change a read replica to mounted mode, set --replica-mode to mounted in the AWS CLI command
modify-db-instance. To change a mounted replica to read-only mode, set --replica-mode to open-
read-only.
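For example, a call might look like the following (Linux-style line continuations; the replica identifier is an assumption):

```shell
aws rds modify-db-instance \
    --db-instance-identifier my-oracle-replica \
    --replica-mode mounted \
    --apply-immediately
```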
RDS API
To change a read-only replica to mounted mode, set ReplicaMode=mounted in ModifyDBInstance. To
change a mounted replica to read-only mode, set ReplicaMode=open-read-only.
Working with Oracle replica backups
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose a positive nonzero value, for example 3 days.
5. Choose Continue.
AWS CLI
In the following example, we enable automated backups by setting the backup retention period to three
days. The changes are applied immediately.
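A sketch of the call follows (Linux-style line continuations; the replica identifier is an assumption):

```shell
aws rds modify-db-instance \
    --db-instance-identifier my-oracle-replica \
    --backup-retention-period 3 \
    --apply-immediately
```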
RDS API
To enable automated backups, use the RDS API ModifyDBInstance or ModifyDBCluster operation
with the following required parameters:
• DBInstanceIdentifier or DBClusterIdentifier
• BackupRetentionPeriod
The main consideration when you restore a replica backup is determining the point in time to which you
are restoring. The database time refers to the latest applied transaction time of the data in the backup.
When you restore a replica backup, you restore to the database time, not the time when the backup
completed. The difference is significant because an RDS for Oracle replica can lag behind the primary by
minutes or hours. Thus, the database time of a replica backup, and therefore the point in time to which you
restore it, might be much earlier than the backup creation time.
To find the difference between database time and creation time, use the describe-db-snapshots
command. Compare the SnapshotDatabaseTime field, which is the database time of the replica backup,
and the OriginalSnapshotCreateTime field, which is the time when the backup was taken. The
following example shows the difference between the two times:
{
"DBSnapshots": [
{
"DBSnapshotIdentifier": "my-replica-snapshot",
"DBInstanceIdentifier": "my-oracle-replica",
"SnapshotDatabaseTime": "2022-07-26T17:49:44Z",
...
"OriginalSnapshotCreateTime": "2022-07-26T19:49:44Z"
}
]
}
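Output of this shape could come from a call like the following; the snapshot identifier is the one shown in the example output.

```shell
aws rds describe-db-snapshots \
    --db-snapshot-identifier my-replica-snapshot
```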
Performing an Oracle Data Guard switchover
Topics
• Overview of Oracle Data Guard switchover (p. 1982)
• Preparing for the Oracle Data Guard switchover (p. 1986)
• Initiating the Oracle Data Guard switchover (p. 1986)
• Monitoring the Oracle Data Guard switchover (p. 1988)
Amazon RDS supports a fully managed, switchover-based role transition for Oracle Database replicas.
The replicas can reside in separate AWS Regions or in different Availability Zones (AZs) of a single Region.
You can only initiate a switchover to a standby database that is mounted or open read-only.
Topics
• Benefits of Oracle Data Guard switchover (p. 1984)
• Supported Oracle Database versions (p. 1984)
• AWS Region support (p. 1984)
• Cost of Oracle Data Guard switchover (p. 1984)
• How Oracle Data Guard switchover works (p. 1985)
The switchover does the following:
• Reverses the roles of your primary database and specified standby database, putting the new standby
database in the same state (mounted or read-only) as the original standby
• Ensures data consistency
• Maintains your replication configuration after the transition
• Supports repeated reversals, allowing your new standby database to return to its original primary role
The original standby and original primary are the roles that exist before the switchover. The new standby
and new primary are the roles that exist after the switchover. A bystander replica is a replica database
that serves as a standby database in the Oracle Data Guard environment but is not switching roles.
Topics
• Stages of the Oracle Data Guard switchover (p. 1985)
• After the Oracle Data Guard switchover (p. 1985)
To perform the switchover, Amazon RDS must take the following steps:
1. Block new transactions on the original primary database. During the switchover, Amazon RDS
interrupts replication for all databases in your Oracle Data Guard configuration. During the switchover,
the original primary database can't process write requests.
2. Ship unapplied transactions to the original standby database, and apply them.
3. Restart the new standby database in read-only or mounted mode. The mode depends on the open
state of the original standby database before the switchover.
4. Open the new primary database in read/write mode.
Amazon RDS switches the roles of the primary and standby database. You are responsible for
reconnecting your application and performing any other desired configuration.
Topics
• Success criteria (p. 1985)
• Connection to the new primary database (p. 1985)
• Configuration of the new primary database (p. 1986)
Success criteria
The Oracle Data Guard switchover is successful when the original standby database does the following:
To limit downtime, your new primary database becomes active as soon as possible. Because Amazon
RDS configures bystander replicas asynchronously, these replicas might become active after the original
primary database.
Amazon RDS won't propagate your current database connections to the new primary database after the
switchover. After the Oracle Data Guard switchover completes, reconnect your application to the new
primary database.
If you perform a switchover to a cross-Region replica with different options, the new primary database
keeps its own options. Amazon RDS won't migrate the options on the original primary database. If the
original primary database had options such as SSL, NNE, OEM, and OEM_AGENT, Amazon RDS doesn't
propagate them to the new primary database.
Assume that db_maz is the primary database in a Multi-AZ deployment, and db_saz is a Single-AZ
replica. You initiate a switchover from db_maz to db_saz. Afterward, db_maz is a Multi-AZ replica
database, and db_saz is a Single-AZ primary database. The new primary database is now unprotected
by a Multi-AZ deployment.
• In preparation for a cross-Region switchover, make sure that the primary database doesn't use the same
option group as a DB instance outside of the replication configuration. For a cross-Region switchover to
succeed, the current primary database and its read replicas must be the only DB instances to use the
option group of the current primary database. Otherwise, Amazon RDS prevents the switchover.
Console
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
The Databases pane appears. Each read replica shows Replica in the Role column.
3. Choose the read replica that you want to switch over to the primary role.
4. For Actions, choose Switch over replica.
5. Choose I acknowledge. Then choose Switch over replica.
6. On the Databases page, monitor the progress of the switchover.
When the switchover completes, the role of the switchover target changes from Replica to Source.
AWS CLI
To switch over an Oracle replica to the primary DB role, use the AWS CLI switchover-read-replica
command. The following examples make the Oracle replica named replica-to-be-made-primary
into the new primary database.
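A sketch of the call, using the replica name from the text:

```shell
aws rds switchover-read-replica \
    --db-instance-identifier replica-to-be-made-primary
```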
RDS API
To switch over an Oracle replica to the primary DB role, call the Amazon RDS API
SwitchoverReadReplica operation with the required parameter DBInstanceIdentifier. This
parameter specifies the name of the Oracle replica that you want to assume the primary DB role.
To confirm that the switchover completed successfully, query V$DATABASE.OPEN_MODE. Check that the
value for the new primary database is READ WRITE.
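For example, a query like the following checks the open mode:

```sql
-- On the new primary database, OPEN_MODE should be READ WRITE.
SELECT open_mode FROM v$database;
```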
To look for switchover-related events, use the AWS CLI command describe-events. The following
example looks for events on the orcl2 instance.
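A sketch of the call follows, using the instance name from the text:

```shell
aws rds describe-events \
    --source-identifier orcl2 \
    --source-type db-instance
```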
Troubleshooting Oracle replicas
Topics
• Monitoring Oracle replication lag (p. 1988)
• Troubleshooting Oracle replication failure after adding or modifying triggers (p. 1989)
For a read replica, if the lag time is too long, query the following views:
• V$ARCHIVED_LOG – Shows which commits have been applied to the read replica.
• V$DATAGUARD_STATS – Shows a detailed breakdown of the components that make up the
ReplicaLag metric.
• V$DATAGUARD_STATUS – Shows the log output from Oracle's internal replication processes.
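As a sketch, a query against V$DATAGUARD_STATS for the lag metrics might look like this:

```sql
-- Apply and transport lag as reported by Data Guard.
SELECT name, value, unit, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
```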
For a mounted replica, if the lag time is too long, you can't query the V$ views. Instead, do the following:
For more information, see Miscellaneous considerations for RDS for Oracle replicas (p. 1976).
Options for Oracle
Topics
• Overview of Oracle DB options (p. 1990)
• Amazon S3 integration (p. 1992)
• Oracle Application Express (APEX) (p. 2009)
• Amazon EFS integration (p. 2020)
• Oracle Java virtual machine (p. 2031)
• Oracle Enterprise Manager (p. 2034)
• Oracle Label Security (p. 2049)
• Oracle Locator (p. 2052)
• Oracle Multimedia (p. 2055)
• Oracle native network encryption (p. 2057)
• Oracle OLAP (p. 2065)
• Oracle Secure Sockets Layer (p. 2068)
• Oracle Spatial (p. 2075)
• Oracle SQLT (p. 2078)
• Oracle Statspack (p. 2084)
• Oracle time zone (p. 2087)
• Oracle time zone file autoupgrade (p. 2091)
• Oracle Transparent Data Encryption (p. 2097)
• Oracle UTL_MAIL (p. 2099)
• Oracle XML DB (p. 2102)
Topics
• Summary of Oracle Database options (p. 1990)
• Options supported for different editions (p. 1991)
• Memory requirements for specific options (p. 1991)
[Table: summary of Oracle Database options and their option IDs, including APEX-DEV and OEM_AGENT.]
For more information, see describe-option-group-options in the AWS CLI Command Reference.
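For example, to list the options available for an engine and version (the engine name and version shown are assumptions):

```shell
aws rds describe-option-group-options \
    --engine-name oracle-ee \
    --major-engine-version 19
```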
Amazon S3 integration
You can transfer files between your RDS for Oracle DB instance and an Amazon S3 bucket. You can use
Amazon S3 integration with Oracle Database features such as Oracle Data Pump. For example, you can
download Data Pump files from Amazon S3 to your RDS for Oracle DB instance. For more information,
see Importing data into Oracle on Amazon RDS (p. 1947).
Note
Your DB instance and your Amazon S3 bucket must be in the same AWS Region.
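For example, after the option and IAM role are configured, a download can be started with the rdsadmin package. The bucket name below is an assumption; DATA_PUMP_DIR is a standard RDS for Oracle directory.

```sql
-- Download all files from the bucket into DATA_PUMP_DIR.
-- The query returns a task ID that you can use to check the task log.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'my-s3-bucket',
         p_directory_name => 'DATA_PUMP_DIR')
       AS task_id
FROM dual;
```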
Topics
• Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992)
• Adding the Amazon S3 integration option (p. 2000)
• Transferring files between Amazon RDS for Oracle and an Amazon S3 bucket (p. 2001)
• Troubleshooting Amazon S3 integration (p. 2007)
• Removing the Amazon S3 integration option (p. 2008)
RDS for Oracle supports uploading files from a DB instance in one account to an Amazon S3 bucket in a
different account. Where additional steps are required, they are noted in the following sections.
Topics
• Step 1: Create an IAM policy for your Amazon RDS role (p. 1992)
• Step 2: (Optional) Create an IAM policy for your Amazon S3 bucket (p. 1996)
• Step 3: Create an IAM role for your DB instance and attach your policy (p. 1997)
• Step 4: Associate your IAM role with your RDS for Oracle DB instance (p. 1999)
Before you create the policy, note the following pieces of information:
For more information, see Protecting data using server-side encryption in the Amazon Simple Storage
Service User Guide.
Console
To create an IAM policy to allow Amazon RDS to access your Amazon S3 bucket
Object permissions are permissions for object operations in Amazon S3. You must grant them
for objects in a bucket, not the bucket itself. For more information, see Permissions for object
operations.
6. Choose Resources, and then do the following:
a. Choose Specific.
b. For bucket, choose Add ARN. Enter your bucket ARN. The bucket name is filled in automatically.
Then choose Add.
c. If the object resource is shown, either choose Add ARN to add resources manually or choose
Any.
Note
You can set Amazon Resource Name (ARN) to a more specific ARN value to allow
Amazon RDS to access only specific files or folders in an Amazon S3 bucket. For more
information about how to define an access policy for Amazon S3, see Managing access
permissions to your Amazon S3 resources.
7. (Optional) Choose Add additional permissions to add resources to the policy. For example, do the
following:
a. If your bucket is encrypted with a custom KMS key, select KMS for the service.
b. For Manual actions, select the following:
• Encrypt
• ReEncrypt from and ReEncrypt to
• Decrypt
• DescribeKey
• GenerateDataKey
c. For Resources, choose Specific.
d. For key, choose Add ARN. Enter the ARN of your custom key as the resource, and then choose
Add.
For more information, see Protecting Data Using Server-Side Encryption with KMS keys Stored
in AWS Key Management Service (SSE-KMS) in the Amazon Simple Storage Service User Guide.
e. If you want Amazon RDS to access other buckets, add the ARNs for these buckets.
Optionally, you can also grant access to all buckets and objects in Amazon S3.
AWS CLI
Create an AWS Identity and Access Management (IAM) policy that grants Amazon RDS access to an
Amazon S3 bucket. After you create the policy, note the ARN of the policy. You need the ARN for a
subsequent step.
Include the appropriate actions in the policy based on the type of access required:
The following AWS CLI command creates an IAM policy named rds-s3-integration-policy with
these options. It grants access to a bucket named your-s3-bucket-arn.
Example
For Linux, macOS, or Unix:
aws iam create-policy \
   --policy-name rds-s3-integration-policy \
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject",
           "kms:Decrypt",
           "kms:Encrypt",
           "kms:ReEncrypt*",
           "kms:GenerateDataKey",
           "kms:DescribeKey"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws:s3:::your-s3-bucket-arn",
           "arn:aws:s3:::your-s3-bucket-arn/*",
           "arn:aws:kms:::your-kms-arn"
         ]
       }
     ]
   }'
For Windows, use the same command with caret (^) line continuations instead of backslashes.
• You plan to upload files to an Amazon S3 bucket from one account (account A) and access them from a
different account (account B).
• Account B owns the bucket.
• Account B needs full control of objects loaded into the bucket.
If the preceding conditions don't apply to you, skip to Step 3: Create an IAM role for your DB instance
and attach your policy (p. 1997).
To create your bucket policy, make sure you have the following:
Console
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a bucket policy for or
whose bucket policy you want to edit.
3. Choose Permissions.
4. Under Bucket policy, choose Edit. This opens the Edit bucket policy page.
5. On the Edit bucket policy page, explore Policy examples in the Amazon S3 User Guide, choose
Policy generator to generate a policy automatically, or edit the JSON in the Policy section.
If you choose Policy generator, the AWS Policy Generator opens in a new window:
a. On the AWS Policy Generator page, in Select Type of Policy, choose S3 Bucket Policy.
b. Add a statement by entering the information in the provided fields, and then choose Add
Statement. Repeat for as many statements as you would like to add. For more information
about these fields, see the IAM JSON policy elements reference in the IAM User Guide.
Note
For convenience, the Edit bucket policy page displays the Bucket ARN (Amazon
Resource Name) of the current bucket above the Policy text field. You can copy this
ARN for use in the statements on the AWS Policy Generator page.
c. After you finish adding statements, choose Generate Policy.
d. Copy the generated policy text, choose Close, and return to the Edit bucket policy page in the
Amazon S3 console.
6. In the Policy box, edit the existing policy or paste the bucket policy from the Policy generator. Make
sure to resolve security warnings, errors, general warnings, and suggestions before you save your
policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::account-A-ID:account-A-user"
},
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::account-B-bucket-arn",
"arn:aws:s3:::account-B-bucket-arn/*"
]
}
]
}
7. Choose Save changes, which returns you to the Bucket Permissions page.
Step 3: Create an IAM role for your DB instance and attach your policy
This step assumes that you have created the IAM policy in Step 1: Create an IAM policy for your Amazon
RDS role (p. 1992). In this step, you create a role for your RDS for Oracle DB instance and then attach
your policy to the role.
Console
AWS CLI
1. Create an IAM role that Amazon RDS can assume on your behalf to access your Amazon S3 buckets.
We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys
in resource-based trust relationships to limit the service's permissions to a specific resource. This is
the most effective way to protect against the confused deputy problem.
You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn
value must use the same account ID when used in the same statement.
In the trust relationship, make sure to use the aws:SourceArn global condition context key with
the full Amazon Resource Name (ARN) of the resources accessing the role.
The following AWS CLI command creates the role named rds-s3-integration-role for this
purpose.
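A sketch of the call follows (Linux-style line continuations). The account ID, Region, and DB instance name in the trust policy are placeholders; replace them with the values for the DB instance that will use the role.

```shell
aws iam create-role \
    --role-name rds-s3-integration-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {"Service": "rds.amazonaws.com"},
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": {
              "aws:SourceAccount": "my_account_ID",
              "aws:SourceArn": "arn:aws:rds:Region:my_account_ID:db:my-db-instance"
            }
          }
        }
      ]
    }'
```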
For more information, see Creating a role to delegate permissions to an IAM user in the IAM User
Guide.
2. After the role is created, note the ARN of the role. You need the ARN for a subsequent step.
3. Attach the policy you created to the role you created.
The following AWS CLI command attaches the policy to the role named rds-s3-integration-
role.
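A sketch of the call, using the role name from the text:

```shell
aws iam attach-role-policy \
    --role-name rds-s3-integration-role \
    --policy-arn your-policy-arn
```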
Replace your-policy-arn with the policy ARN that you noted in a previous step.
Step 4: Associate your IAM role with your RDS for Oracle DB instance
The last step in configuring permissions for Amazon S3 integration is associating your IAM role with your
DB instance. Note the following requirements:
• You must have access to an IAM role with the required Amazon S3 permissions policy attached to it.
• You can only associate one IAM role with your RDS for Oracle DB instance at a time.
• Your DB instance must be in the Available state.
Console
To associate your IAM role with your RDS for Oracle DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Databases from the navigation pane.
3. Choose the RDS for Oracle DB instance name to display its details.
4. On the Connectivity & security tab, scroll down to the Manage IAM roles section at the bottom of
the page.
5. For Add IAM roles to this instance, choose the role that you created in Step 3: Create an IAM role
for your DB instance and attach your policy (p. 1997).
6. For Feature, choose S3_INTEGRATION.
AWS CLI
The following AWS CLI command adds the role to an Oracle DB instance named mydbinstance.
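A sketch of the call, using the instance name and feature name from the text:

```shell
aws rds add-role-to-db-instance \
    --db-instance-identifier mydbinstance \
    --feature-name S3_INTEGRATION \
    --role-arn your-role-arn
```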
Replace your-role-arn with the role ARN that you noted in a previous step. S3_INTEGRATION must
be specified for the --feature-name option.
Console
1. Create a new option group or identify an existing option group to which you can add the
S3_INTEGRATION option.